There is one sentence I really took away from my years at university; it came from a database implementation course:
> If you have trouble solving some problem, see if sorting the data first helps.
I feel that sorting data is the ultimate computer science hack. Many, many classes of problems turn into O(log n) problems once you sort your input in some way. It might not be the most effective way of solving the problem, but it's often a fairly good one.
So I'm really enjoying how good sorting algorithms are getting, and how, even though the big-O complexity remains mostly the same, the real-world computing efficiency keeps improving significantly.
On a side note, some languages still refer to computers as ‘sorting machines’ or just ‘sorters’.
... and in many cases it comes at a very low cost: quite often an index that satisfies the usual "quickly find something" need already exists, and most indexes let us obtain a sorted list immediately.
I find in practice that if the sorting process is too slow, you should begin thinking about different ways to attack the problem. Maintaining a total global order of things tends to only get more expensive over time as the scope of your idea/product/business expands. The computational complexity of the sort algorithm becomes irrelevant once memory utilization is the bottleneck.
This is why we have things like tournament selection. Randomly sampling from the population and running tournaments is way more scalable than scanning and ordering a global list each iteration. You can maintain things like an Elo score with very narrow views into memory. Nothing needs a global view, yet you get global effects.
Could you give an example of reframing a problem from totally ordered complete data to a sampled tournament? I can imagine cases amenable to sampling, but since sampled data is smaller I'm not sure why I wouldn't just sort it.
I don't yet see how tournament selection could work here, could you explain?
Sometimes when you think you need to maintain a sorted array under item insertion, it turns out that you only ever need to continually read the next-smallest (or next-largest) item -- and in that case, it suffices to maintain a heap, which is much cheaper.
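A minimal Rust sketch of that pattern, using std::collections::BinaryHeap with Reverse to get a min-heap (the item values are just illustrative):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Instead of keeping a fully sorted Vec under insertion,
    // keep a min-heap and only ever ask for the next-smallest item.
    // (BinaryHeap is a max-heap, so Reverse flips it into a min-heap.)
    let mut pending: BinaryHeap<Reverse<u64>> = BinaryHeap::new();

    for x in [42_u64, 7, 19, 3, 25] {
        pending.push(Reverse(x)); // O(log n) insert, no global re-sort
    }

    // Consume items in ascending order, one at a time, as needed.
    while let Some(Reverse(next)) = pending.pop() {
        println!("next-smallest: {next}");
    }
}
```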
An example would be an evolutionary algorithm that relies on a large population to maintain diversity. As you get into 6-7 figure population size, ranking the whole set starts to take a really long time (relatively speaking) each iteration. This also requires a serialized phase of processing that halts all workers for the duration.
With tournament selection, you can randomly pick indexes from the population to build tournaments, which is effectively instant. There is no longer a requirement for a serialized processing phase. All processors can build their own random tournaments and perform score updates. There will be occasional conflicts on score updates, but the idea is that with enough samples/iterations it becomes very accurate.
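For concreteness, here is a rough sketch of just the selection step (the fitness array, tournament size, and the tiny inlined PRNG are placeholders, not any particular implementation):

```rust
// Tournament selection sketch: pick k random indexes, keep the best one.
// No global sort or ranking of the whole population is ever required.

// Tiny xorshift PRNG so the sketch has no external dependencies;
// a real implementation would use a proper RNG crate.
fn xorshift(state: &mut u64) -> u64 {
    let mut x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    x
}

/// Select one parent by running a k-way tournament over random samples.
fn tournament_select(fitness: &[f64], k: usize, rng: &mut u64) -> usize {
    let mut best = (xorshift(rng) as usize) % fitness.len();
    for _ in 1..k {
        let challenger = (xorshift(rng) as usize) % fitness.len();
        if fitness[challenger] > fitness[best] {
            best = challenger;
        }
    }
    best
}

fn main() {
    // Hypothetical fitness scores for a large population.
    let fitness: Vec<f64> = (0..1_000_000).map(|i| (i as f64).sin().abs()).collect();
    let mut rng = 0x12345678_u64;

    // Each worker can do this independently; no serialized ranking phase.
    let winner = tournament_select(&fitness, 4, &mut rng);
    println!("selected index {winner} with fitness {}", fitness[winner]);
}
```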
Another example: https://danluu.com/2choices-eviction/
Double jaw-drop. First when the (dynamic) match was slower than the hash map, second when sort_unstable was faster than the hash map!
Cool article. It's clear that all my theoretical algorithm knowledge falls short when faced with real CPUs.
Chromium recommends using flat_map, a map-like interface backed by a sorted array, for GUI-facing and similar data structures when the number of items in the map is naturally bounded. It is faster and more compact than a hash map.
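The idea, sketched in Rust rather than Chromium's C++ flat_map (the type and method names here are made up for illustration): keep the entries in a Vec sorted by key and do lookups with binary search.

```rust
/// A tiny flat-map-style container: a Vec of (key, value) pairs kept sorted
/// by key, with lookups done via binary search. Compact and cache-friendly
/// for small, naturally bounded item counts.
struct FlatMap<K: Ord, V> {
    entries: Vec<(K, V)>,
}

impl<K: Ord, V> FlatMap<K, V> {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    fn insert(&mut self, key: K, value: V) {
        match self.entries.binary_search_by(|(k, _)| k.cmp(&key)) {
            Ok(i) => self.entries[i].1 = value,                 // replace existing value
            Err(i) => self.entries.insert(i, (key, value)),     // O(n) shift, fine for small n
        }
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.entries
            .binary_search_by(|(k, _)| k.cmp(key))
            .ok()
            .map(|i| &self.entries[i].1)
    }
}

fn main() {
    let mut m = FlatMap::new();
    m.insert("width", 800);
    m.insert("height", 600);
    assert_eq!(m.get(&"width"), Some(&800));
}
```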
The efforts put into developing better sorting algorithms like driftsort/ipnsort and better hash functions like foldhash make my life as a developer so much simpler. No matter how clever I try to be, most often just using a foldhash HashMap or a sort_unstable is the fastest option.
Good read. Reminds me of the one-billion-row aggregation challenge, especially the perfect hashing part; the top entries all used it.
https://github.com/gunnarmorling/1brc
Your "Branchless" approach could indeed be implemented very efficiently in a CPU with AVX2 (256-bit-wide vectors). With the current element in rax, the 4 valid values in ymm2, and the 4 running totals in ymm3 (initially zero), the inner loop would be just:
VPBROADCASTQ copies rax into each of the 4 lanes in ymm1. The VPCMPEQQ sets each qword there to all-0 ( = 0) or all-1 ( = -1) depending on the comparison result, so the VPADDQ will accumulate 4 running negative totals into ymm3, which can be negated afterwards.I would still expect the perfect hash function approach to be faster, though -- a similar number of operations, but 25% of the memory movement.
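Here is a minimal sketch of that loop using Rust's std::arch AVX2 intrinsics (x86_64 only; the function name and the counted values in main are placeholders, not the article's data):

```rust
use std::arch::x86_64::*;

/// Count how many elements of `data` equal each of the four `targets`,
/// mirroring the VPBROADCASTQ / VPCMPEQQ / VPADDQ loop described above.
#[target_feature(enable = "avx2")]
unsafe fn count4_avx2(data: &[i64], targets: [i64; 4]) -> [i64; 4] {
    let valid = _mm256_setr_epi64x(targets[0], targets[1], targets[2], targets[3]); // "ymm2"
    let mut totals = _mm256_setzero_si256();                                        // "ymm3"

    for &x in data {
        let bcast = _mm256_set1_epi64x(x);           // VPBROADCASTQ: x into all 4 lanes
        let mask = _mm256_cmpeq_epi64(bcast, valid); // VPCMPEQQ: 0 or -1 per lane
        totals = _mm256_add_epi64(totals, mask);     // VPADDQ: accumulate the -1s
    }

    // Spill the vector once and negate the running negative totals into counts.
    let mut out = [0i64; 4];
    _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, totals);
    out.map(|c| -c)
}

fn main() {
    let data = [1i64, -2, 2, 1, -1, 2, 2]; // placeholder input
    if is_x86_feature_detected!("avx2") {
        // SAFETY: AVX2 support was just verified at runtime.
        let counts = unsafe { count4_avx2(&data, [-2, -1, 1, 2]) };
        println!("{counts:?}");
    }
}
```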
“Memory movement”? None of the instructions you list involve memory.
I find the perfect hash implementation a bit weird; it seems to obfuscate that you simply look at the lowest two bits (since they differ between the four values). You can do the x + 3 and 3 - expr at the very end, once, instead of for every element.
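To illustrate just the two-bit indexing (with placeholder values whose low two bits happen to be distinct; the article's actual values and its x + 3 / 3 - ... adjustments are not reproduced here):

```rust
/// Count occurrences of four distinct values whose lowest two bits differ,
/// by indexing a small table with `x & 3` instead of hashing every element.
/// The values in main are illustrative, not the article's data.
fn count_by_low_bits(data: &[i64]) -> [u64; 4] {
    let mut counts = [0u64; 4];
    for &x in data {
        // The low two bits act as a perfect hash for the four values;
        // any final remapping (like an `x + 3` / `3 - ...` adjustment)
        // can be applied once to the table afterwards, not per element.
        counts[(x & 3) as usize] += 1;
    }
    counts
}

fn main() {
    let data = [4i64, 5, 6, 7, 5, 5, 6];
    println!("{:?}", count_by_low_bits(&data));
}
```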
Isn't this just another case of premature optimization? Shouldn't you be adjusting sorting algorithms only when a customer complains?
It only makes sense to talk about premature optimization in the context of building a production system (or a standard library).
This is research or experimentation, designed to improve our understanding of the behavior of algorithms. Calling it premature optimization makes no sense.
I think the article basically had this conclusion: think twice before optimising here, because you may be able to squeeze something out for a very limited scenario, but it can have ugly failure modes and end up being slower in some cases. Plus it takes time and effort. And modern standard sorts are "unreasonably" fast anyway for many practical purposes.
Then again, only thinking of fixing things when a customer complains is a way to end up with a leaning tower of hacks that eventually ossifies. And the customer (or rather the users, who may not be the customer, especially in business software) may be putting up with dozens of niggles and annoyances before they bother to actually report one bug they can't work around.
This is pushing the limits to identify the boundaries.
Also known as premature optimization. You had to literally invent a new dataset just to show there is a difference. You are inventing problems, stop doing that!
> You are inventing problems
Sometimes that is how useful jumps are made. Maybe someone will come along with a problem and the data they have just happens to have similar properties.
Rather than premature optimisation, this sort of thing is pre-emptive research - better to do it now than when you hit a performance problem and need the solution PDQ. Many useful things have come out of what started as “I wonder what if …?” playing.
Efficiency, not effectiveness. They are all effective in the sense that they produce sorted results. Even the non-modern sort algorithms are effective in the sense that the results are correct. This should be about the efficiency with which they do it, right?
From TFA:
The title is an homage to Eugene Wigner's 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
Agreed. Effectiveness would imply that some algorithms are more likely to sort the list correctly than others, or that they sort a higher percentage of elements. Efficiency is about factors external to correctness.
"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is one of those titles that gets imitated a lot for some reason. Maybe even more than "Goto Considered Harmful".
Coming next: “What we talk about when we talk about modern sort algorithms”
or "we need to talk about what we talk about when we talk about the unreasonable effectiveness of title memes are all you need considered harmful"