My first instinct for a poorly predicted branch would be to use a conditional move.
This isn't always a win: you prevent the CPU from speculating down the wrong path, but you also prevent it from speculating down the correct path.
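For example, something like this will usually (though not always - it depends on the compiler and flags) lower to a cmov rather than a conditional jump:

```cpp
// branch-free select; with optimisation on, x86-64 compilers typically emit
// cmov here instead of a jump, so there is nothing for the predictor to miss
int select_value(bool cond, int a, int b) {
    return cond ? a : b;
}
```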
If you really don't care about the failure path and really don't mind unmaintainable low-level hacks, I can think of a few ways to get creative.
First, there's the whole array of anti-speculation (uarch exploit mitigation) tricks in the kernel that you can use as inspiration for controlling what the CPU is allowed to speculate. Those little bits of assembly were reviewed by engineers from Intel and AMD, so the tricks can't stop working without also breaking the kernel.
Another idea is to take inspiration from anti-reverse engineering tricks. Make the failure path an actual exception. I don't mean software stack unwinding, I mean divide by your boolean and then call your send function unconditionally. If the boolean is true, it costs nothing because the result of the division is unused and we just speculate past it. If the boolean is false, the CPU will raise a divide by 0 exception, and this invisible branch will never be predicted by the CPU. Then your exception handler recovers and calls the cold path.
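A rough sketch of what I mean (POSIX signals; `Order`, `send` and `abandon` are stand-ins for whatever you have, and real code would do the division in inline asm since dividing by zero is UB in C++):

```cpp
#include <csignal>
#include <csetjmp>

struct Order;                        // stand-in for the real payload type
void send(const Order&);             // hot path, assumed to exist
void abandon(const Order&);          // cold path, assumed to exist

static sigjmp_buf recover_point;

static void on_fpe(int) { siglongjmp(recover_point, 1); }  // escape the faulting instruction

void resolve(const Order& o, int ok /* 0 or 1 */) {
    if (sigsetjmp(recover_point, 1)) {   // re-entered only via the SIGFPE handler
        abandon(o);                      // cold path reached through the exception
        return;
    }
    volatile int sink = 1 / ok;          // faults iff ok == 0; result otherwise unused
    (void)sink;
    send(o);                             // no conditional branch for the predictor to miss
}

// once at startup:  std::signal(SIGFPE, on_fpe);
```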
We could potentially use a conditional move and an unconditional jump to make the branch target predictor do the work instead - and flood it with a bunch of targets which are intended to mispredict. E.g., we could give 255 different paths for abandon and select one randomly:
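Sketching it roughly (`random` here stands for any cheap PRNG, and `Order`, `send_path`, `abandon_stub` etc. are placeholders):

```cpp
#include <cstdint>
#include <cstdlib>   // random() on POSIX; substitute your own PRNG

struct Order;                                    // placeholder payload type
using Handler = void (*)(Order&, bool);

void send_path(Order&, bool);                    // slot 0
void abandon_stub(Order&, bool);                 // imagine 255 separate copies of this
extern Handler table[256];                       // table[0] = send_path, table[1..255] = abandon stubs

void resolve(Order& o, bool do_send) {
    uint8_t r = static_cast<uint8_t>(random());  // low byte of the PRNG
    uint8_t idx = do_send ? 0 : r;               // conditional move, not a branch
    table[idx](o, do_send);                      // single unconditional indirect call
}

void send_path(Order& o, bool do_send) {
    // an abandon call whose random byte came out 0 lands here by accident, so
    // re-check; this conditional branch mispredicts only ~1/256 of abandon calls
    if (!do_send) { abandon_stub(o, do_send); return; }
    /* ... actually send ... */
}
```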
Assuming no inherent bias in the low byte produced by `random`, there's only a ~1/255 chance that an abandon branch will correctly predict, though this is also true for the send branch. The conditional branch in send though should only mispredict 1/256 times (when random returns 0).

If we're sending significantly more often than 1/256 calls to resolve, it may be possible to train the BTP to prefer the send branch, as it will correctly predict this branch more often than the others which are chosen randomly - though this would depend on how the branch target predictor is implemented in the processor.
rand() doesn't do what you think it does. It's a multiply then an add, then return particular bits of the current seed. See "Linear congruential generator" for more information.
On GCC, it's multiply by 1103515245 then add 12345, and return low 30 bits. On MSVC, it's multiply by 214013 and add 2531011 then return bits 16...30.
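I.e., roughly this (constants as described above; the real library code is structured differently):

```cpp
static unsigned int seed = 1;

int rand_gcc_style() {                     // the glibc-style LCG described above
    seed = seed * 1103515245u + 12345u;
    return (int)(seed & 0x3fffffffu);      // low 30 bits
}

int rand_msvc_style() {                    // the MSVC-style LCG described above
    seed = seed * 214013u + 2531011u;
    return (int)((seed >> 16) & 0x7fffu);  // bits 16..30
}
```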
I don't specifically mean stdlib `rand` (hence the comment about assuming no inherent bias) - just some arbitrary PRNG. Updated the code above to say `random` so as not to confuse.
> I asked Claude if there is such a way to basically hard-code branch prediction rules into the machine code, and the answer was that there’s no way to do this on x86, but there is a way on ARM: the BEQP (predict branch taken) and BEQNP (predict branch not taken) instructions.
> Those ARM instructions are just hallucinated, and the reality is actually the other way around: ARM doesn’t have a way of hard-coding ‘predictions’, but x86 does.
This made me chuckle. Thanks.
To be fair, on the x86 side those branch prediction hint prefixes have been functionally ignored by pretty much all cores for about two decades.
If a human wrote that here (on HN) someone would note the error and the poster would reply:
Yes, sorry, you’re correct. I’ve usually had 97 more double ristrettos by this time in the morning.
Some schools of thought suggest this has already happened.
LLMs are just what happens when we hit the singularity of caffeine consumption?
I asked Claude once if there was a way to open a tun/tap on Windows without a driver. It hallucinated an entire supposedly undocumented NT kernel API that as far as I can tell has never existed, complete with parts of this hallucinated API being deprecated in favor of newer non-existent parts.
It was so detailed it makes me wonder if maybe it was in the training data somewhere. Maybe it ingested an internal MS doc for a proposed API or something. The case of the missing ARM instructions makes me wonder the same. Maybe someone on a forum proposed them and they were ingested.
I did actually verify on the OS that the calls do not exist in the kernel or any driver or DLL on the system.
LLMs generating text referencing nonexistent APIs or machine instructions seems to me to be exactly like LLMs generating text referencing nonexistent legal cases or generating fictional text referencing nonexistent people.
They've been fed enough real text in each of these various styles that they can parrot the shallow surface style without reference to the internals.
What a fun problem to think about.
My first instinct, knowing less about this domain than maybe I should, would be to abuse the return address predictor. I believe CPUs will generally predict the target of a “ret” instruction using an internal stack of return addresses; some ARM flavours even make this explicit (https://developer.arm.com/documentation/den0042/0100/Unified...).
The way to abuse this would be to put send() on the normal return path and call abandon() by rewriting the return address. In code:
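Something along these lines - `abandon_entry`, `predict`, `resolve` and `Order` are all invented names, and the caveats below apply:

```cpp
// Very rough sketch (x86-64, GCC/Clang builtins). Assumes frame pointers
// (-fno-omit-frame-pointer) and the usual frame layout; abandon_entry is a
// hypothetical asm thunk that repairs the stack and then calls abandon().
extern "C" void abandon_entry();

struct Order;
void send(const Order&);

__attribute__((noinline))
void predict(bool do_send) {
    // with a frame pointer, the saved return address sits one slot above rbp
    void** ret_slot = reinterpret_cast<void**>(__builtin_frame_address(0)) + 1;
    // branchless select (ideally a cmov): keep the real return address on the
    // rare send path, smash it to abandon_entry on the common abandon path
    *ret_slot = do_send ? *ret_slot
                        : reinterpret_cast<void*>(&abandon_entry);
}

void resolve(const Order& o, bool do_send) {
    predict(do_send);   // the return-stack predictor assumes we come back here...
    send(o);            // ...so send() sits on the speculated path; abandon eats the miss
}
```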
This isn’t exactly correct because it ignores control flow integrity (which you’d have to bypass), doesn’t work like this on every architecture, and abandon() would need to be written partly in assembly to deal with the fact that the stack is in a weird state post-return, but hopefully it conveys the idea anyway.

The if in predict() is implementable as a branchless conditional move. The return address predictor should guess that predict() will return to send(), but in most cases you’ll smash the return address to point at abandon() instead.
I love the `[[likely]]` and `[[unlikely]]` tags since they nicely encapsulate the Modern C++ philosophy.
1. They don't work that well.
2. The intended use case is that you'd label an unlikely branch that you want to speed up as `[[likely]]`, which is confusing.
They are certainly motivated by good intentions (like the HFT use-case as mentioned in TFA, I remember that talk too, I think I was a volunteer that year for CppCon), but even after knowing both of my points above _they were still added to the standard_.
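To make point 2 concrete, the intended pattern looks roughly like this (C++20; all the names are invented):

```cpp
struct Update;                        // placeholder types and functions
bool should_fire(const Update&);
void send_order(const Update&);
void ignore(const Update&);

void on_update(const Update& u) {
    if (should_fire(u)) [[likely]] {  // rare in practice, but the path whose latency matters
        send_order(u);
    } else {
        ignore(u);                    // the statistically common case
    }
}
```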
Why not just make all the abandon transactions into fake transactions that get discarded later at the send stage? E.g. by poisoning the frame checksum or setting something invalid on them, so they get discarded.
Seems you'd be doing this anyway with the dummy transactions.
Then you have no branch, though you may want to add dummy transactions anyway to keep the code in cache.
This is literally what it says in TFA lol
Make sure your global branch history is the same when "mistraining" and predicting with your BTB. You may end up in the wrong BTB entry and still mess up your prediction :).
Sounds like you want to react to the market when you see a certain price in the incoming stream.
The trigger doesn't happen often, but when it does, the naive implementation incurs a misprediction penalty.
I wonder if the solution here might be to avoid the branch altogether?
Write some boolean logic monster that isn't an if condition. It unconditionally creates the reaction (new/cancel order), but if the trigger was not present, the payload is invalid and downstream processing will not let it out of your system onto the network.
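Roughly like this (all names made up, and the "downstream rejects bad checksums" part is an assumption about your pipeline):

```cpp
#include <cstdint>

struct Update;                                    // placeholder input type
struct Order { /* ... */ uint32_t checksum; };

Order build_reaction(const Update&);              // always builds the new/cancel order
uint32_t compute_checksum(const Order&);
void enqueue_for_send(const Order&);              // assumed: downstream drops orders with a bad checksum

void react(const Update& u, bool trigger_seen) {
    Order o = build_reaction(u);                           // built unconditionally, no branch
    uint32_t good = compute_checksum(o);
    uint32_t mask = -static_cast<uint32_t>(trigger_seen);  // all-ones if triggered, else zero
    o.checksum = good ^ ~mask;                             // valid if triggered, deliberately wrong otherwise
    enqueue_for_send(o);
}
```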
> On modern x86 processors, those instruction prefixes are simply ignored
This sucks.
The branch taken hint (3EH) was re-added in Redwood Cove (2023), but it's only for static prediction, where the branch predictor has not yet encountered the branch - i.e., useful for things you would only run once or twice but where the branch would likely be taken. Once the branch predictor has some information, the static prediction hint is ignored, so it's best to omit it for anything that will eventually have dynamic branch prediction (i.e., run many times), because you'll be consuming bytes in the i-cache which serve no purpose.
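For reference, emitting the hint by hand looks something like this sketch (GCC/Clang `asm goto` on x86-64; the "hint" is just a DS-segment-override byte in front of the jcc, the names are invented, and whether the core honours it is up to the hardware):

```cpp
void handle_rare();                        // placeholder for the hinted-taken target

void check(long a, long b) {
    asm goto("cmp %1, %0\n\t"
             ".byte 0x3e\n\t"              // 3EH prefix: static 'taken' hint for the jcc below
             "jne %l[rare_path]"
             : /* no outputs */
             : "r"(a), "r"(b)
             : "cc"
             : rare_path);
    return;                                // fall-through: the operands were equal
rare_path:
    handle_rare();
}
```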
If the branch is only taken once, how can you realize a significant performance benefit of more than a few ns?
Interesting problem! Not a very satisfying solution but I can't think of anything better. Even if there were hints that were respected, you'd still have the problem of the code not being in icache, unless you actually execute it occasionally.
Do CPUs really track that much about branches? I know a JIT does, but where does a CPU find the memory needed to store those counters - and then how does reading them not result in a different miss, since the CPU can't speculate until it has made the prediction for the if?
Last time I checked a CPU's documentation, it had a simple rule that branches are always predicted taken, which would be easy for the compiler to exploit by ordering the code appropriately. However, I don't recall which CPU that was. Still, this whole thing feels like it needs a citation, and I'm suspicious it is false. CPU designers know this matters, and sometimes compilers have information they don't that users care about: they document how it works. (This is about the CPU, not the instruction set.)
Maybe I'm misinterpreting you, but I think you are vastly underestimating both the accuracy and benefit of branch prediction. Yes, CPUs track that much about branches. Your suspicion that this is false is misguided. Here's Dan Luu's 2017 overview: https://danluu.com/branch-prediction/.
Modern cores can have more SRAM for branch prediction than they do for L1I$.
> last time I checked a cpu documentation they had a simple rule that branches are always taken, that would be easy for the compiler to code order first.
Was that in the 80s? Modern performant CPUs all use dynamic branch prediction.
I’m not really sure I understand the “where does it get the memory” point. Yes, this requires some amount of memory per tracked branch. This memory is hardwired into the CPU, just like all the other memory a CPU needs to do its job (registers, L1 cache, TLB, various in-flight instruction state, etc.)
Over the 15 million lines of code I maintain there are a lot of branches. The CPU can track the most common ones, but as soon as the code spills out of the cache, where does that memory come from?
It doesn’t track all of them, it’s a cache. The ones that were hit long enough ago get evicted, and then if you ever hit them again the processor will indeed have to fall back to a naive static prediction strategy with a high miss probability.