Does anyone know whether the cache is segregated by user/API key for the big providers?
Was looking at modifying outgoing requests via proxy and wondering whether that's harming caching. Common coding tools presumably have a shared prompt across all their installs so universal cache would save a lot
For ChatGPT:
> Prompt caches are not shared between organizations. Only members of the same organization can access caches of identical prompts.
https://platform.openai.com/docs/guides/prompt-caching#frequ...
I was wondering about this when I was reading around the topic. I can’t personally think of a reason you would need to segregate, though it wouldn’t surprise me if they do for some sort of compliance reasons. I’m not sure though, would love to hear something first-party.
The only thing that comes to mind is some kind of timing attack. Send loads of requests specific to a company you’re trying to spy on and if it comes back cached you know someone has sent that prompt recently. Expensive attack, though, with a large search space.
A really clear explanation!
So if I were running a provider I would be caching popular prefixes for questions across all users. There must be so many questions that start 'what is' or 'who was' etc?
Also, can subsequences in the prompt be cached and reused? Or is it only prefixes? I mean, can you cache popular phrases that might appear in the middle of the prompt and reuse that somehow rather than needing to iterate through them token by token? E.g. must be lots of times that "and then tell me what" appears in the middle of a prompt?
Really only prefixes; anything beyond that comes with a significant loss in accuracy. The point is that because later tokens can't influence earlier ones, the post-attention embeddings for those first tokens can't change. But the post-attention embeddings for "and then tell me what" would be wildly different for every prompt, because the embeddings for those tokens are affected by what came earlier.
My favorite not-super-accurate mental model of what's going on with attention is that the model is sort of compressing the whole preceding context into each token. So the word "tell" would include a representation not just of the concept of telling, but also of what it is that's supposed to be told. That's explicitly what you don't want to cache.
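If it helps to see that concretely, here's a toy sketch in plain NumPy (single head, random vectors standing in for a real model, so very much not how a provider does it) of why a prefix's post-attention outputs can be reused but a mid-prompt phrase's can't:

```python
# Toy causal self-attention -- not a real model, just random vectors and a
# single head -- to show why only a shared *prefix* is safe to cache.
import numpy as np

rng = np.random.default_rng(0)
words = "what is the capital of france and then tell me more".split()
vocab = {w: rng.normal(size=8) for w in words}

def causal_attention(tokens):
    X = np.stack([vocab[t] for t in tokens])           # (seq, dim)
    scores = X @ X.T / np.sqrt(X.shape[1])             # Q = K = V = X here
    scores[np.triu_indices(len(tokens), k=1)] = -1e9   # mask future positions
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X                                  # post-attention outputs

short = causal_attention("what is the capital".split())
longer = causal_attention("what is the capital of france".split())
assert np.allclose(short, longer[:4])   # prefix outputs unchanged -> cacheable

a = causal_attention("tell me more".split())
b = causal_attention("and then tell me more".split())
assert not np.allclose(a, b[2:])        # same phrase, different prefix -> different outputs
print("only the prefix survives a change in what comes before it")
```

The first assert is the property prefix caching relies on; the second is exactly why a phrase like "and then tell me what" from the middle of someone else's prompt can't be reused.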
> So if I were running a provider I would be caching popular prefixes for questions across all users
Unless you're injecting user context before the question. You can have a pre-baked cache with the base system prompt, but not beyond that. Imagine that the prompt always starts with "SYSTEM: You are ChatGPT, a helpful assistant. The time is 6:51 ET on December 19, 2025. The user's name is John Smith. USER: Hi, I was wondering..." You can't cache the "Hi, I was wondering" part because it comes after a high-entropy component (timestamp and user name).
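A quick way to see where the reusable prefix stops (whitespace "tokens" as a stand-in for a real tokenizer, and prompts invented for the example):

```python
# Where does the shared prefix end between two back-to-back requests when a
# timestamp and user name sit above the question?
from itertools import takewhile

def shared_prefix_len(a, b):
    return sum(1 for _ in takewhile(lambda p: p[0] == p[1], zip(a, b)))

req_1 = ("SYSTEM: You are ChatGPT, a helpful assistant. The time is 6:51 ET. "
         "The user's name is John Smith. USER: Hi, I was wondering...").split()
req_2 = ("SYSTEM: You are ChatGPT, a helpful assistant. The time is 6:52 ET. "
         "The user's name is Jane Doe. USER: Hi, I was wondering...").split()

# Only the static header before the timestamp matches as a prefix; everything
# from the timestamp onward -- including "Hi, I was wondering" -- can't be reused.
print(shared_prefix_len(req_1, req_2))   # 10
```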
With KV caching as it’s described there, it has to be a prefix match. OpenAI state in their docs that they don’t cache anything shorter than 1024 tokens, and I’m sure I read somewhere that they only cache in 1024-token blocks (so 1024, 2048, 3072, etc.) but I can’t find it now.
There’s been some research into how to cache chunks in the middle, but I don’t think any of the providers are doing it yet because it needs the prompt to be structured in a very specific way.
https://platform.openai.com/docs/guides/prompt-caching#requi...
> Caching is available for prompts containing 1024 tokens or more.
No mention of caching being in blocks of 1024 tokens thereafter.
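For what it's worth, the open-source engines hint at how the bookkeeping can work: vLLM-style prefix caching hashes fixed-size blocks of tokens, with each block's hash folding in everything before it, so a lookup naturally stops at the first block that differs. A rough sketch (the block size here is arbitrary and OpenAI's internal granularity isn't documented; the 1024-token minimum is the figure from the docs quoted above):

```python
# Sketch of block-hashed prefix caching (vLLM-style), not any provider's
# actual implementation.
BLOCK = 256
MIN_CACHEABLE = 1024

cache: dict[int, str] = {}   # chained prefix hash -> stored KV blocks (placeholder)

def block_hashes(tokens):
    """Yield a hash per whole block, each folding in all blocks before it."""
    h = 0
    for start in range(0, len(tokens) - BLOCK + 1, BLOCK):
        h = hash((h, tuple(tokens[start:start + BLOCK])))
        yield start + BLOCK, h

def cached_prefix_len(tokens):
    """Leading tokens that could be served from cache for this prompt."""
    hit = 0
    for end, h in block_hashes(tokens):
        if h not in cache:
            break                      # first mismatching block ends the prefix
        hit = end
    return hit if hit >= MIN_CACHEABLE else 0

def store_prefix(tokens):
    for end, h in block_hashes(tokens):
        cache[h] = "KV tensors for this block"   # stand-in for real tensors
```

Because each hash chains in the blocks before it, a run of tokens that only matches somewhere in the middle of another prompt hashes differently, which is the same prefix-only constraint discussed above.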
This is a surprisingly good read on how LLMs work in general.
It’s funny, I didn’t set out for that to be the case. When I pitched the idea internally, I wanted to scratch my own itch (what on earth is a cached token?) and produce a good post. But then I realised I had to go deeper and deeper to get to my answer and accidentally made a very long explainer.
It was a real facepalm moment when I realised we were busting the cache on every request by including the date/time near the top of the main prompt.
Even just moving it to the bottom helped move a lot of our usage into cache.
Probably went from something like 30-50% cached tokens to 50-70%.
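For a back-of-the-envelope sense of why that one change mattered (numbers invented for illustration, not our actual prompts; assumes prefix-only caching with the 1024-token minimum from the OpenAI docs quoted upthread):

```python
# Invented numbers, just to show why moving a date/time from the top of the
# prompt to the bottom changes the cached fraction so much.
MIN_CACHEABLE = 1024     # per the OpenAI docs quoted upthread
PROMPT_TOKENS = 4000     # hypothetical total prompt size

def cached_fraction(stable_prefix_tokens):
    """Fraction of the prompt that can come from cache on a repeat request."""
    cached = stable_prefix_tokens if stable_prefix_tokens >= MIN_CACHEABLE else 0
    return cached / PROMPT_TOKENS

print(cached_fraction(40))     # date near the top: ~40 stable tokens -> 0.0
print(cached_fraction(3900))   # date at the bottom: ~3900 stable tokens -> 0.975
```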
Took me a minute to see it's the same ngrok that provided freemium tunnels to localhost. How did they adapt to the AI revolution?
It is the same ngrok!
The product has grown a lot since the mid 2010s. Still got free localhost tunnelling, but we also have a whole bunch of production-grade API gateway tooling and, as of recently, AI gateway stuff too.
Blog starts loading and then displays a "Something Went Wrong. D is not a function" error.
Could you tell me what browser/OS/device you’re using? A few people have said this and I haven’t been able to reproduce it.
You should upgrade IE6. It has been out of support for a while...
Link seems to be broken: content briefly loads then is replaced with "Something Went Wrong" then "D is not a function". Stays broken with adblock disabled.
Another person had this problem as well and we couldn’t figure out what’s causing it. We suspect something to do with WebGL support. What browser/device are you using? Does it still break if you disable all extensions? I’d love to fix this.