The real issue isn't that Claude is down; outages happen. The problem is that the status page doesn't report anything, even when it has been impossible to log in for the past hour. Status pages should be trustworthy, connected to real metrics, not fake PR stuff :/
EDIT: Now they show the issue, kudos to them! Transparency is key to building trust. Nobody expects a perfect service; thanks, Claude team, for your efforts.
This has consistently pissed me off. It seems like we've all just accepted that whatever they define as "functioning"/"OK" is suitable. I see the status shows now, but there should be a very loud third party ruthlessly running continuous tests against all of them. Ideally it would also provide proof of the degradation we all seem to agree happens (looking at you, Gemini): a leaderboard focused on actual live performance. Of course they'd probably quickly game that too. But something showing time to first response, rates of "global capacity reached" errors, an effective-throttling metric, an intelligence metric. Maybe crowdsourced stats, so they can't focus on improving the metrics just for the IPs associated with this hypothetical third-party performance watchdog.
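A core probe for that hypothetical watchdog could be as simple as timing the first streamed byte. A minimal sketch, assuming an OpenAI-compatible streaming chat endpoint (the URL, model name, and key below are placeholders, not real endpoints):

    # Minimal sketch of a time-to-first-token probe, assuming an
    # OpenAI-compatible streaming endpoint. URL/model/key are placeholders.
    import time
    import requests

    def time_to_first_token(url: str, api_key: str, model: str) -> float | None:
        start = time.monotonic()
        resp = requests.post(
            url,  # e.g. "https://example-provider/v1/chat/completions"
            headers={"Authorization": f"Bearer {api_key}"},
            json={
                "model": model,
                "messages": [{"role": "user", "content": "ping"}],
                "stream": True,
            },
            stream=True,
            timeout=60,
        )
        resp.raise_for_status()
        for _ in resp.iter_content(chunk_size=1):
            # The first byte of the streamed body marks "time to first token".
            return time.monotonic() - start
        return None  # stream ended with no body; count as a failed probe

Run from many crowdsourced vantage points, logging the latency (or the error body) per provider, this is the raw data a live-performance leaderboard would need.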
The one that pissed me off the most was Gemini: 1) the Gemini chat app displaying "user cancelled request", and 2) the API reporting "user quota reached". Both were blatant lies. In the latter case, you could find the actual global-quota cause buried later in the error message. I don't know why there isn't more outrage. I'm guessing this sort of behavior isn't new, but it's never been so visible to me.
It does show issues: https://status.claude.com/
You need to use a user-reported status page; the incentives for self-reporting are broken.
Interesting. I just fixed something using Claude Code. But I am located in Central Europe.
> The problem is that the status page doesn’t report anything, even if it has been impossible to log in during the past hour.
When Claude took an extra day off, he forgot to report to the dashboard the hours he would be unavailable / unresponsive, which is probably why people here are complaining about the lack of a status update.
Wonder where I have seen that before?
I'm finding Qwen 27B is comparable to Sonnet, but my self-hosting has about five more 9s than whatever Anthropic is vibe-coding. I also don't have to worry about the quality of the model I'm being served varying from day to day.
Probably the most damning fact about LLMs is just how poorly written their parent companies' systems are.
> Probably the most damning fact about LLMs is just how poorly written their parent companies' systems are

I have been doing some work related to MCP and found gaps in the implementations in Claude and Codex. This is a relatively simple, well-defined spec, and yet both Claude Code and Codex CLI have incomplete/incorrect implementations.

During this investigation, I checked the CC repo and noticed they have 5000+ issues open. Out of curiosity, I skimmed through them, and many point to regressions, real bugs, simple changes, etc. Maybe they have some internal tracker they're using, but you would think that a company with functionally unlimited tokens and access to the best models would be able to use those tokens to get their own house in order.
My sense now is that the industry needs to generate a lot of hype right now, so we see showmanship like the kernel compiler and the agent swarms building a semi-functional browser, etc. Yet their own tooling has not correctly implemented their own protocol (MCP). They need all of us to believe these agents are more capable than they actually are; the more piles of tangled code you write and the more discipline you cede to their LLMs, the more dependent you are on those LLMs to even know what the code is doing. At some point, teams become incapable of teasing the code apart because no one understands it anymore.
Peeking at the issues in the repos and seeing big gaps in functionality, like Codex's missing support for MCP prompts and resources, is like looking behind the curtain at reality.
But do you actually treat LLMs as glorified autocomplete, or as puzzle solvers you give difficult tasks beyond your own intellect?
Recently I wrote a data transformation pipeline and added a note that the whole pipeline should be idempotent. I asked Claude to prove it or find a counterexample. It found one after 25 minutes of thinking; I estimate it would have taken me far longer, perhaps a whole day. I couldn't care less about using Claude to type code I already know how to write.
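For anyone unfamiliar with the property: a pipeline f is idempotent when f(f(x)) = f(x) for every input, and a counterexample is any x where running the transform twice diverges from running it once. A throwaway property check along those lines might look like this (the transform here is a made-up stand-in, not the actual pipeline):

    import random

    def transform(record):
        # Hypothetical stand-in for a pipeline step: clamp negatives, round.
        return {k: round(max(0.0, v), 2) for k, v in record.items()}

    def find_counterexample(samples):
        for record in samples:
            once = transform(record)
            if transform(once) != once:
                return record  # applying the transform twice changed the output
        return None

    samples = [{"x": random.uniform(-5, 5)} for _ in range(10_000)]
    print(find_counterexample(samples) or "no counterexample found")

Random sampling only refutes; proving idempotence, as the commenter asked Claude to do, still needs an argument over all inputs.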
I've tried a few models and some are decent, including Qwen's. I've tried a few harnesses, like Roo Code in VSCode, to put together something that in theory emulates the experience I get from VSCode + Claude or Copilot, but I generally find the experience extremely limited and frustrating.
How have you set things up to have a good experience?
Just to make one obvious critique: your cost per token is probably about 1000x higher than what they charge.
I'm pretty sympathetic to Anthropic/OpenAI just because they're scaling a pretty new technology by 10x every year. It's too bad Google isn't trying to compete on coding models, though; I feel like they'd do way better on the infra and stability side.
People keep saying this, and idk what I'm doing wrong. I'm using q8_0 on all the latest and greatest local models, and they just don't come close to Sonnet.
I've tried different harnesses, building my own, etc.
They are reasonably close to Haiku? Maybe?
What do you run it on? And even then, I'm guessing your tokens per second are not great?
> Probably the most damning fact about LLMs is just how poorly written their parent companies' systems are.
This seems like a popular take, but I think it's the other way around. Them dogfooding cc with cc is proof that it can work, and that "code quality" doesn't really matter in the end.
Before cc, claude.ai (their equivalent of ChatGPT) was meh. They were behind in features, behind in users, behind in mindshare. cc took them from "weirdos who use AI for coding" to "wait, you're NOT using cc? you freak" in ~1 year. And cc is a very big part of them reaching $1-2B monthly revenue.
Yes, it's buggy. Yes, the code is a mess (as per the leak, etc). But it's also the most used coding harness. And, on the technical side, having had cc as early as they did helped them immensely in having users, real-usage data, real-usage signals, and so on. They trained the models on that data, and trained the models in sync with the harness. And it shows: their models are consistently the highest ranked, both on benchmarks and on "vibes" from coders. Without that, they would have lacked that real-world data.
And if you look at the competition, it's even more clear. Goog is kinda nowhere with their gemini-cli, is all over the place with their antigravity-ex-windsurf, and while they have really good generalist models, the mindshare is just not there for coding. Same for oAI: they have an open-source, rust-based, "solid" cli, and they have solid models (esp in code review, planning, architecture, bug fixing, etc), but they are not #1. Claude is, with their cc.
So yeah, I think it's really the other way around. Having a vibe-coded, buggy, bad code solution, but being the first to have it, the first to push it, and the first to keep iterating on it is really what sets them apart. Food for thought on the future, and where coding is headed.
QWEN3.5-Next-Coder does wonders. Its drawbacks are that time to first token is ~30 seconds while the model loads, and OpenCode has an unsolved timeout issue around that load; but otherwise, once it's warmed up, it's entirely serviceable.
I've got an AMD395+ with 128GB, so running a ~46GB model gives me about 85k tokens of context. That handles copy/paste/find/replace behavior easily; it mocks up new components; it can wire in some functionality, but that's usually at its limits and requires more debugging.
I've been looking at scheduling it with systemd to keep a wiki up to date on a long-running project, and it breaks the "blank page" problem when extending behaviors in a side project.
I understand some of these larger models can do things faster and smarter, but I don't see how they can implement the novel functionality required for the type of app I'm concerned with. If I just wanted to make endless CRUD or TODO apps, I'm betting I could figure out a loop that's mostly hands-off.
I am a believer that everyone should have their main flow be model/provider agnostic at a high level. I often run out of Claude tokens and use GLM-5 as a backup.
https://gist.github.com/ManveerBhullar/7ed5c01a0850d59188632...
Simple script I use to toggle which backend my Claude Code is using.
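Roughly the shape of such a toggle, as a sketch: this assumes the fallback provider speaks an Anthropic-compatible API and that Claude Code honors ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN (check your provider's docs; the endpoint URL and the GLM_API_KEY variable below are placeholders):

    import os
    import subprocess
    import sys

    # Hypothetical backend table; "anthropic" leaves the default login alone.
    BACKENDS = {
        "anthropic": {},
        "glm": {
            "ANTHROPIC_BASE_URL": "https://example-glm-provider/api/anthropic",
            "ANTHROPIC_AUTH_TOKEN": os.environ.get("GLM_API_KEY", ""),
        },
    }

    backend = sys.argv[1] if len(sys.argv) > 1 else "anthropic"
    env = {**os.environ, **BACKENDS[backend]}  # overlay the chosen backend
    subprocess.run(["claude", *sys.argv[2:]], env=env)

Usage would be something like `python toggle.py glm` to fail over when Claude is down, with remaining arguments passed through to the claude CLI.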
I tried the agnostic thing for a while, but there are enough quirks between the providers that I gave up trying to normalize it. GPT5.x wipes the floor with other models for my specific tool calling scenarios. I am not going to waste time trying to bridge arbitrary and evolving gaps between providers.
I put my Amex details into OAI, I get tokens, it just works. I really don't understand what the hell is going on with Claude. The $200/m thing is so confusing to me. I'd rather just go buy however many tokens I plan to use. $200 worth of OAI tokens would go a really long way for me (much longer than a month), but perhaps I am holding it wrong.
Being model agnostic and being provider agnostic are orthogonal concerns.
E.g., you can run Claude models on AWS Bedrock, giving you provider choice for the same model. Whether you also need model agnosticism at that point is a very different question.
Interesting; do you find they actually react the same way to the harness?
For older programmers: this is like when Stack Overflow would go down.
For really old programmers: this is like when Computer Literacy bookstore was closed.
This is pretty much every Monday morning, so it's either scaling issues during the busiest window (people getting started at work on Monday), or an intentional "outage" that only affects some percentage of people, shedding load so that API users (who pay more) can be served during the heaviest-usage time of the week.
OAuth is failing; I can't log in via Claude Code.
Same here. Usage limits are still pretty insane too.
Same here.
If you still need access, we balance across Claude and AWS via https://kilo.ai/docs/gateway, and you can BYOK for many providers.
I am currently using it, and have been over the last few hours (as of now, 25 minutes after this post went up), without noticeable degradation.
Edit: But the status page - at least as of now - is clearly communicating elevated error rates.
Yup, displays as an "auth" issue for me. Just a nice reminder: my original plan was to be provider agnostic, but everything was working so well with cc that I lost sight of it lol.
Here's hoping they can get it sorted quickly. Hopefully these are just growing pains and not indicative of a GitHub-style inability to achieve stability.
Not that it's the best indicator, but Downdetector is showing spikes for many services at exactly the same time Claude Code's issues began.
Downdetector always shows spikes when you go looking, and then they retroactively remove the ones that turn out to be fake.
Still down for me. (And still nothing on the status page!)
Seems to be good now; just logged in successfully. "Can't live without Claude nowadays" is the life lesson I took from my own downtime retro lol.
Failures all over Code and Chat here too (London, UK), and the status is showing all green.
Here in my corner of Europe it seems to be working fine.
How long until the last 9 is gone too?
What are decent alternatives to Claude Code?
Codex is great.
I've found minimax to be quite good
Probably OpenCode; it also works with Claude.
A keyboard /s
The downtime forces me to re-examine my utterly dependent relationship with agentic assistance. The inertia to start engaging with my code directly is higher than it has ever been.
Yeah. It's actually starting to make me anxious. I think I got addicted to these agents.
They banned all third-party clients.
Loads of people cancelled their subscriptions.
This should be the lightest load they've been under in months, yet it's unreliable.
Crazy that people are going with their benchmaxxed models.
Link for up top: https://status.claude.com/incidents/vfjv5x6qkd4j
I don't have any issues
Yup for me too - VSC Claude is def down and not working
Def down, keeps saying internal server error
Claude Code inside the desktop app works for me.
Claude isn't down. He's on vacation for today and took an extra day off after the weekend.
He'll be back to work by tomorrow.
Wtf. Was this just scrubbed/pushed down from the front page?
IIRC threads that are just "yup, seeing this too" are not seen as being valuable here. There isn't (or at least wasn't) much discussion happening.