Are employees from Anthropic botting this post now? This should be one of the top most voted posts in this website but it's nowhere on the first 3 pages.
Also remember: using Claude to code might make the company you're working for richer, but you're letting your own skills atrophy (I've seen it first hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.
> Your next interview won't be testing your AI skills
Not that I disagree with your overall point, but have you interviewed recently? 90% of the companies I interacted with required (!) AI skills, and asked me to explain exactly how I "leverage" them to increase my productivity.
Are they just looking for AI skills? If so that's terrifying.
Probably, I think hand coding is going the way of the dodo and the ox cart.
They need to keep an emergency backup Claude to fix the production Claude when it goes down.
(More seriously I wonder if they'd consider using Openai or Gemini for this purpose)
Opus and Sonnet are still working fine in AWS Bedrock (and probably Google Vertex), so they genuinely do have an emergency backup Claude they can use.
Aren't Bedrock and Vertex just pass-throughs to Anthropic's servers? I didn't know AWS/Google were deploying the actual models.
AWS actually hosts the models. Security & isolation is part of the proposed value proposition for people and organizations that need to care about that sort of stuff.
It also allows for consolidated billing, more control over usage, being able to switch between providers and models easily, and more.
I typically don’t use Bedrock, but when I have, it’s been fine. You can even use Claude Code with a Bedrock API key if you prefer:
https://docs.aws.amazon.com/bedrock/latest/userguide/what-is...
https://code.claude.com/docs/en/amazon-bedrock
(I am not affiliated with AWS in any way. I’m just a user stuck in their ecosystem!)
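Per the linked docs, pointing Claude Code at Bedrock is mostly environment configuration. A minimal sketch, assuming your AWS account already has Bedrock model access (the profile name and region below are illustrative, not prescriptive):

```shell
# Authenticate with AWS first (the pre-session friction mentioned elsewhere
# in the thread). The profile name here is an illustrative placeholder.
aws sso login --profile my-bedrock-profile
export AWS_PROFILE=my-bedrock-profile
export AWS_REGION=us-east-1

# Tell Claude Code to route requests through Bedrock instead of the
# Anthropic API, then start a session as usual.
export CLAUDE_CODE_USE_BEDROCK=1
claude
```

How you authenticate (SSO, access keys, an instance role) depends on your org's setup; the only Claude-Code-specific part is the `CLAUDE_CODE_USE_BEDROCK` flag.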
I’ve been using Claude Code w/ bedrock for the last few weeks and it’s been pretty seamless. Only real friction is authenticating with AWS prior to a session.
Bedrock runs all their stuff in house and doesn’t send any data elsewhere or train on it, which is great for organizations that already have data governance sign-off with AWS.
Maybe they can use the ultimate backup...human programmers!
I switched from OpenAI to Anthropic over the weekend due to the OpenAI fiasco.
I haven't been using the service long enough to comment on the quality of the responses/code generation, although the outages are really quite impactful.
I feel like half of my attempts to use Claude have been met with an error or outage, and meanwhile the usage limits on Claude Code seem quite intense. I asked Claude to make a website to search a database. It took about 6 minutes for Claude to make it, and that used 60% of my 4-hour quota window. I wasn't able to refine it past asking for some basic font changes before I became rate limited. In under 30 minutes my entire 4-hour window was used up.
Meanwhile with ChatGPT Codex, a multi-hour coding session would still have 20%+ available at the end of the 4/5 hour window.
I have been using Anthropic almost exclusively for a year, while trying other models, and this has literally never happened. I have NEVER experienced a downtime event; at most a random error in a chat that is immediately resolved on the subsequent request. I use the desktop app, the mobile app, and the API with several apps in production that I monitor, and reliability has never been an issue.
I pay about $1500 per month on personal api use fyi.
I’ve had semi-regular downtime since I started using Claude about two months ago. I love it, but I find it less reliable than the alternatives. This is evidenced by their status page (which regularly shows red bars).
You're not wrong; for sufficiently simple cases it's at a disadvantage. But once things get complicated, it wins by being the only thing you can get to work without going insane.
And yeah, any serious use completely assumes a Max sub.
Jarred (from Bun) said that a lot of the errors are because of how much they've scaled in users recently (i.e., the flock that came from OpenAI).
The first scaling event was after their highly successful Super Bowl ad and the second was being on the right side of history over the weekend.
this has been an issue for years at this point... other labs are hardly any better tho
I hope they improve their incident response comms in the future. 2.5 hours with nothing more than "We are continuing to investigate this issue" is pretty poor form. Their past history of incident handling looks just as bad.
keeps going down. One more time and I'm moving to Codex. Or hell, I better go back to using my actual brain and coding, god forbid. Fml.
Please relearn to use your brain.
I cannot imagine how you can properly supervise an LLM agent if you can't effectively do the work yourself, maybe slightly slower. If the agent is going a significant amount faster than you could do it, you're probably not actually supervising it, and all kinds of weird crap could sneak in.
Like, I can see how it can be a bit quicker for generating some boilerplate, or iterating on some uninteresting API weirdness that's tedious to do by hand. But if you're fundamentally going so much faster with the agent than by hand, you're not properly supervising it.
So yeah, just go back to coding by hand. You should be doing that probably ~20% of the time anyhow, just to keep in practice.
You'll be back :)
Yeah, the influx of people is disrupting my work, but it brings me joy to witness OpenAI’s decline in consumer support. So much for their Jony Ive product, whatever it was.
I am so baffled that someone with the stature of Jony Ive fell prey to scam Altman empty promises. I would have expected much more of him.
Altman put all of his attribute points on lying.
The service has been inconsistent and/or down for the last 12 hours.
I'm basing my next projects on the ability of Claude Code to write code for me. These disruptions are scary.
I was having an extended incognito chat with claude.ai, and then it stopped responding. I saved the transcript in a notepad and checked in another tab whether the service was down. I wonder if the incognito session is gone, and whether by reposting the transcript I can resurrect it. I have done so with Gemini, but there the transcript has markers like "Gemini said", which I do not see here. If anyone knows, I'd appreciate a solution.
This, right now, is making the case for OSS AI and local inference. $200/mo to get rate limited makes an RTX 6000 Pro look cheap.
Seems to be the biggest outage yet. Might be related to the power loss events in the UAE; the timing is suspicious as more datacenters appear to be hit.
If you look at their status page, something has been bubbling for the past week
https://status.claude.com
Never noticed it being outright down like this except for today (and yesterday); the only real downtime has been a few failed requests that worked after a retry, which coincided with AWS datacenters going offline.
> Might be related to the power loss events in the UAE; the timing is suspicious as more datacenters appear to be hit.
More datacenters? I thought it was just one.
The strikes are actually still ongoing afaik.
A not particularly large AWS region on the other side of the world? Doubt it.
Well, there have been some pretty large deals going on in the UAE, especially around AI, since they can bring up almost any power capacity with a flick of their fingers at an unbeatable price, and latency doesn't really matter for AI since time to first token is usually seconds anyway. And it's not just AWS; it's the entire region.
Anyone else find this timing odd given the DoD ban?
Who fixes the AI when the AI is down? Semi-serious, since they're pretty big on not writing code.
The same guy who used to fix stack overflow, presumably
Most ops fixes don’t involve writing code though.
Already made the switch back to Codex :-)
I won't hate you for downvoting me, but this is heroin-grade schadenfreude.
“98.92% uptime” is horrendous and unacceptable.
Only one 9 of availability means you are seriously unreliable.
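For scale, 98.92% uptime works out to roughly 7.8 hours of downtime in a 30-day month. A quick back-of-the-envelope check:

```shell
# Downtime per 30-day month at 98.92% uptime, in hours:
# unavailable fraction (1 - 0.9892) times hours in the month (30 * 24).
awk 'BEGIN { printf "%.1f\n", (1 - 0.9892) * 30 * 24 }'
# prints 7.8
```

Over a full year that's about 95 hours, versus roughly 9 hours for "two nines" (99%) and under an hour for "four nines" (99.99%).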
There are 2 9s in 98.92.
well actually, since 1 == 0.999999… and 98.92 is 98.91999999…, there are an infinite number of 9s
“Wait you mean sequential 9s!? Here I was waiting for just the right time to turn it back on…”
I’m very proud of our 0.999999% uptime. Six nines!
underrated...
Oh come on guys, this one is at least funny.
But code is solved?
Why do you assume this is a code issue? They were literally banned by DoD and then suddenly go down? There is at least a question to ask there, no?