That's a lot of waffle to try and say 'we've got a really scary next model coming too, real soon, promise!'
More like they realized how much money they were wasting letting the proles generate slop and vibe code the same CRUD app they rewrote in 5 different JavaScript frameworks a few years back.
The money is in enterprise and government. The consumer market doesn’t remotely pay enough. It’s just the same story with Microsoft purposely making Windows an unusable mess because that’s not where they make their money. It was good to establish themselves, but that market is getting dumped.
Wait six months, get the Chinese version.
It's changing as we speak; z.ai is the first one to show differential pricing.
I love that, in the era of having LLMs summarize everything, all of these companies have opted for what I call the “YouTube streamer apology video” tone and length for these announcements.
This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I’m still signing up though.
Requiring verified access is a good idea to mitigate risks from hacking while still giving people access to the latest models. Take notes, Anthropic.
A 5.4 spin with slightly different guardrails is not "access to the latest models". We know this to be true from the article because they have a section entitled "Looking ahead to our upcoming model release and beyond". I wonder if they didn't just feel like they were caught out by Mythos.
It seems like local LLMs will get popular for cybersecurity if this trend of locking access to models continues.
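And the barrier is already low. A minimal sketch, assuming a local Ollama server with a model pulled (the model name and prompt are illustrative, not a recommendation): Ollama exposes an OpenAI-compatible API, so existing client code barely changes.

    # Sketch: point the standard OpenAI Python SDK at a local Ollama
    # server, which serves an OpenAI-compatible API at /v1 by default.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # default local Ollama endpoint
        api_key="ollama",  # the SDK requires a key; a local server ignores it
    )

    resp = client.chat.completions.create(
        model="llama3.1",  # illustrative; use whatever model you've pulled
        messages=[{"role": "user", "content": "Walk me through triaging a CVE."}],
    )
    print(resp.choices[0].message.content)

No KYC and no guardrail roulette; whether local models are actually good enough for serious cybersec work is a separate question.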
I completed the "Trusted Access" verification, but it seems to have unlocked nothing in the OpenAI API or Codex models.
Just FYI for others.
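For anyone who wants to verify on their own account: a quick check, assuming the standard OpenAI Python SDK with a key in OPENAI_API_KEY (I don't know what the gated model IDs would even be called, so this just diffs the visible list), is to dump the models before and after verification:

    # Sketch: list every model ID this API key can see. Run it before
    # and after verification and compare the output.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    for model_id in sorted(m.id for m in client.models.list()):
        print(model_id)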
"trusted" + openai just simply doesn't compute for me any more
>democratized access
>partner with a limited set of organizations for more cyber-permissive models.
I get where they're going with this, but it's still rather hilarious that they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement.
It must be representative democracy! And our representative is... Larry Ellison. Oh no.
Too little, too late. OpenAI's shit has been nearly worthless for cybersec for, what, a year already?
ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.
And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.
What I'm most afraid of is that Anthropic is going to snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.
That’s the whole point of this variant of the model, it won’t have those guardrails.
Yes. But "perform a humiliation ritual of KYC to access the actual model instead of the nerfed version of it that's so neurotic about cybersec you have to sink 400 tokens into getting it to a usable baseline" does not inspire any confidence at all.
> Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.
Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring that independent countries can't protect against the US attacking theirs.
Fortunately, this plan will backfire - the model capability is exaggerated and these "safeguards" don't reliably work.
Sounds totally reasonable to trust OpenAI and the sociopath sama.
This approach means only a tiny portion of the population will ever qualify. Doesn't that make everyone else beholden to those few, who are beholden to OpenAI?
Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.
> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.
KYC isn't democratic and doesn't prevent arbitrary favoritism, it's the opposite: It's used to control people and to favor friends and exclude enemies.
> Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.
That kind of thinking is exactly why LLMs are so censored: people think OAI should be liable if someone uses ChatGPT to commit cyber crimes.
How about this: cyber crimes are already illegal, so we punish whoever uses the new tools to commit them instead of holding the tool maker liable.
This gets complex if LLMs enable children to commit complex crimes but that's different from just outright restricting the tool for everyone because someone might misuse it
There's always some wedge issue that means "don't punish the toolmaker" is not politically viable. You can pick from guns to legal drugs to illegal drugs to all kinds of emotive things.
And once the wedge is in and the concept of maker responsibility is planted, it expands to people's pet issues, obviously.
The actual line of who gets punished just ends up at some equilibrium in the middle. Largely arbitrarily.
So who is at fault in your solution, the org who created and shipped the software bug, or the company that discovered it?
I don't see how OpenAI is Ford in your analogy as OpenAI didn't make the software that blew up.