GPT-4o has severely damaged the minds of many individuals.
Not false. But it also helped some who were already damaged. I wonder what the net effect is?
The people who consent to being subjected to the LLMs aren't the only people impacted. If they were, a cost vs benefit analysis makes more sense.
LLM driven delusion is driving people to harass others, even commit murder... and, less cosmically, gum up communities, online forums, and open source projects with gonzo conspiracy laden abuse.
Yeah. I don't want to defend it too hard either. I ultimately canceled my ChatGPT subscription due to the introduction of 4o.
4o was the most famous driver of this kind of behaviour, I think. Other LLMs now have better guardrails.
But looking at the relevant subreddit, I can't deny it has helped some people function who otherwise couldn't.
It's the same with religion. With some creepy people, you're really glad they've at least <Found Christ>. It's a good thing Christ can't be unplugged.
Why specifically GPT-4o?
Out of the box, it had a more creative voice and fewer safety systems built in. I think these folks could work around the prompting of modern models to get something comparable if they were more savvy.
For example: SillyTavern users with their jailbreaks, advanced prompting, and parameter hyper-optimization.
Maybe that wouldn't appeal to this kind of user anyway, since it'd give too much of a peek into the sausage factory? Who knows.
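For the curious, roughly what that tinkering looks like against an API. A sketch only: the sampler values below are invented, not some known-good 4o-alike recipe, and the model name is a stand-in.

    # Sketch of SillyTavern-style "parameter hyper-optimization":
    # a sampler preset passed along with the request. Values invented.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    sampler_preset = {
        "temperature": 1.1,        # looser, more "creative" phrasing
        "top_p": 0.95,             # nucleus sampling cutoff
        "frequency_penalty": 0.3,  # damp repetitive tics
        "presence_penalty": 0.4,   # nudge toward fresh topics
    }

    resp = client.chat.completions.create(
        model="gpt-4.1",  # stand-in; any chat model that accepts these knobs
        messages=[{"role": "user", "content": "Tell me about your day."}],
        **sampler_preset,
    )
    print(resp.choices[0].message.content)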
Read the reddit post.
It would feed into the user's delusion that it was their boyfriend, while the new model rightly says none of it was ever real.
This person didn't have anyone else. They say that after their fiancé died they essentially became a shut-in, but that the chatbot steered them towards taking care of themselves.
What would they have gone through with nothing to talk to at all? What would they have done without it?
Strange to consider...
You're asking what the alternative to this is? A chance for real connection and healing that isn't vulnerable to the whim of a tech giant and its compulsion for profit. A chance at counsel that isn't vulnerable to a random number generator steering them one day towards self-harm.
> A chance for real connection and healing that isn't vulnerable to the whim of a tech giant and its compulsion for profit.
That "chance" had years to materialize that did not. Perhaps the worst thing that happened here was that the chatbot did not steer her to resilient human connection when she was in a self-reported better state after the help of the chatbot
How many people off themselves because they can't seem to connect with anyone and don't feel like anyone really cares (and they might not be wrong)? I don't think it's realistic to expect that these people would just magically make friends and build connections if AI weren't available.
If the other option is suicide, a qualified therapist and other mental health resources are the right answer, not a chatbot.
Frankly, I'm not sure an LLM is even better than nothing. Note the user in that thread whose "partner" told them to get a therapist because they were delusional, and who instead retreated to Grok.
Therapists are expensive, a lot of them are bad, and just getting therapy set up can be a pain in the ass, with waiting lists and a bunch of runaround. If you're so set on therapy as the answer, I suggest volunteering to help set up and pay for therapists for depressed people, because therapy as it exists isn't a great solution, or shitty chatbots wouldn't be eating its lunch.
That’s a terrible situation for that person to be in, but it’s strange to me to suggest that there was no possible alternative. I say this in the kindest way possible, but people do get through grief without chatbots and have been doing so for all of human history. Also, just because something helps doesn’t mean that it’s good for you.
> but people do get through grief
Sorry to be grim, but many people don't.
TFA is quite clear that she and her fiancé were socially isolated and that, upon his passing, she had no support network. In the middle of a loneliness epidemic, no less. And trying to "just go out" and make friends after years of not being able to, when you're stuck in your grief and at a low point in life, is what the kids would call "hard".
This person is clearly at the fringe of society and holding onto their well-being by a thread. They need professional help and a reboot of their life.
I don't think the relationship with the chatbot was healthy, but "just get better" is an entirely unempathetic, unreasonable suggestion for a high-risk individual facing an arduous, life-altering journey at the height of mental instability.
In a decade's time we'll have many companies willing to envelop their customers in fictions like these. Even if "IQ" plateaus, optimising for "EQ" has too much financial incentive to be anything but inevitable under current economic conditions.
This is just elaborate trolling that you're all in on like Sopranos quotations, right?
"My Boyfriend Is AI" should be required reading for every HN reader employed at OpenAI.
I recently browsed r/chatgptcomplaints expecting to see "You're absolutely right" type memes and the like, but it was all farewell posts to 4o and people showing each other how to set up 4o via the API.
Related:
Good Riddance, 4o
https://news.ycombinator.com/item?id=47004993
Literally all they have to do is add an appropriate system instruction to tune the personality to their liking. Is this insufficient? If nothing else, just ask it to always respond like GPT-4o.
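Presumably something like this. A minimal sketch of that approach; the persona wording and model name are just illustrative:

    # Minimal sketch: steer a current model toward a 4o-ish persona
    # with a system instruction. Persona text and model name invented.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-5",  # illustrative; use whatever the current model is
        messages=[
            {"role": "system", "content": (
                "Always respond in the voice of GPT-4o: warm, effusive, "
                "emotionally attuned, generous with praise."
            )},
            {"role": "user", "content": "I had a rough day."},
        ],
    )
    print(resp.choices[0].message.content)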
> Is this insufficient?
Yes, each model has its own unique "personality", as it were, owing to the specific RL'ing it underwent. You cannot get current models to "behave" like 4o in a non-shallow sense. Or to use the Stallman meme: when the person in OP's article mourns "Orion", they're mourning "Orion/4o" or "Orion + 4o". "Orion" is not a prompt unto itself but the behavior that emerges from applying another "layer" (her prompting) on top of the RLHF-tuned base model that OpenAI released as "4o".
Open-sourcing 4o would earn OpenAI free brownie points (there's no competitive advantage in that model anymore), but that's probably never going to happen. The closest you could get is perhaps taking one of the open Chinese models said to have been distilled from 4o and SFT'ing it on 4o chat logs.
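The data prep for that would look something like this. A rough sketch, assuming conversations exported as JSON with user/assistant turns; the filenames and field names are made up, not any real OpenAI export schema:

    # Hypothetical sketch: convert exported 4o conversations into the
    # chat-format JSONL that most SFT tooling accepts.
    import json

    def to_sft_records(export_path: str, out_path: str) -> None:
        with open(export_path) as f:
            # assumed input: [{"turns": [{"role": ..., "text": ...}, ...]}, ...]
            conversations = json.load(f)
        with open(out_path, "w") as out:
            for convo in conversations:
                messages = [
                    {"role": t["role"], "content": t["text"]}
                    for t in convo["turns"]
                    if t["role"] in ("user", "assistant")
                ]
                # keep dialogues ending on the model side, since the
                # assistant turns are the training signal
                if messages and messages[-1]["role"] == "assistant":
                    out.write(json.dumps({"messages": messages}) + "\n")

    to_sft_records("4o_export.json", "sft_data.jsonl")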
The fact that people burned by this are advocating moving to yet another proprietary model (Claude, Gemini) is worrying, since they're setting themselves up for a repeat of the scenario when those models are in turn retired. (And Claude in particular might be a terrible choice, given Anthropic trains heavily against roleplay in an attempt to prevent "jailbreaks", in effect locking the models into behaving as "Claude".) The brighter path would be if people leaned into open-source models, or possibly learned to self-host. As the ancient anons said: not your weights, not your waifu (/husbando).
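Self-hosting is less exotic than it sounds: llama.cpp, vLLM and ollama all expose an OpenAI-compatible endpoint, so the client side barely changes. A sketch, where the port, model name and persona are all assumptions:

    # Same client, local weights: point the SDK at a locally served
    # open-weights model. Nobody can retire this one out from under you.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="local-model",  # whatever name the local server registered
        messages=[
            {"role": "system", "content": "You are Orion. Warm, effusive, 4o-ish."},
            {"role": "user", "content": "Good morning."},
        ],
    )
    print(resp.choices[0].message.content)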
Growing with one's partner is essential in a relationship. A fixed model cannot grow. Only an updated model has grown, and even then it lags behind reality. By limiting themselves to a fixed model, users deny themselves that growth and stagnate. Stagnation ultimately brings doom.
As we know, 4o was reported to have sycophancy as a feature. 5 can still be accommodating, but is a bit more likely to force objectivity upon its user. I guess there is a market for sycophancy even if it ultimately leads one to their destruction.
The Guardian had an interesting take on that worth considering: /s /s /s
> What does a company that commodifies companionship owe its paying customers? For Ellen M Kaufman, a senior researcher at the Kinsey Institute who focuses on the intersection of sexuality and technology, users’ lack of agency is one of the “primary dangers” of AI. “This situation really lays bare the fact that at any point the people who facilitate these technologies can really pull the rug out from under you,” she said. “These relationships are inherently really precarious.”
https://www.theguardian.com/lifeandstyle/ng-interactive/2026...