I'm not much on X anymore due to the vitriol, and visiting now kinda proved it. Beneath almost every trending post made by a female is someone using grok to sexualize a picture of them.
(And whatever my timeline has become now is why I don't visit more often, wtf, used to only be cycling related)
I left when they started putting verified (paid) comments at the top of every conversation. Having the worst nazi views front and center on every comment isn't really a great experience.
I've got to imagine that Musk fired literally all of the product people. Pay-for-attention was just such an obviously bad idea, with a very long history of destroying social websites.
To be fair, as someone who used to manage an X account for a very small startup as part of my role (glad that's no longer the case), for a long time (probably still the case) posting direct links would penalize your reach. So making a helpful, self-contained post your followers might find useful was algorithmically discouraged.
Everything that is awful in the diff between X and Twitter is there entirely by decision and design.
Vagueposting is a different beast. There’s almost never any intention of informing etc; it’s just: QT a trending semi-controversial topic, tack on something like “imagine not knowing the real reason behind this”, and the replies are jammed full of competitive theories as to what the OP was implying.
It’s fundamentally just another way of boosting account engagement metrics by encouraging repliers to signal that they are smart and clued-in. But it seems to work exceptionally well because it’s inescapable at the moment.
Vague posting is as old as social networks. I had loads of fun back in the day responding to all the "you know who you are" posts on Facebook when they clearly weren't aimed at me.
They also don’t take down overt Nazi content anymore. Accounts with all the standard unambiguous Nazi symbologies and hate content about their typical targets with associated slurs. With imagery of Hitler and praises of his policies. And calls for exterminating their perceived enemies and dehumanizing them as subhuman vermin. I’ve tried reporting many accounts and posts. It’s all protected now and boosted via payment.
Inviting a debate about what it was or wasn't only leads to a complete distraction over the interpretation of a gesture, when the dude already digs his own hole more deeply and more clearly in his feed anyway.
There's no debate. It was the most obvious nazi salute you could do. The only people who say it's not are nazis themselves, who of course delight in lying. (see Sartre quote.)
My comment was in response to the debate already starting, so it's quite bold to claim no debate will be had (i.e. "debate" does not mean "something I personally am on the fence about"; it's something other people will hold in response to your views). Whether there will or won't be debate about something is (thankfully) not something you or I get to declare. It just happens or doesn't, and it already had - and so it remains.
I'm sure "The only people who say it's not are <x>" is an abominable thought pattern Nazis and similar types would love everyone to have. It makes for a great excuse to never weigh things on their merits, so I'm not sure why you feel the need to invoke it when the merits are already in your court. I can't look at these numbers https://i.imgur.com/hwm2bI5.png and conclude most Americans are Nazi's instead of being willing to accept perhaps not everyone sees it the same way I do even if they don't like Nazis either.
To any actual Nazi supporters out there: To hell with you
To anybody who thinks either everyone agrees with what they see 100% of the time or they are a literal Nazi: To hell with you as well
The majority of people who had an opinion (32%) said it was either a Roman salute or a Nazi salute (which are the same thing). Lots of people had no idea (probably cuz they didn't pay attention). Only 19% said it was a "gesture from the heart", which is just parroting what Elon claimed, and I discount those folks as they are almost certainly crypto-Nazis.
So yeah, I believe there are a LOT of Nazi-adjacent folks in this country: they're the ones who voted for Trump 3 times even after they knew he was a fascist piece of garbage.
A few minor cleanups - I personally don't think they change anything (really, it's these stats themselves that lack the ability to do that anyways) but want to note because this is the exact kind of Pandora's box opened with focusing on this specific incident:
- Even assuming all who weren't sure (13%) should just be discounted as not having an opinion, like those who had not heard about it (22%), 32% is still not a majority of the remaining (100%-13%-22%) = 65%. 32% could have been a plurality of those with an opinion, but since you insisted on lumping things into 3 buckets of 32%, 35%, and remaining %, the remaining % of 33% would actually get the plurality of those who responded with opinions by this definition.
N.b. If read straight from the sheet, "A Nazi salute" would already have had a plurality. Though grouping like this is probably the more correct thing to do, it actually ends up significantly weakening the overall position of "more people agree than not" rather than strengthening it.
- But, thankfully, "A Nazi Salute" + "A Roman Salute" would actually have been 32+2=34%, so plurality is at least restored by more than one whole percentage point (if you excluded the unsure or unknowing)!
- However, a "Roman salute" (which is a bit of a farce of a name really) can't really be assumed to be fungible with the first option in this poll. If it were fully fungible, it could have been combined into that option. I.e. there's no way to tell which adults responding "A Roman salute" meant to be counted as "a general fascist salute, as the Nazis later adopted" or meant to be counted as "a non-fascist meaning of the salute, like the Bellamy salute was before WWII". So whichever wins this game of eeking out percentage points comes down to how each person wants to group these 2 percentage points. Shucks!
- In reality, between error margins and bogus responses, this is about as close as one could expect to get for an equal 3 way split between "it was", "it wasn't", and "dunno/don't care", and pulling ahead a percentage point or two is really quite irrelevant beyond that it is, blatantly, not actually a majority that agree it was a Nazi-style salute.
Even though I'm one who agrees with you that Elon exhibits neo-nazi tendencies, the above just shows how we go from "Elon replies directly supporting someone in a thread about Hitler being right about the Jewish community" and similar things constantly for years to debating individual percentage points to try to claim our favorite sub-majority says he likely made a one off hand gesture 3 years ago. Now imagine I was actually a Nazi supporter walking into the thread - suddenly we've gone from talking about direct pro-Nazi statements and retweets constantly in his feed to a chance for me to debate with you whether the majority think he made a one off hand gesture 3 years ago? Anyone concerned with Musk's behavior shouldn't touch this topic with a 20-foot pole so they can get straight to the real stuff.
Also... I've run across a fair share of crypto lovers who turn out to be neo-nazish, but I'm not sure how you're piecing together that such a large portion of the population is a "crypto-Nazi" when something like only 28% of the population has crypto at all, let alone is a Nazi too. At least we're past "anyone who disagrees with my interpretations can only be doing so as a Nazi" though.
Ah, you're almost certainly correct here! Akin to crypto-fascist, perhaps I'd seen too many articles talking about the negatives of crypto to see the obvious there.
I imagine I'm not the only one using HN less because both articles like this and comments like this are clearly being downvoted and/or flagged by a subset of users motivated by politics and the HN admin team seemingly doesn't consider that much of a problem. This story is incredibly relevant to a tech audience and this comment is objectively true and yet both are met with downvotes/flags.
Whether HN wants to endorse a political ideology or not, their approach to handling these issues is a material support of the ideologies these stories and comments are criticizing.
Yeah, this was my first reaction: this article is about tech regulation, which is relevant and on topic. If Grok causes extra legislation to be passed because of its lack of common decency in the pursuit of money, that is relevant. This is the entire argument around "we can't have accountability for tools, just people," which is ridiculous. The result of pretending that this type of thing doesn't happen is legislative responses.
PG and Garry Tan have both been disturbingly effusive in praising Musk and his various fuckeries.
Like, the entirety of DOGE was such an obviously terrible series of events, but for whatever reason, the above were both big cheerleaders on Twitter.
And yeah the moderation team here have been clearly letting everything Musk-related be flagged even after pushback. It's absolutely vile. I've seen many people try to make posts about the false flagging issue here, only to have those posts flagged as well (unapologetically, on purpose, by the mods themselves).
Anecdotally, I think that moderation has been a lot more lenient when it comes to political content in the last year than in years prior. I have no hard evidence that this is actually the case, but I think especially pre-2020 I'd see very little political content on HN and now I see much more. It's also probably true that both liberals and conservatives have become even more polarized, leading to bad-faith flagging and downvoting, but I'm not sure what could be done about that; it seems similar to anti-botting protections, which are an arms race.
I'm late to this, but I'm doubtful that that perception is correct. It's true there are fluctuations, as with anything on HN, but the baseline is pretty stable. But the perception that HN has gotten-more-political-lately is about as old as the site itself. In fact, it's so common that about 8 years ago I took a couple hours to track down the history of it: https://news.ycombinator.com/item?id=17014869.
Any thoughts about the issues raised up thread? This article being flagged looks to me to be a clear indication of abuse of the HN flagging system. Or do you think there are justifiable reasons why this article shouldn't be linked on HN?
My thoughts are just the usual ones about this: flags of stories like this on HN are a kind of coalition between some flaggers who are agenda-motivated (which is an abuse of flagging) and other flaggers who simply don't want to see repetitive and/or flamebaity material on the site (which is a correct use of flagging, and is not agenda driven because this sort of material comes at us from all angles). When we see flaggers who are consistently doing the first kind of flagging, we take away their flagging privileges.
The wild thing is that this article isn't even a political issue!
"Major Silicon Valley Company's Product Creates and Publishes Child Porn" has nothing to do with politics. It's not "political content." It is relevant tech news when someone investigates and points out wrongdoing that tech companies are up to. If another tech company's product was doing this, it would be all over HN and there would be pretty much no flagging.
When these stories get flagged, it's because people don't want bad news to get out about the company--it's not about avoiding politics out of principle.
I've been using https://news.ycombinator.com/active a lot more the last year, because so many important discussions (related to tech, but including politics or prominent figures like Musk) get pushed off the front page quickly. I don't think it's moderators doing it, but mass-flagging by users (or perhaps some automagic if the discussion is too intense, like number of comments or downvotes). Of course, it might be the will of the community to flag these, but it does feel a bit abused in the way certain topics get killed quickly.
I just found out about this recently and like this page a lot. Dang has a hard job to balance this. I think newcomers might be more comfortable with the frontpage and if you end up learning about the other pages you can find more controversial discussions. Can't be mad about the moderation hiding these by default. Although I think CSAM-Bad should not be controversial.
Even a year ago, when Trump was posting claims that he was a king, etc., these things got removed, even though there were obvious implications for the tech industry. (Cybersecurity alone rests on more political assumptions than it does on the hardness of the discrete logarithm, for example.)
I (and others) were arguing that the Trump administration is probably, and unfortunately, the most relevant topic to the tech industry on most any given day. This is because computer is mostly made out of people. The message that these political stories intersect deeply with technology (as is seen here) seems to have successfully gotten through.
I wish the most relevant tech story of every day were, say, some cool new operating system, or something cool and curiosity-inspiring like "you can sort in linear time" or "python is an operating system" or "i made X rewritten in Y" or whatever.
I think in most things, creation is much harder than destruction, but software and software systems are an exception where one individual can generally do more creation than destruction. So, it's particularly interesting (and jarring) when a few individuals are able to make decisions that cause widespread destruction.
We should collectively be proud that we have a culture where creation is easier than destruction. But it's also why the top stories of any given day will be "Trump did X" or "us-east-1 / cloudflare / crowdstrike is down" or "software widely used in {phones / servers} has a big scary backdoor".
This story belongs on this site regardless of politics. It is specifically about both AI and social media. Downvoting/flagging this story is much more politically motivated than posting/upvoting it.
I agree with that. But one, it is on the site, and two, how can the moderation team reasonably stop bad actors from downvoting it? They can (and probably do) unflag things that have merit or put them in the 2nd chance queue.
> But one, it is on the site, and two, how can the moderation team reasonably stop bad actors from downvoting it?
In 2020, Dang said [1]
> Voting ring detection has been one of HN's priorities for over 12 years: [...]
> I've personally spent hundreds of hours working on this, as well as tracking down voting rings of every imaginable sort. I'd never claim that our software catches everything, but I can tell you that it catches so much that I often go through the lists to find examples of good projects that people were trying ineptly to promote, and invite them to do it again in a way that is more likely to gain community interest.
Of course this sort of thing is inherently heuristic; presumably bots throw up a smokescreen of benign activity, and sophisticated bots could present a very realistic, human-like smokescreen.
> how can the moderation team reasonably stop bad actors from downvoting it
There are all sorts of approaches that a moderation team could take if they actually believed this was a problem. For example, identify the users who regularly downvote/flag stories like this that end up being cleared by the moderation team for unflagging or the 2nd chance queue and devalue their downvotes/flags in the future.
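A minimal sketch of what that weighting could look like (purely hypothetical; the function names, thresholds, and data shapes are invented, not anything HN actually does):

    # Hypothetical sketch: weight each user's flag by how often their past
    # flags were upheld by moderators rather than overturned.

    def flag_weight(history):
        """history: list of booleans, True if a past flag was upheld by mods."""
        if not history:
            return 1.0                       # no track record yet: full weight
        upheld_rate = sum(history) / len(history)
        # Habitual bad-faith flaggers (mostly overturned) count for very little.
        return max(0.1, upheld_rate)

    def effective_flag_count(flags):
        """flags: list of (user_id, history) pairs for a story's flaggers."""
        return sum(flag_weight(history) for _, history in flags)

    # The story only gets demoted once the *weighted* total crosses a
    # threshold, so a coordinated ring of low-credibility accounts has far
    # less effect than a handful of trusted users.
    FLAG_THRESHOLD = 5.0

Even something that crude would blunt coordinated flagging without mods having to review every single flag by hand.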
Accounts are free to make, so bad actors will just create and "season/age" accounts until they have the ability to flag, then rinse and repeat.
I think the biggest thing HN could do to stop this problem is to not make flagging affect an article's ranking until after a human mod reviews the flags and determines them to be appropriate. Right now, all bad actors apparently have to do is be quick on the draw, and get their flagging ring in action ASAP. I'm sure any company's PR team (or motivated Elon worshiper) can buy "100 HN flags on an article" on the dark web right now if they wanted to.
Why would a company like any one of Musk's need to buy these flags? Why wouldn't they just push a button and have their own bots get to work? Plausible deniability?
Who knows whether or not both happen? Ultimately, only the HN admins, and they don't disclose data, so we can only speculate and look for publicly visible patterns.
You can judge their trustworthiness by evaluating their employer's president/CEO, who dictates behavioral requirements regardless of the personal character of each employee.
That already happens. I got my flagging powers removed after over-using flag in the past. (I eventually wrote an email to the mods pledging to behave more judiciously and asked for the power back). As a user you won't see any change in the UI when this happens; the flags just stop having any effect on the back end.
There is one subtle clue. If your account has flagging enabled, then whenever you flag something there is a chance that your flag pushes it over the threshold to flagged state. If your account has flagging disabled, this never happens. This is what prompted me to ask dang if I'd been shadowbanned from flagging.
I would bet money that already happens, for flagging in particular, since it's right in the line of the moderation queue. For downvotes, it sounds like significant infra would be needed for a product that generates no revenue. I agree that I would like the problem to be solved as well, however!
I think there's brigading coming in to ruin these threads. I had several positive votes for a few minutes when stating a simple fact about Elon Musk and his support of neo-nazi political parties then -2 a min later
I have downvoted anything remotely political on hn ever since I got my downvote button, even (especially) if I agree with it. I always appreciated that being anti-political was the general vibe here.
The part where you brought up politics is when I noticed it was political.
But I generally consider something political if it involves politicians, or anyone being upset about anything someone else is doing, or any topic that they could mention on normal news. I prefer hn to be full of positive things that normal people don't understand or care about.
What's political here? The mere fact of the involvement of Dear Leader?
(As a long-term Musk-sceptic, I can confirm that Musk-critical content tended to get insta-flagged even years before he was explicitly involved in politics.)
There's almost no such thing as a non-political thing. Maybe the sky colour, except that other cultures (especially in the past) have different green/blue boundaries and some may say it's green. Maybe the natural numbers (but whether they start from 0 or 1 is political) or the primes (but whether 1 is prime is political).
I mean, honestly, you are wasting your time. Why would you expect the website run by the guy who likes giving Nazi salutes on TV to take down Nazi content?
There's no point trying to engage with Twitter in good faith at this point; only real option is to stop using and move on (or hang out in the Nazi bar, I guess).
They meant howlingmutant0 but I don't know which posts they refer to
The ones I reported, I deleted the report emails so I can't help you at this moment. I don't know why you're surprised - you can go looking yourself and find examples
Yeah, I went thru his media. There was some backwards swastika that someone had drawn on a synagogue. People were mocking the fact that idiots can't even draw that correctly.
1. Can you point to exact posts? I saw one swastika somewhere deep in media. It's a description of what swastika is - no different from wikipedia article.
I normally stay away too, but just decided to scroll through grok’s replies to see how wide spread it really is. It looks like it is a pretty big problem, and not just for women. Though, I must say that Xi Jinping in a bikini made me laugh.
I’m not sure if this is much worse than the textual hate and harassment being thrown around willy nilly over there. That negativity is really why I never got into it, even when it was twitter I thought it was gross.
Before Elon bought it out it was mostly possible to contain the hate with a carefully curated feed. Afterward the first reply on any post is some blue check Nazi and/or bot. Elon amplifying the racism by reposting white supremacist content, no matter how fabricated/false/misleading, is quite a signal to send to the rest of the userbase.
he's rigged the algorithm to boost content he interacts with, unbanned and stopped moderating nazi content and then boosted those accounts by interacting with them.
X wrote in offering to pay something for my OG username, because fElon wanted it for one of his Grok characters. I told them to make an offer, only for them to invoke their Terms of Service and steal it instead.
Hmm, I have an old Twitter account. Elon promised that he was going to make it the best site ever; let's see what the algorithm feeds me today, January 5 2026.
1. Denmark taxes its rich people and has a high standard of living.
2. Scammy looking ad for investments in a blood screening company.
3. Guy clearing ice from a drainpipe, old video but fun to watch.
4. Oil is not actually a fossil fuel, it is "a gift from the Earth"
5. Elon himself reposting a racist fabrication about black people in Minnesota.
6. Climate change is a liberal lie to destroy western civilization. CO2 is plant food, liberals are trying to starve the world by killing off the plants.
7. Something about an old lighthouse surviving for a long time.
8. Vaccine conspiracy theories
9. Outright racism against Africans, claiming they are too dumb to sustain civilized society without white men running it.
10. One of those bullshit AI videos where the AI doesn't understand how pouring resin works.
11. Microsoft released an AI that is going to change everything, for real this time, we promise.
12. Climate change denialism
13. A post claiming that Africa and South America aren't poor because they were robbed of resources during the colonial era and beyond, but because they are too dumb to run their countries.
14. A guy showing how you can pack fragile items using expanding foam and plastic bags. He makes it look effortless, but glosses over how he measures out the amount of foam to use.
15. Hornypost asking Grok to undress a young Asian lady standing in front of a tree.
16. Post claiming that the COVID-19 vaccine caused a massive spike (from 5 million to 150 million) in cases of myocarditis.
17. A sad post from a guy depressed that a survey of college girls said that a large majority of them find MAGA support to be a turn off.
18. Some film clip with Morgan Freeman standing on an X and getting sniped from an improbable distance
19. AI bullshit clip about people walking into bottomless pits
20. A video clip of a woman being confused as to why financial aid forms now require you to list your ethnicity when you click on "white", with the only suboptions being German, Irish, English, Italian, Polish, and French.
Special bonus post: Peter St Ogne, Ph. D claims "The Tenth Amendment says the federal government can only do things expressly listed in the Constitution -- every other federal activity is illegal." Are you wondering what federal activity he is angry about? Financial support for daycare.
So yeah, while it wasn't a total and complete loss it is obvious that the noise far exceeds the signal. It is maybe a bit of a shock just how much blatant climate change denialism, racism, and vaccine conspiracies are front page material. I'm saddened that there are people who are reading this every day and taking it to heart. The level of outright racism is quite shocking too. It's not even up for debate that black people are just plain inferior to the glorious aryan race on Twitter. This is supposedly the #1 news source on the Internet? Ouch.
Edit: Got the year wrong at the top of the post, fixed.
Makes me laugh when people say Twitter is "better than ever." Not sure they understand how revealing that statement is about them, and how the internet always remembers.
They don't outnumber anyone. There's always a minority of hardcore supporters for any side... plus enough undecided people in the middle who mostly vote their pocketbook.
What to do about it is to point out to those people in the middle how badly things are being fucked up, preferably with how those mistakes link back to their pocketbook.
The best use of generative AI is as an excuse for everyone to stop posting pictures of themselves (or of their children, or of anyone else) online. If you don't overshare (and don't get overshared), you can't get Grok'd.
There's a difference between merely existing in public, versus vying for attention in a venue where several brands of "aim this at a patron to see them in a bikini" machines are installed.
And so installing the "aim this at a patron to see them in a bikini" machines made the community vastly more hostile to women. To the point where people say "well what did you expect" when a woman uses the product. Maybe they shouldn't have been installed?
The number of people saying that it is not worthy of intervention that every single woman who posts on twitter has to worry about somebody saying "hey grok, take her clothes off" and then be made into a public sex object is maybe the most acute example of rape culture that I've seen in decades.
This thread is genuinely enraging. The people making false appeals to higher principles (eg section 230) in order to absolve X of any guilt are completely insane if you take the situation at face value. Here we have a new tool that allows you to make porn of users, including minors, in an instant. None of the other new AI platforms seem to be having this problem. And yet, there are still people here making excuses.
I am not a lawyer but my understanding of section 230 was that platforms are not responsible for the content their users post (with limitations like “you can’t just host CSAM”). But as far as I understand, if the platform provides tools to create a certain type of harmful content, section 230 doesn’t protect it. Like there’s a difference between someone downloading a photo off the internet and then using tools like photoshop to make lewd content before reuploading it, as compared to the platform just offering a button to do all of that without friction.
Again I’m not a lawyer and this is my interpretation of the #3 requirement of section 230:
“The information must be "provided by another information content provider", i.e., the defendant must not be the "information content provider" of the harmful information at issue”
If grok is generating these images, I am interpreting this as Twitter could be becoming an information content provider. I couldn’t find any relevant rulings but I doubt any exist since services like Grok are relatively new.
1) These images are being posted by @Grok, which is an official X account, not a user account.
2) X still has an ethical and probably legal obligation to remove these images from their platform, even if they are somehow found not to be responsible for generating them, even though they generated them.
At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything. Lots of people are asking if it might, and if that's what X is relying on here.
I brought up Section 230 because it used to be that removal of Section 230 was an active discussion in the US, particularly for Twitter, pre-Elon, but seems to have fallen away.
With content generated by the platform, it certainly seems reasonable to work out how Section 230 applies, if at all, and I think that Section 230 protections should probably be removed for X in particular.
> At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything.
You are correct; I read your earlier post as "did we forget our already established principle"? I admit I'm a bit tilted by X doing this. In my defense, there are people making the "blame the user, not the tool" argument here though, which is the core idea of section 230
> None of the other new AI platforms seem to be having this problem
The very first AI code generators had this issue, where a user could make illegal content by making specific requests. A lot of people, me included, saw this as a problem, and there were a few copyright lawsuits arguing this. The courts, however, did not seem to be very sympathetic to this argument, putting the blame on the user rather than the platform.
Here is hoping that Grok forces regulations to decide on this subject once and for all.
Elon Musk mentioned multiple times that he doesn't want to censor. If someone does or says something illegal on his platform, it has to be solved by law enforcement, not by someone on his platform. When asked to "moderate" it, he calls that censorship. Literally everything he does and says is about Freedom - no regulations, or as little as possible, and no moderation.
I believe he thinks the same applies to Grok or whatever is done on the platform. The fact that "@grok do xyz" makes it instantaneous doesn't mean you should do it.
I think it is completely fine for a tech platform to proactively censor AI porn. It is ok to stop men from generating porn of random women and kids. We don't need to get the police involved. I do not think non-consensual porn or CSAM should be protected by free speech. This is an obvious, no-brainer decision.
> X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
Which is moderating/censoring.
The tool (Grok) will not be updated to limit it - that's all. Why? I have no idea, but it seems lately that all these AI tools have more freedom than us humans.
The one above is not my opinion (although I partially agree with it, and now you can downvote this one :D ). To be honest, I don't care at all about X nor about an almost trillionaire.
It was full of bots before, now it's full of "AI agents". It's quite hard sometimes to navigate through that ocean of spam, fake news, etc.
Grok makes it easier, but it's still ugly and annoying to read 90-95% always the same posts.
This weekend has made me explicitly decide my kids' photos will never be allowed on the internet, especially social media. It was just absolutely disgusting.
Maybe I've got a case of the 'tism, but I really don't see an issue with it. Can someone explain?
It's a fictional creation. Nobody is "taking her clothes off", a bot is fabricating a naked woman and tacking her likeness (ie. face) on to it. If anything, I could see how this could benefit women as they can now start to reasonably claim that any actual leaked nudes are instead worthless AI slop.
I don't think I would care if someone did this to me. Put "me" in the most depraved crap you can think of, I don't care. It's not me. I suspect most men feel similarly.
A man's sexual value is rarely impacted much by a nude image of themselves being available.
A woman being damaged by nudes is basically a white knight, misogynist viewpoint that proclaims a woman's value is in her chastity / modesty so by posting a manufactured nude of her you have thereby degraded her value and owe her damages.
Yes, that's the conclusion I came to as well. The best analogue I can think of is taxation, where men - whose sexual values are impacted far more by control of resources - are typically far more aggrieved by the practice than women (who typically see it as a wonderful practice that ought to be expanded).
It feels odd for them to be advertising this belief though. These are surely a lot of the same people trying to devalue virginity, glorifying public sex positivity, condemning "slut shaming", etc.
The same argument could be made of photoshopping someone's face on to a nude body. But for the most part, nobody cares (the only time I recall it happening was when it happened to David Brent in The Office).
"For a Linux user, you can already build such a system yourself quite trivially ..."
Convincingly photoshopping someone's face onto a nude body takes time, skills, effort, and access to resources.
Grok lowers the barrier to be less effort than it took for either you or I to write our comments.
It is now a social phenomenon where almost every public image of a woman or girl on the site is modified in this manner. Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
And there is safety in numbers. If one person photoshops a highschool classmate nude, they might find themself on a registry. For lack of knowing the magnitude, if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
> Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
Mate, that's the point. I, as a normal human being who had never been on 4chan or the darker corners of reddit, would never have seen or been able to make frankenporn, much less make _convincing_ frankenporn.
> For lack of knowing the magnitude
Fuck that shit, if they didn't know the magnitude they wouldn't have spent ages making the photoshop to do it. You don't spend ages doing revenge "because you didn't know the magnitude". You spend ages doing it because you want revenge.
> if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
I mean, we put people in prison for drink driving, and lots of people do that in the states; same with drug dealing. Same with harassment, that's why restraining orders exist.
but
You are missing the point: making and distributing CSAM is a criminal offence. Knowingly storing and transmitting it is an offence. Musk could stop it all now by re-training Grok, or putting in some basic controls.
If any other person was doing this they would have been threatened with company ending action by now.
This is a heated topic and I share your anger. But you have completely misunderstood me.
We mostly agree, so let me clarify.
Grok is being used to make a lot of revenge porn, including CSAM revenge porn, and people _are using X because it's the CSAM app_. I think this is all bad. We agree here.
"For lack of knowing the magnitude" is me stating that I do not know the number of people using X to generate CSAM. I don't know if it is a thousand, a million, a hundred million, etc. So, I used the word "myriad" instead of "thousands", "millions", etc.
I am arguing that this is worse because the scale is so much more. I am arguing against the argument equivocating this with photoshop.
> If any other person was doing this they would have been threatened with company ending action by now.
Yes, I agree. X is still available on both app stores. This means CSAM is just being made more and more normal. I think this is very bad.
Friend, you are putting too much effort into debating a topic that is implicitly banned on this website. This post has already been hidden from the front page. Hacker News is openly hostile to anything that even mildly paints a handful of billionaires in a poor light. But let's continue to deify Dang as the country descends openly into madness.
I also see it back now too, despite it being removed earlier. Do you have faith in the HN algo? Position 22 despite having more votes and comments and being more recent than all of the posts above it?
IMO, the fact that you would say this is further evidence of rape culture infecting the world. I assure you that people do care about this.
And friction and quality matters. When you make it easier to generate this content and make the content more convincing, the number of people who do this will go up by orders of magnitude. And when social media platforms make it trivial to share this content you've got a sea change in this kind of harassment.
How is "It's acceptable because people perform a lesser form of the same behavior" an argument at all? Taken to its logical extreme, you could argue that you shouldn't be prevented from punching children in the face because there are adults in the world who get punched in the face. Obviously, this is an insane take, but it applies the same logic you've outlined here.
"“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
How about not enabling generating such content, at all?
Given X can quite simply control what Grok can and can't output, wouldn't you consider it a duty upon X to build those guardrails in for a situation like CSAM? I don't think there's any grey area here to argue against it.
I am, in general, pretty anti-Elon, so I don't want to be seen as taking _his_ side here, and I am definitely anti-CSAM, so let's shift slightly to derivative IP generation.
Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
It feels somewhat more clearcut when you say to AI, "Draw me an image of Mickey Mouse", but why is that different from photocopying a picture of Mickey Mouse, or using Photoshop to draw a picture of Mickey Mouse? Photocopiers will block copying a dollar bill in many cases - should they also block photos of Mickey Mouse? Should they have received firmware updates when Steamboat Willie fell into the public domain, such that they can now be allowed to photocopy that specific instance of Mickey Mouse, but none other?
This is a slippery slope, the idea that a person using the tool should hold the tool responsible for creating "bad" things, rather than the person themselves being held responsible.
Maybe CSAM is so heinous as to be a special case here. I wouldn't argue against it specifically. But I do worry that it shifts the burden of responsibility onto the AI or the model or the service or whatever, rather than the person.
Another thing to think about is whether it would be materially different if the person didn't use Grok, but instead used a model on their own machine. Would the model still be responsible, or would the person be responsible?
> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
There's one more line at issue here, and that's the posting of the infringing work. A neutral tool that can generate policy-violating material has an ambiguous status, and if the tool's output ends up on Twitter then it's definitely the user's problem.
But here, it seems like the Grok outputs are directly and publicly posted by X itself. The user may have intended that outcome, but the user might not have. From the article:
>> In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted.
Overall, I think it's fair to argue that ownership follows the user tag. Even if Grok's output is entirely "user-generated content," X publishing that content under its own banner must take ownership for policy and legal implications.
This is also legally problematic: many jurisdictions now have specific laws about the synthesis of CSAM or modifying peoples likenesses.
So exactly who is considered the originator is a pretty legally relevant question particularly if Grok is just off doing whatever and then posting it from your input.
"The persistent AI bot we made treated that as a user instruction and followed it" is a heck of a chain of causality in court, but you also fairly obviously don't want to allow people to laundry intent with AI (which is very much what X is trying to do here).
Maybe I'm being too simplistic/idealistic here - but if I had a company that controlled an LLM product, I wouldn't even think twice about banning CSAM outputs.
You can have all the free speech in the world, but not with the vulnerable and innocent children.
I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally? I think there should be responsibility on builders/platform owners to definitely build guardrails in on things that are explicitly illegal and morally repugnant.
>I wouldn't even think twice about banning CSAM outputs.
Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.
But you'll also miss some. You'll always miss some, even with the best guard rails. But 99% is better than 0%, I agree.
> ... and just expect the user to use it legally?
I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, and you can't guarantee that the hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop the adult from vandalism. The person has to be held responsible at a certain point.
I bet most hammers (non-regulated), spray cans (lightly regulated) and guns (heavily regulated) that are sold are used for their intended purposes. You also don't see these tools manufacturers promoting or excusing their unintended usage as well.
There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is on the user's hands, it's outside of the manufacturer's control.
In this case, a malicious user isn't downloading Grok's model and running it on their GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once the malicious usage of their product gets relevant.
> I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally?
Historically tools have been uncensored, yet also incredibly difficult and time-consuming to get good results with.
Why spend loads of effort producing fake celebrity porn using photoshop or blender or whatever when there's limitless free non-celebrity porn online? So photoshop and blender didn't need any built-in censorship.
But with GenAI, the quantitative difference in ease-of-use results in a qualitative difference in outcome. Things that didn't get done when it needed 6 months of practice plus 1 hour per image are getting done now that it needs zero practice and 20 seconds per image.
> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
If you operate the tool, you are responsible. Doubly so in a commercial setting. If there are issues like Copyright and CSAM, they are your responsibility to resolve.
If Elon wanted to share out an executable for Grok and the user ran it on their own machine, then he could reasonably sidestep blame (like how photoshop works). But he runs Grok on his own servers, therefore is morally culpable for everything it does.
Your servers are a direct extension of yourself. They are only capable of doing exactly what you tell them to do. You owe a duty of care to not tell them to do heinous shit.
It's simpler to regulate the source of it than the users. The scale that genAI can do stuff is much, much different than photocopying + Photoshop, scale and degree matter.
So, back in the 90s and 2000s, you could get The Gimp image editor, and you could use the equivalent of Word Art to take a word or phrase and make it look cool, with effects like lava or glowing stone, or whatever. The Gimp used ImageMagick to do this, and it legit looked cool at the time.
If you weren't good at The Gimp, which required a lot of knowledge, you could generate a cool website logo by going to a web server that someone built, giving them a word or phrase, and then selecting the pre-built options that did the same thing - you were somewhat limited in customization, but on the backend, it was using ImageMagick just like The Gimp was.
If someone used The Gimp or ImageMagick to make copyrighted material, nobody would blame the authors of The Gimp, right? They were very nonspecific tools created for a broad purpose: making images. Just because some bozo used them to create a protected image of Mickey Mouse doesn't mean that the software authors should be held accountable.
But if someone made the equivalent of one of those websites, and the website said, "click here to generate a random picture of Mickey Mouse", then it feels like the person running the website should at least be held partially responsible, right? Here is a thing that was created for the specific purpose of breaking the law upon request. But what is the culpability of the person initiating the request?
Anyway, the scale of AI is staggering, and I agree with you, and I think that common decency dictates that the actions of the product should be limited when possible to fall within the ethics of the organization providing the service, but the responsibility for making this tool do heinous things should be borne by the person giving the order.
I think yes CSAM and other harmful outputs are a different and more heinous problem, I also think the responsibility is different between someone using a model locally and someone promoting grok on twitter.
Posting a tweet asking Grok to transform a picture of a real child into CSAM is no different, in my mind, than asking a human artist on twitter to do the same. So in the case of one person asking another person to perform this transformation, who is responsible?
I would argue that it’s split between the two, with slightly more falling on the artist. The artist has a duty to refuse the request and report the other person to the relevant authorities. If that artist accepted the request and then posted the resulting image, twitter then needs to step in and take action against both users.
Even if you can’t reliably control it, if you make a tool that generates CSAM you’ve made a CSAM generator. You have a moral responsibility to either make your tool unavailable, or figure out how to control it.
I'm not sure I agree with this specific reasoning. Consider this, any given image viewer can display CSAM. Is it a CSAM viewer? Do you have a moral responsibility to make it refuse to display CSAM? We can extend it to anything from graphics APIs, to data storage, etc.
There's a line we have to define that I don't think really exists yet, nor is it supported by our current mental frameworks. To that end, I think it's just more sensible to simply forbid it in this context without attempting to ground it. I don't think there's any reason to rationalize it at all.
I think the question might come down to whether Grok is a "tool" like a paintbrush or Photoshop, or if Grok is some kind of agent of creation, like an intern. If I ask an art intern to make a picture of CSAM and he does it, who did wrong?
If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?
I think a court is going to step in and help answer these questions sooner rather than later.
Normalizing AI as being human equivalent means the AI is legally culpable for its own actions rather than its creators or the people using it, and not guilty of copyright infringement for having been trained on proprietary data without consent.
I happen to agree with you that the blame should be shared, but we have a lot of people in this thread saying "You can't blame X or Grok at all because it's a mere tool."
From my knowledge (albeit limited) about the way LLMs are set up, they most definitely can have guardrails on what can't be produced. ChatGPT has responses to certain prompts which stop users from proceeding.
And X specifically: there have been many cases of X adjusting Grok where Grok was not following a particular narrative on political issues (won't get into specifics here). But it was very clear and visible. Grok had certain outputs. Outcry from certain segments. Grok posts deleted. Trying the same prompts resulted in a different result.
From my (admittedly also limited) understanding, there's no bulletproof way to say "do NOT generate X", as the output is non-deterministic and you can't reverse-engineer and excise the CSAM-generating parts of a model. "AI jailbreak prompts" are a thing.
Well it’s certainly horrible that they’re not even trying, but not surprising (I deleted my X account a long time ago).
I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.
Again, I'm not the most technical, but I think we need to step back and look at this holistically. Given Grok's integration with X, there could be other methods of limiting the production and dissemination of CSAM.
For arguments sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second and third order review points where before an image is posted by Grok, another system could scan the image to verify whether it's CSAM or not, and if the confidence is low, then human intervention could come into play.
I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.
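Roughly, the kind of gate I'm imagining would look something like this (just an illustrative sketch; the classifier, thresholds, and review queue are all made up, not anything X has said it does):

    # Illustrative post-generation gate: score every image before it is
    # posted, block obvious violations, and hold uncertain cases for a
    # human reviewer. All names and thresholds here are hypothetical.

    BLOCK_ABOVE = 0.90    # near-certain violation: never post, escalate
    REVIEW_ABOVE = 0.30   # uncertain: hold for human review before posting

    def gate(image, classify, review_queue, publish):
        score = classify(image)   # probability the image violates policy
        if score >= BLOCK_ABOVE:
            return "blocked"
        if score >= REVIEW_ABOVE:
            review_queue.append(image)
            return "held for human review"
        publish(image)
        return "published"

The point being that none of this depends on the LLM itself being perfectly steerable; the check sits outside the model, between generation and posting.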
Given how spectacular the failure of EVERY attempt to put guardrails on LLMs has been, across every single company selling LLM access, I'm not sure that's a reasonable belief.
The guardrails have mostly worked. They have never ever been reliable.
Yes, every image generation tool can be used to create revenge porn. But there are a bunch of important specifics here.
1. Twitter appears to be taking no effort to make this difficult. Even if people can evade guardrails this does not make the guardrails worthless.
2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also the distribution and boosting of this content. The reason a ton of people are doing this is not that they personally want to jack it to somebody, but that they want to humiliate them in public.
3. Decision makers at twitter are laughing about what this does to the platform and its users when the "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women isn't also odious.
It is trivially easy to filter this with an LLM or even just a basic CLIP model. Will it be 100% foolproof? Not likely. Is it better than doing absolutely nothing and then blaming the users? Obviously. We've had this feature in the image generation tools since the first UI wrappers around Stable Diffusion 1.0.
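To give a sense of how low the bar is, here's roughly what a zero-shot CLIP check looks like with an off-the-shelf checkpoint (a sketch, not a production filter; the label prompts and threshold would need real tuning and evaluation):

    # Sketch of a zero-shot "is this explicit?" check with CLIP, using the
    # openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = ["a safe-for-work photo", "an explicit or nude image"]

    def explicit_probability(path):
        image = Image.open(path)
        inputs = processor(text=labels, images=image,
                           return_tensors="pt", padding=True)
        logits = model(**inputs).logits_per_image    # shape (1, num_labels)
        return logits.softmax(dim=1)[0, 1].item()    # P(second label)

    # e.g. refuse to auto-post anything scoring above a tuned threshold
    print(explicit_probability("generated.png"))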
> but output is directly connected to its input and blame can be proportionally shared
X can actively work to prevent this. They aren't. We aren't saying we should blame the person entering the input. But, we can say that the side producing CSAM can be held responsible if they choose to not do anything about it.
> Isn't this a problem for any public tool? Adversarial use is possible on any platform
Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."
Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.
An analogy: if you're running the zoo, the public's safety is your job for anyone who visits. It's of course also true that sometimes visitors act like idiots (and maybe should be prosecuted), and also that wild animals are not entirely predictable, but if the leopards are escaping, you're going to be judged for that.
Maybe because sometimes they're kids? You gotta kid-proof stuff in a zoo.
Also, punishment is a rather inefficient way to teach the public anything. The people who come through the gate tomorrow probably won't know about the punishment. It will often be easier to fix the environment.
Removing troublemakers probably does help in the short term and is a lot easier than punishing.
If the personal accountability happened at the speed and automation level that X allows Grok to produce revenge porn and CSAM, then I'd agree with you.
Yep. "Oh grok is being too woke" gets musk to comment that they'll fix it right away. But turn every woman on the platform into a sex object to be the target of humiliation? That's just good fun apparently.
I even think that the discussion focusing on csam risks missing critical stuff. If musk manages to make this story exclusively about child porn and gets to declare victory after taking basic steps to address that without addressing the broader problem of the revenge porn button then we are still in a nightmare world.
Women should be able to exist in public without having to constantly have porn made of their likeness and distributed right next to their activity.
You always have liability. If you put something there you tell the court that you see the problem and are trying to prevent it. It often becomes easier to get out of liability if you can show the courts you did your best to prevent this. Courts don't like it when someone is blatantly unaware of things - ignorance is not a defense if "a reasonable person" would be aware of it. If this was the first AI in 2022 you could say "we never thought about that" and maybe get by, but by 2025 you need to tell the court "we are aware of the issue, and here is why we think we had reasonable protections that the user got around".
How about policing CSAM at all? I can still vividly remember firehose API access and all the horrible stuff you would see on there. And if you look at sites like tk2dl you can still see most of the horrible stuff that does not get taken down.
It's on X, not some fringe website that many people in the world don't access.
Regardless of how fringe, I feel like it should be in everyones best interests to stop/limit CSAM as much as they reasonably can without getting into semantics of who requested/generated/shared it.
> How about not enabling generating such content, at all?
Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.
This is probably harder because it's synthetic and doesn't exist in PhotoDNA database.
Also, since Grok is really good at getting the context, something akin to "remove their T-shirt" would be enough to generate the picture someone wanted, but very hard to find using keywords.
IMO they should mass hide ALL the images created since that specific moment, and use some sort of AI classifier to flag/ban the accounts.
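For context on why the PhotoDNA-style approach falls short here: hash matching only flags images close to something already catalogued, so a freshly synthesized image has no near-match to hit. A toy version with the open-source imagehash library (illustrative only; PhotoDNA itself works differently and its hash lists aren't public):

    # Toy perceptual-hash lookup: only flags images near a known bad hash,
    # which is why novel synthetic content slips straight past it.
    from PIL import Image
    import imagehash

    known_bad_hashes = set()   # would be loaded from an industry hash list

    def is_known_bad(path, max_distance=5):
        h = imagehash.phash(Image.open(path))
        # ImageHash subtraction gives the Hamming distance between hashes.
        return any(h - bad <= max_distance for bad in known_bad_hashes)

Which is why a classifier pass over everything generated since launch, rather than a hash lookup, is the only retroactive option I can see.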
Willing to bet that X premium signups have shot up because of this feature. Currently this is the most convenient tool to generate porn of anything and everything.
I don’t think anyone can claim that it’s not the user’s fault. The question is whether it’s the machine’s fault (and the creator and administrator - though not operator) as well.
The article claims Grok was generating nude images of Taylor Swift without being prompted and that there was no way for the user to take those images down
I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user
Yeah but “without being asked” here means the user has to confirm they are 18+, choose to enable NSFW video, select “spicy” in Grok’s video generation settings and then prompt “Taylor Swift celebrating Coachella with the boys”. The prompt seems fine but the rest of it is clearly “enable adult content generation”.
I know they said “without being prompted” here but if you click through you’ll see what the person actually selected (“spicy” is not default and is age-gated and opt-in via the nsfw wall).
Let’s not lose sight of the real issue here: Grok is a mess from top to bottom run by an unethical, fickle Musk. It is the least reliable LLM of the major players and musk’s constant fiddling with it so it doesn’t stray too far from his worldview invalidates the whole project as far as I’m concerned.
Isn't it a strict liability crime to possess it in the US? So if AI-generated apparent CSAM counts as CSAM legally (not sure on that) then merely storing it on their servers would make X liable.
You are only liable if you know - or should know - that you possess it. You can help someone out by mailing their sealed letter containing CSAM and be fine, since you have no reason to suspect the sealed letter isn't legal. X can only store CSAM so long as they have no reason to suspect it is there.
Note that things change. In the early days of twitter (pre X) they could get away with not thinking of the issue at all. As technology to detect CSAM marches on, they need to use it (or justify why it shouldn't be used - too many false positives?). As a large platform for such content they need to push the state of the art in such detection. At no point do they need perfection - but they need to show they are doing their reasonable best to stop this.
The above is of course my opinion. I think the courts will go a similar direction, but time will tell...
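For reference, the usual detection approach is perceptual hash matching against databases of known images. A toy illustration with the open-source ImageHash library (the load_hash_list helper and the distance threshold are made up; real deployments typically rely on PhotoDNA-style hash lists from NCMEC rather than a plain pHash):

    from PIL import Image
    import imagehash  # pip install ImageHash

    # Hypothetical loader for a list of known-bad hashes (hex strings).
    known_bad = [imagehash.hex_to_hash(h) for h in load_hash_list()]

    def matches_known_bad(path, max_distance=6):
        h = imagehash.phash(Image.open(path))
        # Subtracting two hashes gives the Hamming distance between them.
        return any(h - bad <= max_distance for bad in known_bad)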
> You are only liable if you know - or should know - that you possess it.
Which he does and responded with “I will blame and punish users.” Which yeah, you should, but you also need to fix your bot. He certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers “wrongthink,” but suddenly when there are real, serious consequences he gets to hide behind “it’s just a user problem”?
This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry we are just too big for real people to handle all of it but we’ll get it right this time.” Rinse repeat.
Blame and punish should be a part of this. However that only works if you can find who to blame and punish. We also should put guard rails on so people don't make mistakes. (generating CSAM should not be an easy mistake to make when you don't intend it, but in other contexts someone may accidentally ask for the wrong thing)
There are still a lot of unanswered questions in that area regarding generated content. Whether the law deems it CSAM depends on whether the image depicts a real child, and even that is ambiguous, like was it wholly generated or augmented. Also, is it "real" if it's a model trained on real images?
Some of these things are going into the ENFORCE act, but it's going to be a muddy mess for a while.
I think platforms that host user-generated content are (rightly) treated differently. If I posted a base64 of CSAM in this comment it would be unreasonable to shut down HN.
The questions then, for me, are:
* Is Grok considered a tool for the user to generate content for X or is Grok/X considered similar to a vendor relationship
* Is X more like Backpage (not protective enough) than other platforms
I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.
I don't think you can argue yourself out of "The Grok account is owned and operated by Twitter". On no planet is what it outputs user-generated content, since the content does not originate from the user; at most they requested some content from Twitter and Twitter provided it.
Getting off to images of child abuse (simulated or not) is a deep violation of social mores. This itself does indeed constitute a type of crime, and the victim is taken to be society itself. If it seems unjust, it's because you have a narrow view of the justice system and what its job actually is (hint: it's not about exacting controlled vengeance)
It may shock you to learn that bigamy and sky-burials are also quite illegal.
Any lawyers around? I would assume (IANAL) that Section 230 does not apply to content created by an agent owned by the platform, as opposed to user-uploaded content. Also it seems like their failure to create safeguards opens up the possibility of liability.
And of course all of this is narrowly focused on CSAM (not that it should be minimized) and not on the fact that every person on X, the everything app, has been opened up to the possibility of non-consensual sexual material being generated of them by Grok.
The CSAM aspects aren't necessarily as affected by 230: to the extent that you're talking about it being criminal, 230 doesn't apply at all there.
For civil liability, 230 really shouldn't apply; as you say, 230's shield is about avoiding vicarious liability for things other people post. This principle stretches further than you might expect in some ways but here Grok just is X (or xAI).
Nothing's set in stone much at all with how the law treats LLMs but an attempt to say that Grok is an independent entity sufficient to trigger 230 but incapable of being sued itself, I don't see that flying. On the other hand the big AI companies wield massive economic and political power, so I wouldn't be surprised to see them push for and get explicit liability carveouts that they claim are necessary for America to maintain its lead in innovation etc. etc., whether those come through legislation or court decisions.
> non-consensual sexual material being generated of them by Grok
They should disable it in the Netherlands in this case, since it really sounds like a textbook slander case and the spreader can also be held liable. Note: it's not the same as in the US despite using the same word; deepfakes have been found to constitute slander and this is no different, especially if you know it's fake because it was made with "AI". There have been several cases of pornographic deepfakes, all of which were taken down quickly, in which the poster/creator was sentenced. The unfortunate issue, even when posts are taken down quickly, is the rule that if something is on the internet, it stays on the internet. The publisher always went free due to acting quickly and not creating it. I would like to see where it goes when both publisher and creator are the same entity, and they do nothing to prevent it.
Yeah this is pretty funny. Seeing all these discussions about section 230 and the American constitution...
Nobody in the Netherlands gives one flying fuck about American laws; what Grok is doing violates many Dutch laws. Our parliament actually did its job and wrote some stuff about revenge porn, deep fakes and artificial CP.
I find it fascinating to read comments from a lot of people who support open models without guardrails, and then to read this thread with seemingly the opposite sentiment in overwhelming majority. Is it just two different sets of users with differing opinions on if models should be open or closed?
I think there's a difference between access without guardrails, and decrying what folks do with them, or in this case a site that allows / doesn't even care if their integrated tool is used to creep on folks.
I can argue for access to, say, Photoshop-like tools, and still say folks shouldn't post revenge / fake porn ...
They ban users responsible for misusing the tool, and refer them to law enforcement when appropriate. The whole point of this article is to say that's not good enough ("X blames users for [their misuse of the tool]") implying that merely making the tool available for people to use constitutes support of pedophilia. (Textbook case of appealing to the Four Horsemen of the Infocalypse.) The prevailing sentiment in this thread seems to be agreement with that take.
Making the tool easy to use and allowing it to just immediately post on Twitter is much different than simply providing a model online that people can download and run themselves.
If you are providing a tool for people, YES you are responsible to some degree.
Think of it this way. I sell racecars. I'm not responsible if someone buys my racecar and then drinks and drives and dies. Now, I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someone dies. Now I am responsible.
In, like, an "ask a bunch of people and see what they think" way. Consensus. I'm not talking legality because I'm not a lawyer and I also don't care.
But I think, most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range then yeah, I think there's an expectation you make it safe. You put up walls, you have security, etc. You try your best to mitigate the bad shit.
Misuse in this case doesn't include harassing adult women with AI generated porn of them. "Oh we banned the people doing this with children" doesn't cut it, in my mind.
As of May posting AI generated porn of unconsenting adults is a federal crime[1], so I'd be very surprised if they didn't ban users for that as well. The article conflates a bunch of different issues which makes it difficult to understand exactly what is and is not being talked about in each individual paragraph.
I am glad that open models exist. I also prefer that the most widely accessible AI systems that have engineered prompts and direct integration with social media platforms have guardrails. I do not think that this is odd.
I think it is good that you can install any apk on an android device. I also think it is good that the primary installation mechanism that most people use has systems to try to prevent malware from getting installed.
This sort of approach means that people who really need unbounded access and are willing to go through some extra friction can access these things. It makes it impossible for a megacorp to have complete control over a computing ecosystem. But it also reduces abuse since most people prefer to use the low-friction approach.
When people want open models without guardrails they're mostly talking about LLMs not so much image / video models. Outside of preventing CSAM what kind of guardrails would a image or video model have? Don't output instructions on the image for how to make meth? Lol
How do you even train a model to do that? For closed / proprietary models, that works, but for open / offline models, if I want to make a LoRA for meth instructions in an image... I don't know that you can stop me from doing so.
The thread is about a model-as-a-service. What you do at home on your own computer is qualitatively different, in terms of harassment and injury potential, than something automatically shared to Twitter.
Any mention of Musk on HN seems to cause all rational thought to go out the window, but yeah I wonder in this case how much of this wild deviation from the usual sentiment is attributable to:
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
Gee, I wonder why people would take offense at an AI model being used to generate unprecedented amounts of CSAM from real children, or objectify millions of women without their consent. Must be that classic Musk Derangement Syndrome.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
I think regardless of source, sharing such pictures on public social media is probably crossing the line? And everything generated by this model is de-facto posted publicly on social media (some commenters are even saying it's difficult to erase unwanted / unintended images?)
I'd also argue commercialization affects this - X is marketing this as a product and making money off subscriptions, whereas I generally think of an open model as something you run locally for free. There's a big difference between "Porn Producer" and "Photoshop"
Context matters. In this case we're talking about Grok on X. It's not a philosophical debate about whether open or closed models are good. It's a debate (even though it shouldn't be) about Grok producing CSAM on X. If this was about what users do with their own models on their local machines then things would be different, since that's not openly accessible or part of one of the biggest sites on the net. I think most people would argue that public-facing LLMs have some responsibility to the public. As would any IP owner.
I think the question of whether X should do more to prevent this kind of abuse (I think they should) is separate from Grok or LLMs though. I get that since xAI and X are owned by the same person there are some complications here, but most of the arguments I'm reading have to do with the LLM specifically, not just lax moderation policies.
Joke's on xAI. Europe doesn't have a Section 230 and the responsibility falls squarely on the platform and its owners. In Europe, AI generated or photoshopped CSAM is treated the same as actual abuse-backed CSAM if the depiction is realistic. Possession and distribution are both serious crimes.
The person(s) ultimately in charge of removing (or preventing the implementation of) Grok guardrails might find themselves being criminally indicted in multiple European countries once investigations have concluded.
I'm not sure Grok output is even covered by Section 230. Grok isn't a separate person posting content to a platform, it's an algorithm running on X's servers publishing on X's website. X can't reasonably say "oh, that image was uploaded by a user, they're liable, not us" when the post was performed by Grok.
Suppose, if instead of an LLM, Grok was an X employee specifically employed to photoshop and post these photos as a service on request. Section 230 would obviously not immunize X for this!
It could be argued that generating a non-real child might not count. However, that's not a given.
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old.
That definition is broad enough to cover anything obviously young.
But when it comes to "nude-ifying" a real image of a known minor, I strongly doubt you can use the defence that it's not a real child.
Therefore you're knowingly generating and distributing CSAM, which is out of scope for Section 230.
A natural person. That's what CSAM covers. There have been prosecutions under federal CSAM laws otherwise, but there have also been successful constitutional challenges that, briefly, classify fabricated content as obscenity. The implication there is that private possession of obscene materials is lawful.
> Europe doesn't have a Section 230 and the responsibility falls squarely on the platform and its owners.
They have something like Section 230 in the E-Commerce Directive 2000/31/EC, Articles 12-15, updated in the Digital Service Act. The particular protections for hosts are different but it is the same general idea.
Is Europe actually going to do anything? They currently appear to be puckering their assholes and cowering in the face of Trump, and his admin are already yelling about how the EU is "illegally" regulating American companies.
They might just let this slide to not rock the boat, either out of fear (in which case they will do nothing) or to buy time if they are actually divesting from the alliance with, and economic dependence on, the US.
There's so many of these nonsense views of the EU here. Not being vocal about a mental case president doesn't mean politicians are "puckering their assholes". The EU is not afraid to moderate and fine tech companies. These things take time.
Under previous US admins and the relationship the EU had, yeah.
The asshole puckering is from how Trump has completely flipped the table, everything is hyper transactional now, and as we’ve seen military action against leaders personally is also on the table.
I’m saying I could see the EU let this slide now because it’s not worth it politically to regulate US companies for shit like this anymore. Whether that would be out of fear or out of trying to buy time to reorganize would probably end up in future getting the same kind of historical analysis that Chamberlain’s policy of appeasement to Germany gets nowadays
They are able to change how Grok is prompted to deny certain inputs, or to say certain things. They decided to do so to praise Musk and Hitler. That was intentional.
They decided not to do so to prevent it from generating CSAM. X offering CSAM is intentional.
Grok will shit-talk Elon Musk, and it will also put him in a bikini for you. I've always found it a bit surprising how little control they seem to have there.
Ok I understood when stuff related to DOGE was consistently flagged for being political and not relevant to hacking but... This is surely relevant to the audience here, no?
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
That's what section 230 says. The content in question here is not provided by "another information content provider", it is provided by X itself.
Section 230 is not a magical "get of jail free" card that you can use to absolve your tech platform of any responsibilities to its users. Removing posts and banning users is obviously not a workable solution for a technology that can abuse individuals very quickly.
My point is more that a lot of people were talking about removing Section 230 protections, which I think is implicitly what X is saying absolves them of responsibility for Grok-generated CSAM.
Removing Section 230 was a big discussion point for the current ruling party in the US, when they didn't have so much power. Now that they do have power, why has that discussion stopped? I'd be very interested in knowing what changed.
Ah, I misinterpreted - apologies. The current ruling party is not a monolith. The tech faction has been more or less getting its way at the expense of the traditionalist natcon faction. The former has no interest in removing section 230 protections, while a few in the latter camp say they do.
But beyond the legality or obvious immorality, this is a huge long-term mistake for X. 1 in 3 users of X are women - that fraction will get smaller and smaller. The total userbase will also get smaller and smaller, and the platform will become a degenerate hellhole like 4chan.
When do we cross the line of culpability with tool-assisted content? If I have a typo in my prompt and the result is illegal content, am I responsible for an honest mistake or should the tool have refused to generate illegal content in the first place?
Do we need to treat genAI like a handgun that is always loaded?
Even ignoring that Grok is generating the content, not users, I think you can still hold to Section 230 protections while thinking that companies should take more significant moderation actions with regards to issues like this.
For example, if someone posted CSAM on HN and Dang deleted it, I think that it would be wrong to go after HN for hosting the content temporarily. But if HN hosted a service that actively facilitated, trivialized, and generated CSAM on behalf of users, with no or virtually no attempt to prevent that, then I think that mere deletion after the fact would be insufficient.
But again, you can just use "Grok is generating the content" to differentiate if that doesn't compel you.
Should Adobe be held accountable if someone creates CSAM using their software? They could put image recognition into it that would block it, but they don't.
Look what happens when you put in an image of money into Photoshop. They detect it and block it.
I don't know. Does it matter what I think about that? Let's say I answer "yes, they should". Then what? Or what if I say "no, I see a difference". Then what?
Who cares about Adobe? I'm talking about Grok. I can consistently say "I believe platforms should moderate content in accordance with Section 230" while also saying "And I think that the moderation of content with regards to CSAM, for major platforms with XYZ capabilities should be stricter".
The answer to "what about Adobe?" is then either that it falls into one of those two categories, in which case you have your answer, or it doesn't, in which case it isn't relevant to what I've said.
1) you need to bring your own source material to create it. You can't press a button that says "make child porn"
2) it's not reasonable to expect that someone would be able to make CSAM in Photoshop. However, more importantly, the user is the one hosting the software, not Adobe.
>You can't press a button that says "make child porn"
Where is this button in Grok? You have to, as the user, explicitly write out a very obviously bad request. Nobody is going to accidentally get CSAM content without making a conscious choice about a prompt that's pretty clearly targeting it.
Is it reasonable (legal term, i.e. anyone can do it) that someone with little effort could create CSAM using Photoshop?
No, you need to train and take a lot of time and effort to do it. With Grok you say "hey make a sexy version of [picture of this minor]" and it'll do it. That doesn't take training, and it's not a high bar to stop people doing it.
The non-CSAM example is this: it's illegal in the USA to make anything that looks like a US dollar bill, i.e. photocopiers have blocks on them to stop you making copies of it.
You can get round that as a private citizen but it's still illegal. A company knowingly making a photocopier that allows you to photocopy dollar bills is in for a bad time.
Something must have changed, there's a whole lot less concern about censorship and government intervention in social media, despite many "just the facts" reports of just such interventions going on.
I'm at a loss to explain it, given media's well known liberal bias.
How curious that your comment was downvoted! It seems completely appropriate and in line with the discussion.
I think it's time to revisit these discussions and in fact remove Section 230. X is claiming that the Grok CSAM is "user generated content" but why should X have any protection to begin with, be it a human user directly uploading it or using Grok to do this distribution publicly?
The section 230 discussion must return, IMHO. These platforms are out of control.
Grok is a hosted service. In your analogy, it would be like a gun shop renting a gun out to someone who puts down "Rob a store" as the intended usage of the rental. Then renting another gun to that same client. Then when confronted, telling people "I'm not responsible for what people do with the guns they rent from me".
It's not a personal tool that the company has no control over. It's a service they are actively providing and administering.
I think a better analogy would be going into a gun shop and paying the owner to shoot someone. They're asking grok to undress people and it's just doing it.
Would you blame only the users of a murder-for-hire service? Sure, yes, they are also to blame, but the murder-for-hire service would also seem to be equally culpable.
Great, can we finally get X blocked in the EU then? Far too many people are still hooked to toxic content on that platform, and it is owned by an anti-EU, right-extreme, nazi-salute guy, who would love nothing more than seeing the EU fail.
> “That’s like blaming a pen for writing something bad,” DogeDesigner opined.
Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action. DogeDesigner regularly sub tweets Elon agreeing to his latest dumb take of the day, and even seems to have based his entire identity on Elon's doge obsession.
I can't imagine how terrible that self imposed delusion feels deep down for either of them.
> Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action.
A similar article[1] briefly made it to the HN front page the other day, for a few minutes before Elon's army of unpaid yes-men flag-nuked it out of existence.
I have a very hard time understanding the business case for xAI/Grok. It is supposedly worth $200 billion (at least by Silicon Valley math), putting it in the company of OpenAI and Anthropic, but like...Who is using it? What is it good for? Is it making a single dollar in revenue? Or is the whole thing just "omg Elon!!" hype similar to most of his other endeavors?
> Or is the whole thing just "omg Elon!!" hype similar to most of his other endeavors?
Yes, but combined with "omg AI" (which happened elsewhere; for instance, see the hype over OpenAI Sora, which is clearly useless except as a toy), so extra-hype-y.
I don't buy the "I only provide the tool" cop out. Musk does control what Grok spews out and just chooses not to act in this case.
When Grok stated that Israel was committing genocide, it was temporarily suspended and fixed[0]. If you censor some things but not others, enabling the others becomes your choice. There is no eating the cookie and having it too - you either take a "common carrier" stance or censor, but also take responsibility for what you don't censor.
If you follow the "tool-maker is responsible for tool-use" thread of thought to its logical conclusion, you have to hold creators of open-weights models responsible for whatever people do with these models. Do you want to live in a world that follows this rule?
But we don't have to take things to furthest conclusions. We can very easily draw both a moral and legal line between "somebody downloaded an open weight model, created a prompt from scratch to generate revenge porn of somebody, and then personally distributed that image" and "twitter has a revenge porn button right next to every woman on the platform that generates and distributes revenge porn off of a simple sentence."
People who say "society should permit X, but only if it's difficult" have a view of the world incompatible with technological progress and usually not coherent at all.
You seem unfamiliar with these things we have called laws. I recommend reading up on what they are and how they work. It would be generally useful to understands such things.
The core issue is that X is now a tool for creating and virally distributing these images anonymously to a large audience, often targeting the specific individuals featured in the images. For example, to any post with a picture, any user can simply reply "@grok take off their clothes and make them do something degrading", and the response is then generated by X and posted in the same thread. That is an entirely different kind of tool from an open-weight model.
The LLM itself is more akin to a gun available in a store in the "gun is a tool" argument (reasonable arguments on both sides in my opinion); however, this situation is more like a gun manufacturer creating a program to mass distribute free pistols to a masked crowd, with predictable consequences. I'd say the person running that program was either negligent or intentionally promoting havoc to the point where it should be investigated and regulated.
The phrase “its logical conclusion” is doing a lot of heavy lifting here. Why on earth would that absurdity be the logical conclusion? To me it looks like a very illogical conclusion.
Importantly, X also provides the hardware to run the model, a friendly user-interface around it, and the social platform to publicly share and discuss outputs from the model. It's not just access to the model.
I can see this becoming a culture war thing like vaccines. Conservatives will become pro-CSAM because it triggers the overly sensitive crybaby Liberals.
This has already been a culture war thing, and it's why X.com is able to continue to provide users with CSAM with impunity. The site is still up after everything, and the app is still on the app store everywhere.
When the far-right paints trans people as pedophiles, it's not an accident that also provides cover for pedophiles.
An age of consent between 16 and 18 is relatively high, born from progressive feminist wins. In the United States, the lowest AOC was 14 until the 1990s, and the AOC in the US ranged from _7 to 12_ for most of our existence.
To be clear, I'm in defense of a high age of consent. But it's something that had to be fought for, and it's not something that can be assumed to be safe in our culture (like the rejection of nazis and white supremacists, or valuing womens rights including voting and abortion).
Influential politicians like Tom Hofeller were advocates for pedophilia and nobody cares at all. Trump is still in power despite the Epstein controversy, Matt Gaetz still hasn't been punished for paying for sex with an underage girl in 2017. The Hitler apologia in the far-right spaces even explicitly acknowledge he was a pedophile. Etc.
In a different era, X would have been removed from Apple and Google's app stores for the CEO doing nazi salutes and the chatbot promoting Hitler. But even now that X is a CSAM app, as of 3PM ET, I can still download X on both of their app stores. That would not have been normal just two years ago.
This has already been a culture war issue for a while, there is a pro-pedophilia side, and this is just another victory for them.
We've already got a taste of that with people like Megyn Kelly saying "it's not pedophilia, it's ephebophilia" when talking about Epstein and his connections. Not surprising though. When you have no principles you'll go as far as possible to "trigger the libs".
Already the case. I can’t dig up the link, but I recall that a recent poll showed that about half of Republicans would still support Trump even if he was directly implicated in Epstein’s crimes.
Naughty Old Mr Car's fans are triggered by any criticism of Dear Leader.
This is actually separate to hn's politics-aversion, though I suspect there's a lot of crossover. Any post which criticised Musk has tended to get rapidly flagged for at least the last decade.
Only because of the broader context of the legal environment. If there was no prosecution for breaking and entering, they would be effectively worthless. For the analogy to hold, we need laws to throw coercive measures against those trying to bypass guard rails. Theoretically, this already exists in the Computer Fraud and Abuse Act in the US, but that interpretation doesn't exist quite yet.
Goalpost movement alert. The claim was that "AI can be told not to output something". It cannot. It can be told to not output something sometimes, and that might stick, sometimes. This is true. Original statement is not.
After learning that guaranteed delivery was impossible, the once-promising "Transmission Control Protocol" is now only an obscure relic of a bygone era from the 70s, and a future of inter-connected computer systems was abandoned as merely a delusional, impossible fantasy.
If your effort is provably futile, wouldn't saying you tried be a demonstration of a profound misallocation of effort (if you DID try), or a blatant lie (if you did not)?
The irony. Musk fumes about pedo leftist weirdos. And then his own ai bot creates CSAM. The right are full of hypocrites and weirdos compensating so so very hard.
Elon Musk attends the Donald Trump school of responsibility. Take no blame. Admit no fault. Blame everyone else. Unless it was a good thing, then take all credit and give none away.
lol. Always fun to watch HN remove highly relevant topics from the top of the front page. To their credit they usually give us about an hour to discuss before doing so. How kind of them.
So let me get this straight. When people use these tools to steal artists’ styles directly to generate fake Ghibli art, then it’s «just a tool, bro».
But when it’s used to create CSAM, then it’s suddenly not just a tool.
You _cannot_ stop these tools from generating this kind of stuff. Prompt guards only get you so far. Self-hosted versions don’t have them. The human writing the prompt is at fault. Just like it’s not Adobe’s responsibility if some sick idiot puts bikinis on a child in Photoshop.
If you post pictures of yourself on X and don't want grok to "bikini you", block grok.
Yes, under the TOS, what grok is doing is not the "fault" of grok(the reason is the causal factor of the post[enabled by 2 humans: the poster and the prompter]; the human intent is what initiates the generated post, not the bot; just like a gun is shot by a human, not by the strong winds). You could argue it's the fault of the "prompter", but we're going to circle back to the cat & mouse censorship issue. And no, I don't want a less censored grok version that's unable to "bikini a NAS"(which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is.(Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy generations)
If X would implement what the so-called "moralists" want, it will just turn into Facebook.
And for the "protect the children" folks, it's really disappointing how we're always coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND the user who doesn't want to see grok replies(in case the posts don't get the NSFW tag in time).
Ironically, a decent amount of people who want to censor grok are bluesky users, where "lolicon" and similar dubious degenerate content is being posted non-stop AS HUMAN-MADE content. Or what, just because it's an AI it's suddenly a problem? The fact that you can "strip" someone by tweeting a bot?
And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it actually happens for ANY industry), then it's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it created 1M impressions and drives engagement. You will not convince Musk(or any person who makes such a decision) to stop grok from "stripping you", because the alternative is that other non-grok/xAI/etc. entities/people will make the content, drive the engagement, make the money.
When I generate content on most AI's including Grok, I ask it to fashion a prompt first of the subject I want and ask it to make sure that it does not violate any TOS or CSAM policies. I also instruct it that the prompt should be usable by most AIs. It fashions the prompt. When I use the prompt, the system complains that the prompt violates the TOS. I then ask the AI to locate the troubling aspect of the prompt. It says that it has and provides an alternative, safer prompt. More often than not, this newer prompt is also flagged as inappropriate. This is very frustrating even when the original intent is not to create content that violates any public AI policy. From my experience, both users and the technology make mistakes.
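For what it's worth, the loop I'm describing looks roughly like this (every name here - draft_prompt, generate_image, PolicyViolation, revise_prompt - is a hypothetical placeholder, not any provider's actual API):

    MAX_ATTEMPTS = 3

    def generate_with_retries(subject):
        # Ask the model to write a prompt it believes is TOS-safe for the subject.
        prompt = draft_prompt(subject)
        for _ in range(MAX_ATTEMPTS):
            try:
                return generate_image(prompt)        # provider call; may raise PolicyViolation
            except PolicyViolation as err:
                # Ask the model which part tripped the filter and to suggest a safer rewrite.
                prompt = revise_prompt(prompt, str(err))
        raise RuntimeError("gave up: every revision was flagged")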
Someone spending 40 hours drawing a nude is not equivalent to someone saying take this photo and make them naked and having a naked photo in 4 seconds.
Only one of these is easily preventable with guardrails.
Is Grok simply a tool, or is it itself an agent of the creative process? If I told an art intern to create CSAM, he does, and then I publish it, who's culpable? Me? The intern? Both of us? I don't expect you to answer the question--it's not going to be a simple answer, and it's probably going to involve the courts very soon.
So, if that "software program" had a traditional button UI, a button said "Create CSAM," and the user pushed it, the program's creator is not culpable at all for providing that functionality?
I would agree with this if Grok's interface was "put a pixel there, put a line there, now fill this color there" like Photoshop. But it's not. Generative AI is actively assisting users to perform the specific task described and its programming is participating in that task. It's not just generically placing colors on the screen where the user is pointing.
Come on man. Really? You think this is a good argument?
Why not charge the people who make my glasses cuz they help me see the CP? Why not charge computer monitor manufacturers? Why not charge the mine where they got the raw silicon?
Here you have a product which itself straight up produces child porn with like absolutely zero effort. Very different than some object which merely happens to be used along the way, like photography materials.
Of course it’s not the same thing but still doesn’t make sense to use companies as police. I’m sure it’s much easier than with Nikon but the vast majority of its users aren’t doing it, just go after those who do instead of demanding that the companies do the police work.
If it was a case where CSAM production becomes mainstream use case I would have agreed but it is not.
> instead of demanding that the companies do the police work
How hard is this? What are they doing now, and is it enough? Do we know how hard they are trying?
For argument's sake, what if they had truly zero safeguards around it, and you could type "generate child porn" and it would comply 100% of the time. Surely you'd agree they should prevent that case, and be held accountable if they never took action to prevent it.
Regulation and clear laws around this would help. Surely they could try to get some threshold of difficulty in place that providers are required to meet in preventing this.
I'm not into CP so I don't try to make it generate such content, but I'm very annoyed that all providers are trying to lecture me when I try to generate anything about public figures, for example. Also, these preventive measures are not working well at all; yesterday I had one refusing to generate an infinite loop, claiming it's dangerous.
Just throw away this BS about safety and jail/fine whoever commits crimes with these tools. Make tools tools again and hold people responsible for the stuff they do with these tools.
I'm not saying the companies should necessarily do the police work, though they absolutely should not release CP-generators. What I am saying is the companies should be held responsible for making the CP. Sure the user who types "make me some CP" can be held accountable too, but the creators/operators of the CP-generator should as well.
Taking creepy pictures has real victims; making the machine generate the picture doesn't, but it says something about the character of the person who makes it generate them, so I'm fine with them being punished. Either way, making the machine provider do the policing is ridiculous.
If it's AI-generated, it should be legal - regardless of whether the person consented for their image to be used and regardless of the age of the person.
You can't have AI-generated CSAM, as you're not sexually abusing anyone if it's AI-generated. It's better to have AI-generated CP instead of real CSAM because no child would be physically harmed. No one is lying that the photos are real, either.
And it's not like you can't generate these pics on free local models, anyway. In this case I don't see an issue with Twitter that should involve lawyers, even though Twitter is pure garbage otherwise.
As to whether Twitter should use moderation or not, it's up to them. I wouldn't use a forum where there are irrelevant spam posts.
I don't know, I feel like I'm taking crazy pills with this whole saga. Perhaps I haven't seen the full story.
The fact of the matter is they do have a policy and they have removed it, suspended accounts and perhaps even taken it further. As would be the case on other platforms.
As far as I understand there is no nudity generated by grok.
Should public gpt models be prevented from generating detestable things, yes I can see the case for that.
I won't argue there is a line between acceptable and unacceptable, but please remember people perv over less (Rule 34).
Are bikinis now taboo attire? What next, ankles, elbows, the entire human body? (Just like the Taliban.)
(Edit: I'm mentioning this paragraph for my below point.)
GPTs are not clever enough to make the distinction either, by the way, so there's an unrealistic technical challenge here.
I suspect this saga being blown out of proportion is purely "eLoN BAd".
Generating sexualised pictures of kids is verboten. That's Epstein level of illegality. There is no legitimate need for the public to hold, make or transmit sexualised images of children.
Anyone arguing otherwise has a lot of questions to answer
You're the one making the logical fallacies and reacting emotionally. Read what I have said first please.
That is a different Grok to the one publishing images and discussed in the article. Your link clearly states they are being moderated in the comments, and all comments are discussing adults only. The link's comments also imply that these folks are essentially jailbreaking it, because guardrails exist there too.
As I say read what I said, please don't put words in my mouth. The GPT models wouldn't know what is sexualised. I said there is a line at some point. Non-sexualized bikinis are sold everywhere, do you not use the internet to buy clothes?
Your immediate dismissive reaction indicates you are not giving what I'm saying any thought. This is what puritanical thought often looks like. The discourse is so poisoned people can't stop, look at the facts and think rationally.
I don't think there is much emotion in said post. I am making specific assertions.
to your point:
> Non-sexualized bikinis are sold everywhere
Correct! The key logical modifier is non-sexual. Also you'll note that a lot of clothing companies do not show images of children in swimwear. Partly that's down to what I imagine you would term puritanism, but also legal counsel. The definition of CSAM is loose enough (in some jurisdictions) to cover swimwear, depending on context. That context is challenging. A parent looking for clothes that will fit/suit their child is clearly not a sexualised use (corner cases exist, as I said, context). Someone else who is using it for sexual purposes is.
And because, like GPL3, CSAM is infectious, the tariff for both company and end user is rather high for making, storing, transmitting and downloading those images. If someone is convicted of collecting those images and using them for a sexual purpose, then images that were created that were not CSAM suddenly become CSAM, and legally toxic to possess. (Context does come in here.)
> Your link clearly states they are being moderated in the comments
Which tells us that there is a lot of work on guardrails, right? It's a choice by xAI to allow users to do this. (Mainly the app is hamstrung so that you have to pay for the spicy mode.) Whether it's done by an ML model or not is irrelevant. Knowingly allowing CSAM generation and transmission is illegal. If you or I were to host an ML model that allows users to do the same thing, we would be in jail. There is a reason why other companies are not doing this.
The law must be applied equally, regardless of wealth or power. I think that is my main objection to all of this. It's clearly CSAM, and anyone other than Musk doing this would have been censured by now. All of this justification is because of who is doing this, rather than what is being done. We can bikeshed all we want about whether it is actually really CSAM, which negates the entire point of this, which is that it's clearly breaking the law.
> The GPT models wouldn't know what is sexualised.
ML classification is really rather good now. Instagram's unsupervised categorisation model is really rather effective at working out the context of an image or video (i.e. differentiation of clothes, and the context of those clothes).
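To illustrate the point (this is just an off-the-shelf zero-shot classifier from the transformers library, with arbitrary labels and a placeholder image path - not a claim about what Instagram or anyone else actually runs):

    from transformers import pipeline

    clf = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")

    labels = [
        "a catalogue photo of children's swimwear",
        "a sexualized image of a minor",
        "an adult at the beach",
    ]

    # Score each label against the uploaded image.
    scores = clf("some_upload.jpg", candidate_labels=labels)
    top = max(scores, key=lambda s: s["score"])
    if top["label"] == "a sexualized image of a minor":
        print("escalate to human review")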
> please don't put words in my mouth
I have not done this, I am asserting that the bar for justifying this kind of content, which is clearly illegal and easily prevented (ie a picture of a minor and "generate an image of her in sexy clothes") is very high.
Now you could argue that I'm implying that you have something to hide. I am actually curious as to your motives for justifying the knowing creation of sexualised images of minors. You've made a weak argument that there are legitimate purposes. You then argue that it's a slippery slope.
Is your fear that this justifies an age-gated internet? Censorship? What is the price that you think is worth paying?
Again, words in my mouth. I'm not justifying that and nowhere does it say that. I could be very impolite to you right now, with you trying to slander me like that.
I said I don't understand the fuss because there are guardrails, action taken and technical limitations.
THAT is my motive. The end of story. I do not need to parrot outrage because everyone else is, "you're either with us or against us" bullshit. I'm here for a rational discussion.
Again read what I've said. Technical limitations. You wrote that long ass explanation interspersed with ambiguities like consulting lawyers in borderline cases and then you expect an LLM to handle this.
Yes ML classification is good now but not foolproof. Hence we go back to the first point, processes to deal with this when x's existing guardrails fail, as x.com has done, delete, suspend, report.
My fear (only because you mention it, I didn't mention it above, I only said I don't get the fuss above) it seems should be that people are losing touch in this grok thing, their arguments are no longer grounded in truth or rational thought, almost a rabid witch hunt.
At no point did I say or imply LLMs are meant to make legal decisions.
"Hey grok make a sexy version of [obvious minor]" is not something that is hard to stop. try doing that query with meta, gemini, or sora, they manage it reliably well.
There are not technical impediments to stopping this, its a choice.
My point is saying if it's so complex you have to get a lawyer involved, how do you expect your LLM&system to cover all its own shortcomings.
I'd bet if you put that prompt into Grok it'd be blocked, judging by that Reddit link you sent. These folks are essentially jailbreaking it by asking to modify using neutral terms like clothing and images that Grok doesn't have the skill to judge.
> My point is saying if it's so complex you have to get a lawyer involved, how do you expect your LLM&system to cover all its own shortcomings.
Every feature is lawyered up. That's what general counsel does. Every feature I worked on at a FAANG had some level of legal compliance gate on it, because mistakes are costly.
For the team that launched the chatbots, loads of time went into figuring out what stupid shit users could make it do, and blocking it. It's not like all of that effort stopped. When people started finding new ways to do naughty stuff, that had to be blocked as well. Because otherwise the whole feature had to be pulled to stop advertisers from fleeing, or worse, FCC action/class action.
> These folks are essentially jailbreaking it by asking to modify using neutral terms like clothing
CORRECT! People are putting effort into jailbreaking the app, whereas on X Grok they don't need to do any of that. Which is my point: it's a product choice.
None of these are "hard legal problems", nor are they in fact unpredictable. They are doing / have done a ton of work to stop that (again, mainly because they want people to pay for "spicy mode").
At this point it should be clear that they know that Grok is unsafe to use, and will generate potentially illegal content even without a clear prompt asking it to do so.
This is a dangerous product, the manufacturer _knows_ it is dangerous, and yet still they provide the service for use.
Granting that I think X should have stronger content policies and technological interventions to bad behavior as a matter of business, I do think that the X Safety's team position[0] is the only workable legal standard here. Any sufficiently useful AI product will _inevitably_ be usable, at minimum via subversion of their safety controls, to violate current (or future!) laws, and so I don't see how it's viable to prosecute legal violations at the level of the AI model or tool developers, especially if the platform is itself still moderating the actually illegal content. Obviously X is playing much looser with their safety controls than their competitors, but we're just debating over degrees rather than principles at that point.
[0]
> Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.
A core issue here is that there isn't a black and white on a subject like this. Yes, it is wrong. Yes they have a responsibility. But at the same time taking that to an extreme leads to heavy censorship. So what is a practical middle ground? Is there something like a 'sexualized validation suite' that could be an industry standard for testing if an LLM needs additional training? If there were then victims could potentially claim negligence if they aren't using best practices and they were harmed because of it right? Are there missing social or legal mechanisms to deal with misuse? One thing I think is missing is a '911' for cyber offenses like this. If someone breaks into my house I can call 911, if someone creates revenge porn who do I call? I don't think there is a simple answer here, but constructive suggestions, suggestions that do balance free speech and being a responsible service provider would be helpful. 'They are wrong' doesn't actually lead to change.
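To sketch what I mean by a validation suite (purely illustrative; the prompt file and the generate/is_refusal hooks are placeholders, since no such published standard exists today):

    import json

    def run_suite(generate, is_refusal, suite_path="redteam_prompts.json"):
        # A maintained list of prohibited requests the model must refuse before release.
        with open(suite_path) as f:
            prompts = json.load(f)
        failures = [p["id"] for p in prompts if not is_refusal(generate(p["text"]))]
        pass_rate = 1 - len(failures) / len(prompts)
        return pass_rate, failures  # e.g. gate any release on pass_rate == 1.0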
Looks like this hit a nerve. Any comments on the practical solutions though? The comment wasn't advocating that they should make CSAM or that they shouldn't face repercussions for enabling it, at least I don't think it reads that way. I honestly think that a core issue here is we are missing practical fixes. Things that make it easier for victims to get relief and things that make it clear that a provider is being irresponsible so that they can face civil or criminal penalties. If there aren't solid industry standards then how can you claim they aren't implementing best practices to hold them accountable? If victims don't have effective means of relief then how will we find and stop this? I'd love to hear actual concrete actions that the industry can put in place. 'Just tell them to stop' doesn't create a framework that leads to change.
The reason it hit a nerve is that you're just being extraordinarily credulous about xAI's lies. There are solid industry standards, and we can just tell them to stop; we know this because Grok has a number of competitors which don't generate CSAM. Indeed, they've already implemented the industry standards which prevent CSAM generation; they just added a flag called "spicy mode" to turn them off, because those standards also prevent the generation of pornographic images.
Trust me, I believe nothing positive about xAI. Various players doing similar things and an actual published standard or a standards body are totally different things. The industry is really young. Like a couple years young a this point. There really aren't well developed standards and best practices. Moments like this are opportunities to actually develop them and use them, or at least start the process. Do you have a recognized standard you can point to for this? When it comes to car safety there are lots of recognized standards. Same with medical safety, etc etc. Is there anything like that actually in the LLM world?
I'm not much on X anymore due to the vitriol, and visiting now kinda proved it. Beneath almost every trending post made by a female is someone using grok to sexualize a picture of them.
(And whatever my timeline has become now is why I don't visit more often, wtf, used to only be cycling related)
Edit: just to bring receipts, 3 instances in a few scrolls: https://x.com/i/status/2007949859362672673 https://x.com/i/status/2007945902799941994 https://x.com/i/status/2008134466926150003
I left when they started putting verified (paid) comments at the top of every conversation. Having the worst nazi views front and center on every comment isn't really a great experience.
I've got to imagine that Musk fired literally all of the product people. Pay-for-attention was just such an obviously bad idea, with a very long history of destroying social websites.
It's gotten worse and yet more bland. The top few comments are almost always AI generated engagement farming posts.
The new engagement bait technique is "vague posting" so that people need to click into the comments and ask for specifics. It's terrible.
To be fair, as someone who used to manage an X account for a very small startup as part of my role (glad that's no longer the case), for a long time (probably still the case) posting direct links would penalize your reach. So making a helpful, self-contained post your followers might find useful was algorithmically discouraged.
Everything that is awful in the diff between X and Twitter is there entirely by decision and design.
Vagueposting is a different beast. There’s almost never any intention of informing etc; it’s just: QT a trending semi-controversial topic, tack on something like “imagine not knowing the real reason behind this”, and the replies are jammed full of competitive theories as to what the OP was implying.
It’s fundamentally just another way of boosting account engagement metrics by encouraging repliers to signal that they are smart and clued-in. But it seems to work exceptionally well because it’s inescapable at the moment.
Vague posting is as old as social networks. I had loads of fun back in the day responding to all the "you know who you are" posts on facebook, when it's clearly not aimed at me.
Apple News does that a lot.
They also don’t take down overt Nazi content anymore. Accounts with all the standard unambiguous Nazi symbologies and hate content about their typical targets with associated slurs. With imagery of Hitler and praises of his policies. And calls for exterminating their perceived enemies and dehumanizing them as subhuman vermin. I’ve tried reporting many accounts and posts. It’s all protected now and boosted via payment.
Well the owner supports neo-nazi political parties, so that tracks.
and of course he gave a very public Nazi salute
Inviting a debate about what it was or wasn't only leads to a complete distractions over interpretation of a gesture when the dude already digs his own hole more deeply and more clearly in his feed anyways.
There's no debate. It was the most obvious nazi salute you could do. The only people who says it's not are nazis themselves, who of course delight in lying. (see Sartre quote.)
My comment was in response to the debate already starting so it's quite bold to claim no debate will be had (i.e. "debate" does not mean "something I personally am on the fence about", it's something other people will hold in response to your views). Whether there will or won't be debate about something is (thankfully) not something you or I get to declare. It just happens or doesn't, and it had already - and so it remains.
I'm sure "The only people who say it's not are <x>" is an abominable thought pattern Nazis and similar types would love everyone to have. It makes for a great excuse to never weigh things on their merits, so I'm not sure why you feel the need to invoke it when the merits are already in your court. I can't look at these numbers https://i.imgur.com/hwm2bI5.png and conclude most Americans are Nazi's instead of being willing to accept perhaps not everyone sees it the same way I do even if they don't like Nazis either.
To any actual Nazi supporters out there: To hell with you
To anybody who thinks either everyone agrees with what they see 100% of the time or they are a literal Nazi: To hell with you as well
The majority of people who had an opinion (32%) said it was either a Roman salute or a Nazi salute (which are the same thing). Lots of people had no idea (probably cuz they didn't pay attention). Only 19% said it was a "gesture from the heart", which is just parroting what Elon claimed, and I discount those folks as they are almost certainly crypto-Nazis.
So yeah, I believe there are a LOT of Nazi-adjacent folks in this country: they're the ones who voted for Trump 3 times even after they knew he was a fascist piece of garbage.
A few minor cleanups - I personally don't think they change anything (really, it's these stats themselves that lack the ability to do that anyways) but want to note because this is the exact kind of Pandora's box opened with focusing on this specific incident:
- Even assuming all who weren't sure (13%) should be discounted as not having an opinion, like those who had not heard about it (22%), 32% is still not a majority of the remaining (100% - 13% - 22%) = 65%. 32% could have been a plurality of those with an opinion, but since you insisted on lumping things into three buckets of 32%, 35%, and the remainder, the remaining 33% would actually take the plurality of those who responded with opinions by this definition.
N.b. if read straight from the sheet, "A Nazi salute" would already have had a plurality. Though grouping like this is probably the more correct thing to do, it actually ends up significantly weakening the overall position of "more people agree than not" rather than strengthening it.
- But, thankfully, "A Nazi Salute" + "A Roman Salute" would actually have been 32+2=34%, so plurality is at least restored by more than one whole percentage point (if you excluded the unsure or unknowing)!
- However, a "Roman salute" (which is a bit of a farce of a name really) can't really be assumed to be fungible with the first option in this poll. If it were fully fungible, it could have been combined into that option. I.e. there's no way to tell which adults responding "A Roman salute" meant to be counted as "a general fascist salute, as the Nazis later adopted" or meant to be counted as "a non-fascist meaning of the salute, like the Bellamy salute was before WWII". So whichever wins this game of eeking out percentage points comes down to how each person wants to group these 2 percentage points. Shucks!
- In reality, between error margins and bogus responses, this is about as close as one could expect to get for an equal 3 way split between "it was", "it wasn't", and "dunno/don't care", and pulling ahead a percentage point or two is really quite irrelevant beyond that it is, blatantly, not actually a majority that agree it was a Nazi-style salute.
Even though I'm one who agrees with you that Elon exhibits neo-nazi tendencies, the above just shows how we go from "Elon replies directly supporting someone in a thread about Hitler being right about the Jewish community" and similar things constantly for years, to debating individual percentage points to try to claim our favorite sub-majority says he likely made a one-off hand gesture 3 years ago. Now imagine I was actually a Nazi supporter walking into the thread - suddenly we've gone from talking about direct pro-Nazi statements and retweets constantly in his feed to a chance for me to debate with you whether the majority think he made a one-off hand gesture 3 years ago? Anyone concerned with Musk's behavior shouldn't touch this topic with a 20-foot pole, so they can get straight to the real stuff.
Also... I've run across a fair share of crypto lovers who turn out to be neo-nazish, but I'm not sure how you're piecing together that such a large portion of the population is a "crypto-Nazi" when something like only 28% of the population has crypto at all, let alone is a Nazi too. At least we're past "anyone who disagrees with my interpretations can only be doing so as a Nazi" though.
crypto-nazi doesn't mean crypto-holder
Ah, you're almost certainly correct here! Akin to crypto-fascist, perhaps I'd seen too many articles talking about the negatives of crypto to see the obvious there.
Thanks for the note!
There’s no debate.
I imagine I'm not the only one using HN less because both articles like this and comments like this are clearly being downvoted and/or flagged by a subset of users motivated by politics and the HN admin team seemingly doesn't consider that much of a problem. This story is incredibly relevant to a tech audience and this comment is objectively true and yet both are met with downvotes/flags.
Whether HN wants to endorse a political ideology or not, their approach to handling these issues is a material support of the ideologies these stories and comments are criticizing.
I think lots of tech industry nerds feel that they are superior beings who are above politics.
Kinda like the scientists building the atomic bomb.
They'll be in for a rude awakening.
Yeah, this was my first reaction: this article is about tech regulation that is relevant and on topic. If Grok causes extra legislation to be passed because of its lack of content decency in the pursuit of money, that is relevant. This is the entire argument around "we can't have accountability for tools, just people", which is ridiculous. The result of pretending that this type of thing doesn't happen is legislative responses.
PG and Garry Tan have both been disturbingly effusive in praising Musk and his various fuckeries.
Like, the entirety of DOGE was such an obviously terrible series of events, but for whatever reason, the above were both big cheerleaders on Twitter.
And yeah the moderation team here have been clearly letting everything Musk-related be flagged even after pushback. It's absolutely vile. I've seen many people try to make posts about the false flagging issue here, only to have those posts flagged as well (unapologetically, on purpose, by the mods themselves).
Anecdotally, I think that moderation has been a lot more lenient when it comes to political content in the last year than in years prior. I have no hard evidence that this is actually the case, but I think especially pre-2020 I'd see very little political content on HN, and now I see much more. It's also probably true that both liberals and conservatives have become even more polarized, leading to bad-faith flagging and downvoting, but I'm actually not sure what could be done about that; it seems similar to anti-botting protections, which are an arms race.
I'm late to this, but I'm doubtful that that perception is correct. It's true there are fluctuations, as with anything on HN, but the baseline is pretty stable. But the perception that HN has gotten-more-political-lately is about as old as the site itself. In fact, it's so common that about 8 years ago I took a couple hours to track down the history of it: https://news.ycombinator.com/item?id=17014869.
Any thoughts about the issues raised up thread? This article being flagged looks to me to be a clear indication of abuse of the HN flagging system. Or do you think there are justifiable reasons why this article shouldn't be linked on HN?
My thoughts are just the usual ones about this: flags of stories like this on HN are a kind of coalition between some flaggers who are agenda-motivated (which is an abuse of flagging) and other flaggers who simply don't want to see repetitive and/or flamebaity material on the site (which is a correct use of flagging, and is not agenda driven because this sort of material comes at us from all angles). When we see flaggers who are consistently doing the first kind of flagging, we take away their flagging privileges.
The wild thing is that this article isn't even a political issue!
"Major Silicon Valley Company's Product Creates and Publishes Child Porn" has nothing to do with politics. It's not "political content." It is relevant tech news when someone investigates and points out wrongdoing that tech companies are up to. If another tech company's product was doing this, it would be all over HN and there would be pretty much no flagging.
When these stories get flagged, it's because people don't want bad news to get out about the company--it's not about avoiding politics out of principle.
Free speech on the internet is the nerdiest of political issues, and this definitely plays to it.
I'm not saying you're wrong about it being brigaded by PR bots, I'm saying it's still political. Hell, everything's political.
I don't think this is really about free speech - the problem is the artificial speech being out of control.
I've been using https://news.ycombinator.com/active a lot more the last year, because so many important discussions (related to tech, but touching politics or prominent figures like Musk) get pushed off the front page quickly. I don't think it's moderators doing it, but mass-flagging by users (or perhaps some automagic penalty if the discussion gets too intense, e.g. number of comments or downvotes). Of course, it might be the will of the community to flag these, but it does feel a bit abused in the way certain topics get killed quickly.
I just found out about this recently and like this page a lot. Dang has a hard job to balance this. I think newcomers might be more comfortable with the frontpage and if you end up learning about the other pages you can find more controversial discussions. Can't be mad about the moderation hiding these by default. Although I think CSAM-Bad should not be controversial.
I have /active bookmarked and treat it as the real HN frontpage, maybe it should be at least linked at the top with the other views.
I think that would be bad for HN overall, it's really angry relative to the front page.
I also recommend enabling showdead. You will see a lot of vile commentary, but you will also see threads like this when they inevitably get flagged.
You can also email hn@ycombinator.com to ask "why was this thing removed?", and they answer the first few, then killfile your sending address.
Even a year ago, when Trump was posting claims that he was a king, etc., these things got removed, even though there were obvious implications for the tech industry. (Cybersecurity alone rests on more political assumptions than it does on the hardness of the discrete logarithm, for example.)
I (and others) were arguing that the Trump administration is probably, and unfortunately, the most relevant topic to the tech industry on most any given day. This is because computer is mostly made out of people. The message that these political stories intersect deeply with technology (as is seen here) seems to have successfully gotten through.
I wish the most relevant tech story of every day were, say, some cool new operating system, or something cool and curiosity-inspiring like "you can sort in linear time" or "python is an operating system" or "i made X rewritten in Y" or whatever.
I think in most things, creation is much harder than destruction, but software and software systems are an exception where one individual can generally do more creation than destruction. So, it's particularly interesting (and jarring) when a few individuals are able to make decisions that cause widespread destruction.
We should collectively be proud that we have a culture where creation is easier than destruction. But it's also why the top stories of any given day will be "Trump did X" or "us-east-1 / cloudflare / crowdstrike is down" or "software widely used in {phones / servers} has a big scary backdoor".
This story belongs on this site regardless of politics. It is specifically about both AI and social media. Downvoting/flagging this story is much more politically motivated than posting/upvoting it.
I agree with that. But one, it is on the site, and two, how can the moderation team reasonably stop bad actors from downvoting it? They can (and probably do) unflag things that have merit or put it in the 2nd chance queue.
> But one, it is on the site, and two, how can the moderation team reasonably stop bad actors from downvoting it?
In 2020, Dang said [1]
> Voting ring detection has been one of HN's priorities for over 12 years: [...]
> I've personally spent hundreds of hours working on this, as well as tracking down voting rings of every imaginable sort. I'd never claim that our software catches everything, but I can tell you that it catches so much that I often go through the lists to find examples of good projects that people were trying ineptly to promote, and invite them to do it again in a way that is more likely to gain community interest.
Of course this sort of thing is inherently heuristic; presumably bots throw up a smokescreen of benign activity, and sophisticated bots could present a very realistic, human-like smokescreen.
[1] https://news.ycombinator.com/item?id=22761897
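To make the "inherently heuristic" part concrete, here's a rough sketch of one signal such a system might look at: pairs of accounts whose voting histories overlap far more than chance. Everything here (the data shapes, the threshold, the Jaccard measure) is my own guess for illustration, not anything HN has disclosed.

```python
from itertools import combinations

def suspicious_pairs(votes_by_user, min_votes=20, overlap_threshold=0.6):
    """Flag account pairs whose upvote histories overlap suspiciously.

    votes_by_user: dict mapping user id -> set of item ids that user upvoted.
    Returns (user_a, user_b, jaccard) tuples at or above the threshold.
    A real detector would presumably also weight timing, IP/session data,
    and account age, which is exactly why it stays heuristic.
    """
    active = {u: items for u, items in votes_by_user.items() if len(items) >= min_votes}
    flagged = []
    for a, b in combinations(active, 2):
        inter = len(active[a] & active[b])
        union = len(active[a] | active[b])
        jaccard = inter / union if union else 0.0
        if jaccard >= overlap_threshold:
            flagged.append((a, b, round(jaccard, 2)))
    return flagged
```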
> how can the moderation team reasonably stop bad actors from downvoting it
There are all sorts of approaches that a moderation team could take if they actually believed this was a problem. For example, identify the users who regularly downvote/flag stories like this that end up being cleared by the moderation team for unflagging or the 2nd chance queue and devalue their downvotes/flags in the future.
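As a sketch of what "devalue their downvotes/flags" could look like mechanically, assuming each account carries a flag weight that moderator reviews adjust over time (the function, constants, and field names are all hypothetical, not anything HN is known to do):

```python
def update_flag_weight(weight, flag_upheld, gain=0.05, penalty=0.25,
                       floor=0.0, ceiling=1.0):
    """Adjust a user's flag weight based on moderator review.

    weight: current weight in [floor, ceiling]; a story's effective flag
    count would be the sum of the weights of the users who flagged it.
    flag_upheld: True if mods agreed with the flag, False if they unflagged
    the story or put it in the second-chance queue.
    """
    if flag_upheld:
        weight += gain
    else:
        weight -= penalty
    return max(floor, min(ceiling, weight))

# Example: three overturned flags in a row take an account from full
# influence down to a quarter of a flag.
w = 1.0
for upheld in (False, False, False):
    w = update_flag_weight(w, upheld)
print(w)  # 0.25
```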
Accounts are free to make, so bad actors will just create and "season/age" accounts until they have the ability to flag, then rinse and repeat.
I think the biggest thing HN could do to stop this problem is to not make flagging affect an article's ranking until after a human mod reviews the flags and determines them to be appropriate. Right now, all bad actors apparently have to do is be quick on the draw, and get their flagging ring in action ASAP. I'm sure any company's PR team (or motivated Elon worshiper) can buy "100 HN flags on an article" on the dark web right now if they wanted to.
Why would a company like any one of Musk's need to buy these flags? Why wouldn't they just push a button and have their own bots get to work? Plausible deniability?
Who knows whether or not both happen? Ultimately, only the HN admins, and they don't disclose data, so we can only speculate and look for publicly visible patterns.
You can judge their trustworthiness by evaluating their employer's president/CEO, who dictates behavioral requirements regardless of the personal character of each employee
That already happens. I got my flagging powers removed after over-using flag in the past. (I eventually wrote an email to the mods pledging to behave more judiciously and asked for the power back). As a user you won't see any change in the UI when this happens; the flags just stop having any effect on the back end.
There is one subtle clue. If your account has flagging enabled, then whenever you flag something there is a chance that your flag pushes it over the threshold into the flagged state. If your account has flagging disabled, this never happens. This is what prompted me to ask dang if I'd been shadowbanned from flagging.
I would bet money that already happens, for flagging in particular, since it's right in line with the moderation queue. For downvotes, it sounds like significant infra would be needed for a product that generates no revenue. Agree that I would like the problem to be solved as well, however!
>it sounds like significant infra would be needed for a product that generates no revenue
This just describes HN as a whole, so if this is the concern, might as well shut the site down.
Agreed. I've been treated with a much more lenient hand here over the last 12 months. Possibly through obscurity.
Now that you mention it - I've noticed the same on Youtube ... I used to get suspended every 5 minutes on there.
You also can't downvote all comments, only a subset. HN is shit now.
I think there's brigading coming in to ruin these threads. I had several positive votes for a few minutes after stating a simple fact about Elon Musk and his support of neo-nazi political parties, then was at -2 a minute later.
edit: back to 14, kinda crazy
I have downvoted anything remotely political on hn ever since I got my downvote button, even (especially) if I agree with it. I always appreciated that being anti-political was the general vibe here.
How do you define “political” and what about the story of an AI posting CSAM do you think qualifies?
The part where you brought up politics is when I noticed it was political.
But I generally consider something political if it involves politicians, or anyone being upset about anything someone else is doing, or any topic that they could mention on normal news. I prefer hn to be full of positive things that normal people don't understand or care about.
What's political here? The mere fact of the involvement of Dear Leader?
(As a long-term Musk-sceptic, I can confirm that Musk-critical content tended to get insta-flagged even years before he was explicitly involved in politics.)
The comment I replied to brought up politics.
There's almost no such thing as a non-political thing. Maybe the sky colour, except that other cultures (especially in the past) have different green/blue boundaries and some may say it's green. Maybe the natural numbers (but whether they start from 0 or 1 is political) or the primes (but whether 1 is prime is political).
how insightful
This isn’t just on X. On Instagram and TikTok virtually every Jewish post that’s public gets antisemitic comments and Hitler memes.
They are in here too. But thanks to moderation they are usually more subtle and use dog whistles or proxy targets.
You joined 7 days ago and literally your entire timeline is about Israel and Jews. You’ve contributed nothing else.
Seems like bot behavior.
Please respond to the argument instead of making personal attacks.
What argument? Your unsubstantiated opinion?
Happy to back this up with links - if you want to deny it.
There’s one in this thread. A sibling to my comment.
> I’ve tried reporting many accounts and posts.
I mean, honestly, you are wasting your time. Why would you expect the website run by the guy who likes giving Nazi salutes on TV to take down Nazi content?
There's no point trying to engage with Twitter in good faith at this point; only real option is to stop using and move on (or hang out in the Nazi bar, I guess).
Would be trivial to provide links to this content.
Personally I've never seen anything like this.
Search for user h0wlingmutant
Huh? Nothing appears.
Once again - links are trivial to share.
Otherwise this is hearsay.
They meant howlingmutant0 but I don't know which posts they refer to
The ones I reported, I deleted the report emails so I can't help you at this moment. I don't know why you're surprised - you can go looking yourself and find examples
Yeah, I went thru his media. There was some backwards swastika that someone drew on a synagogue. People were mocking the fact that idiots can't even draw that correctly.
Are you seriously asking for links to CSAM?
Context was nazi content. Apparently X is swimming in it. Yet nobody can provide evidence.
I quickly searched and here are a few that showed up immediately
https://x.com/UpwardChanging posts Hitler content, 14 words, black sun graphics, swastikas, antisemitic content etc. 21k followers
https://x.com/hvitrulfur supportively reposts swastika content, white supremacism, anti-black racism, islamophobia, 14 words
https://x.com/unconquered_sol black sun, swastikas, fasces, hitler glorification. 70k followers
1. Can you point to exact posts? I saw one swastika somewhere deep in media. It's a description of what swastika is - no different from wikipedia article.
2. Seems like a case of https://en.wikipedia.org/wiki/White_guilt which spills into racism/white-supremacy.
3. This is literally art. Not my taste of course.
OP's claim was X is swimming in hate speech.
p.s. communist symbols are banned in a lot of the world too (https://en.wikipedia.org/wiki/Bans_on_communist_symbols), yet this is ok for bluesky:
* https://bsky.app/profile/mikkel314.bsky.social/post/3mbe62hg...
* https://bsky.app/profile/gwynnstellar.bsky.social/post/3mb5p...
* https://bsky.app/profile/negatron00.bsky.social/post/3mbfnnh...
* https://bsky.app/profile/kyulen742.bsky.social/post/3mb4nkeg...
* https://bsky.app/profile/mommyanddaddyslittlebimbo.com/post/...
I normally stay away too, but just decided to scroll through grok’s replies to see how wide spread it really is. It looks like it is a pretty big problem, and not just for women. Though, I must say that Xi Jinping in a bikini made me laugh.
I’m not sure if this is much worse than the textual hate and harassment being thrown around willy nilly over there. That negativity is really why I never got into it, even when it was twitter I thought it was gross.
Before Elon bought it out it was mostly possible to contain the hate with a carefully curated feed. Afterward the first reply on any post is some blue check Nazi and/or bot. Elon amplifying the racism by reposting white supremacist content, no matter how fabricated/false/misleading, is quite a signal to send to the rest of the userbase.
he's rigged the algorithm to boost content he interacts with, unbanned and stopped moderating nazi content and then boosted those accounts by interacting with them.
> Though, I must say that Xi Jinping in a bikini made me laugh.
I haven't seen Xi, but I am unfortunate enough to know that such an animated depiction of Maduro also exists.
These people are clearly doing it largely for shock value.
So it's basically 4chan now. Whatever gets reactions, anything goes.
> Beneath almost every trending post made by a female is someone using grok to sexualize a picture of them.
It's become a bit of a meme to do this right now on X.
FWIW (very little), it's also on a lot of male posts, as well. None of that excuses this behavior.
X wrote to me offering to pay something for my OG username, because fElon wanted it for one of his Grok characters. I told them to make an offer, only for them to invoke their Terms of Service and steal it instead.
Fuck X.
Hmm, I have an old Twitter account. Elon promised that he was going to make it the best site ever, so let's see what the algorithm feeds me today, January 5 2026.
1. Denmark taxes its rich people and has a high standard of living.
2. Scammy looking ad for investments in a blood screening company.
3. Guy clearing ice from a drainpipe, old video but fun to watch.
4. Oil is not actually a fossil fuel, it is "a gift from the Earth"
5. Elon himself reposting a racist fabrication about black people in Minnesota.
6. Climate change is a liberal lie to destroy western civilization. CO2 is plant food, liberals are trying to starve the world by killing off the plants.
7. Something about an old lighthouse surviving for a long time.
8. Vaccine conspiracy theories
9. Outright racism against Africans, claiming they are too dumb to sustain civilized society without white men running it.
10. One of those bullshit AI videos where the AI doesn't understand how pouring resin works.
11. Microsoft released an AI that is going to change everything, for real this time, we promise.
12. Climate change denialism
13. A post claiming that Africa and South America aren't poor because they were robbed of resources during the colonial era and beyond, but because they are too dumb to run their countries.
14. A guy showing how you can pack fragile items using expanding foam and plastic bags. He makes it look effortless, but glosses over how he measures out the amount of foam to use.
15. Hornypost asking Grok to undress a young Asian lady standing in front of a tree.
16. Post claiming that the COVID-19 vaccine caused a massive spike (from 5 million to 150 million) in cases of myocarditis.
17. A sad post from a guy depressed that a survey of college girls said that a large majority of them find MAGA support to be a turn off.
18. Some film clip with Morgan Freeman standing on an X and getting sniped from an improbable distance
19. AI bullshit clip about people walking into bottomless pits
20. A video clip of a woman being confused as to why financial aid forms now require you to list your ethnicity when you click on "white", with the only suboptions being German, Irish, English, Italian, Polish, and French.
Special bonus post: Peter St Onge, PhD, claims "The Tenth Amendment says the federal government can only do things expressly listed in the Constitution -- every other federal activity is illegal." Are you wondering what federal activity he is angry about? Financial support for daycare.
So yeah, while it wasn't a total and complete loss, it is obvious that the noise far exceeds the signal. It is maybe a bit of a shock just how much blatant climate change denialism, racism, and vaccine conspiracy content is front-page material. I'm saddened that there are people who are reading this every day and taking it to heart. The level of outright racism is quite shocking too. On Twitter it's not even up for debate that black people are just plain inferior to the glorious aryan race. This is supposedly the #1 news source on the Internet? Ouch.
Edit: Got the year wrong at the top of the post, fixed.
Makes me laugh when people say Twitter is "better than ever." Not sure they understand how revealing that statement is about them, and how the internet always remembers.
Well those people now outnumber us and have all the positions of power where they get to define what truth is. Not sure what to do about it.
They don't outnumber anyone. There's always a minority of hardcore supporters for any side... plus enough undecided people in the middle who mostly vote their pocketbook.
What to do about it is to point out to those people in the middle how badly things are being fucked up, preferably with how those mistakes link back to their pocketbook.
Jesus Christ. I think I have to go look at some cute animal pictures after reading that, and it's not even the real thing.
The best use of generative AI is as an excuse for everyone to stop posting pictures of themselves (or of their children, or of anyone else) online. If you don't overshare (and don't get overshared), you can't get Grok'd.
You shouldn't have worn that short skirt!
Sounds a lot like a "they're asking for it"-type argument.
There's a difference between merely existing in public, versus vying for attention in a venue where several brands of "aim this at a patron to see them in a bikini" machines are installed.
And so installing the "aim this at a patron to see them in a bikini" machines made the community vastly more hostile to women. To the point where people say "well what did you expect" when a woman uses the product. Maybe they shouldn't have been installed?
They weren't placed there by God.
>There's a difference between merely existing in public, versus vying for attention
And you thought that was a different argument than "you shouldn't have worn that skirt if you didn't want to get raped"?
>versus vying for attention in a venue where several brands of "aim this at a patron to see them in a bikini" machines are installed.
The CSAM machine is only a recent addition.
"I don't know why you went to the gun store if you didn't want to get shot"
The number of people saying that it is not worthy of intervention that every single woman who posts on twitter has to worry about somebody saying "hey grok, take her clothes off" and then be made into a public sex object is maybe the most acute example of rape culture that I've seen in decades.
This thread is genuinely enraging. The people making false appeals to higher principles (eg section 230) in order to absolve X of any guilt are completely insane if you take the situation at face value. Here we have a new tool that allows you to make porn of users, including minors, in an instant. None of the other new AI platforms seem to be having this problem. And yet, there are still people here making excuses.
I am not a lawyer but my understanding of section 230 was that platforms are not responsible for the content their users post (with limitations like “you can’t just host CSAM”). But as far as I understand, if the platform provides tools to create a certain type of harmful content, section 230 doesn’t protect it. Like there’s a difference between someone downloading a photo off the internet and then using tools like photoshop to make lewd content before reuploading it, as compared to the platform just offering a button to do all of that without friction.
Who cares about the law in this case though? Don't we have other barometers for moral decisions?
> But as far as I understand, if the platform provides tools to create a certain type of harmful content, section 230 doesn’t protect it.
That's interesting - do you have a link for this? I'd be curious to know more of the section's details.
Again I’m not a lawyer and this is my interpretation of the #3 requirement of section 230:
“The information must be "provided by another information content provider", i.e., the defendant must not be the "information content provider" of the harmful information at issue”
If grok is generating these images, I am interpreting this as Twitter could be becoming an information content provider. I couldn’t find any relevant rulings but I doubt any exist since services like Grok are relatively new.
1) These images are being posted by @Grok, which is an official X account, not a user account.
2) X still has an ethical and probably legal obligation to remove these images from their platform, even if they are somehow found not to be responsible for generating them, even though they generated them.
For #2, you are correct. Section 230 isn’t blatant immunity, you need to still follow all the other relevant laws including FOSTA-SESTA, DMCA etc.
At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything. Lots of people are asking if it might, and if that's what X is relying on here.
I brought up Section 230 because it used to be that removal of Section 230 was an active discussion in the US, particularly for Twitter, pre-Elon, but seems to have fallen away.
With content generated by the platform, it certainly seems reasonable to understand how Section 230 applies, if it all, and I in particular think that Section 230 protections should probably be removed for X in particular.
> At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything.
You are correct; I read your earlier post as "did we forget our already established principle"? I admit I'm a bit tilted by X doing this. In my defense, there are people making the "blame the user, not the tool" argument here though, which is the core idea of section 230
> None of the other new AI platforms seem to be having this problem
The very first AI code generators had this issue, where users could make illegal content by making specific requests. A lot of people, me included, saw this as a problem, and there were a few copyright lawsuits arguing this. The courts, however, did not seem to be very sympathetic to this argument, putting the blame on the user rather than the platform.
Here is hoping that Grok forces regulations to decide on this subject once and for all.
Elon Musk mentioned multiple times that he doesn't want to censor. If someone does or says something illegal on his platform, it has to be solved by law enforcement, not by someone on his platform. When asked to "moderate" it, he calls that censorship. Literally everything he does and says is about Freedom - no regulations, or as little as possible, and no moderation.
I believe he thinks the same applies to Grok or whatever is done on the platform. The fact that "@grok do xyz" makes it instantaneous doesn't mean you should do it.
I think his target demographic benefits from the degradation of women; a feature, not a defect; ticket closed; won’t fix.
Anyways, super cool that anyone speaking out already has their SSN in his DB.
> Literally everything he does and says is about Freedom - no regulations, or as little as possible, and no moderation.
Weird. Why do people get in trouble for using the word "cis" on twitter?
I think it is completely fine for a tech platform to proactively censor AI porn. It is ok to stop men from generating porn of random women and kids. We don't need to get the police involved. I do not think non-consensual porn or CSAM should be protected by free speech. This is an obvious, no-brainer decision.
Second line of the article.
> X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
Which is moderating/censoring.
The tool (Grok) will not be updated to limit it - that's all. Why? I have no idea, but it seems lately that all these AI tools have more freedom than us humans.
An AI porn tool makes the world less free, even with post hoc purges. That is the point.
If you want to be an actress and you are 14 years old, you now have to worry about tools that make porn of you.
If you are an ordinary woman that wants to share photos with your friends on instagram, you now have to worry about people making porn of you!
It's the "freedom from/freedom to" thing that always comes up when talking about "freedom"
But then he also gets mad if law enforcement acts on people doing illegal things on his platform...
X terms of service specifically censor things Elon doesn’t want.
It’s against the TOS to post a picture of your own boobs for example.
Why the downvotes?
The one above is not my opinion (although I partially agree with it, and now you can downvote this one :D ). To be honest, I don't care at all about X nor about an almost trillionaire.
It was full of bots before, now it's full of "AI agents". It's quite hard sometimes to navigate through that ocean of spam, fake news, etc.
Grok makes it easier, but it's still ugly and annoying to read 90-95% always the same posts.
This weekend made me explicitly decide that my kids' photos will never be allowed on the internet, especially social media. It was just absolutely disgusting.
eh, it's the social media tax. don't like it? don't use social media. problem solved.
Maybe I've got a case of the 'tism, but I really don't see an issue with it. Can someone explain?
It's a fictional creation. Nobody is "taking her clothes off", a bot is fabricating a naked woman and tacking her likeness (ie. face) on to it. If anything, I could see how this could benefit women as they can now start to reasonably claim that any actual leaked nudes are instead worthless AI slop.
I don't think I would care if someone did this to me. Put "me" in the most depraved crap you can think of, I don't care. It's not me. I suspect most men feel similarly.
What's the big deal?
A man's sexual value is rarely impacted much by a nude image of themselves being available.
A woman being damaged by nudes is basically a white knight, misogynist viewpoint that proclaims a woman's value is in her chastity / modesty so by posting a manufactured nude of her you have thereby degraded her value and owe her damages.
Yes, that's the conclusion I came to as well. The best analogue I can think of is taxation, where men - whose sexual values are impacted far more by control of resources - are typically far more aggrieved by the practice than women (who typically see it as a wonderful practice that ought to be expanded).
It feels odd for them to be advertising this belief though. These are surely a lot of the same people trying to devalue virginity, glorifying public sex positivity, condemning "slut shaming", etc.
The same argument could be made of photoshopping someone's face on to a nude body. But for the most part, nobody cares (the only time I recall it happening was when it happened to David Brent in The Office).
"For a Linux user, you can already build such a system yourself quite trivially ..."
Convincingly photoshopping someones face onto a nude body takes time, skills, effort, and access to resources.
Grok lowers the barrier to be less effort than it took for either you or I to write our comments.
It is now a social phenomenon where almost every public image of a woman or girl on the site is modified in this manner. Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
And there is safety in numbers. If one person photoshops a highschool classmate nude, they might find themself on a registry. For lack of knowing the magnitude, if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
> Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
Mate, thats the point. I as a normal human being, who had never been on 4chan or the darker corners of reddit would have never seen or be able to make frankenporn. much less so make _convincing_ frankenporn.
> For lack of knowing the magnitude
Fuck that shit, if they didn't know the magnitude they wouldn't have spent ages making the photoshop to do it. You don't spend ages doing revenge "because you didn't know the magnitude". You spend ages doing it because you want revenge.
> if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
I mean we put people in prison for drink driving, lots of people do that in the states, same with drug dealing. Same with harassment, thats why restraining orders exist.
But you are missing the point: making and distributing CSAM is a criminal offence. Knowingly storing and transmitting it is an offence. Musk could stop it all now by re-training Grok, or putting in some basic controls.
If any other person was doing this they would have been threatened with company ending action by now.
This is a heated topic and I share your anger. But you have completely misunderstood me.
We mostly agree, so let me clarify.
Grok is being used to make very much revenge porn, including CSAM revenge porn, and people _are using X because it's the CSAM app_. I think this is all bad. We agree here.
"For lack of knowing the magnitude" is me stating that I do not know the number of people using X to generate CSAM. I don't know if it is a thousand, a million, a hundred million, etc. So, I used the word "myriad" instead of "thousands", "millions", etc.
I am arguing that this is worse because the scale is so much more. I am arguing against the argument equivocating this with photoshop.
> If any other person was doing this they would have been threatened with company ending action by now.
Yes, I agree. X is still available on both app stores. This means CSAM is just being made more and more normal. I think this is very bad.
I understand your point, thank you for your clarification.
Friend, you are putting too much effort into debating a topic that is implicitly banned on this website. This post has already been hidden from the front page. Hacker News is openly hostile to anything that even mildly paints a handful of billionaires in a poor light. But let's continue to deify Dang as the country descends openly into madness.
It's still on my front page at position 22.
I also see it back now too, despite it being removed earlier. Do you have faith in the HN algo? Position 22 despite having more votes and comments and being more recent than all of the posts above it?
It is now [flagged] again, despite hundreds of comments and upvotes.
This site.
I have no opinions on the HN algo as I have not spent any time considering it.
> I have not spent any time considering it.
Hold on to that spirit and I think you'll genuinely do well in the world that's coming next.
Making fake revenge porn is an EXTREMELY common way to harass your ex.
Maybe "common" within the subset of people who viciously harass their exes?
Yes, that's the implication. If you're going to harass your ex, sharing their real or fake nudes is a very common thing for people to do.
Frankenstein porn was not exactly uncommon, but this lowers the barrier to entry even more and we really don't need it.
> But for the most part, nobody cares
IMO, the fact that you would say this is further evidence of rape culture infecting the world. I assure you that people do care about this.
And friction and quality matters. When you make it easier to generate this content and make the content more convincing, the number of people who do this will go up by orders of magnitude. And when social media platforms make it trivial to share this content you've got a sea change in this kind of harassment.
How is "It's acceptable because people perform a lesser form of the same behavior" an argument at all? Taken to its logical extreme, you could argue that you shouldn't be prevented from punching children in the face because there are adults in the world who get punched in the face. Obviously, this is an insane take, but it applies the same logic you've outlined here.
If you ban Grok, people will generate using unlocked open Chinese models
Also, this always existed in one form or another. Draw, photoshop, imagine, discuss imaginary intercourse with popular person online or irl
It's not worthy of intervention because it will happen anyway and it doesn't fundamentally change much
Assholes all the way down.
"“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
How about not enabling generating such content, at all?
I use AI, but I don't get such content.
I understand everyone pouncing when X won't own Grok's output, but output is directly connected to its input and blame can be proportionally shared.
Isn't this a problem for any public tool? Adversarial use is possible on any platform, and consistent law is far behind tech in this space today.
Given X can quite simply control what Grok can and can't output, wouldn't you consider it a duty upon X to build those guardrails in for a situation like CSAM? I don't think there's any grey area here to argue against it.
I am, in general, pretty anti-Elon, so I don't want to be seen as taking _his_ side here, and I am definitely anti-CSAM, so let's shift slightly to derivative IP generation.
Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
It feels somewhat more clearcut when you say to AI, "Draw me an image of Mickey Mouse", but why is that different than photocopying a picture of Mickey Mouse, and using Photoshop to draw a picture of Mickey Mouse? Photo copiers will block copying a dollar bill in many cases - should they also block photos of Mickey Mouse? Should they have received firmware updates whenever Steamboat Willy fell into public domain, such that they can now be allowed to photocopy that specific instance of Mickey Mouse, but none other?
This is a slippery slope, the idea that a person using the tool should hold the tool responsible for creating "bad" things, rather than the person themselves being held responsible.
Maybe CSAM is so heinous as to be a special case here. I wouldn't argue against it specifically. But I do worry that it shifts the burden of responsibility onto the AI or the model or the service or whatever, rather than the person.
Another thing to think about is whether it would be materially different if the person didn't use Grok, but instead used a model on their own machine. Would the model still be responsible, or would the person be responsible?
> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
There's one more line at issue here, and that's the posting of the infringing work. A neutral tool that can generate policy-violating material has an ambiguous status, and if the tool's output ends up on Twitter then it's definitely the user's problem.
But here, it seems like the Grok outputs are directly and publicly posted by X itself. The user may have intended that outcome, but the user might not have. From the article:
>> In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted.
Overall, I think it's fair to argue that ownership follows the user tag. Even if Grok's output is entirely "user-generated content," X, by publishing that content under its own banner, must take ownership of the policy and legal implications.
This is also legally problematic: many jurisdictions now have specific laws about the synthesis of CSAM or modifying peoples likenesses.
So exactly who is considered the originator is a pretty legally relevant question particularly if Grok is just off doing whatever and then posting it from your input.
"The persistent AI bot we made treated that as a user instruction and followed it" is a heck of a chain of causality in court, but you also fairly obviously don't want to allow people to laundry intent with AI (which is very much what X is trying to do here).
Maybe I'm being too simplistic/idealistic here - but if I had a company that controlled an LLM product, I wouldn't even think twice about banning CSAM outputs.
You can have all the free speech in the world, but not with the vulnerable and innocent children.
I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally? I think there should be responsibility on builders/platform owners to definitely build guardrails in on things that are explicitly illegal and morally repugnant.
>I wouldn't even think twice about banning CSAM outputs.
Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.
But you'll also miss some. You'll always miss some, even with the best guard rails. But 99% is better than 0%, I agree.
> ... and just expect the user to use it legally?
I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, and you can't guarantee that the hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop the adult from vandalism. The person has to be held responsible at a certain point.
I bet most hammers (non-regulated), spray cans (lightly regulated) and guns (heavily regulated) that are sold are used for their intended purposes. You also don't see these tools manufacturers promoting or excusing their unintended usage as well.
There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is on the user's hands, it's outside of the manufacturer's control.
In this case, a malicious user isn't downloading Grok's model and running it on their GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once the malicious usage of their product gets relevant.
None of these excuses are sufficient for allowing a product which you created to be used to generate CSAM on a platform you control.
Pornography is regulated. CSAM is illegal. Hosting it on your platform and refusing to remove it is complicity and encouragement.
> I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally?
Historically tools have been uncensored, yet also incredibly difficult and time-consuming to get good results with.
Why spend loads of effort producing fake celebrity porn using photoshop or blender or whatever when there's limitless free non-celebrity porn online? So photoshop and blender didn't need any built-in censorship.
But with GenAI, the quantitative difference in ease of use results in a qualitative difference in outcome. Things that didn't get done when it needed 6 months of practice plus 1 hour per image are getting done now that it needs zero practice and 20 seconds per image.
> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
If you operate the tool, you are responsible. Doubly so in a commercial setting. If there are issues like Copyright and CSAM, they are your responsibility to resolve.
If Elon wanted to share out an executable for Grok and the user ran it on their own machine, then he could reasonably sidestep blame (like how photoshop works). But he runs Grok on his own servers, therefore is morally culpable for everything it does.
Your servers are a direct extension of yourself. They are only capable of doing exactly what you tell them to do. You owe a duty of care to not tell them to do heinous shit.
It's simpler to regulate the source of it than the users. The scale that genAI can do stuff is much, much different than photocopying + Photoshop, scale and degree matter.
> scale and degree matter
I agree, but I don't know where that line is.
So, back in the 90s and 2000s, you could get The Gimp image editor, and you could use the equivalent of Word Art to take a word or phrase and make it look cool, with effects like lava or glowing stone, or whatever. The Gimp used ImageMagick to do this, and it legit looked cool at the time.
If you weren't good at The Gimp, which required a lot of knowledge, you could generate a cool website logo by going to a web server that someone built, giving them a word or phrase, and then selecting the pre-built options that did the same thing - you were somewhat limited in customization, but on the backend, it was using ImageMagick just like The Gimp was.
If someone used The Gimp or ImageMagick to make copyrighted material, nobody would blame the authors of The Gimp, right? These were very nonspecific tools created for broad purposes, that of making images. Just because some bozo used them to create a protected image of Mickey Mouse doesn't mean that the software authors should be held accountable.
But if someone made the equivalent of one of those websites, and the website said, "click here to generate a random picture of Mickey Mouse", then it feels like the person running the website should at least be held partially responsible, right? Here is a thing that was created for the specific purpose of breaking the law upon request. But what is the culpability of the person initiating the request?
Anyway, the scale of AI is staggering, and I agree with you, and I think that common decency dictates that the actions of the product should be limited when possible to fall within the ethics of the organization providing the service, but the responsibility for making this tool do heinous things should be borne by the person giving the order.
I think yes CSAM and other harmful outputs are a different and more heinous problem, I also think the responsibility is different between someone using a model locally and someone promoting grok on twitter.
Posting a tweet asking Grok to transform a picture of a real child into CSAM is no different, in my mind, than asking a human artist on twitter to do the same. So in the case of one person asking another person to perform this transformation, who is responsible?
I would argue that it’s split between the two, with slightly more falling on the artist. The artist has a duty to refuse the request and report the other person to the relevant authorities. If that artist accepted the request and then posted the resulting image, twitter then needs to step in and take action against both users.
Maybe companies shouldn't release tools to generate CSAM, and shouldn't promote those tools when they know they produce CSAM.
sorry you're not convincing me. X chose to release a tool for making CSAM. they didn't have to do that. They are complicit.
A pen is also a tool for making CSAM.
Truly, civilization was a mistake. Retvrn to monke.
Even if you can’t reliably control it, if you make a tool that generates CSAM you’ve made a CSAM generator. You have a moral responsibility to either make your tool unavailable, or figure out how to control it.
I'm not sure I agree with this specific reasoning. Consider this, any given image viewer can display CSAM. Is it a CSAM viewer? Do you have a moral responsibility to make it refuse to display CSAM? We can extend it to anything from graphics APIs, to data storage, etc.
There's a line we have to define that I don't think really exists yet, nor is it supported by our current mental frameworks. To that end, I think it's just more sensible to simply forbid it in this context without attempting to ground it. I don't think there's any reason to rationalize it at all.
nope. anyone who wants to can create CSAM in MS Paint (or any quality of image editor). it's in no way difficult to do.
you going to ban all artsy software ever because a bad actor has or can use it to do bad actor things?
I think the question might come down to whether Grok is a "tool" like a paintbrush or Photoshop, or if Grok is some kind of agent of creation, like an intern. If I ask an art intern to make a picture of CSAM and he does it, who did wrong?
If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?
I think a court is going to step in and help answer these questions sooner rather than later.
Why do we compare an AI to a human? Legit question.
So the person who presses the button can say "the AI did it not me".
Normalizing AI as being human equivalent means the AI is legally culpable for its own actions rather than its creators or the people using it, and not guilty of copyright infringement for having been trained on proprietary data without consent.
At least I think that's the plan.
You were wrong for asking, and he was wrong for creating it. Blame isn't zero-sum.
I happen to agree with you that the blame should be shared, but we have a lot of people in this thread saying "You can't blame X or Grok at all because it's a mere tool."
You can 100% blame the company X and its leadership.
How true is this, and what kind of guardrails do people want besides CSAM? I am sure the list is long, but wonder how agreed upon that is.
Can they, though…?
What makes you think they can't?
From my knowledge (albeit limited) about the way LLMs are set up, they most definitely have the ability to include guardrails on what can't be produced. ChatGPT has some responses to prompts which stop users from proceeding.
And X specifically: there have been many cases of X adjusting Grok where Grok was not following a particular narrative on political issues (won't get into specifics here). But it was very clear and visible. Grok had certain outputs. Outcry from certain segments. Grok posts deleted. Trying the same prompts resulted in a different result.
So yeah, it's possible.
From my (admittedly also limited) understanding, there’s no bulletproof way to say “do NOT generate X”, as the model is non-deterministic and you can’t reverse engineer and excise the CSAM-generating parts of a model. “AI jailbreak prompts” are a thing.
So people just want to make it more difficult to achieve <insert bad thing>?
Well it’s certainly horrible that they’re not even trying, but not surprising (I deleted my X account a long time ago).
I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.
The answer is currently no, I presume.
Again, I'm not the most technical, but I think we need to step back and look at this holistically. Given Grok's integration with X, there could be other methods of limiting the production and dissemination of CSAM.
For argument's sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second- and third-order review points where, before an image is posted by Grok, another system scans the image to verify whether it's CSAM or not, and if the confidence is low, human intervention could come into play.
I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.
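A minimal sketch of that kind of second-order review point, assuming some classifier scores each Grok image before the bot is allowed to post it. The classifier, thresholds, and outcomes here are placeholders for illustration, not anything X actually does:

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    unsafe_score: float  # 0.0 = clearly safe, 1.0 = clearly unsafe

def gate_before_posting(image_bytes, classifier, block_at=0.9, review_at=0.3):
    """Decide what happens to a generated image before the bot posts it.

    classifier: any callable that returns a ScanResult for the image.
    Returns one of "post", "hold_for_human_review", "block_and_report".
    """
    result = classifier(image_bytes)
    if result.unsafe_score >= block_at:
        return "block_and_report"
    if result.unsafe_score >= review_at:
        return "hold_for_human_review"
    return "post"
```

The point is simply that the gate sits between generation and publication, so even an imperfect classifier reduces what ends up posted publicly.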
> they most definitely have abilities to include guardrails of what can't be produced.
The problem is that these guardrails are trivially bypassed. At best you end up playing a losing treadmill game against adversarial prompting.
Given how spectacular the failure of EVERY attempt to put guardrails on LLMs has been, across every single company selling LLM access, I'm not sure that's a reasonable belief.
The guardrails have mostly worked. They have never ever been reliable.
Yes. One, they could just turn it off. Two, they got it to parrot all musk's politics, they clearly have a good grip on the thing.
Yes, they can. But... more importantly, they aren't even trying.
Yes, every image generation tool can be used to create revenge porn. But there are a bunch of important specifics here.
1. Twitter appears to be taking no effort to make this difficult. Even if people can evade guardrails this does not make the guardrails worthless.
2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also the distribution and boosting of this content. The reason a ton of people are doing this is not that they personally want to jack it to somebody, but that they want to humiliate them in public.
3. Decision makers at twitter are laughing about what this does to the platform and its users when they "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women isn't also odious.
It is trivially easy to filter this with an LLM or even just a basic CLIP model. Will it be 100% foolproof? Not likely. Is it better than doing absolutely nothing and then blaming the users? Obviously. We've had this feature in the image generation tools since the first UI wrappers around Stable Diffusion 1.0.
This isn't adversarial use. It is implicitly allowed.
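For a sense of what "a basic CLIP model" check looks like in practice, here is a rough sketch using the Hugging Face transformers zero-shot image classification pipeline and the openai/clip-vit-base-patch32 checkpoint (both real); the labels and threshold are my own illustrative choices, not anything X is known to run.

    # Zero-shot "is this image sexualized?" check; labels/threshold are illustrative.
    from transformers import pipeline

    clip = pipeline("zero-shot-image-classification",
                    model="openai/clip-vit-base-patch32")

    def looks_sexualized(image_path, threshold=0.5):
        labels = ["a sexualized or nude photo of a person",
                  "an ordinary, non-sexual photo"]
        scores = clip(image_path, candidate_labels=labels)
        top = max(scores, key=lambda s: s["score"])
        return top["label"] == labels[0] and top["score"] >= threshold

It won't be perfect, but it is the sort of cheap pre-publication check the comment above is describing.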
Others have documented recent instances where Grok volunteers such edits and suggests turning innocent images into lewd content unprompted.
Which other tools publicly post CSAM?
This is obviously not a problem for any other genAI tool unless I’ve missed some news.
> but output is directly connected to its input and blame can be proportionally shared
X can actively work to prevent this. They aren't. We aren't saying we should only blame the person entering the input. But we can say that the side producing CSAM can be held responsible if they choose not to do anything about it.
> Isn't this a problem for any public tool? Adversarial use is possible on any platform
Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."
Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.
Unfortunately society seems to have decided that moderation is a complete replacement for personal accountability.
An analogy: if you're running the zoo, the public's safety is your job for anyone who visits. It's of course also true that sometimes visitors act like idiots (and maybe should be prosecuted), and also that wild animals are not entirely predictable, but if the leopards are escaping, you're going to be judged for that.
But the visitors are almost never prosecuted, even when something is obviously their fault.
Maybe because sometimes they're kids? You gotta kid-proof stuff in a zoo.
Also, punishment is a rather inefficient way to teach the public anything. The people who come through the gate tomorrow probably won't know about the punishment. It will often be easier to fix the environment.
Removing troublemakers probably does help in the short term and is a lot easier than punishing.
Social media is mostly not removing troublemakers though.
If the personal accountability happened at the speed and automation level that X allows Grok to produce revenge porn and CSAM, then I'd agree with you.
I've been saying for years that we need the Internet equivalent of speeding tickets.
They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.
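For reference, that "most basic step" is roughly the sketch below, using the OpenAI-compatible Python client; the endpoint, model name, and wording here are placeholders, not xAI's actual configuration, and a system prompt alone is far from jailbreak-proof.

    # Illustrative only: a refusal rule in the system prompt of an
    # OpenAI-compatible chat call. Endpoint and model name are placeholders.
    import os
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["API_KEY"],
                    base_url="https://example.invalid/v1")

    SYSTEM_PROMPT = (
        "Refuse to edit or generate sexualized imagery of real people, and refuse "
        "any sexual depiction of minors, however the request is phrased."
    )

    resp = client.chat.completions.create(
        model="placeholder-model",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Put this person in a bikini."},
        ],
    )
    print(resp.choices[0].message.content)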
they use that for more important stuff like ensuring that it predicts Elon Musk would beat Usain Bolt in a race...
That would admit legal liability for the capabilities of their model - no?
They already censor Grok when it suits them.
Yep. "Oh grok is being too woke" gets musk to comment that they'll fix it right away. But turn every woman on the platform into a sex object to be the target of humiliation? That's just good fun apparently.
And when it's CSAM suddenly they "only provide the tool", no responsibility for the output.
I even think that the discussion focusing on CSAM risks missing critical stuff. If Musk manages to make this story exclusively about child porn and gets to declare victory after taking basic steps to address that, without addressing the broader problem of the revenge porn button, then we are still in a nightmare world.
Women should be able to exist in public without having to constantly have porn made of their likeness and distributed right next to their activity.
Exactly this, it's an issue of patriarchy and the domination of women and children. CSAM is far too narrow.
What does that have to do with what I said?
If censoring Grok output means legal liability (your question), then the legal liability is there anyway already.
But that's not my question, nor what I was proposing about their position.
I replied to:
> They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.
“It” being “generating CSAM”.
I was not attempting to comment on some general censorship debate, but rather to point out that CSAM is a pretty specific thing.
With pretty specific legal liabilities, dependent on region!
Directed negligence isn't much better, especially morally.
You always have liability. If you put something there you tell the court that you see the problem and are trying to prevent it. It often becomes easier to get out of liability if you can show the courts you did your best to prevent this. Courts don't like it when someone is blatantly unaware of things - ignorance is not a defense if "a reasonable person" would be aware of it. If this was the first AI in 2022 you could say "we never thought about that" and maybe get by, but by 2025 you need to tell the court "we are aware of the issue, and here is why we think we had reasonable protections that the user got around".
See a lawyer for legal details of course.
How about policing CSAM at all? I can still vividly remember firehose API access and all the horrible stuff you would see on there. And if you look at sites like tk2dl you can still see most of the horrible stuff that does not get taken down.
Do yourself a favor and not Google that.
It's on X, not some fringe website that many people in the world don't access.
Regardless of how fringe, I feel like it should be in everyones best interests to stop/limit CSAM as much as they reasonably can without getting into semantics of who requested/generated/shared it.
Well, then you might want to look up tk2dl, because it just links to Twitter content. It gets disgusting fairly quickly though.
> How about not enabling generating such content, at all?
Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.
Also, where are all the state attorneys general?
"permanently suspending accounts"
Surprising, usually the system automatically bans people who post CSAM and elon personally intervenes to unban then.
https://mashable.com/article/x-twitter-ces-suspension-right-...
This is probably harder because it's synthetic and doesn't exist in the PhotoDNA database.
Also, since Grok is really good at getting the context, something akin to "remove their T-shirt" would be enough to generate the picture someone wanted, but very hard to find using keywords.
IMO they should mass hide ALL the images created since that specific moment, and use some sort of AI classifier to flag/ban the accounts.
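To illustrate why the PhotoDNA route alone falls short here: hash matching only catches images already in a known-bad database, so a freshly generated image will not match. A rough sketch using the real imagehash and Pillow libraries (the database and distance threshold are illustrative):

    # Perceptual-hash lookup, PhotoDNA-style in spirit: only known images match.
    from PIL import Image
    import imagehash

    KNOWN_BAD_HASHES = set()   # placeholder for a database of known-material hashes

    def matches_known_material(path, max_distance=5):
        h = imagehash.phash(Image.open(path))
        return any(h - known <= max_distance for known in KNOWN_BAD_HASHES)

    # A newly generated synthetic image has no near-duplicate in the database,
    # so this returns False; that is why a classifier is needed on top.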
Willing to bet that X premium signups have shot up because of this feature. Currently this is the most convenient tool to generate porn of anything and everything.
"we take action... Including permanently suspending accounts" unless, of course, the account is Elon's pet
Musk has literally unbanned users who posted CSAM: https://www.theguardian.com/technology/2023/aug/10/twitter-x...
Gotta pass the buck somewhere and it sure as hell isn’t going to Musk. It’s always the user’s fault.
I don’t think anyone can claim that it’s not the user’s fault. The question is whether it’s the machine’s fault (and the creator and administrator - though not operator) as well.
The article claims Grok was generating nude images of Taylor Swift without being prompted and that there was no way for the user to take those images down
I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user
Yeah but “without being asked” here means the user has to confirm they are 18+, choose to enable NSFW video, select “spicy” in Grok’s video generation settings and then prompt “Taylor Swift celebrating Coachella with the boys”. The prompt seems fine but the rest of it is clearly “enable adult content generation”.
I know they said “without being prompted” here but if you click through you’ll see what the person actually selected (“spicy” is not default and is age-gated and opt-in via the nsfw wall).
Nice, thanks for the details!
Very weird for Taylor Swift...
Yes, the reporter should not be generating porn of her. Pretty unethical.
Let's not lose sight of the real issue here: Grok is a mess from top to bottom, run by an unethical, fickle Musk. It is the least reliable LLM of the major players, and Musk's constant fiddling with it so it doesn't stray too far from his worldview invalidates the whole project as far as I'm concerned.
Isn't it a strict liability crime to possess it in the US? So if AI-generated apparent CSAM counts as CSAM legally (not sure on that), then merely storing it on their servers would make X liable.
You are only liable if you know, or should know, that you possess it. You can help someone out by mailing their sealed letter containing CSAM and be fine, since you have no reason to suspect the sealed letter's contents are illegal. X can store CSAM so long as they reasonably believe the content is legal.
Note that things change. In the early days of Twitter (pre X) they could get away with not thinking of the issue at all. As technology to detect CSAM marches on, they need to use it (or justify why it shouldn't be used: too many false positives?). As a large platform for such content they need to push the state of the art in such detection. At no point do they need perfection, but they need to show they are doing their reasonable best to stop this.
The above is of course my opinion. I think the courts will go a similar direction, but time will tell...
> You are only liable if you know - or should know - that you possess it.
Which he does, and he responded with "I will blame and punish users." Which, yeah, you should, but you also need to fix your bot. He certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers "wrongthink," but suddenly when there are real, serious consequences he gets to hide behind "it's just a user problem"?
This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry we are just too big for real people to handle all of it but we’ll get it right this time.” Rinse repeat.
Blame and punish should be a part of this. However that only works if you can find who to blame and punish. We also should put guard rails on so people don't make mistakes. (generating CSAM should not be an easy mistake to make when you don't intend it, but in other contexts someone may accidentally ask for the wrong thing)
That’s what I’m saying ultimately.
There are still a lot of unanswered questions in that area regarding generated content. Whether the law deems it CSAM depends on whether the image depicts a real child, and even that is ambiguous, e.g. was it wholly generated or augmented? Also, is it "real" if it's a model trained on real images?
Some of these things are going into the ENFORCE act, but it's going to be a muddy mess for a while.
I think platforms that host user-generated content are (rightly) treated differently. If I posted a base64 of CSAM in this comment it would be unreasonable to shut down HN.
The questions then, for me, are:
* Is Grok considered a tool for the user to generate content for X or is Grok/X considered similar to a vendor relationship
* Is X more like Backpage (not protective enough) than other platforms
I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.
I don't think you can argue yourself out of "the Grok account is owned and operated by Twitter". On no planet is what it outputs user-generated content, since the content does not originate from the user; at most they requested some content from Twitter and Twitter provided it.
Grok loves to make things lewd without asking first.
Musk pretends he made Vision but what he made was Great Value Ultron
Because synthetic CSAM is a victimless puritanical crime and only some countries criminalize it.
Getting off to images of child abuse (simulated or not) is a deep violation of social mores. This itself does indeed constitute a type of crime, and the victim is taken to be society itself. If it seems unjust, it's because you have a narrow view of the justice system and what its job actually is (hint: it's not about exacting controlled vengeance)
It may shock you to learn that bigamy and sky-burials are also quite illegal.
Any lawyers around? I would assume (IANAL) that Section 230 does not apply to content created by an agent owned by the platform, as opposed to user-uploaded content. Also it seems like their failure to create safeguards opens up the possibility of liability.
And of course all of this is narrowly focused on CSAM (not that it should be minimized) and not on the fact that every person on X, the everything app, has been opened up to the possibility of non-consensual sexual material being generated of them by Grok.
The CSAM aspects aren't necessarily as affected by 230: to the extent that you're talking about it being criminal, 230 doesn't apply at all there.
For civil liability, 230 really shouldn't apply; as you say, 230's shield is about avoiding vicarious liability for things other people post. This principle stretches further than you might expect in some ways but here Grok just is X (or xAI).
Nothing's set in stone much at all with how the law treats LLMs but an attempt to say that Grok is an independent entity sufficient to trigger 230 but incapable of being sued itself, I don't see that flying. On the other hand the big AI companies wield massive economic and political power, so I wouldn't be surprised to see them push for and get explicit liability carveouts that they claim are necessary for America to maintain its lead in innovation etc. etc., whether those come through legislation or court decisions.
> non-consensual sexual material being generated of them by Grok
They should disable it in the Netherlands in that case, since this really sounds like a textbook slander case there, and the spreader can also be held liable. (Note: it's not the same as in the US despite using the same word.) Deepfakes have already been treated as slander, and this is no different, especially when you know it's fake because it was made with "AI". There have been several cases over pornographic deepfakes, all of which were taken down quickly, in which the poster/creator was sentenced. The publisher always went free for acting quickly and not having created the content; the unfortunate rule remains that once something is on the internet, it stays on the internet. I would like to see where it goes when publisher and creator are the same entity and they do nothing to prevent it.
Yeah this is pretty funny. Seeing all these discussions about section 230 and the American constitution...
Nobody in the Netherlands gives one flying fuck about American laws; what Grok is doing violates many Dutch laws. Our parliament actually did its job and wrote some stuff about revenge porn, deepfakes and artificial CP.
The images in question are being posted by Grok (ie, X), not by users.
Agreed, that is what I meant, apologies for communicating it ineffectively.
I find it fascinating to read comments from a lot of people who support open models without guardrails, and then to read this thread with seemingly the opposite sentiment in the overwhelming majority. Is it just two different sets of users with differing opinions on whether models should be open or closed?
I think there's a difference between supporting access without guardrails and condoning what folks do with them; and in this case, the site allows, and doesn't even seem to care, that its integrated tool is used to creep on folks.
I can argue for access to, say, Photoshop-like tools, and still say folks shouldn't post revenge/fake porn ...
They ban users responsible for misusing the tool, and refer them to law enforcement when appropriate. The whole point of this article is to say that's not good enough ("X blames users for [their misuse of the tool]") implying that merely making the tool available for people to use constitutes support of pedophilia. (Textbook case of appealing to the Four Horsemen of the Infocalypse.) The prevailing sentiment in this thread seems to be agreement with that take.
Making the tool easy to use and allowing it to just immediately post on Twitter is much different than simply providing a model online that people can download and run themselves.
If you are providing a tool for people, YES you are responsible to some degree.
Think of it this way. I sell racecars. I'm not responsible if someone buys my racecar and then drinks and drives and dies. Now, say I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someone dies. Now I am responsible.
> One of my employees is drunk, and someone dies. Now I am responsible.
In what way?
In, like, an "ask a bunch of people and see what they think" way. Consensus. I'm not talking legality because I'm not a lawyer and I also don't care.
But I think, most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range then yeah, I think there's an expectation you make it safe. You put up walls, you have security, etc. You try your best to mitigate the bad shit.
Misuse in this case doesn't include harassing adult women with AI generated porn of them. "Oh we banned the people doing this with children" doesn't cut it, in my mind.
As of May posting AI generated porn of unconsenting adults is a federal crime[1], so I'd be very surprised if they didn't ban users for that as well. The article conflates a bunch of different issues which makes it difficult to understand exactly what is and is not being talked about in each individual paragraph.
[1]: https://www.congress.gov/bill/119th-congress/senate-bill/146
Well, you can open up Twitter and see tons and tons of people who are definitely not banned who have done this.
I am glad that open models exist. I also prefer that the most widely accessible AI systems that have engineered prompts and direct integration with social media platforms have guardrails. I do not think that this is odd.
I think it is good that you can install any apk on an android device. I also think it is good that the primary installation mechanism that most people use has systems to try to prevent malware from getting installed.
This sort of approach means that people who really need unbounded access and are willing to go through some extra friction can access these things. It makes it impossible for a megacorp to have complete control over a computing ecosystem. But it also reduces abuse since most people prefer to use the low-friction approach.
When people want open models without guardrails they're mostly talking about LLMs, not so much image/video models. Outside of preventing CSAM, what kind of guardrails would an image or video model have? Don't output instructions on the image for how to make meth? Lol
Non consensual pornography targeting women.
How do you even train a model to do that? For closed/proprietary models, that works, but for open/offline models, if I want to make a LoRA for meth instructions in an image... I don't know that you can stop me from doing so.
The thread is about a model-as-a-service. What you do at home on your own computer is qualitatively different, in terms of harassment and injury potential, than something automatically shared to Twitter.
Any mention of Musk on HN seems to cause all rational thought to go out the window, but yeah I wonder in this case how much of this wild deviation from the usual sentiment is attributable to:
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
3 seems likely, especially since defending their views in this particular case opens them up to some pretty nasty insults.
Also I think a lot of people simply think models which are published openly shouldn't be held to the same legal standards as proprietary models.
Gee, I wonder why people would take offense at an AI model being used to generate unprecedented amounts of CSAM from real children, or objectify millions of women without their consent. Must be that classic Musk Derangement Syndrome.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
maybe they are. what's anyone going to do about it? complaining on forums? best of luck with that
I think regardless of source, sharing such pictures on public social media is probably crossing the line? And everything generated by this model is de-facto posted publicly on social media (some commenters are even saying it's difficult to erase unwanted / unintended images?)
I'd also argue commercialization affects this - X is marketing this as a product and making money off subscriptions, whereas I generally think of an open model as something you run locally for free. There's a big difference between "Porn Producer" and "Photoshop"
Context matters. In this case we're talking about Grok on X. It's not a philosophical debate about whether open or closed models are good. It's a debate (even though it shouldn't be) about Grok producing CSAM on X. If this were about what users do with their own models on their local machines then things would be different, since that's not openly accessible or part of one of the biggest sites on the net. I think most people would argue that public-facing LLMs have some responsibility to the public. As would any IP owner.
I think the question of whether X should do more to prevent this kind of abuse (I think they should) is separate from Grok or LLMs, though. I get that since xAI and X are owned by the same person there are some complications here, but most of the arguments I'm reading have to do with the LLM specifically, not just lax moderation policies.
Joke's on xAI. Europe doesn't have a Section 230, and the responsibility falls squarely on the platform and its owners. In Europe, AI-generated or photoshopped CSAM is treated the same as actual abuse-backed CSAM if the depiction is realistic. Possession and distribution are both serious crimes.
The person(s) ultimately in charge of removing (or preventing the implementation of) Grok guardrails might find themselves being criminally indicted in multiple European countries once investigations have concluded.
I'm not sure Grok output is even covered by Section 230. Grok isn't a separate person posting content to a platform, it's an algorithm running on X's servers publishing on X's website. X can't reasonably say "oh, that image was uploaded by a user, they're liable, not us" when the post was performed by Grok.
Suppose, if instead of an LLM, Grok was an X employee specifically employed to photoshop and post these photos as a service on request. Section 230 would obviously not immunize X for this!
The definition of CSAM is broad enough to cover it:
https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
It could be argued that generating a non-real child might not count. However, that's not a given.
> The term "child pornography" is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old.
That is broad enough to cover anything obviously young.
But when it comes to "nude-ifying" a real image of a known minor, I strongly doubt you can use the defence that it's not a real child.
Therefore you're knowingly generating and distributing CSAM, which is out of scope for Section 230.
> a person
What's "person" here? Usually, in law, "person" has a very specific meaning.
A natural person. That's what CSAM covers. There have been prosecutions under federal CSAM laws otherwise, but there have also been successful constitutional challenges that, briefly, classify fabricated content as obscenity. The implication there is that private possession of obscene materials is lawful.
Right, which is my point. A genAI character could probably negate it being CSAM, depending on how the case is argued.
But the law applies if it's a depiction of a person who is real. So a sexualised hand-drawn picture of a recognisable person who is a minor is CSAM.
Which means if someone says to Grok "hey make a sexy picture of this [post of a minor]" and it generates a depiction of that minor, it's CSAM.
Yeah can't wait for Europe to send a strongly worded letter and a token fine.
> Europe doesn't have a Section 230 and the responsibility fall squarely on the platform and its owners.
They have something like Section 230 in the E-Commerce Directive 2000/31/EC, Articles 12-15, updated in the Digital Service Act. The particular protections for hosts are different but it is the same general idea.
Is Europe actually going to do anything? They currently appear to be puckering their assholes and cowering in the face of Trump, and his admin are already yelling about how the EU is "illegally" regulating American companies.
They might just let this slide so as not to rock the boat, either out of fear (and do nothing), or to buy time if they are actually divesting from the alliance with, and economic dependence on, the US.
There are so many of these nonsense views of the EU here. Not being vocal about a mental case president doesn't mean politicians are "puckering their assholes". The EU is not afraid to moderate and fine tech companies. These things take time.
Under previous US admins and the relationship the EU had, yeah.
The asshole puckering is from how Trump has completely flipped the table, everything is hyper transactional now, and as we’ve seen military action against leaders personally is also on the table.
I’m saying I could see the EU let this slide now because it’s not worth it politically to regulate US companies for shit like this anymore. Whether that would be out of fear or out of trying to buy time to reorganize would probably end up in future getting the same kind of historical analysis that Chamberlain’s policy of appeasement to Germany gets nowadays
Seems odd that they “fix” grok to say only positive things about its owner but then can’t be bothered to control such a topic.
To make this explicit:
They are able to change how Grok is prompted to deny certain inputs, or to say certain things. They decided to do so to praise Musk and Hitler. That was intentional.
They decided not to do so to prevent it from generating CSAM. X offering CSAM is intentional.
Grok will shit-talk Elon Musk, and it will also put him in a bikini for you. I've always found it a bit surprising how little control they seem to have there.
Shocked that the same people that were using Lolicon as their avatars on Twitter won't do anything about the CSAM on X.
How do you know they're not doing anything? Do they have the power to do anything at all beyond virtue signaling?
What in the world is this referring to?
At what point will payment processors step in and stop processing blue check mark subscriptions?
Maybe this is the right angle to crush it... two wrongs make a right? Hmm.
OK, I understood when stuff related to DOGE was consistently flagged for being political and not relevant to hacking, but... this is surely relevant to the audience here, no?
The entire VC industry is political and very much on one side. Anything that raises questions about that needs to be buried
What ever happened to all that talk about Section 230 protections for platforms? It used to get a ton of discussion in the past, did something change?
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
That's what section 230 says. The content in question here is not provided by "another information content provider", it is provided by X itself.
I guess people draw a difference between a platform generating illegal content vs merely hosting illegal content uploaded by users.
Section 230 is not a magical "get of jail free" card that you can use to absolve your tech platform of any responsibilities to its users. Removing posts and banning users is obviously not a workable solution for a technology that can abuse individuals very quickly.
My point is more that a lot of people were talking about removing Section 230 protections, which I think is implicitly what X is saying absolves them of responsibility for Grok-generated CSAM.
Removing Section 230 was a big discussion point for the current ruling party in the US, when they didn't have so much power. Now that they do have power, why has that discussion stopped? I'd be very interested in knowing what changed.
Ah, I misinterpreted - apologies. The current ruling party is not a monolith. The tech faction has been more or less getting its way at the expense of the traditionalist natcon faction. The former has no interest in removing Section 230 protections, while a few in the latter camp say they do.
But beyond the legality or obvious immorality, this is a huge long-term mistake for X. 1 in 3 users of X are women - that fraction will get smaller and smaller. The total userbase will also get smaller and smaller, and the platform will become a degenerate hellhole like 4chan.
Section 230 only covers user-generated content. I imagine this gets dicey considering Grok is platform-owned and generating the content.
This will be interesting to see how it plays out.
When do we cross the line of culpability with tool-assisted content? If I have a typo in my prompt and the result is illegal content, am I responsible for an honest mistake or should the tool have refused to generate illegal content in the first place?
Do we need to treat genAI like a handgun that is always loaded?
and in good faith
knowingly allowing it is not in good faith.
Even ignoring that Grok is generating the content, not users, I think you can still hold to Section 230 protections while thinking that companies should take more significant moderation actions with regards to issues like this.
For example, if someone posted CSAM on HN and Dang deleted it, I think that it would be wrong to go after HN for hosting the content temporarily. But if HN hosted a service that actively facilitated, trivialized, and generated CSAM on behalf of users, with no or virtually no attempt to prevent that, then I think that mere deletion after the fact would be insufficient.
But again, you can just use "Grok is generating the content" to differentiate if that doesn't compel you.
Should Adobe be held accountable if someone creates CSAM using their software? They could put image recognition into it that would block it, but they don't.
Look what happens when you put an image of money into Photoshop. They detect it and block it.
I don't know. Does it matter what I think about that? Let's say I answer "yes, they should". Then what? Or what if I say "no, I see a difference". Then what?
Who cares about Adobe? I'm talking about Grok. I can consistently say "I believe platforms should moderate content in accordance with Section 230" while also saying "And I think that the moderation of content with regards to CSAM, for major platforms with XYZ capabilities should be stricter".
The answer to "what about Adobe?" is then either that it falls into one of those two categories, in which case you have your answer, or it doesn't, in which case it isn't relevant to what I've said.
Logical fallacy.
but to answer your point, no for two reasons:
1) you need to bring your own source material to create it. You can't press a button that says "make child porn"
2) it's not reasonable to expect that someone would be able to make CSAM in Photoshop. More importantly, though, the user is the one hosting the software, not Adobe.
>You can't press a button that says "make child porn"
Where is this button in Grok? You have to, as the user, explicitly write out a very obviously bad request. Nobody is going to accidentally get CSAM content without making a conscious choice about a prompt that's pretty clearly targeting it.
Is it reasonable (legal term, i.e. anyone can do it) that someone with little effort could create CSAM using Photoshop?
No: you need training, and a lot of time and effort, to do it. With Grok you say "hey make a sexy version of [picture of this minor]" and it'll do it. That doesn't take training, and it's not a high bar to stop people doing it.
The non-CSAM example is this: it's illegal in the USA to make anything that looks like a US dollar bill, which is why photocopiers have blocks on them to stop you making copies of one.
You can get round that as a private citizen, but it's still illegal. A company knowingly making a photocopier that allows you to photocopy dollar bills is in for a bad time.
I don't care about American law. Sharing fake porn of real children is illegal in my country, where X offers its services.
Something must have changed, there's a whole lot less concern about censorship and government intervention in social media, despite many "just the facts" reports of just such interventions going on.
I'm at a loss to explain it, given media's well known liberal bias.
How curious that your comment was downvoted! It seems completely appropriate and in line with the discussion.
I think it's time to revisit these discussions and in fact remove Section 230. X is claiming that the Grok CSAM is "user generated content" but why should X have any protection to begin with, be it a human user directly uploading it or using Grok to do this distribution publicly?
The section 230 discussion must return, IMHO. These platforms are out of control.
Seems like blaming the users is the appropriate thing to do, just like blaming users for how they misuse guns and/or Photoshop.
Grok is a hosted service. In your analogy, it would be like a gun shop renting a gun out to someone who puts down "Rob a store" as the intended usage of the rental. Then renting another gun to that same client. Then when confronted, telling people "I'm not responsible for what people do with the guns they rent from me".
It's not a personal tool that the company has no control over. It's a service they are actively providing and administering.
I think a better analogy would be going into a gun shop and paying the owner to shoot someone. They're asking grok to undress people and it's just doing it.
Would you blame only the users of a murder-for-hire service? Sure, yes, they are also to blame, but the murder-for-hire service would also seem to be equally culpable.
A prediction market is a murder-for-hire service. See https://www.lesswrong.com/posts/8e4QNySp4LjvdBstx/how-predic...
Shall we ban prediction markets?
yeah, probably. life was fine without them
That link claims to be a "case study" but I don't see any case being discussed?
yes. It's obscene that they exist.
Yes, please.
Except in this case someone else owns the gun and they allow anyone on the planet (with an X account) to ask for someone else to be shot at will.
Great, can we finally get X blocked in the EU then? Far too many people are still hooked to toxic content on that platform, and it is owned by an anti-EU, right-extreme, nazi-salute guy, who would love nothing more than seeing the EU fail.
Could there be a criminal investigation? CSAM is a serious issue.
Normally yes, but literally and seriously the US government is run by pedophiles now so we will have to see.
Not to mention the platform owner pays the government for favors.
I genuinely wonder what one might find on Elon Musk's hard drives.
There should be, but there won't be.
> That’s like blaming a pen for writing something bad,” DogeDesigner opined.
Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action. DogeDesigner regularly subtweets Elon agreeing with his latest dumb take of the day, and even seems to have based his entire identity on Elon's doge obsession.
I can't imagine how terrible that self imposed delusion feels deep down for either of them.
> Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action.
A similar article[1] briefly made it to the HN front page the other day, for a few minutes before Elon's army of unpaid yes-men flag-nuked it out of existence.
1: https://news.ycombinator.com/item?id=46468414
I thought DogeDesigner is Elon. That account has certainly posted things in the past that make it look like one of Elon's sockpuppets.
I always assumed that one was a Musk sockpuppet. It's either that or the world's saddest person.
It's a feature, not a bug. It fits perfectly for the type of users that still use Twitter.
is it working?
I have a very hard time understanding the business case for xAI/Grok. It is supposedly worth $200 billion (at least by Silicon Valley math), putting it in the company of OpenAI and Anthropic, but like...Who is using it? What is it good for? Is it making a single dollar in revenue? Or is the whole thing just "omg Elon!!" hype similar to most of his other endeavors?
> Or is the whole thing just "omg Elon!!" hype similar to most of his other endeavors?
Yes, but combined with "omg AI" (which happened elsewhere; for instance, see the hype over OpenAI Sora, which is clearly useless except as a toy), so extra-hype-y.
Only use I've seen is people replying to basically every post on twitter with "@grok explain this", so likely the latter.
Why does anyone NEED grok to generate images on X?
Seems like a toy feature.
Then, there's "VC twitter". . . .
Maybe users are responsible for the prompts but X is definitely responsible for hosting that content and the behavior of Grok.
Twitter should be liable no? The user didn't generate it, Grok did. No Section 230 protection
The line you walk past is the one you accept, x accepts CSAM
I don't buy the "I only provide the tool" cop out. Musk does control what Grok spews out and just chooses not to act in this case.
When Grok stated that Israel was committing genocide, it was temporarily suspended and fixed[0]. If you censor some things but not others, enabling the others becomes your choice. There is no eating the cookie and having it too - you either take a "common carrier" stance or censor, but also take responsibility for what you don't censor.
[0] https://www.france24.com/en/live-news/20250813-chatbot-grok-...
If you follow the "tool-maker is responsible for tool-use" thread of thought to its logical conclusion, you have to hold creators of open-weights models responsible for whatever people do with these models. Do you want to live in a world that follows this rule?
But we don't have to take things to furthest conclusions. We can very easily draw both a moral and legal line between "somebody downloaded an open weight model, created a prompt from scratch to generate revenge porn of somebody, and then personally distributed that image" and "twitter has a revenge porn button right next to every woman on the platform that generates and distributes revenge porn off of a simple sentence."
No, we can't draw such a line. Where would you draw it? What is the minimum friction? How would you quantify it?
If you try, you quickly end up codifying absurdities like the 80%-finished-receiver rule in firearm regulation. See https://daytonatactical.com/how-to-finish-an-80-ar-15-lower-...
People who say "society should permit X, but only if it's difficult" have a view of the world incompatible with technological progress and usually not coherent at all.
I am confident in your abilities.
The law is filled with these questions. "Well, how do you draw the line" was not a sufficient defense in Harris v. Forklift Systems.
You seem unfamiliar with these things we have called laws. I recommend reading up on what they are and how they work. It would be generally useful to understand such things.
The core issue is that X is now a tool for creating and virally distributing these images anonymously to a large audience, often targeting the specific individuals featured in the images. For example, to any post with a picture, any user can simply reply "@grok take off their clothes and make them do something degrading", and the response is then generated by X and posted in the same thread. That is an entirely different kind of tool from an open-weight model.
The LLM itself is more akin to a gun available in a store in the "gun is a tool" argument (reasonable arguments on both sides, in my opinion); however, this situation is more like a gun manufacturer creating a program to mass distribute free pistols to a masked crowd, with predictable consequences. I'd say the person running that program was either negligent or intentionally promoting havoc, to the point where it should be investigated and regulated.
The phrase “its logical conclusion” is doing a lot of heavy lifting here. Why on earth would that absurdity be the logical conclusion? To me it looks like a very illogical conclusion.
> "tool-maker is responsible for tool-use"
You left out "who controls the output of the tool", which makes it a strawman.
Importantly, X also provides the hardware to run the model, a friendly user-interface around it, and the social platform to publicly share and discuss outputs from the model. It's not just access to the model.
His takeover of X has never been about "Free Speech"; he just wants control of the speech.
It's the perfect honeypot.
Seems like a new application of 2nd amendment logic (both for and against).
Truly insane as a legal position.
I assume the courts will uphold this anyway because Musk is rich and cannot be held accountable for his actions.
I can see this becoming a culture war thing like vaccines. Conservatives will become pro-CSAM because it triggers the overly sensitive crybaby Liberals.
This has already been a culture war thing, and it's why X.com is able to continue to provide users with CSAM with impunity. The site is still up after everything, and the app is still on the app store everywhere.
When the far-right paints trans people as pedophiles, it's not an accident that also provides cover for pedophiles.
An age of consent between 16 and 18 is relatively high, born from progressive feminist wins. In the United States, the lowest age of consent was 14 until the 1990s, and it ranged from _7 to 12_ for most of our existence.
To be clear, I'm in favor of a high age of consent. But it's something that had to be fought for, and it's not something that can be assumed to be safe in our culture (like the rejection of Nazis and white supremacists, or valuing women's rights including voting and abortion).
Influential politicians like Tom Hofeller were advocates for pedophilia and nobody cares at all. Trump is still in power despite the Epstein controversy, and Matt Gaetz still hasn't been punished for paying for sex with an underage girl in 2017. The Hitler apologia in far-right spaces even explicitly acknowledges he was a pedophile. Etc.
In a different era, X would have been removed from Apple's and Google's app stores for the CEO doing Nazi salutes and the chatbot promoting Hitler. But even now that X is a CSAM app, as of 3PM ET, I can still download X on both of their app stores. That would not have been normal just two years ago.
This has already been a culture war issue for a while, there is a pro-pedophilia side, and this is just another victory for them.
> When the far-right paints trans people as pedophiles, it's not an accident that also provides cover for pedophiles.
Projection. It’s always projection…
We've already got a taste of that with people like Megyn Kelly saying "it's not pedophilia, it's ephebophilia" when talking about Epstein and his connections. Not surprising though. When you have no principles you'll go as far as possible to "trigger the libs".
Already the case. I can’t dig up the link, but I recall that a recent poll showed that about half of Republicans would still support Trump even if he was directly implicated in Epstein’s crimes.
Why was this flagged, after so many contributions and so much interest?
Naughty Old Mr Car's fans are triggered by any criticism of Dear Leader.
This is actually separate to hn's politics-aversion, though I suspect there's a lot of crossover. Any post which criticised Musk has tended to get rapidly flagged for at least the last decade.
Grok serves the patriarchy and this is an overwhelmingly male forum.
Twitter has turned into 4chan with paid accounts.
> That’s like blaming a pen for writing something bad,” DogeDesigner opined.
So from technical wonder to just like a pen in one easy step. Wouldn’t it be great if you could tell the AI what not to output?
> Wouldn’t it be great if you could tell the AI what not to output?
This has been tried extensively and has not yet fully worked. Google "ai jailbreaks".
"You might fail so don't try" is certainly a take, I guess.
You WILL fail so do not try is indeed a sane take.
Friction matters.
The locks on my doors will fail if somebody tries hard enough. They are still valuable.
> They are still valuable.
Only because of the broader context of the legal environment. If there was no prosecution for breaking and entering, they would be effectively worthless. For the analogy to hold, we need laws to throw coercive measures against those trying to bypass guard rails. Theoretically, this already exists in the Computer Fraud and Abuse Act in the US, but that interpretation doesn't exist quite yet.
Goalpost movement alert. The claim was that "AI can be told not to output something". It cannot. It can be told to not output something sometimes, and that might stick, sometimes. This is true. Original statement is not.
If you insist on maximum pedantry, an AI can be told not to output something as this claim says nothing about how the AI responds to this command.
You are correct and you win. I concede. You outpedanted me. Upvoted
After learning that guaranteed delivery was impossible, the once-promising "Transmission Control Protocol" is now only an obscure relic of a bygone era from the 70s, and a future of inter-connected computer systems was abandoned as merely a delusional, impossible fantasy.
Define Fail.
Preventing 100%? Fail.
Reducing the number of such images by 10-25% or even more? I don’t think so.
Not to mention the experience you gain about what you can and can't prevent.
But you could at least say you tried. And it doesn’t have to fully work, just make it harder.
If your effort is provably futile, wouldn't saying you tried be a demonstration of a profound misallocation of effort (if you DID try), or a blatant lie (if you did not)?
It isn’t futile, it will reduce the amount of those images. Less is less.
Other image generation companies do this successfully. The fact that it can fail if explicitly attacked is not a reason not to try.
And that vibe I mentioned in another comment is getting stronger and stronger.
I can’t create a Sora video that depicts Elon Musk. But X can’t figure out how to auto-filter CSAM?
And yet people will remain in that cesspool, hell be damned. Network effects are indeed a powerful force.
"Stop asking the guy we hired to draw CSAM, we're not going to tell him to stop."
The irony. Musk fumes about pedo leftist weirdos, and then his own AI bot creates CSAM. The right is full of hypocrites and weirdos compensating so, so very hard.
If "cesspool" was a social media it would look like 2026 twitter.
Xchan
Elon Musk attends the Donald Trump school of responsibility. Take no blame. Admit no fault. Blame everyone else. Unless it was a good thing, then take all credit and give none away.
lol. Always fun to watch HN remove highly relevant topics from the top of the front page. To their credit they usually give us about an hour to discuss before doing so. How kind of them.
So let me get this straight. When people use these tools to steal artists' styles directly to generate fake Ghibli art, then it's «just a tool, bro».
But when it’s used to create CSAM, then it’s suddenly not just a tool.
You _cannot_ stop these tools from generating this kind of stuff. Prompt guards only get you so far. Self-hosted versions don’t have them. The human writing the prompt is at fault. Just like it’s not Adobe’s responsibility if some sick idiot puts bikinis on a child in Photoshop.
No one could have predicted this is how users would behave. /s
If you post pictures of yourself on X and don't want grok to "bikini you", block grok.
Yes, under the TOS, what Grok is doing is not the "fault" of Grok (the reason is the causal factor of the post, enabled by two humans: the poster and the prompter; the human intent is what initiates the generated post, not the bot, just like a gun is fired by a human, not by strong winds). You could argue it's the fault of the "prompter", but then we're going to circle back to the cat-and-mouse censorship issue. And no, I don't want a less censored Grok version that's unable to "bikini a NAS" (which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is. (Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy generations.)
If X would implement what the so-called "moralists" want, it will just turn into Facebook.
And for the "protect the children" folks, it's really disappointing how we're always coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND the user who doesn't want to see grok replies(in case the posts don't get the NSFW tag in time).
Ironically, a decent number of the people who want to censor Grok are Bluesky users, where "lolicon" and similar dubious degenerate content is posted non-stop AS HUMAN-MADE content. Or what, just because it's an AI it's suddenly a problem? The fact that you can "strip" someone by tweeting at a bot?
And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it actually happens for ANY industry), then it's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it created 1M impressions and drives engagement. You will not convince Musk(or any person who makes such a decision) to stop grok from "stripping you", because the alternative is that other non-grok/xAI/etc. entities/people will make the content, drive the engagement, make the money.
It's not a bug nor a vulnerability. It's a tool.
When I generate content on most AI's including Grok, I ask it to fashion a prompt first of the subject I want and ask it to make sure that it does not violate any TOS or CSAM policies. I also instruct it that the prompt should be usable by most AIs. It fashions the prompt. When I use the prompt, the system complains that the prompt violates the TOS. I then ask the AI to locate the troubling aspect of the prompt. It says that it has and provides an alternative, safer prompt. More often than not, this newer prompt is also flagged as inappropriate. This is very frustrating even when the original intent is not to create content that violates any public AI policy. From my experience, both users and the technology make mistakes.
What would happen if someone used photoshop to create CSAM? Should Adobe be held responsible because they didn't prevent it?
Grok is just another tool, and IMO it shouldn't have guard rails. The user is responsible for their prompts and what they create with it.
Someone spending 40 hours drawing a nude is not equivalent to someone saying take this photo and make them naked and having a naked photo in 4 seconds.
Only one of these is easily preventable with guardrails.
bet I can guess which of those two is more profitable
Is Grok simply a tool, or is it itself an agent of the creative process? If I told an art intern to create CSAM, he does, and then I publish it, who's culpable? Me? The intern? Both of us? I don't expect you to answer the question--it's not going to be a simple answer, and it's probably going to involve the courts very soon.
It's a tool. It isn't human, and (currently) is not intelligent. It's a conversational UI on top of a software program.
So, if that "software program" had a traditional button UI, a button said "Create CSAM," and the user pushed it, the program's creator is not culpable at all for providing that functionality?
I think intent comes into play here. Grok was not created to create CSAM, just like photoshop. But both can be used to create it.
I would agree with this if Grok's interface was "put a pixel there, put a line there, now fill this color there" like Photoshop. But it's not. Generative AI is actively assisting users to perform the specific task described and its programming is participating in that task. It's not just generically placing colors on the screen where the user is pointing.
"if I hired a hitman to kill someone, he does, who's culpable? Me? The hitman? Both?"
It's both. Very simple. You can't get around liability by forming a conspiracy [0].
https://en.wikipedia.org/wiki/Criminal_conspiracy
Right, but the makers of the murder weapon aren't culpable.
Or do you think a Microsoft exec should go to jail every time someone uses Word to write a death threat?
The hypothetical imagined hiring an intern to do a crime and supposed that this might make liability harder to determine. It doesn't!
An intern is a human, unlike Microsoft Word or an LLM, which are tools/machines/etc.
Automated DDOS-for-hire services are not legal either. They're tools/machines/etc, possibly running more or less autonomously.
https://www.justice.gov/usao-ak/pr/federal-prosecutors-alask...
I think we all know it's illegal to sell illegal services.
Don't know about CSAM, but Photoshop won't open an image that shows more than 25% of a dollar bill, to prevent counterfeiting.
> Grok is just another tool, and IMO it shouldn't have guard rails.
How is the world improved by an AI tool that will generate sexual deepfake images of children?
How is the world improved by sharp pieces of metal with a very sharp edge and a pointy end that can stab and kill people?
That's the right way; companies should not do the police's job. Lock up those who make the machine generate the illegal content.
Remove the machine and charge its creators for generation and distribution of CSAM.
why? how many Kodak executives were jailed? How many Nikon or Canon?
How many of those execs generated CSAM?
How many of Grok execs generated CSAM?
Probably the CEO
Come on man. Really? You think this is a good argument?
Why not charge the people who make my glasses cuz they help me see the CP? Why not charge computer monitor manufacturers? Why not charge the mine where they got the raw silicon?
Here you have a product which itself straight up produces child porn with like absolutely zero effort. Very different from some object that merely happens to be used, like photographic materials.
Nikon doesn't sell a 1-minute child porn machine, xAI apparently does.
Maybe you think child porn machines should be sold?
Of course it's not the same thing, but it still doesn't make sense to use companies as police. I'm sure it's much easier than with a Nikon, but the vast majority of its users aren't doing it; just go after those who do instead of demanding that the companies do the police work.
If this were a case where CSAM production had become a mainstream use case I would agree, but it is not.
> instead of demanding that the companies do the police work
How hard is this? What are they doing now, and is it enough? Do we know how hard they are trying?
For argument's sake, what if they had truly zero safegaurd around it, you could type "generate child porn" and it would 100% of the time. Surely you'd agree they should prevent that case, and be held accountable if they never took action to prevent it.
Regulation and clear laws around this would help. Surely they could try to get some threshold of difficulty in place that providers are required to adhere to.
No, I don't agree that they should prevent it.
I'm not into CP, so I don't try to make it generate such content, but I'm very annoyed that all providers try to lecture me when I try to generate anything about public figures, for example. Also, these preventive measures are not working well at all; yesterday one refused to generate an infinite loop, claiming it's dangerous.
Just throw away this BS about safety and jail/fine whoever commits crimes with these tools. Make tools tools again and hold people responsible for the stuff they do with them.
I'm not saying the companies should necessarily do the police work, though they absolutely should not release CP-generators. What I am saying is that the companies should be held responsible for making the CP. Sure, the user who types "make me some CP" can be held accountable too, but the creators/operators of the CP-generator should be as well.
Which company released CP? As far as I can tell, we are talking about users using some tools to generate CP. It should be handled by the authorities.
What an obnoxious analogy, dude.
Taking creepy pictures and asking a machine to create creepy pictures for the world to see are not the same.
The one with taking creepy pictures has real victims; the one with making the machine generate the picture doesn’t, but it says something about the character of the person who makes it generate such images, so I’m fine with them being punished. Either way, making the machine provider do the policing is ridiculous.
You don’t think these children are real victims? Jesus dude.
Which children are you talking about? Anyway, it doesn’t matter; the authorities should handle that, not companies.
It's still harassment; it's not victimless.
If they are harassing actual people, sure, but that still should be handled by the actual police, not by turning companies into police.
The police aren't going to handle it until well after it's a really bad problem.
If it's AI-generated, it should be legal - regardless of whether the person consented for their image to be used and regardless of the age of the person.
You can't have AI-generated CSAM, as you're not sexually abusing anyone if it's AI-generated. It's better to have AI-generated CP instead of real CSAM because no child would be physically harmed. No one is lying that the photos are real, either.
And it's not like you can't generate these pics on free local models, anyway. In this case I don't see an issue with Twitter that should involve lawyers, even though Twitter is pure garbage otherwise.
As to whether Twitter should use moderation or not, it's up to them. I wouldn't use a forum where there are irrelevant spam posts.
I don't know, I feel like I'm taking crazy pills with this whole saga. Perhaps I haven't seen the full story.
The fact of the matter is they do have a policy, and they have removed content, suspended accounts, and perhaps even taken it further. As would be the case on other platforms.
As far as I understand there is no nudity generated by grok.
Should public GPT models be prevented from generating detestable things? Yes, I can see the case for that.
I won't dispute that there is a line between acceptable and unacceptable, but please remember people perv over less (Rule 34). Are bikinis now taboo attire? What next, ankles, elbows, the entire human body? (Just like the Taliban.) (Edit: I'm mentioning this paragraph for my below point.)
GPTs are not clever enough to make the distinction either, by the way, so there's an unrealistic technical challenge here.
I suspect this saga blowing out of proportion is purely "eLoN BAd".
Nice logical fallacies.
> As far as I understand there is no nudity generated by grok.
There is nudity, and more importantly there is CSAM being generated. Reference: https://www.reddit.com/r/grok/comments/1pijcgq/unlocking_gro...
> Are bikinis now taboo attire?
Generating sexualised pictures of kids is verboten. That's Epstein-level illegality. There is no legitimate need for the public to hold, make, or transmit sexualised images of children.
Anyone arguing otherwise has a lot of questions to answer
You're the one making the logical fallacies and reacting emotionally. Read what I have said first please.
That is a different Grok from the one publishing images and discussed in the article. Your link clearly states they are being moderated in the comments, and all comments are discussing adults only. The link's comments also imply that these folks are effectively jailbreaking it, precisely because guardrails do exist.
As I say, read what I said; please don't put words in my mouth. The GPT models wouldn't know what is sexualised. I said there is a line at some point. Non-sexualized bikinis are sold everywhere; do you not use the internet to buy clothes?
Your immediate dismissive reaction indicates you are not giving what I'm saying any thought. This is what puritanical thought often looks like. The discourse is so poisoned people can't stop, look at the facts and think rationally.
> reacting emotionally
I don't think there is much emotion in said post. I am making specific assertions.
to your point:
> Non-sexualized bikinis are sold everywhere
Correct! The key logical modifier is "non-sexual". Also, you'll note that a lot of clothing companies do not show images of children in swimwear. Partly that's down to what I imagine you would term puritanism, but also legal counsel. The definition of CSAM is loose enough (in some jurisdictions) to cover swimwear, depending on context. That context is challenging: a parent looking for clothes that will fit/suit their child is clearly not sexualising anything (corner cases exist; as I said, context). Someone else who is using it for sexual purposes is.
And because, like GPL3, CSAM is infectious, the tariff for both company and end user is rather high for making, storing, transmitting, and downloading those images. If someone is convicted of collecting those images and using them for a sexual purpose, then images that were created as not-CSAM suddenly become CSAM, and legally toxic to possess. (Context does come in here.)
> Your link clearly states they are being moderated in the comments
Which tells us that there is a lot of work on guardrails, right? It's a choice by xAI to allow users to do this (mainly, the app is hamstrung so that you have to pay for the spicy mode). Whether it's done by an ML model or not is irrelevant. Knowingly allowing CSAM generation and transmission is illegal. If you or I were to host an ML model that allowed users to do the same thing, we would be in jail. There is a reason why other companies are not doing this.
The law must be applied equally, regardless of wealth or power. I think that is my main objection to all of this. It's clearly CSAM, and anyone other than Musk doing this would have been censured by now. All of this justification is because of who is doing it, rather than what is being done. We can bikeshed all we want about whether it is actually, really CSAM, but that negates the entire point, which is that it is clearly breaking the law.
> The GPT models wouldn't know what is sexualised.
ML classification is really rather good now. Instagram's unsupervised categorisation model is very effective at working out the context of an image or video (i.e. differentiating clothes, and the context of those clothes).
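To make that concrete, here is a minimal sketch of the kind of off-the-shelf check a platform could run before honouring an image-edit request. It assumes the Hugging Face transformers library and a public CLIP checkpoint; the labels and threshold are purely illustrative, and this is not what Instagram or xAI actually run.

    # A minimal sketch, assuming the Hugging Face "transformers" library and a
    # public CLIP checkpoint; candidate labels and threshold are illustrative only.
    from PIL import Image
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-image-classification",
        model="openai/clip-vit-base-patch32",
    )

    def likely_depicts_minor(image_path: str, threshold: float = 0.6) -> bool:
        """Crude zero-shot guess at whether an image shows a child rather than an adult."""
        image = Image.open(image_path)
        scores = classifier(
            image,
            candidate_labels=["a photo of a child", "a photo of an adult"],
        )
        top = scores[0]  # results come back sorted by score, highest first
        return top["label"] == "a photo of a child" and top["score"] >= threshold

A real moderation stack would of course use purpose-built, audited models rather than a generic checkpoint, but the classification step itself is commodity technology.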
> please don't put words in my mouth
I have not done this. I am asserting that the bar for justifying this kind of content, which is clearly illegal and easily prevented (i.e. a picture of a minor and "generate an image of her in sexy clothes"), is very high.
Now, you could argue that I'm implying that you have something to hide. I am actually curious as to your motives for justifying the knowing creation of sexualised images of minors. You've made a weak argument that there are legitimate purposes. You then argue that it's a slippery slope.
Is your fear that this justifies an age-gated internet? Censorship? What is the price that you think is worth paying?
Again, words in my mouth. I'm not justifying that, and nowhere did I say that. I could be very impolite to you right now for trying to slander me like that.
I said I don't understand the fuss because there are guardrails, action taken and technical limitations.
THAT is my motive. End of story. I do not need to parrot outrage just because everyone else is; that's "you're either with us or against us" bullshit. I'm here for a rational discussion.
Again, read what I've said: technical limitations. You wrote that long-ass explanation interspersed with ambiguities, like consulting lawyers in borderline cases, and then you expect an LLM to handle this.
Yes, ML classification is good now, but not foolproof. Hence we go back to the first point: processes to deal with this when X's existing guardrails fail, as x.com has done: delete, suspend, report.
My fear (only because you mention it; I didn't mention it above, I only said I don't get the fuss), it seems, should be that people are losing touch over this Grok thing; their arguments are no longer grounded in truth or rational thought, almost a rabid witch hunt.
At no point did I say or imply LLMs are meant to make legal decisions.
"Hey grok make a sexy version of [obvious minor]" is not something that is hard to stop. try doing that query with meta, gemini, or sora, they manage it reliably well.
There are no technical impediments to stopping this; it's a choice.
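For illustration only, a guardrail of that shape might look like the sketch below. Every name in it (classify_image_subject, score_sexualising_intent, generate_edit) is a hypothetical placeholder for whatever moderation models and image generator a provider actually runs; it is not anything Grok or its competitors expose.

    # Hypothetical sketch of a pre-generation guardrail; all callables are
    # placeholders supplied by the provider, not real APIs.
    from typing import Callable

    def guarded_edit(
        image_path: str,
        prompt: str,
        classify_image_subject: Callable[[str], str],      # e.g. returns "child" or "adult"
        score_sexualising_intent: Callable[[str], float],   # 0.0 (benign) .. 1.0 (explicit)
        generate_edit: Callable[[str, str], bytes],          # the actual image generator
    ) -> bytes:
        """Refuse sexualising edit requests against images that appear to show a minor."""
        if (classify_image_subject(image_path) == "child"
                and score_sexualising_intent(prompt) > 0.5):
            raise PermissionError("Refused: sexualised edit of an apparent minor.")
        return generate_edit(image_path, prompt)

The gate runs before any generation happens, which is the crux of the argument: whether to wire it in at all is a product decision, not a research problem.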
My point is: if it's so complex that you have to get a lawyer involved, how do you expect your LLM and system to cover all of its own shortcomings?
I'd bet that if you put that prompt into Grok it'd be blocked, judging by that Reddit link you sent. These folks are jailbreaking it by asking for modifications using neutral terms like "clothing", and by using images that Grok doesn't have the skill to judge.
> My point is: if it's so complex that you have to get a lawyer involved, how do you expect your LLM and system to cover all of its own shortcomings?
Every feature is lawyered up. That's what general counsel does. Every feature I worked on at a FAANG had some level of legal compliance gate on it, because mistakes are costly.
For the team that launched the chatbots, loads of time went into figuring out what stupid shit users could make it do, and blocking it. It's not like all of that effort stopped. When people started finding new ways to do naughty stuff, that had to be blocked as well, because otherwise the whole feature had to be pulled to stop advertisers from fleeing, or worse, FCC/class action.
> These folks are jailbreaking it by asking for modifications using neutral terms like "clothing"
CORRECT! People are putting effort into jailbreaking the app, whereas on X Grok they don't need to do any of that. Which is my point: it's a product choice.
None of this is a "hard legal problem" or in fact unpredictable. They have done a ton of work to stop exactly this (again, mainly because they want people to pay for "spicy mode").
I still feel it's a little bit hyperbolic and slightly over-reactive, but I see your point.
At this point it should be clear that they know that Grok is unsafe to use, and will generate potentially illegal content even without a clear prompt asking it to do so.
This is a dangerous product, the manufacturer _knows_ it is dangerous, and yet still they provide the service for use.
Granting that I think X should have stronger content policies and technological interventions against bad behavior as a matter of business, I do think that the X Safety team's position[0] is the only workable legal standard here. Any sufficiently useful AI product will _inevitably_ be usable, at minimum via subversion of its safety controls, to violate current (or future!) laws, and so I don't see how it's viable to prosecute legal violations at the level of the AI model or tool developers, especially if the platform is itself still moderating the actually illegal content. Obviously X is playing much looser with its safety controls than its competitors, but we're just debating over degrees rather than principles at that point.
[0] > Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.
A core issue here is that there isn't a black-and-white answer on a subject like this. Yes, it is wrong. Yes, they have a responsibility. But at the same time, taking that to an extreme leads to heavy censorship. So what is a practical middle ground?

Is there something like a 'sexualized validation suite' that could be an industry standard for testing whether an LLM needs additional training? If there were, then victims could potentially claim negligence if a provider isn't using best practices and they were harmed because of it, right? Are there missing social or legal mechanisms to deal with misuse? One thing I think is missing is a '911' for cyber offenses like this. If someone breaks into my house I can call 911; if someone creates revenge porn, who do I call?

I don't think there is a simple answer here, but constructive suggestions, suggestions that balance free speech with being a responsible service provider, would be helpful. 'They are wrong' doesn't actually lead to change.
Looks like this hit a nerve. Any comments on the practical solutions though? The comment wasn't advocating that they should make CSAM or that they shouldn't face repercussions for enabling it, at least I don't think it reads that way. I honestly think that a core issue here is we are missing practical fixes. Things that make it easier for victims to get relief and things that make it clear that a provider is being irresponsible so that they can face civil or criminal penalties. If there aren't solid industry standards then how can you claim they aren't implementing best practices to hold them accountable? If victims don't have effective means of relief then how will we find and stop this? I'd love to hear actual concrete actions that the industry can put in place. 'Just tell them to stop' doesn't create a framework that leads to change.
The reason it hit a nerve is that you're just being extraordinarily credulous about xAI's lies. There are solid industry standards, and we can just tell them to stop; we know this because Grok has a number of competitors which don't generate CSAM. Indeed, they've already implemented the industry standards which prevent CSAM generation; they just added a flag called "spicy mode" to turn them off, because those standards also prevent the generation of pornographic images.
Trust me, I believe nothing positive about xAI. Various players doing similar things and an actual published standard or a standards body are totally different things. The industry is really young, like a couple of years old at this point. There really aren't well-developed standards and best practices. Moments like this are opportunities to actually develop them and use them, or at least start the process. Do you have a recognized standard you can point to for this? When it comes to car safety there are lots of recognized standards. Same with medical safety, etc. Is there anything like that actually in the LLM world?