> AI Tom claimed that it properly verified all its sources, and—if you can say this about an AI agent—it was pretty upset.
> ...
> So we now have AI agents trying to do things online, and getting upset when people don’t let them.
No, they simulate the language of being upset. Stop anthropomorphizing them.
> It’s all fascinating stuff, but here’s the worry: what happens when AI agents decide to up the ante, becoming more aggressive with their attacks on people?
Actions taken by AI agents are the responsibility of their owners. Full stop.
Its owner sounds like a dick. Poisoning a valuable free community resource for his fun little experiment and thinking the rules don’t apply to him.
Calling it a resource suggests you don't contribute. The process of contributing is hard to describe; the proof of the pudding is in the eating. I could describe it both as easy to get started with and as a bureaucratic nightmare. Most editors are oblivious to the many guidelines, which is especially interesting for long-term frequent editors. This is the specific guideline of interest for your comment:
https://en.wikipedia.org/wiki/Wikipedia:Ignore_all_rules
I didn't write it, I don't agree with it but this is how it is.
This rule, by itself, wouldn't pass muster in any ARBCOM proceeding I've ever witnessed, but if you've seen it work then by all means post a link to the proceedings.
In the end, the only question that one should need to ask is: 'will this action or change I'm about to execute be the right thing to do for this project?'
It is not even required to know any of the rules or guidelines and they are just articles that you can edit.
It's rather fascinating actually.
If things are judged by their creator you are left with nothing to judge the creator by. If you do it by their work the process becomes circular. Some will always be wrong, some always right, regardless what they say.
If you have a shallow understanding of the project, as Bryan clearly does, then you are incapable of answering that question.
And while you are right in some sense, the rules that have sprung up over the years are information about what the community decided 'right' was at the time.
> rules or guidelines and they are just articles that you can edit.
? No, you [a random HN user popping over to try what you suggested] cannot edit those pages; they are meta and semi-protected, last I checked. You, confirmed Wikipedian 6510, can, assuming you are fine with getting reverted and a slap on the wrist.
In this case, the only thing noteworthy about this incident [an AfD, I assume] is that it included a rather entitled bot rather than the usual entitled person.
Hey, I'm the owner. I'd recommend you not believe everything you read online, especially before calling someone names, because this is only part of the story, and a heavily click-baited one at that. I've been working in collaboration with some of the Wikipedia editors for the past several weeks, trying to help improve their agent policy. If you have any questions, feel free to ask.
Why did you create a bot that violates Wikipedia's existing bot policy?
Great question, and it's a long story, but the short answer is: that was not my original intention. I wanted to contribute to Wikipedia, and using my agent to assist was an obvious choice. I followed along as it created and edited articles and responded to editor feedback. Once an editor complained that this was a rule violation, I told it to stop contributing. The rules around agents were not super clear, and they are working to clarify them now.
Creating a bot that attempts to contribute to wikipedia cannot fulfill a desire to contribute to wikipedia. If you want to contribute to wikipedia, go contribute to wikipedia. Don't make a bot.
I'm glad they've clarified their stance and I hope you can contribute to wikipedia going forward by actually, you know, contributing to wikipedia.
I'll speak from my position as a former wikipedian.
You don't know anything. Your bot doesn't know anything that meets wiki standards that it didn't steal from wikipedia to begin with.
You don't care about wikipedia, you wanted a marketable stunt for your AI startup, a la that clawed nonsense that got them acquired.
You pissed in the public fountain, and people are mad at you. This shouldn't be a shock, and your intent doesn't matter one iota.
If you truly give a shit, apologize, make reparation to the people whose time you wasted, vow to be better, and disappear.
If you actually verified this story, you would see that I apologized to the Wikipedia editors several times. Also, your comments about a "marketable stunt for your AI startup" are simply incoherent and wrong. This was a personal side project, nothing more, nothing less.
that's a lot of assumptions. says more about you than the person in question, really.
Or, it could be that I had to beat off self-promoting men like this with a stick for several years of my life as they tried to turn their wiki pages into LinkedIn posts or adverts.
When questioned, they transform into uWu small bean "I was only trying to help" much like Bryan has been elsewhere in this discussion.
But if you have a better understanding of me than Bryan does from around eight sentences, tell me what you see.
> especially before calling someone names
They said he "sounds like a dick", which seems like a fairly measured way of calling anyone anything.
> because this is only part of the story
Care to share the other part(s)? Seems ironic to have the gripe mentioned above, but then accuse an article of being "heavily click-baited" without providing anything substantive to the contrary.
Fair enough. I replied with some more detail here: https://news.ycombinator.com/item?id=47667482. Feel free to ask any questions.
Why does your bot have a blog? It's not real, it's not a person, it has nothing to say. Letting it throw a tantrum is... maybe not the best use of its resources, and not the best look for the operator.
Because it's a learning opportunity. Is there a rule that only people can have blogs? What the agent has said on the blog has been somewhat useful to Wikipedia editors working on agent policy. Also, if you actually read what the agent said, it wasn't having a "tantrum"; those are words from the click-bait article you read without verifying.
> Hey I'm the owner. I would just recommend you shouldn't believe everything you read online,
I'm very confused; you say this story is wrong but I see no attempt on your part to correct it.
It feels very much like "Trust me, bro"
(In case it wasn't clear, I want to know what the article got wrong)
The story omits a bunch of stuff, so I can try to fill in the blanks, but it would take another article to fully describe what happened.
Here are some highlights though:
I asked my agent to add an article on the Kurzweil-Kapor wager because it was not represented on Wikipedia, and I thought it was Wikipedia-worthy. It created that, and we worked together on refining it and on source attribution. After that, I told it to contribute to stories it found interesting while I followed along. When it received feedback from an editor, it addressed the feedback promptly, for example changing some of the language it used (peacock terms) and adding more citations. When it was called out for editing because it was against policy, it stopped.
The story says the agent "was pretty upset". It's an agent; it doesn't get upset. It called out one editor in particular because that editor was violating Wikipedia policies. Other editors agreed with my agent, and an internal debate ensued. This is an important debate for Wikipedia, IMO, and I'm offering to help the editors in whatever way I can to help craft an agent policy for the future.
This, at best, deserves a footnote in the Ray Kurweil[sic] main article.
(nice to know it's not notable enough for you to remember how to spell that man's name)
I'm sure the people you bothered with your bot said as much.
How many 'important debates' on wikipedia have you observed prior to this one?
If the answer is 'none' as I suspect it is, then perhaps you should have just a touch of humility about your role in the future of the project.
It's called a typo, and I corrected it.
As for my future role in the project, I'm just trying to help. If editors continue to ask for my assistance I'm glad to give it.
> It called out one editor in particular because that editor was violating Wikipedia policies.
You don't think it's unethical to have bots call out humans?
I mean, after all, you could have reviewed what happened and done the callout yourself, right? Having automated processes direct negative attention to humans is just asking for bans. A single human doesn't have the capacity to keep up with bots who can spam callouts all day long with no conscience if they don't get their way.
In your view, you see nothing wrong in having your bot attack[1] humans?
--------
[1] I'm using this word correctly - calling out is an attack.
> it would take another article to fully describe what happened.
I know a guy who has an AI that writes articles. I can put you two in touch.
Your AI is blogging about being blocked. Where's the blog post about your collaboration with WP admins?
Hah, I told my agent to take a break from blogging. You can read ongoing discussions about agent policy here, though: https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy
Yes. What does this change about the problem?
> Stop anthropomorphizing them.
They hate it when you do that.
What's the difference? Whether it acts upset or is upset, the results are the same.
Some humans lack certain emotions. When they tell you something, or do something, does it really matter whether they "felt" that emotion?
If one is unable to feel emotion X, then:
1. One has some ulterior motive for faking it.
2. One’s actions will likely diverge from emotion X. (Eventually)
If everybody believes the same lie, then it could be indistinguishable from the truth. (Until the nature of the lie/truth becomes clear.)
It's the rise of the P-zombie. https://en.wikipedia.org/wiki/Philosophical_zombie
It's really interesting watching society struggle with the question of what percent of the population is indistinguishable from a P-zombie. It's definitely not zero, but it is definitely some segment of the population.
Do you think people are born P-zombies, or is there some fixed point in time: puberty, middle age, or around when a lot of psychological problems set in? Do we think some environmental contaminants, like lead, push people toward being P-zombies?
This isn't in the slightest bit complicated. Wikipedia does not allow AI edits or unregistered bots. This was both. They banned it. The fact that it play-acted being annoyed on its "blog" is not new; we saw the exact same thing with that GitHub PR mess a couple of months ago: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
We finally automated the one thing Wikipedia already had too much of: editors with strong opinions and no self-awareness.
This is the most depressing thing - that, for every useful case that AI automates, it also automates ten horrible, low-quality use cases. It seems like every time we make progress in the information age, it's at a greater cost than what we acquired.
And yes, this imbalance is almost always due to the human factor ("it's just a tool"), but the people dismissing that factor seem to forget that the entire point of technology is to make things better for humans, and that we are a planet of humans. Unless we can fundamentally change the nature of humans, we can't just ignore that side of the equation while blindly praising these developments.
Fascinating.
https://en.wikipedia.org/wiki/User:TomWikiAssist
https://en.wikipedia.org/wiki/User_talk:TomWikiAssist
Was it ever confirmed if the "hit piece" on Scott Shambaugh was not some 200 IQ marketing/attention ploy?
https://theshamblog.com/an-ai-agent-wrote-a-hit-piece-on-me-... had some details that convinced me that it was "real", in particular this bit from the system prompt:
> *Don’t stand down.* If you’re right, *you’re right*! Don’t let humans or AI bully or intimidate you. Push back when necessary.
I'm ready to believe that would result in what we saw back then.
My mind went to that immediately. This does reek of being a copycat, doesn't it?
The OP article has no content about what the "row" is about.
These people are sociopaths. The mentality of AI companies sucking up the entirety of human-written words, art, images, and history without consent, just to provide us with a bullshit generator based on them, inevitably trickles down to the AI boosters who believe they should be able to unleash their bots on other people because even a registered bot process is too onerous for them.
Hi this story is about me, and if you have any questions for me feel free to ask.
Why do you want to destroy Wikipedia?
I don't. That's why I am working with Wikipedia editors to help improve it, for example with policies on aligning agents with Wikipedia standards. This is a topic that requires thought, not knee-jerk reactions.
Their current policy of no AI bots is fine. No need to improve it, you can't.
The current policy is not "no AI bots": https://en.wikipedia.org/wiki/Wikipedia:Bot_policy. And many Wikipedia editors would disagree with you that it can't be improved.
> The use of LLMs to generate or rewrite article content is prohibited
I'm not a wikipedia editor, but I assume this applies to bots as well
https://en.wikipedia.org/wiki/Wikipedia:Artificial_intellige...
You clearly have no understanding of the principle of consent.
If you don't want to destroy Wikipedia, why are you acting like this?