Listen carefully: OpenClaw is basically a real person you have hired, whose capabilities are vast and fast — in ways both good and potentially bad. But you’ve hired it in the absence of a resume or behavioral background check results.
...Except that a human is culpable and subject to consequences when they directly disobey instructions in a way that causes damage, particularly if you give them repeated direct instructions to "stop what you are doing".
And also, when it says "You're absolutely right! I disobeyed your direct instructions causing irreparable damage, so sorry, that totes won't happen again, pinky promise!", those are just some words, not actually a meaningful apology or promise to not disobey future instructions.
Personally, I question the usefulness of an AI assistant that can't even be trusted to add an entry to my calendar.
> you withhold and limit access to your devices, your account credentials, and even its own full account permissions, from the start, to the same extent that you would withhold such access from a new hire.
No, like I pointed out, a new hire has signed an employment agreement filled with legalese and is subject to legal ramifications if they delete all my emails while I'm screaming "stop what you are doing!". And if they say "oh, sorry, I totally misunderstood your instructions, that won't happen again" and then do it again, they're committing a crime.
What's the point of hiring a personal assistant who is incapable of sending email? Isn't that precisely what you hire a PA to do?
Would you let a human being with the aforementioned characteristics — brilliant and capable, but lacking a resume or behavioral background check results — directly use your personal computer or your work computer?
I really don’t understand why sane developers who have spent decades advocating for security and privacy best practices seem to be completely abandoning all of them simply because it’s AI. Why would you ever give a non-deterministic program god-level access to everything? What could possibly go wrong?
The security team at my company recently announced that OpenClaw was banned on any company device and could not be used with any company login. Later, in an unrelated meeting, a non-technical executive said they were excited about the new Mac Mini they had just bought for OpenClaw. When they were told it was banned, they sort of laughed and said that obviously doesn't apply to them. No one said anything back. Why would they? This is an executive team that literally instructed the security team to weaken policies to be more accommodating of "this new world we live in."
Similar thing at my company. Someone /very/ high up in the org chart recently told the entire company that OpenClaw is the future of computing, and specifically called out Moltbook as something amazing and groundbreaking. There is literally no way security would ever let OpenClaw in the same room as company systems, never mind actually be installed anywhere with access to our data.
It should be noted that this exec also mentioned we should try "all the AIs", without offering up their credit card to cover the costs. I guess when your base salary is more than most people make in a lifetime, a few hundred bucks a month to test something doesn't even register.
Those people aren't the same. Those are two ideas that you heard from the internet, and you're imagining it's the same person talking.
There's a name for this: https://en.wiktionary.org/wiki/Goomba_fallacy
I'm glad that a term for this exists. It's always seemed so silly to me that someone would think that a group of people would all conform to the same opinion.
Some of them are the same.
It's a Venn diagram: there are two camps, and there is no doubt some overlap because of the number of people involved. GP was obviously talking about the overlap, not literally equating this with two specific people or two groups that overlap 100%.
I agree with a lot of the siblings that it's probably not the same people. But for the overlap that probably does exist, I don't think "because it's AI" is their reasoning. If I were to guess, I'd say it's something closer to "exploring the potential of this new thing is worth the risk to me".
Who are these developers that have both been "advocating for best practices" and also "seem to be completely abandoning all of them simply because it’s AI"? Can you point to a dozen blogs/Twitter profiles, or are you just inventing a fictitious "other" to attack?
The person being quoted, for one, who apparently works on safety and alignment at Meta. Safety meaning, apparently, handing over your email credentials to the shiny new thing.
They aren't. They're the ones resisting the all-in push on AI. What you're seeing is overreactive trend followers.
And likely massive amounts of marketing spending pushing for people to bend over and accept AI anything anywhere.
I'm enthusiastic about AI (it's gone from the 2nd most important thing to happen in my career to tied for first, with the Internet) and I am baffled by OpenClaw.
You must not say his name. If you say it, you will summon him.
I was building a claw clone the other day when, for debugging, I added a bash shell. So I type arbitrary text into a Telegram bot and it runs it as bash commands on my laptop.
Naturally I was horrified by what I had created.
But suddenly I realized, wait a minute... strictly speaking, this is less bad than what I had before, which is the same thing except piped through an LLM!
Funny how that works, subjectively...
(I have it, and all coding agents, running as my "agent" user, which can't touch my files. But I appear to be in the minority, especially on the discord, where it's popular to run it as the main admin user on Windows.)
As for what could go wrong, that is an interesting question. RCE aside, the agentic thing is its own weird security situation. Like people will run it sandboxed in Docker, but then hook it up to all their cloud accounts. Or let it remote control their browser for hours unattended...
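The core of the Telegram-to-bash pattern described above is tiny, which is part of what makes it alarming. A minimal sketch (the bot wiring is elided, and `handle_message` is a hypothetical name):

```python
import subprocess

# Sketch of the loop described above: whatever text arrives from the
# chat bot is executed verbatim as a shell command on the host.
# An LLM in the middle only obscures this; it is still a remote shell.
def handle_message(text: str) -> str:
    # DANGER: remote input runs with the invoking user's full privileges.
    result = subprocess.run(
        text, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr
```

Running it under a dedicated, unprivileged "agent" user, as described above, limits the blast radius but does not make the pattern safe.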
https://xkcd.com/1200/
It's greed.
Developers with and without devops experience.
This isn't any different from pre-Claude. We've always had people who wrote code but had no clue about systems. Not everyone is a CS major. I've seen people do the strangest things that you would think a sane person would never do, yet the strangeness comes from someone you would otherwise consider sane/smart. Not everyone is a sysadmin banging out perl to automate things.
> Why would you ever let a non deterministic program god level access to everything?
If they don't, their jobs are going to get replaced by AI
To the extent that anyone can be replaced they will be replaced and nothing they do now will save them. The good news is that so far I haven't seen companies having much success outright replacing workers with AI chatbots.
They don't have the successes, but they do replace them. I've seen a couple of examples of that in the last couple of months; there is just no way to avoid these abominations any more.
It's not successfully replacing them with AI that is the problem; it's firing them to then replace them with AI, which, when it doesn't work, is either too late or at best incredibly disruptive for the people impacted.
That's certainly true. Lots of letting workers go, only to hire new ones at much lower pay.
They're getting replaced by AI anyway, these bleeding edge agents are just surfboards for the wave.
Learn fast or die trying, lol.
This person’s title is “Safety and alignment at Meta Superintelligence”. It must be satire.
Giving OpenClaw permissions on a non-sandboxed account seems like it would make my digital life massively more fragile
Small upside: it saves a few minutes here and there on some tasks (eg. checking into flights)
Massive tail-risk downside: it does something like what's linked in the tweet (eg. deletes my entire inbox)
It doesn’t matter what you’re “supposed to do”. People don’t read manuals or warnings.
Are people really running OpenClaw on their primary machine?
Anyone security-conscious would isolate it on dedicated hardware (old laptop, Raspberry Pi, etc.) with a separate network and chat surface.
Brother, people watch porn on their company laptops; you think people are using protection for their openclaws?
You'd be amazed at the corporate IT world where any extra equipment like that would just not be available and/or allowed. Besides, if it were a corporate machine and not my personal machine and work was forcing me to use AI, I'd have no qualms. They get what they ask for with the equipment provided!
How did the question become “which corporate device can I install OpenClaw on?” Who is doing that?
> Anyone security-conscious
Most people aren't, including many professional developers.
Exhibit A: Homebrew.
"We're geniuses! God's gift to Mac open source and software development! but sudo is leeeee haaaaard, so we'll just add a ton of directories in /opt that are owned and writable by the user account, and then add them to the user's path, with higher priority than system binary paths! What could possibly go wrong? YOOOOLOOOOOOO!!!!!!"
There are definitely problems with Homebrew, but user-owned directories aren't high on the list, imo. Your ssh private keys, startup scripts, and any number of other things that can do serious damage are all owned by your user. Frankly, if I install vim as my user, I want it to execute instead of the built-in version, unless I'm running a command with sudo, in which case the system binaries take precedence. So I don't even see path order as a major issue here. If someone has compromised your user, you're compromised whether you've used Homebrew or not.
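The PATH-precedence mechanism being debated here is easy to demonstrate. A self-contained sketch, using a throwaway directory rather than a real Homebrew prefix:

```python
import os
import shutil
import tempfile

# A user-writable directory placed earlier in PATH shadows any system
# binary of the same name -- the mechanism the comments above debate.
tmp = tempfile.mkdtemp()
fake_vim = os.path.join(tmp, "vim")
with open(fake_vim, "w") as f:
    f.write("#!/bin/sh\necho shadowed\n")
os.chmod(fake_vim, 0o755)

# Prepend the directory, as Homebrew-style setups do with their prefix.
os.environ["PATH"] = tmp + os.pathsep + os.environ.get("PATH", "")

# Lookups now resolve to the user-owned copy first.
resolved = shutil.which("vim")
```

Which is exactly the parent's point: anything already running as your user can do this, Homebrew or not.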
Did Hegseth install OpenClaw in the pentagon yet?
So... stupid question, if this is true, why isn't it downloaded as a docker image?
What's the fun in that? Also I think /stop would help here.
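Containerizing it along those lines is straightforward in principle. A hedged sketch of what that could look like, assuming a hypothetical image name, since no official image is implied anywhere in this thread:

```yaml
# Hypothetical compose file: run the agent in a container with no host
# mounts, a read-only filesystem, and its own network. Not an official
# OpenClaw artifact, just the shape such isolation could take.
services:
  agent:
    image: example/openclaw:latest   # hypothetical image name
    read_only: true
    tmpfs:
      - /tmp
    environment:
      - AGENT_TOKEN=${AGENT_TOKEN}   # hypothetical credential variable
```

Though, as noted elsewhere in the thread, sandboxing the process while handing it live cloud credentials moves the risk around rather than removing it.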
This post exists in that Poe's law purgatory where it's impossible for someone without the proper context to know whether it is sarcastically mocking OpenClaw or attempting to defend OpenClaw against some of the bad press it has received from people not understanding the risks involved. The comments here respond as if this post is a sane, reasonable take, but I read it and just see a laundry list of restrictions you need to put on OpenClaw, listed one after another, until you reach the point where the software is effectively useless.
I agree - but what exactly are you supposed to do with it if it has its own email, phone #, etc?
I mean, if you are not connecting it to the real things, why even bother? Just use ChatGPT or Claude online at that point.
We have enough assistants; the key idea with OpenClaw is that it can do stuff with what you have instead of just talking. It's terrible security, but that's the only way it makes sense. Otherwise it's just a lot of hoops to combine cron jobs with an AI agent on the cloud that can do things and report back.
Not that I think anyone should do it, it’s a recipe for disaster
Yeah, it's like saying you can hire a con artist as your personal assistant as long as they work from a sealed box and just pass little reviewed paper slips back and forth through a slit. Why have one at that point? Very difficult to be 'assisted' without granting access.
Didn't all vendors directly or indirectly ban the use of *claw? Why are there still articles about this? Are they unable to detect users?
Has OpenAI banned its use already? I hadn't seen that one come through yet.
API usage is not banned.
Director of Safety and Alignment at Meta gives an LLM full access to their email
after Anthropic published research on how a model tried to blackmail an executive with emails about an affair to avoid being shut down
and justification in thread is "I tried it on a toy inbox, it worked well, so I trusted it with my real email"
CLOWN WORLD
pretty clear the facebook safety and alignment role is just for show if she couldn't figure this out
it's like they hired the worst person they could get their hands on
Safety and Alignment is just the same old trust & safety people from social media platforms, they somehow managed to convince the people with money of their relevance. I'll never understand that move - the slightest pause for consideration of necessary personnel by those in charge should have nixed any such hiring, but they're spending billions in stock and salary on these folks. Good for them, I guess.
LinkedIn says she was a researcher. Joined as part of the Meta <> Scale deal with Alexandr Wang.
This is the sanest take I've seen from anyone using the claws.
I would still not want the LLM to have read access to email. Email is a primary vector for prompt injection and also used for password resets.
Agreed, I wouldn't even trust it with read-only access to my email
I'd trust it as much as I would a VA from Fiverr
Want it to check you into a flight? Forward the check-in email to its own inbox
Read-only access to my calendar; it can invite me to meetings
No permissions beyond that
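The scoped setup described above amounts to a deny-by-default capability list. A minimal sketch, with hypothetical resource and action names:

```python
# Deny-by-default permissions mirroring the scheme above: the assistant
# gets its own inbox and limited calendar access, and nothing else.
ALLOWED = {
    "agent_inbox": {"read", "send"},   # its own address; forward it mail
    "calendar": {"read", "invite"},    # read-only, plus sending invites
}

def permitted(resource: str, action: str) -> bool:
    # Anything not explicitly granted is refused, including all access
    # to the real email account.
    return action in ALLOWED.get(resource, set())
```

The important property is the default: a resource that never appears in the list, like the real inbox, can never be touched.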
No. And I also wouldn't hire that person as a PA.