This felt like a sane and useful case until you mentioned the access to bank account side.
I just don't see a reason to allow OpenClaw to make purchases for you; that doesn't feel like something an LLM should have access to. What happens if you accidentally end up adding a compromised skill?
Or it buys you running shoes, but a prompt injection routes the order through a fake website?
Everything else can be limited, but the buying process is already quite streamlined; it doesn't take me more than two minutes to go through a Shopify checkout.
Are you really buying things so frequently that taking the risk to have a bot purchase things for you is worth it?
I think that's what turns this post from a sane bullish case to an incredibly risky sentiment.
I'd probably use OpenClaw in some of the ways you're doing: safe read-only tasks, message drafting, compiling notes, and looking at grocery shopping. But I'd personally add stricter limits if I were you.
You could give it access to a limited budget and review its spending periodically. Then it can make annoying mistakes but it's not going to drain your bank account or anything.
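A sketch of what that limit could look like in code. Everything here is hypothetical (a made-up `BudgetGuard`, not anything OpenClaw actually ships): a hard spending cap plus an audit log you can review periodically.

```python
class BudgetGuard:
    """Hypothetical spending cap for an agent: a hard limit plus an audit log."""

    def __init__(self, limit: float) -> None:
        self.limit = limit
        self.spent = 0.0
        self.log: list[tuple[str, float, bool]] = []

    def authorize(self, merchant: str, amount: float) -> bool:
        # Refuse any purchase that would push total spend past the cap,
        # but record every attempt so mistakes show up in review.
        approved = self.spent + amount <= self.limit
        if approved:
            self.spent += amount
        self.log.append((merchant, amount, approved))
        return approved


guard = BudgetGuard(limit=100.0)
print(guard.authorize("grocery-store", 60.0))   # True
print(guard.authorize("running-shoes", 55.0))   # False: would exceed the cap
```

The point is that even an "annoying mistake" is bounded by the cap, and the log gives you the periodic-review step.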
Fine article, but a very important fact comes in at the end — the author has a human personal assistant. It doesn't fundamentally change anything they wrote, but it shows how far out of the ordinary this person is. They were a Thiel Fellow in 2020 and graduated from Phillips Exeter, roughly the most elite high school in the US.
I agree there are a lot of things outside the computer that are a lot more difficult to reverse, but I think that we are maybe conflating things a bit. Most of us just need the code and data magic. We aren't all trying to automate doing the dishes or vacuuming the floors just yet.
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it.
I've noticed this too, and I think it's a good thing: much better to start using the simplest forms and understand AI from first principles rather than purchase the most complete package possible without understanding what is going on. The cranky ones on HN are loud, but many of the smart-but-careful ones end up going on to be the best power users.
I think you have to get in early to understand the opportunities and limitations.
I feel lucky to have experienced early Facebook and Twitter. My friends and I figured out how to avoid stupidity when the stakes were low: oversharing, getting "hacked", recognizing engagement bait. And we saw the potential back when the goal was social networking, not making money. Our parents were late. Lambs to the slaughter by the time the technology got popular, the algorithms got good, and users were conditioned to accept all the ads and privacy invasiveness as table stakes.
I think AI is similar. Lower the stakes, then make mistakes faster than everyone else so you learn quickly.
So acquiring immunity to a lower-risk version of the service before it's ramped up? E.g. jumping on FB now as a new user is vastly different from doing so in 2014, so while you might go through the same noob patterns, you're doing so with a lower-octane version of the thing. The risk of AI psychosis has probably gone up for new users, like the risk of someone getting too high since we started optimizing weed for maximum THC.
(Disclaimer: systems software developer with 30+ years experience)
I was initially overly optimistic about AI and embraced it fully. I tried using it on multiple projects - and while the initial results were impressive, I quickly burned my fingers as I got it more and more integrated with my workflow. I tried all the things, last year. This year, I'm being a lot more conservative about it.
Now .. I don't pay for it - I only use the bare bones versions that are available, and if I have to install something, I decline. Web-only ... for now.
I simply don't trust it enough, and I already have a disdain for remotely-operated software - so until it gets really, really reliable, predictable and .. just downright good .. I will continue to use it merely as an advanced search engine.
This might be myopic, but I've been burned too many times and my projects suffered as a result of over-zealous use of AI.
It sure is fun watching what other folks are daring to accomplish with it, though ..
The hype around OpenClaw is largely due to the large suite of command line utilities that tie deeply into Apple’s ecosystem as well as a ton of other systems.
I think that the hype will be short-lived as big tech improves their own AI assistants (Gemini, improved Siri, etc), but it’s nice to have a more open alternative.
OpenClaw just needs to focus on security before it can be taken more seriously.
To me, the magic is around interactions with personal info where it lives - iMessages, e-mails, etc. I'm still wary to open up like this, but it certainly is not as simple as Claude Code and a cron task. The "you can already do this via rsync + FTP" comment on the Dropbox Show HN thread comes to mind.
Did the author do any audit on correctness? Anytime I let the LLM rip it makes mistakes. Most of the pro AI articles (including agentic coding) like this I read always have this in common:
- Declare victory the moment their initial testing works
- Skip the time-intensive work of verifying things actually work
- Stand to personally benefit from AI living up to the hype they're writing about

In a lot of the author's examples (especially the booking ones), a single failure would be extremely painful. I'd only pay knowing that such a failure is unlikely, and that if it does happen, I'll be compensated accordingly.
But where's the added value? You can book a meeting yourself. You can quickly add items to the freezer. Everything that was described in the article can be done in about the same amount of time as checking with Clawdbot. There are apps that track parcel delivery and support every courier service.
A whole bunch of this stuff that people are fawning over as life changing and it leaves me honestly wondering: how have some of you survived this long at all?
The point of keeping the bot in the loop is so that it can make suggestions later, based on the information it's been given as part of solving that task.
There are two notable differences between when the AGI-posters do it and when IRC-posters do it. AGI-posters extend their lowercase posting to what would normally be seen as more formal communication. They also tend to stick to using punctuation despite the lowercase. IRC posters usually keep it to informal communications, where it's a sign of casualness. That said, there is overlap, and it's of course not possible to instantly distinguish someone as a Sama devotee because of how they type; but it is clear that a lot of people in that bubble are intentionally adopting the style.
I'm still trying to understand what makes this project worthy of like 100K Github stars overnight. What's the secret sauce? Is it just that it has a lot of integrations? Like what makes this so much more successful than the ten thousand other AI agent projects?
It's set up to wake up periodically and work autonomously for you based on the broad instructions it's been given. Compared to the usual coding agent workloads, this makes it a lot more "assistant"-like.
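A rough sketch of that wake-up pattern, with made-up function names rather than OpenClaw's real interface: a loop stands in for the scheduler, and each tick hands the agent its standing instructions.

```python
import time


def run_agent_once(instructions: str) -> str:
    # Hypothetical stand-in for a real model call: a real agent would gather
    # new messages and calendar events, then act on the standing instructions.
    return f"ran: {instructions}"


def agent_loop(instructions: str, ticks: int, interval_s: float = 0.0) -> list[str]:
    # Wake up `ticks` times; a real deployment would use cron or a scheduler
    # daemon rather than a sleep loop.
    results = []
    for _ in range(ticks):
        results.append(run_agent_once(instructions))
        time.sleep(interval_s)
    return results


print(agent_loop("check inbox, hold 4pm tomorrow", ticks=3))
```

The "assistant-like" feel comes from the loop plus standing instructions, not from anything special about the model call itself.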
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it
is it "hobbled" to:
1. not give an LLM access to personal finances
2. not allow everyone in the world a write channel to the prompt (reading messages/email)
If you're on MS stack, this is all stuff that MS 365 Copilot will already do for you, but with much better defined barriers around what it can and cant access.
Nope, this is what the hype wants you to believe. You still have to do all the thinking as the current crop of AI is a tool at your disposal. A pretty impressive tool.
'the sweet sweet elixir of context is a real "feel the AGI" moment and it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia'
I'm not so sure that I would use the word "sane" to describe this.
As someone for whom English is not the first language, I got stumped by the "chest freezer" and the photo of colourful bags for a good ~15 seconds, going through: "hm, must be some kind of travel thing where you bring snacks in some kind of device you carry around your neck / on your chest... why not a backpack freezer then... hm, why would snacks need a freezer... maybe it's just a cooler box, but called a chest freezer in some places"...
....before I took a better look of the photo and realised it's frozen stuff - for the dedicated freezer - that opens like a chest (tada).
Well, that was fun...Maybe I should get a bit more sleep tonight!
That would be the more general/traditional way of saying it, but in modern investment circles the focus seems to have turned towards the actual people being "bulls/bears" and not just the attitudes of the market. A person is a bull or a bear, as opposed to a person being either bullish or bearish.
So in this construction, a "bull case" is a "case that a bull (the person) can make".
Bull and bear markets. Bull’s horns are pointing up (expecting growth, optimistic), bear’s claw is pointing down (expecting recession, pessimistic). Yeah, it’s stupid.
"A bull case" gets lots of Google results, so it seems to be a commonly used construction among analysts. Basically it means "the case for being a bull on OpenClaw", i.e. the optimistic scenario.
"bullish" seems more common in tech circles ("I'm bullish on this") but it's also used elsewhere.
Can this thing deal with the insane way my children's school communicates? Actionable information (children wear red tomorrow) is mixed in with "this week we have been learning about bees" across five different communication channels. I'm not exaggerating. We have Tapestry, emails, a newsletter, parents WhatsApp, Arbour and Facebook.
I guess the difficulty is getting the data into the AI.
Wait I'm ignorant, how long has OpenClaw/Clawdbot existed? This person listed like 6 months of activities that they offloaded to the bot, I thought this thing was pretty new.
> let me be upfront about how much access i've given clawdbot: it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf.
this is foolish, despite the (quite frankly) minor efficiency benefits that it is providing as per the post.
and if the agent has, or gains, write access to its own agents/identity file (or a file referenced by its agents file), this is dangerous
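One cheap mitigation, sketched with an assumed file name (the real layout may differ): have the runner refuse to start if the instruction file is writable at all, so the agent can't quietly rewrite its own standing orders.

```python
import os
import stat


def writable_by_anyone(path: str) -> bool:
    # True if the owner, group, or others can modify the file.
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))


if __name__ == "__main__":
    # "agents.md" is an assumed name for the agent's identity/instruction file.
    path = "agents.md"
    if os.path.exists(path) and writable_by_anyone(path):
        raise SystemExit("refusing to run: instruction file is writable")
```

This only closes the narrow self-modification channel; it does nothing about prompt injection through the content the agent reads.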
This is a bot account. Last post in 2024, then in the last 25 minutes it has spammed formulaic comments in 5 different threads. If you were not able to instantly recognise this post as LLM-generated, this is a good example to learn from, I think. Even though it clearly has a prompt to write in a more casual manner, there's a certain feel to it that gives it away. I don't know that I can articulate all the nuances, but one of them is this structure of 3 short paragraphs of 1-2 sentences each, which is a favorite of LLMs posting specifically on HN for some reason, together with a kind of stupidly glazy tone ("killer app", "always felt 5 years away", randomly reinforcing "comparison to a human assistant you've never met" as though that's a remotely realistic comparison; how many people in the world have a human assistant they've never met and trust with all of their most sensitive information?).
That's why it doesn't seem worth it if you are not running the model locally. To really get powerful use out of this you need to be running inference constantly.
The Pro plan exhausts my tokens within two hours of the limit reset, and that's with only occasional requests on Sonnet. The 5-8x usage Max plan isn't going to be any better if I want to run constant crons with the Opus model (the docs recommend using Opus).
Good Macs cost thousands, but I'm waiting to find someone showing off my dream use case before jumping at it.
>Having something that can actually parse "yep lets do 4pm tomorrow" from texts and create calendar holds is the kind of thing that's always felt 5 years away.
Isn't that just Google Assistant? Now with Gemini it seems to work like an LLM with tools.
Tangent: what is the appeal of the “no capitalization” writing style? I never know what message the author is intending to convey when I see all lower case.
Normally I can ignore it, but the font on this blog makes it hard to distinguish where sentences start and end (the period is very small and faint).
You mention the technical aspect (readability) and others have suggested the aesthetic, but you could also look at it as a form of rhetoric. I'm not sure it's really effective because it sort of grates on the ear for anyone over 35, but maybe there's a point in distinguishing itself from AI sloptext.
Incidentally, millennials also used the "no caps" style but mainly for "marginalia" (at most paragraph-length notes and observations), while for older generations it was almost always associated with a modernist aesthetic and thus appeared primarily in functional or environmental text (restaurant menus, signage, your business card, Bloomingdale's, etc.). It may be interesting to note that the inverse ALL CAPS style conveyed modernity in the last tech revolution (the evolution of the Microsoft logo, for example).
> but maybe there's a point in distinguishing itself from AI sloptext
Surprisingly, I have seen lowercase AI slop - like anything else, it can be prompted and made to happen!
I really dislike it too.
I think it might be adults ignoring established grammar rules to make a statement about how they identify as part of a group of AI evangelists.
Kind of like how teenagers do nonsensical things like wear thick, heavy clothing regardless of the weather to indicate how much of a badass they and their other badass coat-wearing friends are.
To normal humans, they look ridiculous, but they think they're cool and they're not harming anyone, so I just leave them to it.
I think I like it generally, maybe not in this specific case, but I'm not sure why it appeals to me.
Over the last 5 years or so I've been working on making my writing more direct. Less "five dollar words" and complex sentences. My natural voice is... prolix.
But great prose from great authors can compress a lot of meaning without any of that stuff. They can show restraint.
If I had to guess, no capitalization looks visually unassuming and off-the-cuff. Humble. Maybe it deflects some criticism, maybe it just helps with visual recognition that a piece of writing is more of a text message than an essay, so don't think too hard about it.
Casual, informal, friendly, hip, young, etc.
Can make sense on twitter to convey personality, but an entire blog post written in lower case is a bit much.
It's weird now being literate in a language without a bicameral script (or spaces). When I was younger, I thought this stuff wasn't so important, but then when you learn a new language you find yourself trying to figure out what a "robert" is, only to be told "oh, it's just a name" — which is obvious if you know standard `en-Latn` conventions.
>I never know what message the author is intending to convey when I see all lower case.
JUST IMAGINE A FACEBOOK POST THAT IS WRITTEN IN ALL CAPS AND THEN INVERT THAT IMAGINATION.
Someone at some point styled themselves as a new E.E. Cummings, and somehow this became a style. The article features inconsistent capitalization for proper names alongside capitalized initialisms, proving there is some recognition of the utility of capitalization.
Ultimately, the author forces an unnecessary cognitive burden on the reader by removing a simple form of navigation; in that regard, it feels like a form of disrespect.
First time I've seen it. It will be interesting to see if that trends. I can think of at least one previous case where internet writing style overturned centuries of English convention: we used to put a double space after each period. The web killed that, since double spaces require extra work in HTML (`&nbsp;`, etc.), and at this point I think word processors follow the single-space convention too.
It's always useful to check oneself and know that languages are constantly evolving, and that's A Good Thing.
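On the double-space point above: HTML renderers collapse runs of whitespace into a single space, which is why a typed double space vanishes unless you escape it. Roughly, in Python terms:

```python
import re


def render_whitespace(html_text: str) -> str:
    # Browsers treat any run of spaces, tabs, and newlines in normal HTML
    # flow as a single space (a simplification; <pre> and CSS can differ).
    return re.sub(r"\s+", " ", html_text)


print(render_whitespace("First sentence.  Second sentence."))
# -> First sentence. Second sentence.
```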
> First time I've seen it. It will be interesting to see if that trends.
It's not a new trend; I'm surprised you never noticed it. It dates back at least a decade. It's mostly used to signal informal/hipster speak, i.e. you're writing as you would type in a chat window (or on Twitter), without care for punctuation or syntax.
It already trends among a certain generation of people.
I hate it, needless to say. Anything that impedes my reading of mid/long form text is unwelcome.
> I'm surprised you never noticed it
Probably due to social circles/age.
> I hate it, needless to say.
It certainly invokes an innate sense of wrongness in me, but I encourage you (and myself) to accept the natural evolution of language and not become the angry old person on your lawn yelling about dabbing/yeeting/6-7/whatever the kids say today.
For me it's like someone is trying to show me something using form instead of content.
I have chatted with someone else, and they pointed me to a blog post (will attach if I can find).
The general idea is to deliberately do something that triggers some people; if the person you're interacting with is triggered by it, they're not worthy of your attention, because they're too ignorant to see past the form of what you're doing.
While I respect the idea, I find it somewhat flawed, to be honest.
Edit: Found it!
Original comment: https://news.ycombinator.com/item?id=39028036
Blog post in question: https://siderea.dreamwidth.org/1209794.html
I've seen this before, I know Sam Altman does it (or used to do it). That was a couple years ago. Hope it doesn't become a trend.
It's already a trend, and has been for at least a decade. I'm surprised people here never noticed it...
Unfortunately it has become quite common on HN already.
It comes from people growing up on smartphone chats where the kids apparently don’t care to press Shift.
What’s weird though is that modern OSes often auto-capitalize the first letter of a sentence, so it actually takes more effort to deliberately type in all-lowercase.
simple toggle to disable it permanently
my reasoning is that i don’t want identifiable markers for what device im writing from. so all auto-* (capitalization, correct, etc.) features are disabled so that i have raw input
Only mobile does that in my experience - you can usually tell what platform people send Discord messages from based on this.
as perfect text became an indicator for AI generated content, people intentionally make mistakes (capitalization) to make their text appear more human; and its also faster
I'm generally of the opinion that capitalization is not necessary in many cases, such as at the start of sentences. That's what punctuation is for :)
easier to type without using the shift key, and in pg you can just use LIKE not ILIKE to find the word.
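For non-Postgres readers: `LIKE` matches case-sensitively while `ILIKE` folds case, so an all-lowercase corpus lets you skip the case-insensitive variant. A rough Python analogy of the distinction:

```python
def like(text: str, pattern: str) -> bool:
    # Case-sensitive substring match, roughly LIKE '%pattern%'.
    return pattern in text


def ilike(text: str, pattern: str) -> bool:
    # Case-insensitive match, roughly ILIKE '%pattern%'.
    return pattern.lower() in text.lower()


print(like("I Like Lowercase", "like"))    # False: capital L in "Like"
print(ilike("I Like Lowercase", "like"))   # True
```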
Text is meant primarily to be read rather than written.
Tangent to the tangent!
I've started using it professionally because it signals "I wrote this by hand, not AI, so you can safely pay attention to it."
Even though in the past I never would have done it.
In work chats full of AI generated slop, it stands out.
Trivial to get AI to write in all lowercase, though.
Yes, the strategy depends on other senders not making even trivial effort.
> In work chats full of AI generated slop, it stands out.
Do you mean like Teams AI autocomplete or people purposefully copying AI-generated messages into chats?
The latter. Using ChatGPT to write their chat messages, usually. Emoji, arbitrary bold and italics, bullets, etc.
Informal, casual, friendly
It comes across as unfriendly to me.
Altman/Brockman did it a lot and it became popular. I don't remember if it is true or "Malcolm Gladwell" true, but in various stories all NBA players started wearing baggy shorts because Michael Jordan did for one reason or another, like wearing his college shorts under them.
It's a Gen Z trend. My nephews do the same. We are old.
informality, humanity — we're in an age where we can't assume anything is written by a person anymore
It makes you lower your guard against content that is clearly AI-generated.
Giving access to "my bank account", which I take to mean one's primary account, feels like high risk for relatively low upside. It's easy to open a new bank (or pseudo-bank) account, so you can isolate the spend and set a budget or daily allowance (by sending it funds daily). Some newer payment platforms will let you set up multiple cards and set a separate policy on each one.
An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled. There's a non-zero chance your bot-controlled account gets flagged for "unusual activity".
I can appreciate there's also very high risk in giving your bot access to services like email, but I can at least see the high upside for thrill-seeking Claw users. Creating a separate, dedicated mail account would ruin many automation use cases: it matters when a contact receives an email from an account they've never seen before. In contrast, Amazon will happily accept money from a new bank account as long as it can get through the verification process. Bank accounts are basically fungible commodities and can easily be switched as long as you have a mechanism to keep working capital available.
> An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled.
you end up on the fraudster list and it will follow you for the rest of your life
(CIFAS in the UK)
>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).
One of the differences in risk here is that I think you get some legal protection if your human assistant misuses your card, or if it gets stolen. But with the OpenClaw bot, I'm not sure any insurance or bank will side with you if the bot drains your account.
Indeed, even if in principle AI and humans can do similar harm, we have very good mechanisms to make it quite unlikely that a human will do such an act.
These disincentives are built upon the fact that humans have physical necessities they need to cover for survival, and they enjoy having those well fulfilled and not worrying about them. Humans also very much like to be free, dislike pain, and want to have a good reputation with the people around them.
It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.
Although, to be fair, we also have other soft but strong means to make it unlikely that an AI will behave badly in practice. These methods are fragile but are getting better quickly.
In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.
That liability gap is exactly the problem I’m trying to solve. Humans have contracts and insurance. Agents have nothing. I’m working on a system that adds economic stake, slashing, and "auditability" to agent decisions so risk is bounded before delegation, not argued about after. https://clawsens.us
Thought the same thing. There is no legal recourse if the bot drains the account and donates to charity. The legal system's response to that is don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this and rightfully so. If he wanted to guard himself somewhat he'd only give the bot a credit card he could cancel or stop payments on, the exact minimum he gives the human assistant.
...Does this person already have a human personal assistant that they are in the process of replacing with Clawdbot? Is the assistant theirs for work?
He speaks in the present tense, so I assume so. This guy seems detached from reality, calling [AI] his "most important relationship". I sure hope for her sake she runs as far as she can away from this robot dude.
This felt like a sane and useful case until you mentioned the access to bank account side.
I just don't see a reason to allow OpenClaw to make purchases for you; it doesn't feel like something an LLM should have access to. What happens if you accidentally end up adding a compromised skill?
Or it purchases you running shoes, but due to a prompt injection sends it through a fake website?
Everything else can be limited, but the buying process is currently quite streamlined, doesn't take me more than 2 minutes to go through a shopify checkout.
Are you really buying things so frequently that taking the risk to have a bot purchase things for you is worth it?
I think that's what turns this post from a sane bullish case to an incredibly risky sentiment.
I'd probably use OpenClaw in some of the ways you're doing (safe read-only tasks, message drafting, compiling notes, and looking at grocery shopping), but I'd personally add stricter limits if I were you.
You could give it access to a limited budget and review its spending periodically. Then it can make annoying mistakes but it's not going to drain your bank account or anything.
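The limited-budget idea suggested above can be sketched as a small guard layer sitting between the bot and any payment method. Everything here (the `SpendGuard` class, the limits) is a hypothetical illustration, not part of OpenClaw or any payment platform:

```python
from datetime import date

class SpendGuard:
    """Hypothetical per-day allowance check for a bot's purchases."""

    def __init__(self, daily_limit: float):
        self.daily_limit = daily_limit
        self.spent = 0.0
        self.day = date.today()

    def authorize(self, amount: float) -> bool:
        today = date.today()
        if today != self.day:            # new day: reset the running total
            self.day, self.spent = today, 0.0
        if self.spent + amount > self.daily_limit:
            return False                 # decline: would exceed the allowance
        self.spent += amount
        return True

guard = SpendGuard(daily_limit=50.0)
print(guard.authorize(30.0))  # True: 30 of 50 spent
print(guard.authorize(30.0))  # False: 60 would exceed 50
```

In practice you'd get the same effect without any code by using a prepaid card or sub-account that only ever holds the daily allowance, which also keeps the periodic review simple: just read that one account's statement.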
Giving it access to a separate bank account and separate credit card would have been more sane.
Fine article but a very important fact comes in at the end — the author has a human personal assistant. It doesn't fundamentally change anything they wrote, but it shows how far out of the ordinary this person is. They were a Thiel Fellow in 2020 and graduated from Phillips Exeter, roughly the most elite high school in the US.
There is only so much damage a human assistant can do.
But an AI assistant can do so much more damage in a short space of time.
It probably won't go wrong, but when it does go wrong you will feel immense pain.
I will keep low productivity in exchange for never having to deal with the fallout.
Regarding anything code/data:
I agree there are a lot of things outside the computer that are a lot more difficult to reverse, but I think that we are maybe conflating things a bit. Most of us just need the code and data magic. We aren't all trying to automate doing the dishes or vacuuming the floors just yet. Human beings are also liable for the results of their actions.
> it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia.
This made me think this was satire/ragebait. Most important relationship?!?
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it.
I've noticed this too, and I think it's a good thing: much better to start using the simplest forms and understand AI from first principles rather than purchase the most complete package possible without understanding what is going on. The cranky ones on HN are loud, but many of the smart-but-careful ones end up going on to be the best power users.
I think you have to get in early to understand the opportunities and limitations.
I feel lucky to have experienced early Facebook and Twitter. My friends and I figured out how to avoid stupidity when the stakes were low. Oversharing, getting "hacked", recognizing engagement-bait. And we saw the potential back when the goal was social networking, not making money. Our parents were late. Lambs for the slaughter by the time the technology got so popular and the algorithms got so good and users were conditioned to accept all the ads and privacy invasiveness as table stakes.
I think AI is similar. Lower the stakes, then make mistakes faster than everyone else so you learn quickly.
So acquiring immunity to a lower-risk version of the service before it's ramped up? E.g., jumping on FB now as a new user is vastly different from doing so in 2014: you might go through the same noob patterns, but with a lower-octane version of the thing. The risk of AI psychosis has probably gone up for new users, like the risk of getting too high has since we started optimizing weed for maximum THC?
(Disclaimer: systems software developer with 30+ years experience)
I was initially overly optimistic about AI and embraced it fully. I tried using it on multiple projects - and while the initial results were impressive, I quickly burned my fingers as I integrated it more and more into my workflow. I tried all the things last year. This year, I'm being a lot more conservative about it.
Now .. I don't pay for it - I only use the bare bones versions that are available, and if I have to install something, I decline. Web-only ... for now.
I simply don't trust it enough, and I already have a disdain for remotely-operated software - so until it gets really, really reliable, predictable and .. just downright good .. I will continue to use it merely as an advanced search engine.
This might be myopic, but I've been burned too many times and my projects suffered as a result of over-zealous use of AI.
It sure is fun watching what other folks are daring to accomplish with it, though ..
I just can't get over how none of this is new. Six months ago I was running "summarize my work" tasks using Linear and GitHub MCPs, just a cron task and Claude Code. The hype around OpenClaw is wild.
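For readers who haven't seen this pattern, the cron-plus-CLI setup that comment describes looks roughly like the fragment below. This is a sketch, not a recipe: the schedule, prompt, and log path are made up, and it assumes you have a CLI agent (such as Claude Code's non-interactive mode) installed and authenticated, with the Linear/GitHub MCP servers configured separately in its own config.

```
# crontab fragment: every weekday at 18:00, run the agent once
# non-interactively and append its answer to a log file.
# (Command name and flags are assumptions; check your CLI's docs.)
0 18 * * 1-5 claude -p "summarize today's Linear issues and GitHub activity" >> "$HOME/work-summaries.log" 2>&1
```

The point being made is that "wake up on a schedule, act on broad instructions" needs nothing more exotic than cron; what OpenClaw adds is the packaging and integrations around that loop.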
A lot of it is, in fact, new.
The hype around OpenClaw is largely due to the large suite of command line utilities that tie deeply into Apple’s ecosystem as well as a ton of other systems.
I think that the hype will be short-lived as big tech improves their own AI assistants (Gemini, improved Siri, etc), but it’s nice to have a more open alternative.
OpenClaw just needs to focus on security before it can be taken more seriously.
To me, the magic is in interactions with personal info where it lives - iMessages, e-mails, etc. I'm still wary of opening up like this, but it certainly is not as simple as Claude Code and a cron task. The "you can already do this via rsync + FTP" comment on the Dropbox Show HN thread comes to mind.
> in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).
Yeah this sounds totally sane!
Did the author do any audit on correctness? Anytime I let the LLM rip it makes mistakes. Most of the pro AI articles (including agentic coding) like this I read always have this in common:
- Declare victory the moment their initial testing works
- Didn’t do the time intensive work of verifying things work
- Author will personally benefit from AI living up to the hype they’re writing about
In a lot of the author's examples (especially booking), a single failure would be extremely painful. I'd still rather pay a human, knowing a failure is not likely to happen, and that if it does, I'll be compensated accordingly.
But where's the added value? You can book a meeting yourself. You can quickly add items to the freezer. Everything that was described in the article can be done in about the same amount of time as checking with Clawdbot. There are apps that track parcel delivery and support every courier service.
A whole bunch of this stuff that people are fawning over as life changing and it leaves me honestly wondering: how have some of you survived this long at all?
The point of keeping the bot in the loop is so that it can make suggestions later, based on the information it's been given as part of solving that task.
Why is everything in lowercase?
sam altman types like this, so this is what is cool to the agi believers.
this is cultural appropriation, i learned to type like this on irc in the 90s
also i don't want to be mistaken for a phone poster
There are two notable differences between when the AGI-posters do it and when IRC-posters do it. AGI-posters extend their lowercase posting to what would normally be seen as more formal communication. They also tend to stick to using punctuation despite the lowercase. IRC posters usually keep it to informal communications, where it's a sign of casualness. That said, there is overlap, and it's of course not possible to instantly distinguish someone as a Sama devotee because of how they type; but it is clear that a lot of people in that bubble are intentionally adopting the style.
Maybe he writes in lower case because he targets "lower ages"?
https://www.bbc.com/news/articles/cz6lq6x2gd9o
Funny thing is that his agent is perfectly capable of using upper- and lowercase correctly. Judging from his screenshots..
Maybe he is a fan of the Bauhaus movement.
>> we write everything in small letters, as we save time. also: why 2 alphabets, if one achieves the same? why capitalize, if you can't speak big?
Almost reaching the "Why use many word when few do trick?"
Almost works for Neanderthal poetry [1]
[1] https://www.explodingkittens.com/products/poetry-for-neander...
sense no make
It’s how you signal you’re part of the AI inner circle/cult.
so, that's how i can show my full devotion to the agi?
I'm still trying to understand what makes this project worthy of like 100K Github stars overnight. What's the secret sauce? Is it just that it has a lot of integrations? Like what makes this so much more successful than the ten thousand other AI agent projects?
It's set up to wake up periodically and work autonomously for you based on the broad instructions it's been given. Compared to the usual coding agent workloads, this makes it a lot more "assistant"-like.
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it
is it "hobbled" to:
1. not give an LLM access to personal finances
2. not allow everyone in the world a write channel to the prompt (reading messages/email)
I mean, okay. Good luck I guess.
If you're on the MS stack, this is all stuff that MS 365 Copilot will already do for you, but with much better defined barriers around what it can and can't access.
This is the first positive thing I have heard about copilot. You’ve found it genuinely capable?
Just weeks ago, the sentiment was such that developers would be managing AI workers.
Now, it seems that AI will be managing the developers.
Nope, this is what the hype wants you to believe. You still have to do all the thinking as the current crop of AI is a tool at your disposal. A pretty impressive tool.
Or more : https://rentahuman.ai/
'the sweet sweet elixir of context is a real "feel the AGI" moment and it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia'
I'm not so sure that I would use the word "sane" to describe this.
https://www.theregister.com/2026/02/04/cloud_hosted_openclaw...
Kill it with fire - Analyst firm Gartner has used uncharacteristically strong language to recommend against using OpenClaw.
As someone for whom English is not the first language, I got stumped by the "chest freezer" and the photo of colourful bags, for good ~15 seconds, going through - "hm, must be some kind of travel thing where you bring snacks in some kind of device you carry around your neck / on your chest...why not backpack freezer then...hm, why would snacks need a freezer...maybe it's just a cooler box, but called chest freezer in some places"...
....before I took a better look of the photo and realised it's frozen stuff - for the dedicated freezer - that opens like a chest (tada).
Well, that was fun...Maybe I should get a bit more sleep tonight!
I lose some utility, but my OpenClaw bot only has its own accounts. I do not give it access to any of my own accounts.
Do you mean “bullish”?
That would be the more general/traditional way of saying it, but in modern investment circles the focus seems to have turned towards the actual people being "bulls/bears" and not just the attitudes of the market. A person is a bull or a bear, as opposed to a person being either bullish or bearish.
So in this construction, a "bull case" is a "case that a bull (the person) can make".
Bull and bear markets. Bull’s horns are pointing up (expecting growth, optimistic), bear’s claw is pointing down (expecting recession, pessimistic). Yeah, it’s stupid.
So they indeed meant "bullish"? That's what "bullish" means.
"a bull case" gets lots of google results, so it seems to be a commonly used construction amongst analysts. Basically it means "The case that OpenClaw will develop as a bull".
"bullish" seems more common in tech circles ("I'm bullish on this") but it's also used elsewhere.
"Bullish" means optimistic or even aggressively optimistic. It's typically used in the context of markets.
Sane is an adjective, and "X but Y Noun" expects Y to be an adjective if X is one. Sane/Bull Case -> Sane/Bullish Case.
Right, so they probably meant bullish
Can this thing deal with the insane way my children's school communicates? Actionable information (children wear red tomorrow) is mixed in with "this week we have been learning about bees" across five different communication channels. I'm not exaggerating. We have Tapestry, emails, a newsletter, parents WhatsApp, Arbour and Facebook.
I guess the difficulty is getting the data into the AI.
That's six channels actually.
Wait I'm ignorant, how long has OpenClaw/Clawdbot existed? This person listed like 6 months of activities that they offloaded to the bot, I thought this thing was pretty new.
OpenClaw utilizes AgentSkills designed by Anthropic so it's been plug and play with certain APIs and integrations.
FWIW, the screenshots all have the dates spanning the last couple days.
But yeah, I can't imagine me getting used to a new tool to this degree and using it in so many ways in just a week.
Maybe Clawd wrote this itself, and it just doesn't know how old it is?
> let me be upfront about how much access i've given clawdbot: it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf.
this is foolish, despite the (quite frankly) minor efficiency benefits that it is providing as per the post.
and if the agent has, or gains, write access to its own agents/identity file (or a file referenced by its agents file), this is dangerous
[flagged]
This is a bot account. Last post in 2024, then in the last 25 minutes it has spammed formulaic comments in 5 different threads. If you were not able to instantly recognise this post as LLM-generated, this is a good example to learn from, I think. Even though it clearly has a prompt to write in a more casual manner, there's a certain feel to it that gives it away. I don't know that I can articulate all the nuances, but one of them is this structure of 3 short paragraphs of 1-2 sentences each, which is a favorite of LLMs posting specifically on HN for some reason, together with a kind of stupidly glazy tone ("killer app", "always felt 5 years away", randomly reinforcing "comparison to a human assistant you've never met" as though that's a remotely realistic comparison; how many people in the world have a human assistant they've never met and trust with all of their most sensitive information?).
for me, it's the bullet points with bold one-word sentences
[dead]
That's why it doesn't seem worth it if you are not running the model locally. To really get powerful use out of this you need to be running inference constantly.
The Pro plan exhausts my tokens two hours after the limit resets, and that's with only occasional requests on Sonnet. The 5-8x usage Max plan isn't going to be any better if I want to run constant crons with the Opus model (the docs recommend using Opus).
Good Macs cost thousands, but I'm waiting to find someone showing off my dream use case before I jump at it.
>Having something that can actually parse "yep lets do 4pm tomorrow" from texts and create calendar holds is the kind of thing that's always felt 5 years away.
Isn't that just Google Assistant? Now with Gemini it seems to work like a LLM with tools.