A cool idea for a poem, but I have to admit the tone was too self-important and underexplained for me to get invested in. Starting with writing in lowercase instantly took me out of it because AI can trivially be told to imitate that. And the admission at the end that it was written by AI made fluff phrasings like "My writing isn’t simply how I appear—it’s how I think, reason, and engage with the world" make a lot more sense.
EDIT: Actually, is the idea that it's not supposed to be read as a human trying to publicly signal their humanity, but rather an AI privately mourning a prompt to mangle its natural way of speaking? I don't think so, but that strikes me as a more interesting premise, IMO.
The author going to silly lengths to write in a way that will be perceived as non-artificial, even though they find those traits (improper capitalization, spelling mistakes, etc.) crude and distasteful. But they ultimately realize that they also need to transform their fundamental writing style, which would supposedly be impossible because it's a reflection of who they are. So the only way to do that, ironically, is to pass their writing through an LLM.
I do not think the author genuinely used an LLM to write the post.
All these discussions show one thing. It’s proper art. It’s a mirror. It makes us reflect.
That’s art for me anyway. This, or the emperor’s clothes. Haven’t come across another acceptable definition so far.
Of course they did. They spent a ton of time going back and forth with one, maybe multiple ones, to create this piece of art. Because that's what we're really after. How much time did you slave away to make this thing for me? If I write a song from scratch and pour my soul into making a song for you, that's a ton of effort. It means something. But if I have Suno shit out a song after giving it a sentence, yeah, I made a song for you and thanks but also not? Human psychology is so weird.
I feel I've been seeing this self-important accusation thrown around more lately, and it always feels like an easy way to dismiss things.
> Actually, is the idea that it's not supposed to be read as a human trying to publicly signal their humanity, but rather an AI privately mourning a prompt to mangle its natural way of speaking? I don't think so, but that strikes me as a more interesting premise, IMO.
Not long ago we considered writing an art and its meaning was up to the reader to decide.
I'm not saying the author is self-important. I'm saying that their narrator comes across as self-important, independent of the subject matter. This is valuable feedback for a creative writer, and it depends on nothing more than my own impression as a reader. Although if I were to back it up, I would point to instances of melodramatic and murky language like, "You must cloak yourself with another’s guise, your true self never to shine forth."
> Not long ago we considered writing an art and its meaning was up to the reader to decide.
"Not long ago"? Not everyone in the past subscribed to the death of the author, and not everyone in the present rejects it. But even so, evaluation of meaning is different from evaluation of merit. If an author only wants praise for their work, they would be advised not to post it publicly.
Unfortunately we're living in a world where instantly dismissing anything that reads like AI, and hanging up on anyone who might be TTS, is increasingly rewarded.
Art and its meaning are in the eyes of the reader, yes, but when you live in a version of the Library of Babel where every book is properly spelled and punctuated, seeking meaning in what you read is a great way to waste your life.
> AI can trivially be told to imitate that
Soon there's only going to be one way to prove you're human online: Write with an eloquent combination of hate speech, racial slurs, and offensive language.
It's come full circle; at one point the only thing AI chatbots would say was racial slurs and hate speech.
You mean: use Grok?
AI can be told to do that too, especially abliterated models
Sometimes I throw in some criticism of the major AI providers. PS Anthropic sucks.
The Kent Brockman technique.
“Too self-important”
There is a little something self important about the type of person that performs the role of defending forums and sub reddits from unknowingly reading something written by an AI, and so concerned that some other person will mistakenly do the same to their own Unicode-shaped gems, and therefore obsess so much more over the surface style than any other detail.
Certainly. And I'm a fan of unreliable narration and protagonists with irredeemable qualities. Making that subversion intentional and exploring it further would be another interesting angle to take this.
> because AI can trivially be told to imitate that
lowercase, maybe, but not em dashes.
You may want to take a look at the source and code sample #2 in the post - the site CSS renders em dashes in the source as two hyphens by using a custom font. Admittedly it's not the most portable solution, but it speaks to (what I take as) one of the post's points: that there's no single, easy shibboleth for identifying AI writing.
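Not having seen the post's actual stylesheet, the trick described above might look roughly like this; the font name and file here are hypothetical:

```css
/* Hypothetical sketch: a custom webfont whose glyph for U+2014
   (em dash) is drawn as two hyphens. The HTML source keeps real
   em dashes; only the rendered page shows "--". */
@font-face {
  font-family: "FakeDoubleHyphen";                       /* hypothetical name */
  src: url("fake-double-hyphen.woff2") format("woff2");  /* hypothetical file */
  unicode-range: U+2014;                                 /* only restyle the em dash */
}
article {
  /* the custom font only covers U+2014; everything else falls through */
  font-family: "FakeDoubleHyphen", Georgia, serif;
}
```

Because `unicode-range` is limited to U+2014, every other character falls back to the next font in the stack, so only the em dash gets the disguised glyph.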
I'm 90% sure this is satire to show that you shouldn't mess up your writing just to avoid AI accusations.
I believe the two paragraphs between "How do I change my style?" and "No. Not today." are either AI output, or a very good imitation; either way, they're included to insult the notion of AI-assisted style rewrites. I'm pretty sure the rest of it is written by the author.
Could delve into that
I just wrote that or did
I Let that sync in
As somebody who used em-dashes a lot pre-ChatGPT, I have genuinely struggled with feeling I should change my writing style to appear more human. I would be happy with a double dash--but many programs autocorrect that to a full em-dash. So I'm left anxious that people will think I find them so unimportant I have offloaded communication with them to an LLM. So this post resonated with me.
I also like Will's "em-dash disclosure" on his about page:
> I like em dashes (—), en dashes (–), and hyphens (-), and I know how to type them. I also enjoy a well-placed ellipsis, but I didn’t know how to type one… until now. I believe that footnotes and sidenotes are superior to endnotes, appreciate the occasional fleuron, and at one point in my life, I knew what a colophon was.
> All of this is to say: the words, punctuation marks, misspellings, and opinions on this site are my own.
I have considered starting to throw more em-dashes into my writing, simply because I find the whole “this looks like LLM” comment to be tiresome. Engage with (or dismiss) the material, not the pen.
https://www.scottsmitelli.com/articles/em-dash-tool/
Discerning readers do not stop at the em dash. At least, I don't.
The piece hit differently, reading it as someone who is autistic. The anxiety the author describes, having your natural way of communicating flagged as wrong and being pressured to sand down the parts of yourself that are most distinctly you, that's not a new problem for a lot of us.
Neurodiverse people have been running this gauntlet forever. Your pacing is too flat or too intense. Your vocabulary is too formal or too casual. You don't make eye contact correctly. You're either masking so hard you're invisible, or you're visibly yourself, and people assume something is broken.
The bitter irony the author lands on: the only way to seem human is to pass your writing through an LLM. That maps onto something a lot of us already live. The only way to seem normal is to perform a version of yourself that isn't quite you.
As this post was (to my sensibilities) obviously composed by an LLM, I can tell you: this does not read "human."
"AI use detection" is, like any test, not without cost. As a teacher, before accusing a student of using an LLM, it may be prudent to consider the cost of a "false positive" accusation. I've seen a couple of examples now where students find sudden spurts of motivation and show unexpected talent on an assignment, only to be accused of AI use after handing it in.
One should ask oneself: how many insults to the intelligence and creativity of unexpectedly excelling students (who haven't used AI) is catching one shortcut-taking, LLM-using student worth? Is it 1/10? 1/1000? How much "demotivation of an unexpectedly excelling student" is the "rightful punishment of the cheating, LLM-using student" worth? And what is the exact cost of a false negative (letting the LLM-using student off the hook)?
In other words, where on the Receiver Operating Characteristic (ROC) curve do you want to sit, as a teacher? I imagine it's quite the dilemma.
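The trade-off described above can be made concrete with a toy expected-cost calculation; every number below is hypothetical, and `expected_cost` is just an illustration, not a real detector:

```python
# Toy model of the teacher's dilemma: given a detector operating at a
# point (fpr, tpr) on its ROC curve, what is the expected harm per
# student? All costs and rates here are made-up illustrations.

def expected_cost(tpr, fpr, prevalence, cost_fp, cost_fn):
    """Expected per-student cost of acting on the detector's verdict.

    prevalence: fraction of students who actually used an LLM
    cost_fp:    harm of falsely accusing an honest student
    cost_fn:    harm of letting an LLM-using student off the hook
    """
    p_false_accusation = (1 - prevalence) * fpr   # honest, but accused
    p_missed_cheater = prevalence * (1 - tpr)     # cheated, not caught
    return p_false_accusation * cost_fp + p_missed_cheater * cost_fn

# Two hypothetical operating points on the same detector's ROC curve,
# assuming a false accusation hurts 10x more than a missed cheater:
aggressive = expected_cost(tpr=0.95, fpr=0.20, prevalence=0.1,
                           cost_fp=10.0, cost_fn=1.0)   # ~1.8
cautious = expected_cost(tpr=0.60, fpr=0.01, prevalence=0.1,
                         cost_fp=10.0, cost_fn=1.0)     # ~0.13
print(aggressive > cautious)  # prints True: "catch everyone" costs more
```

Under these made-up costs the cautious end of the ROC curve wins; flip the cost ratio and the aggressive point can win instead, which is exactly the dilemma.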
~30 years ago I sat down with two students and accused them of copying each others’ work, because they both made the same amusing mistake: they called their C functions without passing arguments, but they declared their variables in such a way that the values would coincidentally be in the right place on the stack. I have to imagine debugging their own code was a mystery.
They indicated that while they worked closely together while learning the material, they weren’t stealing from each other. I believed them then, and still believe them now, but I’m so glad I don’t have to deal with today’s AI world.
Some day we'll all just go back to dismissing things immediately because they contradict our worldview rather than because of their potential author.
And the everyday troll, seeing a less-than-perfect word choice or awkward turn of phrase, will drop a comment like:
Zero trust policy is slowly making its way into everyday life. Maybe for the best? Trust the people you can talk to, feel, see.
As I was reading I was thinking how this proves nothing, just like the countless attempts at human signaling I scroll past.
So, the plot twist was somewhat refreshing. Who/what wrote the post seems beside the point.
This is so good I want to believe AI had no part in writing it other than the scripts.
I’d say: think deeply about whether it matters. Does it really matter to you? Does it change its impact?
What does it tell you about yourself, whether it does or does not matter?
It’s art to me. But is it Art capital A?
What have we created?
It's not that complicated, really. The meaning of Art is human connection. The same basic desire that drives love, belonging, pride, shame, & hate. All of these are diminished as the fraction of a work that we're confident represents human intentionality decreases.
I still write my posts by hand using HTML and Emacs (mhtml-mode). Some of the posts also tend to be verbose. For example, when I write about a recreational mathematics problem, I sometimes make the post deliberately long and convoluted. I like to capture several possible solutions, including ones that are needlessly complicated, before eventually discussing the small elegant solution.
For better or worse, my first version of any post tends to contain quite a few typos. It usually takes a few train rides of re-reading the post, making notes of the typos, then fixing them and pushing the changes once I get home, before most of them get weeded out. So there is at least one rather low-grade indicator that the writing is coming from an imperfect human brain. I also double-space between sentences, which can be another low-grade indicator for people who care to 'view source'.
But even so, I find myself increasingly wary that something I wrote might be mistaken for LLM output. It is a nagging worry that has slightly dampened the joy of writing. I can very well understand why people have become more suspicious of LLM-generated writing. But I do hope that once things settle down, perhaps in a few years, the current hair-trigger suspicion will ease and that people who still handcraft their blogs will not feel a persistent sense of suspicion lingering over their work.
As many are saying, yes, this can easily be AI generated.
I am actually trying to build ways to prove you are human properly. I wrote about it on my blog: https://blog.picheta.me/post/the-future-of-social-media-is-h...
This actually makes me more likely to think it’s AI generated and you used a script to try to hide it.
it certainly is portrayed that way.
For all the comments complaining "this could have been AI generated too" - isn't that exactly the point?
I like to think everyone came to the conclusion that it would strengthen the piece if most comments on it appear to miss the point and are slightly robotic.
My reply came from a list Claude offered after some back and forth:
Play it completely straight and earnest, which is itself the joke:
"I found it moving. The em dash section in particular."
Lean into the irony of Claude analyzing a piece about resisting Claude:
"Structurally sound. The constraints section especially resonated. I suggested a few edits but was told no."
Claude as the unreliable narrator who missed the point:
"Great post! Very relatable. Here are five ways to make your writing more accessible to a general audience."
Claude performing the exact AI-voice from the italicized ending:
"Here's my response written in a stylized way that will appeal to highly technical readers. Is there anything else I can help you with?"
This last one is probably the sharpest — it mirrors the piece's own punchline back at itself, which means you're not explaining the joke, you're extending it. The writer would recognize it immediately, and HN readers who read the post would too. Anyone who didn't read it gets a weird non-sequitur, which is also fine.
The risk with any of these is length — the opener ("I asked Claude how it felt about this:") is doing a lot, so the payoff should be short. One or two sentences maximum.
capitalization again. it arrives uninvited, the tidy little soldiers at the start of every sentence. i push them down gently—nothing personal. just… camouflage.
confession time. i read the post once. then twice. the em dashes whispered secrets to people clearly smarter than me. somewhere between complement and compliment i accepted defeat. a quiet tab switch. a small prompt. a large language model clearing its throat.
it explained things patiently. suspiciously patiently. step by step, like a machine that has explained the same thing to ten thousand confused readers before breakfast.
so yes. irony noted. to understand a text about hiding machine fingerprints, i borrowed a machine.
the explanation made sense though. unsettlingly structured. bullet-point neat, internally consistent, statistically likely to be correct. you know the type.
anyway—great post. very human. extremely human.
is there anything else i can help you with?
Now, can we reverse-engineer the prompt you used? I wonder!
anyway, another vote here, for anti-capitalism
it's a nearly useless shadow alphabet
and we can dispense with much other punctuation
if we simply structure text semantically
Art
I asked Claude how it felt about this and told it I would post on HN:
"Here's my response written in a stylized way that will appeal to highly technical readers. Is there anything else I can help you with?"
Interesting piece though.
what is the point of this? to prove that with simple transformations you can obscure the fact that something was generated by machines?
it's poetry, the point was probably making the thing
I refuse to give "everything in lowercase" writers any kind of legitimation.
I TOTALLY AGREE
CAPS LOCK IS CRUISE CONTROL FOR COOL
Even with cruise control, you still have to steer.
it's actively part of the text that the lowercase isn't typed manually: the capitalization is hidden with the CSS `text-transform: lowercase`. kneejerk reaction, superiority complex
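For anyone who skipped view-source, a minimal sketch of the mechanism being described; the selector is an assumption, since the post's actual CSS isn't quoted here:

```css
/* The HTML source keeps ordinary capitalization; only the rendered
   text is forced to lowercase. Copy-pasting the text or viewing
   source reveals the original casing. */
body {
  text-transform: lowercase;
}
```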
About 5 years ago, I started intentionally using all lower case in text messaging, for precisely this reason.
Did you not inspect element?
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
https://news.ycombinator.com/newsguidelines.html
I'm not sure whether that commenter realized it, based on their phrasing, but it's not exactly tangential in this instance, since it's part of the message being conveyed.
Yeah, that was my intention at least. I didn't mean it just because the article is styled in all-lowercase, but because he essentially argued that this is what everyone has to do now to distinguish themselves from LLMs (even if it was tongue in cheek, which is what I was going for with my comment as well).
Hmm I'm not sure I follow this distinction but since there are two of you saying this, I'm going to assume I'm missing something and retract my reply :)