Here's a Reuters report from June 2, which includes a link to a May 14 SEC filing:
> Cryptocurrency exchange Coinbase knew as far back as January about a customer data leak at an outsourcing company connected to a larger breach estimated to cost up to $400 million, six people familiar with the matter told Reuters.
https://www.reuters.com/sustainability/boards-policy-regulat...
> On May 11, 2025, Coinbase, Inc., a subsidiary of Coinbase Global, Inc. (“Coinbase” or the “Company”), received an email communication from an unknown threat actor claiming to have obtained information about certain Coinbase customer accounts, as well as internal Coinbase documentation, including materials relating to customer-service and account-management systems.
https://www.sec.gov/Archives/edgar/data/1679788/000167978825...
Very interesting... January 7th is when I reported it to them, so that lines up. I suspect I wasn't the very first person; the person I spoke with on the phone had a confidence I wouldn't expect on a first report.
> an outsourcing company
From what I've seen, this is going to be a common subheading to a lot of these stories.
I once did some programming/networking work for a company that handled the networking of an office-sharing building that Coinbase was operating out of. Early in my work there I noticed that the company had its admin passwords written on a whiteboard -- visible from the hallway, because they had glass walls. So I sent them an email asking that they remove it (I billed them for it).
Their fix was to put a piece of paper over the passwords.
What a time.
> So I sent them an email to ask that they remove it (I billed them for it)
Sending unsolicited bills for unrequested services is a great way to make sure nobody takes your email seriously
GP is saying that they were already one of Cloudflare's vendors (they did the networking/IT setup for Cloudflare's office). Whether you'd tolerate that kind of behavior from a vendor is one thing, but for an existing vendor relationship I think adding a few billable hours for "I found this issue in your network and documented and reported it for you" to an existing contract is not particularly unreasonable.
More likely, this is a spectacular version of CYA. By billing the hours, there is a paper trail so that when the inevitable breach occurs, you can point to having done the appropriate thing.
s/cloudflare/coinbase/
They are lucky they just got a bill and not a terminated contract. Consulting companies I have worked for would have dropped them immediately because we don't want clients with that kind of risk. Massive red flag that signals management is non-existent, incompetent, or checked out. That is egregious negligence.
Top notch start-up speed! Let's brrrrrrrrr...
This doesn’t surprise me at all.
Bitcoin, and really fintech as a whole, are beyond reckless.
You say that, but I work in fintech (granted, one of the larger, more corporate ones, after an acquisition) and we are heavily regulated and audited.
Bitcoin is a crypto-currency/blockchain. Coinbase is a corporation that allows users to buy/trade crypto-currencies.
With Bitcoin you do not get government bailouts like what happened with the beyond reckless banks in 2008.
"With Bitcoin you do not get government bailouts" -- yeah maybe not yet? Is it beyond belief that a government with leadership deeply invested in crypto currencies might take action if something super disruptive happens?
Possible. But Bitcoin is hard capped at 21 million coins. The government can print more paper money to bail a company out if it makes stupid decisions, but it cannot print more Bitcoin. That devalues the paper currency even more and also increases the value of Bitcoin. Bitcoin is called a hedge against inflation for a reason.
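For context on where the 21 million figure comes from: it isn't a constant written down anywhere, it falls out of the halving schedule (the block subsidy starts at 50 BTC and halves every 210,000 blocks). A minimal sketch of the arithmetic in Python, not the actual consensus code:

```python
# Sum the total issuance implied by the halving schedule.
# The subsidy starts at 50 BTC (in satoshis) and halves every 210,000 blocks
# until integer division drives it to zero.
subsidy = 50 * 100_000_000  # satoshis
total = 0
while subsidy > 0:
    total += 210_000 * subsidy
    subsidy //= 2  # the halving
print(total / 1e8)  # -> 20999999.9769, i.e. just under 21 million BTC
```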
> But Bitcoin is hard capped at 21 million coins
Bitcoin is not an immutable law of nature. If the coin minting cap is reached, all that needs to happen is for miners to start running a fork with a higher cap. Tada, more coins conjured out of the ether, just like all the previous ones. If you want enforced scarcity, you need to be tied to something physically scarce.
The miners can totally start mining a fork, in fact they can start doing so today, but it doesn't matter because nobody will use their fork and then they will have lost out on their hundreds of millions of dollars of investments into mining equipment.
The node operators play just as critical of a role in Bitcoin as the miners.
It's not the node operators either, it's the people who transact on the chain that determine the value of the coins. The miners can disrupt the ability of the chain to transact to some degree, but they can't make people think their fork is worthwhile (why anyone still thinks BTC has much long-term value is beyond me, but...).
Yes! Thanks for that correction.
all that needs to happen is for countries to destroy their nuclear weapons
all that needs to happen is for governments to stop burning fossil fuels
all that needs to happen is for researchers to publish boring papers replicating others' results
all that needs to happen is for fishermen to stop overfishing
Coordination problems seem easy but never really are. The chance of all the miners just suddenly agreeing to do something all at once is pretty low to impossible.
It would require the market to move as well to consider those new coins worth anything, though. Miners do not have enough control of the chain to make such changes on their own.
At present BTC is usually denominated in USD. Until I start to see BTC used as the cross-rate I'm sceptical. Presuming it occurs, it would occur relatively quickly?
Square just pushed out the ability to pay in Bitcoin to millions of retailers this last week: https://www.forbes.com/sites/digital-assets/2025/11/11/bitco...
We're right around the corner from that very day you're talking about.
> With Bitcoin you do not get government bailouts like what happened during the beyond reckless banks in 2008
It is not beyond imagination that the most popular Bitcoin blockchain (and thus, the label of being the "real" Bitcoin) could change at some point in the future.
"Bitcoin" is not immune from the implications of political fuckery.
By what mechanism? The whole point of bitcoin is that you can’t force a consensus change. This is enforced by the algorithm and the laws of thermodynamics.
If, for whatever reason, all the mining power switches to the other chain, it will become the de facto "Bitcoin".
I don't know what the specific mechanism would be, but I would bet that it relates to the billions of dollars backing the current ecosystem, and the interests of the people behind them. If the right event or crisis comes along, then people could be compelled to switch over to something else.
I'm sure there's someone out there still mining blocks on that chain with the exploit from 2010, but that's not where the mining power is. If the right series of events occurs, the miners will switch.
Bitcoin has forked a few times since its creation: https://en.wikipedia.org/wiki/List_of_bitcoin_forks The determining factor for which fork is successful is which one the Bitcoin node runners and miners choose to devote their resources to.
Governments around the world are 100% attempting different plans to destabilize or destroy Bitcoin because it harms their interests and ability to print money from thin air. But at the end of the day it's a distributed ledger, so even if they do find a way to manipulate or damage or takeover the network the Bitcoin users can just fork it from before they did their damage and continue from there. That is the ultimate power of a decentralized blockchain, nobody has ultimate power and everyone votes with their resources.
Power comes from the barrel of a gun.
Yes. That is why the Second Amendment is so important. It reminds those in the government not to overstep their bounds.
There was a government* bailout in Ethereum, however. https://en.wikipedia.org/wiki/The_DAO
The government of Ethereum is not the US government.
I don't see a reference to a government bailout in the article you listed. The chain was forked by the community back to the state before the hack, and most users switched over, supporting this fork and calling it Ethereum going forward.
Ah yes, I remember all the times they hacked bitcoin
It's been a while, but it has happened:
https://nvd.nist.gov/vuln/detail/CVE-2010-5139
There's a great index of hacks here https://www.web3isgoinggreat.com/?theme=hack
It's breathtaking how frequent these are.
That’s like saying the $USD was hacked when a bank gets breached.
That's a silly assumption to make. I'm clearly talking about the poor security offered by cryptocurrency, in practice, as evidenced by the frequent hacks impacting cryptocurrency companies.
No. It's like saying that cash is risky when a bunch of cash gets stolen or lost.
Are banks breached at the same rate as bitcoin brokers? I think that was op’s point.
lol monero in username
I got rung in the UK I think a month ago from someone claiming to be from Coinbase. I told them I only had about £5 of Bitcoin cash in my account (which was true), and they immediately lost interest and said a forthcoming email would handle the matter.
They also asked if I had cold storage. I told them I had a fridge (also true).
Hahaha, I'm using this next time I get a spam call
Not sure if the op is reading, but I also detected the same Coinbase hack around the same timeline. From what I can tell, literally everything was compromised because even their Discord channel's api keys were compromised and were finally reset around April or May. This means their central secrets manager was likely compromised too.
This doesn't seem like proof to me.
The author got a phishing call and reported it. Coinbase likely has a deluge of phishing complaints, as criminals know their customers are vulnerable and target their customers regularly. The caller knowing account details is likely not unique in those complaints; customers accidentally leak those all the time. Some of the details the attacker knew could have been sourced from other data breaches. At the time of complaint, the company probably interpreted the report as yet another customer handling their own data poorly.
Phishing is so pervasive that I wouldn't be surprised if the author was hit by a different attack.
My first thought was that someone tied a blockchain transaction to my name and then traced it backwards. But they also knew my ETH and BTC balances, and the date the account was opened. You might be able to figure out the open date by looking at the blockchain, but I could never determine how they would know balances for two unrelated cryptos without some kind of Coinbase compromise.
> but I could never determine how they would know balances for two unrelated cryptos
There's tons of options. Malware, evil maid, shoulder surfing, email compromise, improper disposal of printouts, prior phishing attack, accidental disclosure.
True, I can't rule those out entirely. I access it via iPhone to limit the attack surface; the info was never printed, present in emails, or disclosed to third parties.
This is an extremely clickbaity headline.
The "recordings" are of a phisher attempting to get information from the author. It proves nothing about what Coinbase knew.
The author turned the information over to Coinbase, but that doesn't prove Coinbase knew about their breach. The customer could have leaked their account details in some other way.
I sent the phone recording and emails to coinbase, and they acknowledged them saying "This report is super robust and gives us a lot to look into. We are investigating this scammer now."
You apparently did not read the article. What you are looking for is right there.
Wild tale, but very annoying that he wrote it with an AI. It's horribly jarring to read.
How do you know?
I'm not trying to be recalcitrant; rather, I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style based on someone else's, and given that some wiki users had prolific levels of contributions, much of their naturally written text would register as highly likely to be "AI" via those bullshit AI detector tools.
So, given what we know of LLMs (transformers at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.
Looks like AI to me too. Em dashes (albeit nonstandard) and the ‘it’s not just x, it’s y’ ending phrases were everywhere. Harder to put into words but there’s a sense of grandiosity in the article too.
Not saying the article is bad, it seems pretty good. Just that there are indications
It's also strange to suggest readers use ChatGPT or Claude to analyze email headers.
Might as well say "You can tell by the way it is".
I don’t understand this comment. I’ve found AI a great tool for identifying red flags in scam emails and wanted to share that.
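For anyone who wants to check the basics without (or before) pasting headers into a chatbot: the receiving mail server usually records the SPF/DKIM/DMARC verdicts in an Authentication-Results header, and pulling out the relevant headers is a one-screen script. A minimal sketch in Python; the filename is an assumption:

```python
# Pull the headers most useful for spotting spoofed mail from a saved
# raw message (export the suspicious email as a .eml file first).
from email import policy
from email.parser import BytesParser

with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

for name in ("From", "Reply-To", "Return-Path", "Received-SPF",
             "Authentication-Results", "DKIM-Signature"):
    for value in msg.get_all(name, []):
        print(f"{name}: {value}")
```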
This blog post isn't human speech, it's typical AI slop. (heh, sorry.)
Way too verbose to get the point across, excessive usage of un/ordered bullets, em dashes, "what i reported / what coinbase got wrong", it all reeks of slop.
Once you notice these micro-patterns, you can't unsee them.
Would you like me to create a cheat sheet for you with these tell tale signs so you have it for future reference?
Just chiming in here - any time I've written something online that considers things from multiple angles or presents more detailed analysis, the likelihood that someone will ask if I just used ChatGPT goes way up. I worry that people have gotten really used to short, easily digestible replies, and conflate that with "human". Because of course it would be crazy for a human to expend "that much effort" on something /s.
EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI generated assets that I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.
Author here - yes, this was written using guided AI. I consider this different than giving a vague prompt and telling it to write an article. My process was to provide all the information; for example, I used AI to:

1. transcribe the phone call into text using a Whisper model
2. review all the email correspondence
3. research industry news about the breach
4. brainstorm different topics and blog structures to target based on the information, and pick one
5. review the style of my other blog articles
6. write the article and redact any personal info
7. review the article and iterate on changes multiple times

To me this is more akin to having a writer on staff who can save you a lot of time. I can do all the above in less than 30 minutes, where it could take a full day to do it manually. I had a blog 20 years ago but since then I never had time to write content again (too time consuming and no ROI) - so the alternative would be nothing.
There are still some signs you can tell content is AI written, based on verbosity, use of bold, specific HTML styling, etc. I see no issues with the approach. I've noticed some people have an allergic reaction to any hint of AI, and when the content produced is "fluff" with no real substance I get annoyed too - however that isn't the case for all content.
The issue is that the article is excessively verbose; the time you saved in writing and editing comes at the cost of wasting readers' time. There is nothing wrong with using AI to improve writing, but using it to insert fluff that came at no cost to you and no benefit to me feels like a violation of the social contract.
Please, at least put a disclaimer on top so I can ask an AI to summarize the article and complete the cycle of entropy.
I have attempted to condense it based on your feedback, and added some more info about email headers.
I don't know if he wrote it via AI, but he repeats himself over and over again. It could have been 1/3 the length and still conveyed the same amount of information.
'I don't know if he wrote it via AI, but he repeats himself'.
Supporting evidence required.
https://news.ycombinator.com/item?id=45948625
I know I shouldn’t pile on with respect to the AI Slop Signature Style, but in the hopes of helping people rein in the AI-trash-filter excesses and avoid reactions like these…
The sentence-level stuff was somewhat improved compared to whatever “jaunty Linked-In Voice” prompt people have been using. You know, the one that calls for clipped repetitive phrases, needless rhetorical questions, dimestore mystery framing, faux-casual tone, and some out-of-proportion “moral of the story.” All of that’s better here.
But there’s a good ways left to go still. The endless bullet lists, the “red flags,” the weirdly toothless faux drama (“The Call That Changed Everything”, “Data Catastrophe: The 2025 Cyber Fallout”), and the Frankensteined purposes (“You can still protect yourself from falling victim to the scams that follow,” “The Timeline That Doesn't Make Sense,” etc.)…
The biggest thing that stands out to me here (besides the essay being five different-but-duplicative prompt/response sessions bolted together) are the assertions/conclusions that would mean something if real people drew them, but that don’t follow from the specifics. Consider:
“The Timeline That Doesn't Make Sense
Here's where the story gets interesting—and troubling:
[they made a report, heard back that it was being investigated, didn’t get individual responses to their follow-ups in the immediate days after, the result of the larger investigation was announced 4 months later]”
Disappointing, sure. And definitely frustrating. But like… “doesn’t make sense?” How not so? Is it really surprising or unreasonable that it takes a large organization time, for a major investigation into a foreign contractor, with law enforcement and regulatory implications, as well as 9-figure customer-facing damages? Doesn’t it make sense (even if it’s disappointing), when stuff that serious and complex happens, that they wait until they’re sure before they say something to an individual customer?
I’m not saying it’s good customer service (they could at least drop a reply with “the investigation is ongoing and we can’t comment til it’s done”). There’s lots of words we could use to capture the suckage besides “doesn’t make sense.” My issue is more that the AI presents it as “interesting—and troubling; doesn’t make sense” when those things don’t really follow directly from the bullet list of facts afterward.
Each big categorical that the AI introduced this way just… doesn’t quite match what it purports to describe. I’m not sure exactly how to pin it down, but it’s as if it’s making its judgments entirely without considering the broader context… which I guess is exactly what it’s doing.
Many people find whining about coherent, meaningful text based on the source identity to be far more annoying than reading coherent, meaningful text.
But I guess you knew that already, which is why you just made a fresh burner account to whine on rather than whining from your real account.
Coherent? It's really annoying to read.
The post just repeats things over and over again, like the Brett Farmer thing, the "four months", telling us three times that they knew "my BTC balance and SSN" and repeatedly mentioning that it was a Google Voice number.
Almost sounds like the posts of people whining about LLMs.
Of course, unlike those people, LLMs are capable of expressing novel ideas that add meaningful value to diverse conversations beyond loudly and incessantly ensuring everyone in the thread is aware of their objection to new technology they dislike.
LLMs are definitely capable of helping with writing, connecting the dots, and sometimes now of genuine insight. They're also still very capable of producing time-wasting slop.
It's the task of anybody presenting their output to third parties to read (at least without a disclaimer about a given text being unvetted LLM output) to make damn sure it's the former and not the latter.
Thankfully, the 8 millionth post whining about LLMs with zero additional value added to the conversation is far less time-wasting than a detailed blog post about a real-world security incident in a major corporation that isn't being widely covered by other outlets.
The article isn't paywalled. Nobody was forced to read it. Nobody was prohibited from asking an LLM to summarize the article.
Whining about LLM written text is whining about one's own deliberate choice to read an article. There is no implied contract or duty between the author and the people who freely choose to read or not read the author's (free) publication.
It's like walking into a (free) soup kitchen, consuming an entire bowl of free soup, and then whining loudly to everyone else in the room about the soup being too salty.
I think the feedback that LLMs were used not very successfully in the making of TFA is valid criticism and might even help other/future authors.
We're probably reading LLM-assisted or even generated texts many times per day at this point, and as long as I don't notice that my time is being wasted by bad writing or hallucinated falsehoods, I'm perfectly fine with it.
Interesting timeline, but nothing here proves, or even strongly indicates, that Coinbase “knew about the breach” from this one report.
Screenscraping malware is fairly common, and it’s not unreasonable for an analyst to look at a report like this and assume that the customer got popped instead of them.
Customers get popped all the time, and have a tendency to blame the proximate corporation…
That's true, but in this case I got a response from the head of trust and safety after I sent the phone recording, email + email headers, saying "This report is super robust and gives us a lot to look into. We are investigating this scammer now."
We use Coinbase as an org, and we were targeted in early Feb 2025. It was caught by the person handling the accounts, who is paranoid enough to reach out to the org contact on the other side.
FWIW, this is why "not your keys, not your coins."
Coinbase is good for on-ramping, bad for storage. You know, the entire point of cryptocurrency.
True - but be very careful. Roughly 10–18% of all BTC are believed gone forever due to lost keys/wallets. That is more than all hacks and exchange blowups combined. If you take your wallet offline, it can be hard not to lose your keys over a long period of time, including passing them on after your death to your next of kin.
People doing self-custody also get hacked and phished all the time.
Founder mode.
The entire web3 scene is a clusterfuck filled with scammers. Recently I got hacked via a web3 interview, which is a common vector nowadays.
They send a GitHub repo, and as soon as you run it they send a rejection, having already stolen your tokens and installed a keylogger. Pretty sophisticated, and the frontend of the codebase looked polished as well.
Has anyone demonstrated that agentic AI systems can be bribed with money, or is that vulnerability still strictly relegated to unreliable, untrustworthy biological intelligence?
I'm shocked, shocked to find that a cryptocurrency company did something shady.
Your employer doesn't utilize low-cost overseas labor to pad margins?
Not parent but mine doesn't let them handle client social security numbers.
I've read that blockchain can be used to eliminate the risk of crypto companies doing shady things. /s
Isn't there a new law from the Biden era that forces a company to disclose breaches to their customers and the SEC within a few weeks?
If so, and if the US had a sane administration, maybe this would be acted upon; but these days, anything goes as long as you 'donate' to the ballroom.
Yes, I did briefly touch on that in the article. "SEC rules require timely reporting of material cybersecurity incidents."
Looking into this more now, I see the SEC rule requires disclosure within 4 business days of determining that a cybersecurity incident is "material".
There is a big list of SEC violations as a result:

1. Late Disclosure (Item 1.05)
If materiality was determinable in January → 4-day rule violated
Penalty: Fines, enforcement actions

2. Misleading Statements/Omissions (Rule 10b-5)
Any public statements about security between Jan-May could be problematic
Omitting known material risks = securities fraud

3. Inadequate Internal Controls (SOX)
Failure to properly investigate and escalate user reports
Inadequate breach detection systems

4. Failure to Maintain Adequate Disclosure Controls
My report should have triggered disclosure review
Going silent suggests broken escalation process
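For what the four-business-day clock looks like in practice, here's a minimal sketch (weekends skipped, federal holidays ignored), using the January 7 report date purely as a hypothetical trigger; the rule actually runs from whenever the company determines the incident is material:

```python
from datetime import date, timedelta

def filing_deadline(determined: date, business_days: int = 4) -> date:
    """Count forward N business days from the materiality determination."""
    d, remaining = determined, business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon-Fri
            remaining -= 1
    return d

# Hypothetical: if materiality had been determinable on the Jan 7 report date
print(filing_deadline(date(2025, 1, 7)))  # -> 2025-01-13, the following Monday
```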
My Coinbase account got caught up in this, and I'm so glad I used something like coinbase_jridi46@example.com as my email address with them, because emails to that address can be treated as hostile in the wake of the breach. If I'd just used coinbase@example.com as my email address with them, I'd be fucked.
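In the same spirit, a per-service alias doubles as a breach canary: once the alias has leaked, anything still arriving at it can be flagged or binned automatically. A minimal sketch with Python's stdlib imaplib; the server, credentials, and folder are assumptions, and the alias is the example one above:

```python
import imaplib

ALIAS = "coinbase_jridi46@example.com"  # the now-burned per-service alias

# Assumed IMAP server and credentials, for illustration only.
with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("me@example.com", "app-password")
    imap.select("INBOX")
    # Find everything addressed to the leaked alias and flag it for review.
    status, data = imap.search(None, "TO", f'"{ALIAS}"')
    for num in data[0].split():
        imap.store(num, "+FLAGS", "\\Flagged")
```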
Why couldn't you treat coinbase@example.com as hostile?
In January 2025, I was targeted by scammers who knew my exact Bitcoin balance, SSN, DL, and other private Coinbase account details. I immediately reported this to Coinbase's Head of Trust & Safety with recordings and technical evidence. Despite repeated follow-ups asking how attackers had my data, Coinbase went silent for 4 months. They only disclosed the breach in May after attackers demanded $20M ransom. The breach involved overseas contractors at TaskUs being bribed for customer data. This article documents the timeline with emails, recordings, and evidence showing Coinbase was aware of the breach months before their official "discovery" date.
You mentioned that the DKIM headers "passed validation for coinbase.com". How could that have been possible, if the email was a phishing email? I'm not sure I understood that part, especially because you didn't provide any examples of the header data you received from the attacker.
Yeah this is very confusing for me too, how could the attackers create a valid DKIM signature for coinbase.com? Either there is a huge misconfiguration or it's not possible. Am I missing something?
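One way to sanity-check a claim like this yourself is to re-verify the signature from the raw message rather than trusting a pasted Authentication-Results line (which a scammer can simply type into a message that never went anywhere near your provider). A minimal sketch using the third-party dkimpy package (pip install dkimpy); the filename is an assumption, and a passing result only means the d= domain's published key signed the message, not that the sender is who the display name claims:

```python
import dkim  # third-party package: dkimpy

with open("suspicious.eml", "rb") as f:
    raw = f.read()

# Fetches the public key for the d= domain/selector in the DKIM-Signature
# header via DNS and checks the signature over the signed headers and body.
print(dkim.verify(raw))  # True if the signature validates
```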
Are you going to be suing?
I would consider it but I'm not sure what my options are on this.
You’d need to prove harm, which is somewhat nebulous here.*
Matt Levine has a prescient and depressing quote about the only recourse being shareholder lawsuits:
> I find all of this so weird because of how it elevates finance. [Various cases] imply that we are not entitled to be protected from pollution as citizens, or as humans. [Another] implies that we are not entitled to be told the truth as citizens. (Which: is true!) Rather, in each case, we are only entitled to be protected from lies as shareholders. The great harm of pollution, or of political dishonesty, is that it might lower the share prices of the companies we own.
* To be clear, I don’t think it is nebulous, and you’re right to feel harmed. But, legally, I don’t know the harm in “they didn’t respond to my emails” after there’s no concrete damage.
Were you harmed?
I've never looked at the Coinbase agreement that's presented when you open an account, but chances are you would have to go through arbitration first. That's not necessarily a bad thing.