> AfterPack approaches this differently. Instead of layering reversible transforms on top of each other, AfterPack uses non-linear, irreversible transforms — closer to how a hash function works than how a traditional obfuscator works. The output is functionally equivalent to the input, but the transformation destroys semantic meaning in a way that cannot be reversed — even by AfterPack itself. There's no inverse function. No secret key that unlocks the original.
That’s probably fun when trying to analyze bugs occurring in production. :)
What they describe is snake oil. Even if you assume it is mathematically possible in the general case (which is debatable!), it'll likely have a huge performance overhead. See https://en.wikipedia.org/wiki/Indistinguishability_obfuscati...
What they’re describing is a polymorphic virus. A great analogy for SV startups.
It works great in assembly, not so much for higher level languages.
Is all polymorphic code viruses, though?
Not at all, but in practice no one has any use for the technique except to obfuscate viruses, with the exception of academic research.
JS was never really obfuscated - it wasn't the goal of minification. Minifiers especially struggle with ES6 classes/etc, outputting code that is almost human readable.
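To illustrate the point about ES6 classes: local bindings are safe for a minifier to rename, but public member names generally are not, since other code may reach them dynamically by string. A minimal sketch (all names here are illustrative, not from any real bundle):

```javascript
// Local names are shortened; public member names survive minification.
class t {                              // class name: safely shortened
  constructor() { this.items = []; }   // "items": kept as-is
  addItem(e) { this.items.push(e); }   // "addItem": kept; param "e": renamed
}
const cart = new t();
cart["addItem"]("book");               // dynamic access would break under renaming
console.log(cart.items.length);        // 1
```

This is why minified class-based code stays almost human readable: the method and property names carry most of the meaning, and the minifier has to leave them alone.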
Proper obfuscation libraries exist, typically at the cost of a pretty notable amount of performance that I'd wager most are not willing to sacrifice.
And like even the best of client-side DRM, everything can be reverse engineered. All the code has been downloaded to the user's machine. It's one of the (IMO terrible) excuses for the SaaSification of all software
Minification is not obfuscation and obfuscation is not security, but no amount of deobfuscation will recover the comments in the source, which are often more insightful than the source itself.
Often like 1 in 100 js files?
Obfuscation is meant to slow someone down by making code difficult to understand. Slowing an attacker down is often employed as a form of security; that's why castles had walls, moats, and multiple inner layers to hinder progress once an attacker got inside.
It has often been used by companies, malware authors, etc. to make it difficult for someone else to understand what is happening internally.
If the comments were in the original source that the model trained on... Then sure, those are recoverable too.
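The comment-loss point above can be sketched concretely. A hypothetical function before and after minification (names and behavior are illustrative): the behavior is recoverable by a deobfuscator, but the comment explaining *why* is gone for good.

```javascript
// Original source: the comment documents intent, not just behavior.
function clampToRange(value, min, max) {
  // Clamp to the slider's valid range so the UI never renders off-canvas
  return Math.min(Math.max(value, min), max);
}

// Minified equivalent: functionally identical, but the names and the
// comment (the "why") no longer exist anywhere in the shipped file.
const c = (v, n, x) => Math.min(Math.max(v, n), x);

console.log(clampToRange(42, 0, 10), c(42, 0, 10)); // 10 10
```

An LLM can plausibly guess that `c` is a clamp from its structure, but the slider/off-canvas rationale lived only in the comment.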
It's a cat-and-mouse game; it provides the desired level of security for the people who use it. It isn't used to prevent people from finding vulnerabilities (not mostly, at least). It's used to deter competition, prevent clones of the application, etc. It's makeshift "DRM". There are ways to defeat even AI-assisted analysis running in a proper browser, but I think it's not a good idea to give anyone ideas on this subject. Proper DRM is hellish enough.
Was there ever obfuscated JS code a human couldn't reverse given enough time? It's like most people's doors: it won't stop someone with a battering ram, but it will ideally slow them down enough for you to hide or get your guns. In this case, it won't even slow them down, until it does (hence: cat-and-mouse game).
I successfully did this the other day. There was a web app I used quite a bit with an annoying performance issue (in some cases its graphics code would spin my CPU at 100% constantly, fans full-blast). I asked Claude to fetch the code and fed it a few performance traces I took through Firefox, and it cut through all those obfuscated variables like they weren't even there, easily re-interpreting what each function actually did, finding a plausible root cause and workaround (which worked).
Can you generally trust it to de-obfuscate reliably? No idea. My sample size is 1.
I did something similar yesterday. I'm playing a little idle game, and wanted to optimise my playthrough. I pointed Claude at the game's data files, and in a few short minutes it reverse engineered the game data and extracted it to CSV / JSON files for analysis.
In this case, it turned out the data - and source code for the game - was in a big minified javascript file. Claude extracted all the data I wanted in about 2 minutes.
The _any_ part is not clear to me. Obfuscation is an arms race. Reverse engineers have always been tool-assisted. Now they just have new tools and the obfuscators need to catch up.
Huh? Their justification for "obfuscation isn't security" is pointing out that the Claude source wasn't obfuscated, it was minified. And that it could be "deobfuscated by Claude itself", even though, again, they said the code wasn't obfuscated.
So I guess, ask Claude to deobfuscate some code that's ACTUALLY OBFUSCATED if you want to claim obfuscation provides ZERO additional security.
>We analyzed this file at AfterPack as part of a deobfuscation case study. What we found: it's minified, not obfuscated.
>Here's the difference. Minification — what every bundler (esbuild, Webpack, Rollup) does by default — shortens variable names and removes whitespace. It makes code smaller for shipping. It was never designed to hide anything.
>Here's where it gets interesting. We didn't need source maps to extract Claude Code's internals. We asked Claude — Anthropic's own model — to analyze and deobfuscate the minified cli.js file.
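A minimal sketch of the distinction the quoted passage draws (all names and transforms here are illustrative, simplified versions of what real tools emit):

```javascript
// Minified: shortened names, no whitespace. Structure is fully intact
// and trivially readable once re-indented.
const g=n=>"Hello, "+n;

// Obfuscated: the same behavior, routed through a string table and an
// opaque reassembly loop, as common obfuscator transforms do.
const _0x4f=["Hel","lo, "];
function _0x2e(n){let s="",i=0;while(i<2){s+=_0x4f[i++];}return s+n;}

console.log(g("Ada") === _0x2e("Ada")); // true: functionally equivalent
```

Minification is a byproduct of shipping less bytes; obfuscation is a deliberate attempt to hide the logic. Conflating the two is exactly the error the thread is calling out.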
And read through native code as well
> No one talks about this. There's no VentureBeat headline about GitHub shipping email addresses in their JS bundles. No Hacker News thread about internal URLs exposed in Anthropic's CDN scripts
That's a huge sign none of that information is truly sensitive. What is being implied here?
> AI Makes This Urgent
No it doesn't. This is blogspam and media hype nobody is interested in. Unless the demographics have really shifted that much in the last few years, HN is one of the worst places to attempt this marketing style.
Slight historical note: it might be interesting to see how the brief period of "white box cryptography" stands up to AI today. At the time there were a few companies with products that had trouble finding fit (for straightforward security reasons), but they were essentially commercial obfuscators that made heavy use of lookup tables, miniature virtual machines, and esolang concepts, and they worked mainly against human reverse engineers.
An example was this early AES proposal: https://link.springer.com/chapter/10.1007/3-540-36492-7_17
Whitebox cryptography is widely deployed, in browser plugins for DRM.
Write your blog yourself if people are supposed to read it, not this LLM slop.
isn't it fair for an article about AI deobfuscating code to be written by AI?
If it’s too hard to read ask your ai to deobfuscate it :D
Fair? No. Par for the course? Unfortunately yes.
I expect it these days but it’s still disrespectful slop pushing out real work.
Not really, no.