Meanwhile Apple made a choice to leave iOS 18 vulnerable on the devices that receive updates to iOS 26. If you want security, be ready to sacrifice UI usability.
If you set Liquid Glass to the more opaque mode in settings I find iOS usability to be fine now, and some non-flashy changes such as moving search bars to the bottom are good UX improvements.
The real stinker with Liquid Glass has been macOS. You get a half-baked version of the design that barely even looks good and hurts usability.
So the exploiters have deprecated that version of spyware and moved on, I see. This has been the case every other time: the state actors realize that there are too many fingers in the pie (every other nation has caught on), the exploit is leaked and patched, and meanwhile all actors have moved on to something even better.
Remember when Apple touted the security platform all-up, and a short time later we learned that an adversary could SMS you and pwn your phone without so much as a link to be clicked?
KSIMET: 2020, FORCEDENTRY: 2021, PWNYOURHOME, FINDMYPWN: 2022, BLASTPASS: 2023
Each time NSO had the next chain ready prior to patch.
I recall working at a lab a decade ago where we were touting a full end-to-end exploit chain on the same day that the target product was announcing full end-to-end encryption -- which we could bypass with a click.
It's worth doing (Apple patching) but a reminder that you are never safe from a determined adversary.
How much do you think Lockdown Mode + MIE/eMTE helps? Do you believe state actors work with manufacturers to find/introduce new attack vectors?
My iOS devices have been repeatedly breached over the last few years, even with Lockdown Mode and a restrictive MDM policy (no iCloud, Siri, FaceTime, AirDrop) via Apple Configurator. Since moving to a 2025 iPad Pro with MIE/eMTE and Apple (not Broadcom and Qualcomm) radio basebands, it has been relatively peaceful, until the last couple of weeks, maybe due to leakage of this zero-day and PoC as iOS 26.3 was being tested.
Are you a person of high interest? I was under the impression that these sorts of breaches only happen to journalists, state officials, etc.
[delayed]
How can you tell that you were breached?
[delayed]
Thanks for contributing to our increasing lack of security and anonymity.
>It's worth doing (Apple patching) but a reminder that you are never safe from a determined adversary.
I hate these lines. Like yes NSA or Mossad could easily pwn you if they want. Canelo Alvarez could also easily beat your ass. Is he worth spending time to defend against also?
Yes, because Apple can do it at scale.
Yes. If vendors do not take this seriously, these capabilities trickle down to less sophisticated adversaries.
and if you point out that Apple's approach is security by obscurity with a dollop of PR, you get downvoted by fan bois.
Apple really needs to open up so that, at the very least, third parties can verify the integrity of the system.
decade-old vulns like this are why the 'you're not interesting enough to target' argument falls apart. commercial spyware democratized nation-state capabilities - now any mediocre threat actor with budget can buy into these exploits. the Pegasus stuff proved that pretty clearly. and yeah memory safety helps but the transition is slow - you've got this massive C/C++ codebase in iOS that's been accumulating bugs for 15+ years, and rewriting it all in Swift or safe-C is a multi-decade project. meanwhile every line of legacy code is a ticking time bomb. honestly think the bigger issue is detection - if you can't tell you've been pwned, memory safety doesn't matter much.
I wonder what the internal conversations are like around memory safety at Apple right now. Do people feel comfortable enough with Swift's performance to replace key things like dyld and the OS? Are there specific asks in place for that to happen? Is Rust on the table? Or does C and C++ continue to dominate in these spaces?
Apple is already working on a memory-safe C variant which is already used in iBoot and will be upstream LLVM soon: https://clang.llvm.org/docs/BoundsSafety.html
While not wholesale replacing it, there already is Swift in dyld: https://github.com/search?q=repo%3Aapple-oss-distributions%2...
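For readers unfamiliar with what those bounds-safety annotations look like, here is a minimal sketch in the style of the linked Clang docs. The `__counted_by` attribute is from the experimental `-fbounds-safety` extension; the `#ifndef` fallback is my addition so the sketch also compiles on ordinary compilers (where it simply does nothing):

```c
#include <stddef.h>

/* Fallback: without -fbounds-safety the attribute expands to nothing.
 * With the flag enabled, Clang ties buf's valid bounds to len and
 * inserts runtime checks on every access. */
#ifndef __counted_by
#define __counted_by(len)
#endif

/* An out-of-bounds buf[i] traps deterministically under -fbounds-safety,
 * instead of silently corrupting memory as in plain C. */
static void zero_fill(int *__counted_by(len) buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        buf[i] = 0;
}
```

The appeal for code like dyld is that this is an incremental annotation of existing C rather than a rewrite, which is why Apple could ship it in iBoot first.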
What's never mentioned in posts like this is whether phones in lockdown mode were vulnerable too.
What does "zero-day" even mean?
> ... decade-old ...
> ... was exploited in the wild ...
> ... may have been part of an exploit chain....
The vulnerability has been present for more than a decade.
There is evidence that some people were aware and exploiting it.
Apple was unaware until now that it existed; thus it is a 'zero-day', meaning an exploit that the outside world knows about but they don't.
Meaning unknown to the public/vendor
Well whatever the zero means, it can't be the number of days that the bug has been present, generally. It should be expected that most zero-days concern a bug with a non-zero previous lifespan.
https://en.wikipedia.org/wiki/Zero-day_vulnerability
“Zero day” has meant different things over the years, but for the last couple of decades it’s meant “the number of days the vendor has had to fix it”, AKA “newly-known”.
It still weirds me out that a term w@r3z d00dz from the 90s coined is now a part of the mainstream IT security lexicon.
Did MIE/MTE on 2025 iPhones help to detect this longstanding zero day?
My suspicion is that these "exploits" are planted by spy agencies.
They don't appear there organically.
This kind of mental model only works if you think of things as made huge shadowy blobs, not people.
dyld has one principal author, who would 100% quit and go to the press if he was told (by who?) to insert a back door. The whole org is composed of the same basic people as would be working on Linux or something. Are you imagining a mass of people in suits who learned how to do systems programming at the institute for evil?
Additionally, do you work in tech? You don’t think bugs appear organically? You don’t think creative exploitation of bugs is a thing?
I am not saying this one in particular.
Of course no one can admit it publicly.
But it is something that governments are known to proactively do.
You can get dirt on people a la Jeffrey Epstein. And use that to coerce them.
https://en.wikipedia.org/wiki/Backdoor_(computing)
This vastly overstates both the competence of spy agencies and of software engineers in general. When it comes to memory unsafe code, the potential for exploits is nearly infinite.
> overstates both the competence of spy agencies
Stuxnet was pretty impressive: https://en.wikipedia.org/wiki/Stuxnet
It was also not a bug to be exploited.
It was a complicated product that many people worked on in order to develop, and it took advantage of many pre-existing vulnerabilities as well as knowledge of complex and niche systems in order to work.
Yeah, Stuxnet was the absolute worst of the worst; the depths of its development we will likely never truly know, and the cost of its development we will never truly know either. It was an extremely advanced, hyper-targeted digital weapon. Nation states wouldn't even use this type of warfare against pedophiles.
Maybe sometimes? With how many bugs are normally found in very complex code, would a rational spy agency spend the money to add a few more? Doing so is its own type of black op, with plenty of ways to go wrong.
OTOH, how rational are spy agencies about such things?
Yes. Of course not all.
But some just happen to work too well.
But governments do have blatant back doors in chips & software.
Some suspect that Apple secretly backs some of these spyware services. I've heard rumors about graykey but only rumors. Thoughts?
>Some suspect ...
>I've heard rumors ...
So like, the comment you're replying to? This is just going in circles.
No updates for iPadOS 17. I guess my iPad Pro 10.5 is finally a brick.
Feudalism says: buy new hardware, peasant.
I don't know what "equally annoying" would be for a company and its customers. We need a law requiring companies open source their hardware within X days of end of life support.
And somehow make sure these are meaningful updates. Not feature parity with new hardware, but security parity when it can be provided by a software only update.
Otherwise a company in effect takes back the property, without compensation.
I wonder if Fil-C would have prevented this.
Previously: https://news.ycombinator.com/item?id=46979643
I guess the fix is only for Tahoe?
The zero-day mentioned in the article doesn't affect macOS.
But there were security updates for macOS 14 and macOS 15 released yesterday:
https://support.apple.com/en-us/126350
https://support.apple.com/en-us/126349
There's an update for Sequoia too.
But not for iOS 18, so this is a forced upgrade to the horrors of Liquid Glass.
Can’t wait to see how much battery it eats.
as in I now have to upgrade all my children's ancient iPhones...?
I'd much rather not do that
The exploit was always there, you just didn't know about it, but attackers might have. The only thing that changed is that you're now aware that there's a vulnerability.
You’d rather they not release updates to support them?
Open source wins... again.
I am shocked to hear that over these years it was possible to extract data from a locked iPhone. (hardening mode off)
I trusted apple.
>I trusted apple.
To what? Write 100% bug-free software? I don't think that's actually achievable, and expecting so is just setting yourself up for disappointment. Apple does a better job than most other vendors, except maybe GrapheneOS. Mainstream Android vendors are far worse. Here's Cellebrite Premium's support matrix from July 2024, for locked devices: iPhones are vulnerable after first unlock (AFU), but Androids are even worse; they can be hacked even if they have been shut down or rebooted.
https://grapheneos.social/system/media_attachments/files/112...
https://grapheneos.social/system/media_attachments/files/112...
https://grapheneos.social/system/media_attachments/files/112...
These links working for anyone? 403 for me
Updated the links. The originals were from discuss.grapheneos.org but it looks like they don't like hot-linking.
Qubes OS does a much better job though, because it relies on security through compartmentalization, not security through correctness.
The problem with that is it runs on a desktop, which means very little in the way of protection against physical attacks. You might be safe from Mossad trying to hack you from half way across the world, but you're not safe from someone doing an evil maid attack, or from seizing it and bruteforcing the FDE password (assuming you didn't set a 20 random character password).
This is a newly-discovered vulnerability (CVE-2026-20700, addressed along with CVE-2025-14174 and CVE-2025-43529).
Note that the description "an attacker with memory write capability may be able to execute arbitrary code" implies that this CVE is a step in a complex exploit chain. In other words, it's not a "grab a locked iPhone and bypass the passcode" vulnerability.
I may well be missing something, but this reads to me as code execution on user action, not lock bypass.
Like, you couldn’t get a locked phone that hadn’t already been compromised to do anything because it would be locked so you’d have no way to run the code that triggers the compromise.
Am I not interpreting things correctly?
[edit: ah, I guess “An attacker with memory write capability” might cover attackers with physical access to the device and external hardware attached to its circuit board that can write to the memory directly?]