And this is why you always encrypt the drive with software. All of these methods seem to put a lot of faith in the drive controller doing what it claims to do, which you can never be all that sure about. Even Microsoft-backed BitLocker would help here.
BitLocker can rely on the SSD's hardware encryption, so be careful there too.
It has been software encryption for many years now.
For SATA SSDs I've used the hdparm secure erase and then verified that dd | hexdump is all zeros. That was good enough for me.
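Roughly this sequence, for the record (a sketch; /dev/sdX is a placeholder, and the drive must not be frozen):

```sh
hdparm -I /dev/sdX | grep -i frozen             # must report "not frozen"
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
dd if=/dev/sdX bs=1M | hexdump                  # hexdump collapses an all-zero drive into one line plus "*"
```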
Depending on your threat model, your check is insufficient, since `dd | hexdump` will read all zeros even if you just trim all the blocks on a drive that is trim-to-zero.
Securely erasing flash drives with a threat model of "someone will dump the raw data of the chips" is only fully solvable for self-encrypting drives where you can replace the key. Even if you can issue a block-erase for every single block of the device, block erase doesn't always succeed on NAND.
For SATA HDDs with shingled (SMR) writes, and for SSD sector replacement, it can't be cleaned that way.
Tools like DBAN stopped working once firmware sector re-mapping on modern storage became common. If you see the reported spare replacement count drop in your older S.M.A.R.T. reports, then those remapped sectors may no longer be accessible from the OS without vendor-specific forensic software. =3
https://sourceforge.net/projects/dban/
Indeed, LUKS + F2FS for /home with an external key file imported into initrd solves a lot of issues.
Primarily, when an SSD slowly fails, the sector-replacement allotment has already bled data into read-only areas of the drive. As a user, there is no way to reliably scrub that data.
If the drive suddenly bricks, the warranty service will often not return the original hardware... and just the password protection on an embedded LUKS key is not great.
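For reference, the LUKS + F2FS setup I mean looks roughly like this (a sketch; device names and the key-file path are placeholders):

```sh
# key file on external media (here /mnt/usbkey, assumed mounted)
dd if=/dev/urandom of=/mnt/usbkey/home.key bs=64 count=1
cryptsetup luksFormat /dev/sdb1 /mnt/usbkey/home.key
cryptsetup open /dev/sdb1 crypt_home --key-file /mnt/usbkey/home.key
mkfs.f2fs /dev/mapper/crypt_home
# /etc/crypttab entry, assuming your initrd hook reads it:
#   crypt_home  /dev/sdb1  /mnt/usbkey/home.key  luks
```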
There are effective disposal methods:
1. shred the chips
2. incinerate the chips
Wiping/TRIM sometimes doesn't even work if the flash chips are malfunctioning. =3
> an external key file imported into initrd
This is exceptionally poor advice. This is why TPM exists. Unfortunately adoption is low with the Linux crowd because they still believe the misinformation from 20 years ago.
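For example, on a systemd-based initrd you can seal the LUKS key in the TPM instead of shipping a key file (a sketch; the partition is a placeholder):

```sh
# enroll a TPM2-sealed key slot into the LUKS volume, bound to Secure Boot state (PCR 7)
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
# and in /etc/crypttab:
#   root  /dev/nvme0n1p2  none  tpm2-device=auto
```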
100%. If you’re not encrypting your drive, along with a strong password, you’re fucking around.
Physical destruction as the only sure way? When your hardware is stolen, good luck physically destroying it.
It's very common for both NVMe and SATA drives to be locked/frozen during boot, so they will not honor a secure erase command until the drive has been power-cycled, which can usually be accomplished with a system-level sleep/wake cycle. I'm not sure what useful purpose this is meant to serve, other than possibly making it hard for malware to instantly and irretrievably wipe your storage.
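One way to thaw a frozen SATA drive without pulling the power (a sketch; /dev/sdX is a placeholder):

```sh
hdparm -I /dev/sdX | grep -i frozen   # typically "frozen" right after boot
rtcwake -m mem -s 10                  # suspend to RAM for ~10 s, then auto-resume
hdparm -I /dev/sdX | grep -i frozen   # should now read "not frozen"
```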
Smash it with a hammer and move on. I'd never buy a used storage device anyway, no telling what malware it might contain.
Do you mean malware in the firmware that sticks around after you format the drive?
As far as I know, there is NO way to securely erase a USB flash drive (barring some undocumented vendor specific commands that may exist).
Overwrite every single bit with innocuous files?
That doesn't work on any* NAND flash device, be it a flash drive, NVMe, SATA, whatever.
The block device you see is an abstraction provided by the SSD controller. In reality, the flash capacity is larger. Pages are swapped out for wear leveling. If a block goes bad, it'll be taken out of commission, and your data may hide in there.
All of this happens on the SSD controller. The kernel doesn't know. You have no way to directly erase or modify specific blocks.
*Okay, there are raw NAND flash chips without controllers, but that is not what you're working with when you have an SSD or flash drive. If you do have a raw flash chip, you can control the flash contents more directly.
This is what `shred` and other secure wipes do. There is some concern over data stored in sections which the firmware has swapped out and made inaccessible. But if this is a concern to you, then you should be using full disk encryption anyway, which makes all of this a non-issue.
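i.e. something like this (a sketch; /dev/sdX is a placeholder):

```sh
# one random pass over the visible block device, then a final zero pass
shred -v -n 1 -z /dev/sdX
```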
This is broadly true of cheap thumb drives, but not true of all USB flash drives. The larger ones generally do support secure erase. E.g. the Crucial X6. I don't know if these use secret vendor commands, or if they use the standard SCSI "sanitize" command.
To maximize device performance when wiping a drive to use for something else, I use nvme format with --ses=1.
In theory, that should free all of the blocks on the flash.
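With nvme-cli that's a one-liner (device name is a placeholder):

```sh
# SES=1: user-data erase across the whole namespace
nvme format /dev/nvme0n1 --ses=1
```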
Really hard to find good documentation on this stuff. It doesn't help that 95% of internet articles just say "overwrite with zeroes", which is useless advice.
Anything that works at the logical block interface will not usefully wipe the device. SES=1 will physically hit every erase block on the device with ~20 V to blow it away. This happens suspiciously quickly (typically under 60 seconds), but that's just because flash is that fast.
Gotta love breaking EFI changes. I don't know how many times my work laptop did that and I couldn't boot anymore, only to remember, some stressful time later, that Linux would only boot with some of the settings flipped from their defaults. At least I never had to reinstall anything.
None of these methods are reliable nor should they be trusted.
Every organization with good security hygiene requires physical destruction of SSDs. Full stop, end of negotiation, into the shredder it goes.
Not that it matters much; with SSD prices skyrocketing, people are moving back to mechanical disks.
Every organization with good security hygiene requires strong-password-protected disk encryption, because when your stuff is stolen from your Tesla at lunch time in broad daylight, no shredder policy will save you, full stop.
Who said the data wasn't encrypted?
That was way longer than I expected. Wow.
I had a drawn-out conversation with a friend about erasing NVMe drives in a way that met compliance needs. The procedure they were given was to install Windows, with BitLocker enabled, twice, with no effort to retain the key.
But that doesn't even overwrite the visible drive space; you can do a simple PoC to demonstrate that Windows won't get to all the mapped blocks. And that still hasn't gotten to the overprovisioned blocks and wear leveling issues that the article references.
You could use the BIOS or whatever CLI tool to tell the drive to chuck its encryption key, but are you sure that tool meets whatever compliance requirements you're beholden to? Are you sure the drive firmware does?
So they went with paying a company to shred the drives. All of them. It's disgustingly wasteful.
I used to do recycling. Before secure erase was widespread there were cheapish 16 and 32 GB SSDs for embedded devices, and a few of them made it into the thin/zero-client space and a few white-labeled low-end PCs. They were actually twice the advertised size: basically two 16s in a single 16 chassis. The two drives were kept roughly in sync; I think it was a failover mechanism to deal with shitty drive quality. If drive A failed, it would just connect to drive B instead and the user might not know about the failure.

But the second drive would not necessarily get wiped, depending on how you wiped the first one. A few people retrieved data from the second disk under lab conditions after wiping the first, so a report came through that we couldn't certify these disks as erased until they demonstrated compliance with secure erase. So we shredded probably a few thousand of them.
I heard of similar issues with early NVMe drives.
If compliance is the goal, just use FIPS certified self-encrypting drives and trust them to wipe their encryption keys when instructed to do so. At that point, any failure is clearly the vendor's fault, not your own.
sedutil-cli --PSIDrevert $PSID /dev/sda, or something like that (it takes the whole device, not a partition; some builds spell the flag --yesIreallywanttoERASEALLmydatausingthePSID).
Tip: Get a barcode scanner. The PSID is usually encoded in a bar/matrix code on the drive's label, next to the plaintext PSID.
Smash it with a hammer.
If you insist on erasing the data, overwrite the entire contents of the drive twice with random data.
Doing it twice will blow away any cached data as well (probably).
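A minimal sketch of that (assuming /dev/sdX is the target):

```sh
# two full passes of random data over the visible block device
dd if=/dev/urandom of=/dev/sdX bs=1M oflag=direct status=progress
dd if=/dev/urandom of=/dev/sdX bs=1M oflag=direct status=progress
```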