Updated 10 June 2015 to fill in gaps and clarify the ‘design’.
Crazy assertion: We should prefer security systems that people (‘users’,
developers, and testers) can readily create accurate mental models for, even if
they are strictly less powerful than what the state of the art allows.
Please be aware that what follows was written after a long, stressful
workday, and under the influence of a considerable amount of bourbon. It is a Gedankenexperiment
more so than a serious proposal for a design we should use. Send your flames to
chris@ this domain.
Count the caveats in Zdziarski’s description. It is very hard to know if the
encryption is working; it is very hard to form a mental model of when it is
effective and when it is not. And it is hard for people outside Apple (and
possibly even inside Apple) to test.
I hypothesize that a radically simpler model would be easier to implement,
verify, and test; easier for users to understand and build an accurate-enough
mental model for; but somewhat less convenient to use. I further hypothesize
that the inconvenience can be limited to relatively rare scenarios (system
power-on) whose frequency the user can control. To explore these hypotheses,
I’ll sketch a straw-man storage encryption system. It is purposefully informal
and incomplete; its purpose is to illustrate a thought experiment more so than
to prove a point or be a system you would necessarily want to use in real life.
The hypothetical system has the following components:
The TPM stores a bootIntegrityCheckKey that it uses to verify
that the firmware and operating system are what the system vendor intended
(leave the details for now; assume something along the lines of dm-verity).
A bulkStorageEncryptionKey used to encrypt the platform’s
filesystem. It is stored, encrypted, on the mass storage device. (See KEK2, below.)
A key-encrypting key, KEK1, stored in the TPM. (The TPM will of
course only decrypt objects with KEK1 if the boot integrity check passes.)
The user’s storageEncryptionPassword. (Passcode, password, or
passphrase — whichever degree of complexity the user prefers, I’ll call it a password.)
A password-based key derivation function KDF, which is whatever
KDF you like: scrypt, bcrypt, et c.
A second key-encrypting key, KEK2 = KDF(storageEncryptionPassword). The bulk key is doubly encrypted, as encryptKEK1(encryptKEK2(bulkStorageEncryptionKey)), and that blob is stored on the mass storage device, next to the encrypted filesystem. (See the sketch after this list.)
A screenUnlockCode of the user’s choice (potentially distinct
from the storageEncryptionPassword).
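To make the key hierarchy concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than part of the design: the choice of AES-256-GCM for key wrapping, the scrypt parameters, and the function names are mine, and in a real system KEK1 would be sealed inside the TPM and never appear as a plain byte string.

```python
# A minimal sketch of the key hierarchy. Assumes the `cryptography`
# package; AES-256-GCM and these scrypt parameters are illustrative.
import os
from hashlib import scrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_kek2(storage_encryption_password: bytes, salt: bytes) -> bytes:
    # KEK2 = KDF(storageEncryptionPassword), with the salt stored in
    # the clear on the mass storage device.
    return scrypt(storage_encryption_password, salt=salt,
                  n=2**14, r=8, p=1, dklen=32)

def wrap(key: bytes, plaintext: bytes) -> bytes:
    # Encrypt `plaintext` under `key`, prepending the random nonce.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

# Provisioning: generate the bulk key, then store it doubly wrapped,
# next to the encrypted filesystem.
bulk_storage_encryption_key = os.urandom(32)
kek1 = os.urandom(32)   # in reality, sealed inside the TPM
salt = os.urandom(16)
kek2 = derive_kek2(b"correct horse battery staple", salt)
stored_blob = wrap(kek1, wrap(kek2, bulk_storage_encryption_key))
```

The point is only the nesting: recovering the bulk key requires both the TPM’s cooperation (KEK1, which it grants only after the boot integrity check passes) and the user’s password (for KEK2).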
When the machine is powered on and fully booted — the user has provided the storageEncryptionPassword and the system has mounted the encrypted storage volume — there is simply no defense against forensic attack at all. This design only defends against forensic attack when the machine is powered down (or when it is powered up but before the user has provided the storageEncryptionPassword).
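To show where the storageEncryptionPassword enters that window, here is the boot-time flow, continuing the sketch above in the same hedged spirit. The `tpm` and `disk` objects, `prompt_for_password`, and `mount_encrypted_filesystem` are hypothetical stand-ins (real TPM APIs look nothing like this), and `derive_kek2` is the function from the earlier sketch.

```python
# Boot-time flow, continuing the sketch above. All interfaces here
# (tpm, disk, prompt_for_password, mount_encrypted_filesystem) are
# hypothetical stand-ins for illustration only.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def unwrap(key: bytes, blob: bytes) -> bytes:
    # Inverse of wrap(): split off the nonce and decrypt.
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

def boot(tpm, disk):
    # 1. Verify the firmware and OS with bootIntegrityCheckKey
    #    (dm-verity-style). On failure the TPM refuses to use KEK1,
    #    and the machine does not fully boot.
    if not tpm.verify_boot_integrity():
        raise SystemExit("boot integrity check failed")
    # 2. The TPM strips the outer KEK1 wrapping; KEK1 never leaves it.
    inner_blob = tpm.decrypt_with_kek1(disk.read_stored_blob())
    # 3. The user types the storageEncryptionPassword; derive KEK2 and
    #    recover the bulk key. A wrong password fails authentication
    #    here (AES-GCM raises InvalidTag).
    kek2 = derive_kek2(prompt_for_password(), disk.read_salt())
    bulk_key = unwrap(kek2, inner_blob)
    mount_encrypted_filesystem(bulk_key)
```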
This design would afford several benefits. Chief among them is that we can (in theory) tell users a straightforward story:
People could trade off convenience and security against offline brute-force attack on a fine-grained level, by choosing exactly how complex they want their storageEncryptionPassword to be.
The mechanisms lend themselves to a clear mental model that most people can
(I hope) immediately understand:
If the system boots at all, it is booting the true operating system.
Once you enter your storageEncryptionPassword, your data is visible to anyone who can physically control the device, or logically control the device with kernel-level code execution.
If you don’t want anyone to be able to read your data, even if they are
about to gain physical control of it, turn the device off.
It would be relatively easy for the developers to implement. Cryptography is
famously difficult to implement correctly, and tiny errors can render the
defense useless (or worse than useless). Simple is always good in cryptographic engineering.
It would be relatively easy for the testers to test.
It would be relatively easy for outside researchers to test and verify.
But this design would incur several drawbacks:
Remembering, and entering, a complex passphrase whenever the device boots is inconvenient.
There is no backup key, key escrow, or other second chance. If the user
forgets their storageEncryptionPassword, the device will not fully
boot, and the user can only reset the device and restore their data from a backup.
Indeed, the user can thus model their attacker’s capability: it is the same
as their own capability when they have forgotten their storageEncryptionPassword.
This is an instance of Jim Hebert’s Law Of Information Security: People can
only understand attacks that they can imagine themselves performing.
For this reason, I consider this drawback to actually be a benefit: If you
can’t get your data back, the encryption is working, and you can see
what the attacker would see!
Having to turn the device off to get security against offline or online
brute-force attack is inconvenient.
The inconveniences are so great that many people might choose no or little security.
Is that good or bad, overall? If someone like Edward Snowden chooses a very
complex passphrase of 9 Diceware words and most people choose 4-digit PINs, is
that a problem? I would argue No: people are empowered to trade benefits and
costs as they see fit, and to have an accurate model of the consequences of their choices.
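To put rough numbers on that trade-off, assume the KDF is tuned to cost about 100 milliseconds per guess (my assumption; the design does not fix the KDF parameters). A short script shows the spread between the two choices:

```python
# Back-of-the-envelope worst-case guessing times, assuming the KDF
# costs ~100 ms per guess. Both figures are illustrative.
SECONDS_PER_GUESS = 0.1

for name, guesses in [("4-digit PIN", 10**4),
                      ("9 Diceware words", 7776**9)]:
    days = guesses * SECONDS_PER_GUESS / 86400
    print(f"{name}: {guesses:.3g} guesses, ~{days:.3g} days to exhaust")
```

Under these assumptions the 4-digit PIN falls in under 17 minutes of offline guessing, while the 9-word Diceware passphrase would take on the order of 10^29 days. Both users got exactly the trade-off they chose.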
It is all-or-nothing: there is no concept that some applications might have
their storage encrypted beyond the underlying file system, providing some
(untestable) defense against an online brute-force attacker with
less-than-kernel code execution on the device.
People might not understand the need to have
storageEncryptionPassword be distinct from
screenUnlockCode if they need a strong defense against forensic
attackers. In most likely operating system designs, the modes of attack against
screenUnlockCode are very different from, and likely much easier than, the modes of attack against the storageEncryptionPassword.