A Thought On ‘End-to-End’ Security

There’s a good deal of hullabaloo about Apple’s CSAM detection system(s) for the iPhone, iMessage, and iCloud. There are a lot of complex trade-offs in play, and I am not qualified to say whether it’s net-good or net-bad.

Everyone is getting their yell in, pro or con, and that is good. To me, the clearest discussion of why Apple’s plan is so hard to love comes from Deirdre Connolly and Matthew Green on the Security. Cryptography. Whatever. podcast.

In “Normalizing Surveillance” (in Lawfare, of all places) Susan Landau raises a key question: What does ‘end-to-end’ security actually mean? Is Apple violating the spirit of the principle? (Has there ever been a clear letter of the principle?)

Landau writes: “Apple’s solution for its messaging app works through a redefinition of end-to-end encryption (E2E) with a new meaning that a communication is end-to-end encrypted until it reaches the recipient’s phone. Previously an iPhone (or iPad) user could use the Messages app to send a message to another iPhone or iPad user and it would be E2E encrypted via iMessage, Apple’s E2E encrypted messaging app. But Apple’s new definition of E2E encrypted means that Apple tools could have access to any decrypted contents.”

The original ‘letter’ of the principle comes from Saltzer, Reed, and Clark’s paper “End-To-End Arguments In System Design”. They argue that reliability must come primarily from the application layer, and they use a file transfer application as their example. They don’t find it sufficient to rely on the lower-layer protocols, or their providers, to provide application-semantic reliability (in this case, file integrity). Lower layers might help, or improve efficiency, but they cannot be solely relied upon to provide such reliability.
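
To make the file-transfer example concrete, here is a minimal sketch in Python of an end-to-end integrity check, with both ‘ends’ collapsed onto one machine for brevity. The send_file parameter is a hypothetical stand-in for whatever actually moves the bytes; it is not from the paper.

    import hashlib

    def file_digest(path):
        # Digest computed by the application itself, at an 'end'.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def transfer(src_path, dst_path, send_file):
        # send_file is any lower-layer transport: TCP, HTTP, sneakernet.
        # The lower layers may well be 'reliable', but only the endpoints
        # can check that the file written equals the file read.
        sent = file_digest(src_path)       # end #1: before sending
        send_file(src_path, dst_path)
        received = file_digest(dst_path)   # end #2: after receipt
        if received != sent:
            raise IOError("end-to-end integrity check failed; retry")

With shutil.copy standing in for the network, transfer("a.bin", "b.bin", shutil.copy) exercises the whole check; a real transfer would be verified the same way, by the application, at its ends.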

Saltzer et al. mention encryption only in passing, and don’t really dig into the idea of the lower-layer providers as intentional threat actors. The end-to-end argument does readily lend itself to that approach to security, but the original paper does not give us a clear definition of E2E security. It’s about reliability in a world of accidents, mishaps, and misunderstandings.

The spirit of the E2E argument as a security model, though, is surely clear: the application layer must not trust other software, protocols, or communications providers; it must treat them not only as unreliable but as potential threat actors.
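
In code, that posture looks something like the following sketch, using the PyNaCl library (my choice, not anything from the E2E literature; any authenticated public-key encryption would illustrate the same point). It is a toy, with no key verification or forward secrecy, not a real protocol:

    # The application encrypts and authenticates *before* anything reaches
    # a lower layer, so the transport carries only ciphertext it can
    # neither read nor undetectably modify.
    from nacl.public import PrivateKey, Box  # pip install pynacl

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Each end needs only its own private key and the peer's public key.
    alice_to_bob = Box(alice_key, bob_key.public_key)
    bob_from_alice = Box(bob_key, alice_key.public_key)

    wire = alice_to_bob.encrypt(b"meet at noon")  # all the transport sees
    plaintext = bob_from_alice.decrypt(wire)      # tampering raises CryptoError
    assert plaintext == b"meet at noon"

Note what the sketch cannot protect: both private keys, and the plaintext, live in memory that the operating system manages. Which brings us to the hard part.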

But does it make sense to treat the author(s) of the trusted computing base (TCB) — your hardware, firmware, operating system, and core application frameworks — as untrusted? As threat actors?

Landau considers it a “redefinition” of the principle of E2E security to trust the TCB. Essentially, she wishes — we all would wish! — for the “ends” to be the application software instances alone, defending themselves against an untrusted and potentially hostile computing base:

[Figure: Alice’s and Bob’s messaging apps talking directly to each other, treating the device OSs as potentially hostile.]

Unfortunately, the reality has always been, and must necessarily be, more like this:

[Figure: Alice’s and Bob’s messaging apps talking to each other, treating the device OSs as potentially hostile but necessarily trusted.]

Every TCB, from every vendor, has at least the power — hopefully unused, or at least used ‘benignly’ — to inspect, modify, tootle with, and otherwise perturb any application it hosts. None of Apple, iOS, or iMessage is unique in this way. You have to trust Ubuntu not to frobulate your Signal Desktop. You have to trust Android not to discrombulize your WhatsApp.
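
The point is easy to demonstrate. Here is a Linux-only toy (it assumes /proc is mounted and the default Yama ptrace policy, under which a parent process may inspect its children) in which the parent plays ‘the platform’ and reads a secret straight out of a hosted process’s memory, with no cooperation from that process:

    import subprocess
    import time

    # The 'hosted app': a child process holding a secret in its memory.
    child = subprocess.Popen(
        ["python3", "-c",
         "s = 'hunter2-the-secret'; import time; time.sleep(30)"])
    time.sleep(1)  # crude: let the child start and allocate the string

    found = False
    with open(f"/proc/{child.pid}/maps") as maps, \
         open(f"/proc/{child.pid}/mem", "rb") as mem:
        for line in maps:
            fields = line.split()
            if not fields[1].startswith("rw"):  # scan read-write regions only
                continue
            start, end = (int(x, 16) for x in fields[0].split("-"))
            try:
                mem.seek(start)
                found = b"hunter2-the-secret" in mem.read(end - start)
            except OSError:
                continue  # some special regions aren't readable; skip them
            if found:
                break

    child.kill()
    print("the platform read the app's secret:", found)

Nothing about this is specific to any one vendor; it is inherent in hosting an application at all.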

Whether the trusted computing base is trustworthy is an entirely separate question.

Another separate question: Does it make sense to treat the application itself as a threat actor, other than by simply rejecting it? Part of Apple’s system(s) for CSAM presumably involves integration between iOS, iMessage, and iCloud: implemented partially inside iMessage, and partially with API hooks between the OS, the app, and iCloud. Apple is the author of all that software, and runs the services.

In theory, at least, any TCB could reach into any application it hosts to do the same thing. I’m not saying I think Apple or any other OS vendor will go so far as to do their CSAM scanning in apps they didn’t author.

(That said, substantial content inspection, code injection into applications and into the kernel, and reporting to the mothership have long been common in the anti-virus (AV) industry. If you install such software, be aware that you are usually placing total trust in it. If you don’t like Apple’s private set intersection stuff, you’re really not going to like what you find out about AV.)

So, I don’t think it’s a redefinition of the letter (such as it is) or spirit of the E2E principle to treat the TCB as trusted. (It’s right there in the name.) A given TCB, or app, might be untrustworthy for your needs, and that can be a problem. But it’s not a problem Apple introduced.