Security And Apparentness

Update 28 October 2015: “Neither Snow Nor Rain Nor MITM… An Empirical Analysis of Email Delivery Security” by Durumeric et al. confirms my assertion that opportunistic security must be either useless or rarely adopted:

We analyze the mail sent to Gmail from these hosts and find that in seven countries, more than 20% of all messages are actively prevented from being encrypted. In the most severe case, 96% of messages sent from Tunisia to Gmail are downgraded to cleartext.

If you actually need STARTTLS, you can’t count on it doing anything. One solution would be for Gmail to require STARTTLS, and to require some kind of Certificate Transparency or key pinning for STARTTLS certificates. But then the next problem would arise: surfacing the connection failures to people. Due to the asynchronous and ‘it just works’ nature of email, I don’t see an elegant or even minimally workable solution to that problem. I’d love to be proven wrong...
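By way of illustration, here is a minimal sketch, in Python with the standard smtplib and ssl modules, of what requiring STARTTLS could look like on the sending side: refuse to deliver unless the server offers STARTTLS and presents a certificate that verifies, rather than opportunistically falling back to cleartext. The function name, host, port, and error handling are illustrative assumptions, and the sketch does not attempt Certificate Transparency or key pinning.

    import smtplib
    import ssl

    def send_requiring_starttls(host, sender, recipient, message):
        # Fail closed: refuse to send unless the server offers STARTTLS and
        # presents a certificate that verifies against the system trust store.
        context = ssl.create_default_context()  # verifies certificate and hostname
        with smtplib.SMTP(host, 587) as smtp:
            smtp.ehlo()
            if not smtp.has_extn("starttls"):
                raise RuntimeError("server does not offer STARTTLS; not sending")
            smtp.starttls(context=context)  # raises ssl.SSLError on a bad certificate
            smtp.ehlo()
            smtp.sendmail(sender, recipient, message)

Even with something like this in place, the failure still has to be surfaced to the person somehow, which is exactly the problem described above.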

Original post follows:


If an application (or platform, or protocol, or...) cannot communicate a particular security guarantee to the person — perhaps because there is no channel by which to communicate the message — then the mechanism that provides the security guarantee can be at best opportunistic. The mechanism provides the guarantee if conditions are favorable; otherwise, it does not.

With an opportunistic security mechanism, a question arises: is the effort to develop the mechanism, and the attack surface the mechanism exposes, worth the benefit?

The benefit is likely to be negligible: because the application cannot communicate failures to the person, the success or failure of the mechanism cannot possibly be part of the person’s mental model of the system. The person is therefore very likely to rank other benefits, such as availability or performance, above a security benefit that they cannot even perceive.

Thus, opportunistic security mechanisms almost certainly must fail open, rather than fail closed. If an opportunistic security mechanism were to fail closed, but the application could not communicate a particular reason or recourse to the person, people would be likely to reject the application as being flaky and unpredictably unavailable.
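For contrast with the sketch above, the fail-open pattern looks roughly like the following (same assumptions: Python, smtplib, illustrative names and error handling): try STARTTLS if it is offered, but never let its absence or failure stop delivery, so the application never looks flaky, and an active attacker can strip the protection at will.

    import smtplib

    def send_opportunistically(host, sender, recipient, message):
        # Fail open: try STARTTLS if offered, but never let its absence or
        # failure prevent delivery. (A real MTA would typically reconnect
        # before continuing in cleartext; this is only a sketch.)
        with smtplib.SMTP(host, 25) as smtp:
            smtp.ehlo()
            if smtp.has_extn("starttls"):
                try:
                    smtp.starttls()  # no certificate verification demanded
                    smtp.ehlo()
                except (smtplib.SMTPException, OSError):
                    pass             # carry on in cleartext
            smtp.sendmail(sender, recipient, message)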

Thus, by design, an opportunistic security mechanism must be either useless in the presence of an attacker (because it fails open) or rarely adopted (because, failing closed, people reject it as unreliable).

An application may also fail to communicate the failure of a security mechanism not because the communication channel to the person is lacking, but because the message is too complex to convey. Too much communication can be just as bad as too little.

If the application cannot communicate the security guarantee to the person because the semantics of the security are too complex, the application developers should simplify the security guarantee: specifically, by simplifying (or “quantizing”) the guarantee upward or, equivalently, by quantizing the semantic complexity downward.

For example, it can be very difficult to explain in an application’s UX that the person’s communications with their friend are encrypted but not authenticated; or authenticated but not encrypted. Thus, it is better to provide both authentication and encryption together, and clearly label that state ‘secure’; or to provide neither and clearly label that state ‘non-secure’. (Alternatively, an application whose users will accept occasional unavailability may instead report a connection error and explain that no ‘secure’ connection is available at the time.)
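As a hedged sketch of that quantization (Python again, with the standard ssl and socket modules; the function name, port, and return values are illustrative assumptions), a client might report exactly two states: ‘secure’ only when the connection is both encrypted and authenticated, and ‘non-secure’ otherwise, with no half-secure state in between.

    import socket
    import ssl

    def connect_secure_or_not(host, port=443):
        # 'secure' means both encrypted and authenticated: the certificate
        # chain and hostname must verify. Anything less is simply
        # 'non-secure'; there is no partially-secure state to explain.
        context = ssl.create_default_context()
        context.check_hostname = True
        context.verify_mode = ssl.CERT_REQUIRED
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    return "secure", tls.version()
        except (ssl.SSLError, OSError):
            return "non-secure", None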

To paraphrase Ian Grigg, we can characterize the ultimate security quantization as “There is 1 mode, and it is secure”. Allowing less-secure or non-secure modes complicates the mechanism’s semantics and implementation. Such complexity makes it difficult both for the people who use the application and for the people who develop it to model the application’s states accurately.

In fact, there is much less of a bright line between ‘developers’ and ‘users’ than either group believes. Developers, just like users, inevitably create inaccurate models of the application. Developers call it abstraction, and call it a necessary virtue. And they are right.