Thoughts On Platform Security Features
2 January 2015 18:35 UTC
Here are some off-the-cuff thoughts on security features that are
available, and which I would like to see.
We need a superset of a subset of the union of the security features of
“mobile” platforms and “desktop” platforms. Although these are not
clearly-defined terms, I’ll try to roughly characterize them by naming
examples. Desktop platforms as of 2015 include:
- Mac OS X
- Windows
- Desktop and server Linux and BSD distributions, e.g. FreeBSD,
Ubuntu
Mobile platforms include:
- ChromeOS
- iOS (Apple, not Cisco)
- Android
The web
platform seems to straddle the line in some ways.
The key differentiators between the 2 classes of platform are security
features and userland APIs. (And the hardware they run on.) Obviously, I’ll
focus on security features, and touch on userland APIs only insofar as they
affect security.
Here are the security features of mobile platforms that I think we need
in all platforms going forward:
- A 2-part principal: (user, source of code); the code source must be
cryptographically authenticated. For example, Android gives each package
(or, package signing key) its own Linux user ID, isolating it from other
packages. iOS puts each app in a sandbox and
isolates its storage; again all code is signed. The open web uses the origin model,
with optional cryptographic code authentication (HTTPS).
- Usable ways to share resources between 2-part principals (strongest on
Android; OK on iOS; rather ad hoc on the web). This is mostly a consequence
of the userland APIs that the platform makes available to applications;
Android is rich here.
- Tamper-evident storage, verified at least on boot (“secure boot”, e.g.
dm-verity).
- Encrypted storage, preferably on by default, preferably
whole-device.
- The integrity checking and the encryption should both be backed by
hardware, e.g. a TPM.
- Privilege reduction, a way for userland programs to reduce their own
access to the kernel. ChromeOS, Mac OS X, iOS, and (soon) Android have
such mechanisms: seccomp-BPF on the Linux-based platforms, and Seatbelt on
Mac OS X and iOS.
By contrast, there are security features desktop platforms have that
mobile platforms lack:
- Considerably greater owner control over the device — debuggers, root and
ring 0 access, et c. “Digital rights management” seems to have caught on
more strongly on mobile platforms. ChromeOS has a Developer Mode; I wish more closed
platforms would follow suit.
- Memory and CPU powerful enough to rebut the (usually, but not always,
mistaken) arguments against using type-safe or at least memory-safe code.
Current mobile devices are as powerful as, or more powerful than, desktop
computers of a decade ago, so we do have the horsepower to run e.g. Java,
C#, F#, Haskell, et c. on these devices. In
fact, Android, iOS, and the web all make heavy use of languages with
expensive features like late binding, object orientation, run-time type
checking, interpreted non-native code, and so on. Yet it has proven hard to
actually use those expensive features for safety — people always want to
call into C/C++ code for “efficiency”, and then find out the hard way how
easy it is to write unsafe C/C++. Developers seem happy to traverse many
pointers to finally get to a callable method but are not happy to check the
bounds on arrays. Although unsafe code will always seem marginally faster
than safe code, at some point we have to draw the line: this is
fast enough, that is not safe enough.
Things we still need on both classes of platform, or which I’m not sure
we have yet:
- A secure attention sequence. iOS’ Home button might
actually be one; I don’t know the implementation. I am not certain whether
Control-Alt-Delete is still a SAS on Windows — please email me if you know
more. SAS is a simple and powerful idea, but it depends crucially on
implementation details that are hard to keep robust as products change over
time.
- UI isolation: each application should only be able to “see” its own
windows, should be able to reliably know when they have the highest z-order,
and should be able to reliably know when input events are really coming from
the user (via the kernel). (See Design Of The EROS Trusted Window System.) Android
almost has this, at least last time I looked. Windows are accessible only
through a capability, but as of Honeycomb (?) there can be windows that
overlap and the active application is not necessarily the highest in
z-order. I could be wrong about that. I also don’t know the iOS
implementation; it may provide some or all of this. (Please email me if you
know more!)
- A kernel with high (...or any) unit test coverage.
- Robust defense against malicious peripherals and I/O devices (e.g. the “BadUSB” exploit that makes devices turn “evil”, and “Public Charging Kiosks May Steal Your Data”). Device
firmwares, kernel device drivers, and filesystems must all be robust against
malicious inputs, but typically are not.
- Safe, sane firmware written to semi-modern standards of code quality,
including open source solutions. There is CoreBoot, but as far as I know only ChromeOS uses it and its successors. Unfortunately I know
nothing of iOS firmware.
- Safe, sane baseband operating systems written to semi-modern standards
of code quality, including open source solutions. All your baseband are
belong to Ralf-Philipp Weinmann.
- Error-recovering filesystems or block devices, such as with erasure
coding. Due to their wonderfully high capacities, modern storage devices are highly likely to experience
unrecoverable block errors, making it impossible to read back data
previously stored.
I’m sure I’m forgetting something crucial, and that I got at least 1
thing wrong, and that you’ll let me know. :)