This afternoon, my internet connection was so unusable that I couldn’t even watch non-HD YouTube videos. I decided that before blaming Comcast again, I should at least try to make sure the problem wasn’t on my end. I started by resetting my wifi router to the defaults and reconfiguring it from scratch.
I had long suspected that (a) some neighbor had cracked my WPA password and was wasting all my bandwidth, and/or (b) the router itself was thoroughly pwned. I am of course extremely lazy, so I let this enjoyable paranoia simmer in the back of my mind, unresolved, for months. Besides, I had forgotten the admin password, so I knew I would have to reset the router to factory defaults just to get back into the administration interface. Today was that day.
Note: In this post I’ll discuss some specific vulnerabilities I found in my wifi router, which any competent security engineer could find upon cursory inspection. The reason I describe them is to solidify with specific examples a larger point about the emerging “Internet Of Things” (IoT) market segment and its engineering requirements and limitations. (Especially authentication.) I believe that if we engineers/managers/marketers/business people are going to serve this IoT market, it is our duty to make the products as safe as we can — that is, much safer than they currently are. We are in early days, so now is the time to establish best practice and ratchet up the engineering culture.
For more on wifi router vulnerabilities specifically, see the results of the SOHOpelessly Broken contest at DEFCON 22.
The State Of The (Consumer-Grade) Art
So, while setting my router up, I decided to try the HTTPS option for the administration interface. By default, it’s HTTP-only.
Now, I expected to get the “authority invalid” HTTPS error page when I tried to connect to the router. “Authority invalid” means that no public, well-known (by the browser) certification authority (CA) has vouched for the server’s cryptographic identity (its certificate). This makes perfect sense, since my device is private: no CA could possibly have vetted my little router’s certificate, nor its (non-unique, private) IP address, nor its (made-up by me just now) name.
Given that, clicking through this warning screen would at least maybe make sense:
Before continuing, I decided to take a look at the connection info and the certificate.
For some reason, the router serves a certificate signed with MD5withRSA and a 512-bit RSA key — an obsolete algorithm and key size — yet uses a curiously strong 256-bit cipher (presumably some mode of AES) for bulk encryption.
(I say “curiously strong” because usually, cryptography engineers seek to set all crypto parameters to the same security level, as measured in powers-of-2 complexity. AES-256 is many orders of magnitude stronger than RSA 512 and MD5withRSA; mixing algorithms at these varying levels of strength does not make much sense. See this article on key size for example.)
You might imagine that there would be some performance concern with using sufficiently-modern (i.e. 2048-bit or larger) RSA keys; after all, this device is very tiny and doesn’t have much compute power. So a modern key size might cause the machine to establish TLS sessions slowly, due to the cost of the asymmetric crypto. But, on a machine with gigabit Ethernet, and for which the user will only rarely use the management interface, I don’t really think that explains it. And if the engineers were really concerned about compute resources, they might more likely have chosen RC4 and a smaller key for the bulk encryption, instead of AES 256. So, these crypto parameters are a bit mysterious to me. (Not that I advocate the use of RC4, of course.)
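To put a back-of-the-envelope number on that cost, here is a sketch in Python that times plain modular exponentiation (the core of the RSA private-key operation) at the two sizes. These are random odd numbers, not real RSA keys; the point is only the relative cost.

```python
import random
import time

def modexp_time(bits, trials=20):
    """Average time for one modular exponentiation with `bits`-sized operands."""
    total = 0.0
    for _ in range(trials):
        n = random.getrandbits(bits) | 1   # stand-in for the modulus
        d = random.getrandbits(bits) | 1   # stand-in for the private exponent
        m = random.getrandbits(bits) % n   # stand-in for the message
        start = time.perf_counter()
        pow(m, d, n)
        total += time.perf_counter() - start
    return total / trials

t512 = modexp_time(512)
t2048 = modexp_time(2048)
print(f"512-bit: {t512 * 1e6:.0f} us; 2048-bit: {t2048 * 1e6:.0f} us")
```

The 2048-bit operation is dozens of times slower, but still only milliseconds on commodity hardware, which is why the "performance" excuse does not hold up for a rarely-used management interface.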
(Note also that the certificate is not valid before 8:20 PM PDT; I observed this certificate at 7:27 PM. When I later went to double-check this, I found that the router did have the correct time. However, I found that every time you disable and then re-enable the HTTPS option, the machine generates a new certificate with a Not Valid Before date 1 hour in the future, and with a new, distinct 512-bit RSA key. I suspect a time zone/daylight savings time math mistake in the programming. On the bright side, it’s very good that the machine generates a fresh key every time you re-enable HTTPS: That means that the key is not static, or identical on all the routers of the same make or model.)
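To make that suspicion concrete, here is a speculative sketch of the kind of offset bug that would produce exactly this behavior: the firmware converts local Pacific time to UTC using the standard-time offset (-8) even while daylight time (-7) is in effect. The offsets and timestamps here are assumptions for illustration, not recovered from the firmware.

```python
from datetime import datetime, timedelta

PST_OFFSET = timedelta(hours=-8)   # what the hypothetical buggy code uses
PDT_OFFSET = timedelta(hours=-7)   # what is actually in effect in October

local_now = datetime(2014, 10, 11, 19, 27)   # 7:27 PM local (PDT)

true_utc = local_now - PDT_OFFSET            # correct conversion: 02:27 UTC
buggy_not_before = local_now - PST_OFFSET    # buggy conversion: 03:27 UTC

# The certificate's Not Valid Before, stored in UTC, lands 1 hour ahead.
skew = buggy_not_before - true_utc
print("Not Valid Before is", skew, "in the future")
```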
Because 512-bit RSA and MD5withRSA are so obsolete, Chrome and Firefox simply refuse, as a matter of policy, to even connect to servers that present such cryptographic configurations. You can’t click through the HTTPS warning page; you get an outright network failure:
Firefox refuses to talk to the server in a similar manner, and for the same reason.
“No problem,” I thought, “I’ll just upgrade this thing’s firmware, which will probably fix this and lots of other things. After all, since I assume this machine is pwned, it needs at least a re-install.” Regular readers know me for my boundless optimism.
So I hit the vendor’s support page for the device, and note that it’s not HTTPS even though it serves a firmware download. (Yay! There is an updated firmware! The release notes refer to “various security vulnerabilities”, with no details. Boo, hiss.)
Even if you manually upgrade the page to HTTPS, it has mixed image content (not too terrible, but not great) and it still serves an HTTP link to the firmware. So, rather than click it, I copy it, paste it into a new tab, and manually upgrade it to HTTPS. Alas:
Still, I downloaded and installed it anyway, over broken HTTPS. For science.
Basic Web Application Safety: A Sidebar
It’s easy to check whether or not an application defends against cross-site request forgery (CSRF). It seems my router’s management interface does not.
To defend against CSRF, an application needs to verify that an incoming request was previously “formulated” by the application itself, and not by a 3rd-party attacker. (See the references in the Wikipedia page, e.g. Jesse Burns’ paper.) To do so, it should include an unpredictable secret value in the request parameters that only the server and the true client know.
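As a minimal sketch of that synchronizer-token idea (the function and field names here are made up for illustration): the server stores a random token in the user’s session and embeds the same token in every form it renders; a cross-site attacker cannot read the victim’s page, so they cannot supply the right token.

```python
import hmac
import secrets

def new_session():
    # A fresh, unpredictable per-session token.
    return {"csrf_token": secrets.token_hex(16)}

def render_form(session):
    # The token travels in the form body, not in the URL.
    return f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'

def verify_request(session, submitted_token):
    # Constant-time comparison, to avoid leaking the token via timing.
    return hmac.compare_digest(session["csrf_token"], submitted_token)

session = new_session()
assert verify_request(session, session["csrf_token"])      # true client
assert not verify_request(session, secrets.token_hex(16))  # forged request
```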
Note in this request that the only authentication token is the session_id, a 32-hex-digit (16-byte, 128-bit) random-looking number carried as a parameter in the URL. There is no separate CSRF defense token. Here is some copy-pasta from the Network tab of Chrome’s Developer Tools, from when I changed the device’s name to “noncombatant2” and the secondary DNS server to the fake value “8.8.4.3”:
```
Request
Remote Address: 10.0.0.1:80
Request URL: http://10.0.0.1/apply.cgi;session_id=[redacted]
Request Method: POST
Status Code: 200 Ok

Request Headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
Cache-Control: max-age=0
Connection: keep-alive
Content-Length: 819
Content-Type: application/x-www-form-urlencoded
Host: 10.0.0.1
Origin: http://10.0.0.1
Referer: http://10.0.0.1/index.asp;session_id=[redacted]
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.13 Safari/537.36

Form Data
submit_button: index
change_action:
submit_type:
gui_action: Apply
hnap_devicename: noncombatant2
...elided...
lan_netmask: 255.255.255.0
machine_name: noncombatant2
...elided...
wan_dns0_0: 8
wan_dns0_1: 8
wan_dns0_2: 8
wan_dns0_3: 8
wan_dns1_0: 8
wan_dns1_1: 8
wan_dns1_2: 4
wan_dns1_3: 3
...elided...

Response Headers
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Date: Sun, 12 Oct 2014 04:05:19 GMT
Expires: 0
Pragma: no-cache
Server: httpd
```
(Some boring stuff elided.) Any attacker who can discover the session_id — such as when it leaks over HTTP when you click on the link to the Linksys web site from the Firmware Upgrade page —
could mount a CSRF attack to, for example, set your DNS servers to be malicious servers that always point to the attacker’s web server or proxy. They could thus intercept, eavesdrop on, and falsify all your non-HTTPS web browsing (among other potential attacks).
If you have this router model, you can work around this vulnerability by setting “Access via Wireless” to Disabled in the Administration page. Assuming you usually browse the web from a computer on the wifi interface and manage the router only from the wired interface, that reduces the window of vulnerability; the fact that you are rarely logged into the administration interface narrows it further. (CSRF attacks apply only to users who are logged in to the vulnerable web app at the moment of attack.)
Internet Of Vulnerable Things
Let’s summarize what we’ve learned about my router.
- its obsolete and unusable HTTPS means it can only be managed non-securely
- its management interface is remotely vulnerable to an easily-exploited class of web application attack known since 2001
- the session_id is easily leaked due to its placement
- if the firmware update is secure, that is not apparent to the user
- the firmware update (released “04/24/2014 Ver.1.0.06 (build 2)”) does not resolve the most immediately obvious problems (but does fix other unspecified vulnerabilities)
This is a mass-market, fairly powerful device, from a major vendor. It seems to have been originally sold in late 2009 or early 2010 (according to the reviews on the vendor’s product web page), so we can perhaps assume its hardware and software were designed and implemented/manufactured in late 2008 or early 2009. While not new, this is not quite a prehistoric, pre-security machine; 2008 is well after the Microsoft Trustworthy Computing initiative (but only the beginning of the time when some major internet services started offering HTTPS).
I should probably buy the very latest wifi router from this vendor and repeat the above tests. It’d be interesting to see if anything has improved. I should note that, last night, a friend of mine was setting up his brand-new wifi routers (from a different vendor). He determined that they, too, were at least vulnerable to CSRF.
So, How Could We Improve?
If we are going to live in an “internet of things” (IoT) world, vendors need to improve far beyond the state that my middle-aged wifi router is in.
Consumer appliances like wifi routers, file servers, and printers are all relatively powerful computers that can definitely support the full range of security goodness:
- transport encryption and authentication
- storage encryption (where applicable)
- type-safe implementation languages (at least for code with network-facing attack surface)
- automatic and well-authenticated updates
- modern frameworks for things like the web-based management application
- perhaps even secure boot? Dare I dream?
We can expect these devices to represent the high-end of the IoT, and that smaller devices may not meet such a high engineering quality bar (at least initially).
We might need to upgrade the specifications of lower-end devices to meet a bare minimum, or perhaps apply alternative security strategies. For example, if a device is only marketable if its price point is so low that it cannot be secure, perhaps it should disable itself after some reasonable life-time. That way, at least the devices won’t live on for a long time, making their users vulnerable. Or perhaps such devices can fall back to minimal functionality, automatically reducing their attack surface when they get too old.
The Authentication Problem
Note that even if my wifi router were perfect in every way, there would still be that initial problem: “authority invalid”. That is, we still need a way for TLS clients to authenticate an IoT device’s TLS server (note that I am leaving room for the application protocol to be anything, not just HTTPS). I can think of at least these ways to approach that problem, which I’ll sketch here. I stress that this is currently not a solved problem.
Trust on first use, and remember. Currently, Firefox allows users to click through most HTTPS error screens, with the option to “confirm this exception”, so that Firefox will remember that the user accepts the error for the given site. With Chrome, we are experimenting with variations on this idea. (See chrome://flags/#remember-cert-error-decisions in Chrome 39 Beta.)
Trust and then key-pin on first use. Perhaps IoT is enough like the SSH use case: The device could generate a new key and certificate each time it is reset (or, as my router does, each time the HTTPS server is launched), and the client would leap-of-faith trust it on the first connection (perhaps prompting the user, perhaps not). Thereafter, the client would expect the same public key from that server. (Or expect 1 member of a set of public keys, if the server serves a certificate chain.) If a device by that name ever served a new key — such as because it was reset, or because there was truly a man-in-the-middle attacker — the client would reject the connection. As with SSH, the user would have to affirmatively delete the old name/key association and then re-establish trust.
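A minimal sketch of that known-hosts-style pinning logic, with all names invented for illustration: on first contact, remember a fingerprint of the server’s public key; thereafter, reject any connection that presents a different key.

```python
import hashlib

def fingerprint(public_key_bytes):
    """A stable fingerprint of the server's public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def check_pin(pins, name, public_key_bytes):
    fp = fingerprint(public_key_bytes)
    if name not in pins:
        pins[name] = fp          # leap of faith on first use
        return True
    return pins[name] == fp      # thereafter, the key must not change

pins = {}
assert check_pin(pins, "router.local", b"key-A")       # first use: trusted
assert check_pin(pins, "router.local", b"key-A")       # same key: accepted
assert not check_pin(pins, "router.local", b"key-B")   # new key: rejected
```

Recovering from a legitimate reset would mean deleting the `router.local` entry and trusting again, which is exactly the SSH user experience, for better and worse.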
“Confirm this exception” and TOFU + key pinning are not necessarily great solutions. The easier it is to confirm such exceptions, the more likely it is that users will mistakenly accept authentication errors on the public web. Yet it must be easy to confirm such exceptions, so that users can use the product. Recovering from legitimate key rotation would likely be a pain point for users. (Perhaps clients could incorporate some easy recovery UX flow, but that is still an open problem in secure UX design.)
Dual-mode general-purpose clients. Perhaps general-purpose clients, such as browsers, should be able to go into 2 modes: Public Internet Mode (the current behavior, in which clients discourage self-signed certificates), and IoT Mode (in which clients expect self-signed or alternative trust anchors). The client would need some reliable way to know which mode to use; for example, clients might go into IoT Mode for servers using non-unique private IP addresses, non-ICANN-approved gTLDs, or dotless hostnames.
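For instance, the private-address part of that heuristic might look something like this sketch (the function name and the exact policy are assumptions, not any browser’s actual behavior):

```python
import ipaddress

def looks_like_iot(host):
    """Guess whether a server name belongs to the local network rather
    than the public internet."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal; a dotless hostname is another possible signal.
        return "." not in host
    return addr.is_private or addr.is_link_local

assert looks_like_iot("10.0.0.1")         # RFC 1918 private address
assert looks_like_iot("192.168.1.1")      # RFC 1918 private address
assert looks_like_iot("router")           # dotless hostname
assert not looks_like_iot("example.com")  # ordinary public name
```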
Dedicated management client, with baked-in trust anchors. Another possibility is to not try to authenticate the IoT in general-purpose clients. Instead, for example, vendors could ship an Android app and an iOS app and a Windows app and a Mac OS X app and a Linux app so that users could use and manage the vendor’s devices. Since the client and server would be more tightly integrated, they could use an alternative, vendor-managed trust anchor, rather than relying on self-signed certificates.
Obviously, developing clients for many platforms is more expensive than developing for just the web platform. But specialized clients can have their advantages.
That doesn’t mean that vendors will necessarily give their customers the full benefit of those advantages. I once had a Drobo (a file server appliance) that worked this way. Both for file service and for management, it could only be used with a dedicated client program (I used the Mac OS X client). If I recall correctly, it did not serve files by open or semi-open standards like SMB/CIFS or NFS. Instead, it actually used a kernel module to implement its own network filesystem. Unfortunately, it did not take advantage of the tight client-server integration to use a vendor-managed trust anchor; all communication was unauthenticated and unencrypted. Still, it could have had authentication and encryption with good usability.
How Do We Get There From Here?
A key problem with IoT is that, in general, the price point for the devices must be very, very low. This puts huge pressure on vendors: We want high engineering quality, including good security and good usability, yet we also want low prices. And low engineering quality could doom the entire IoT product class: It might only take a few news reports of users being surveilled and trolled by their refrigerators before people decide to stop buying IoT things.
I think I bought this wifi router for $50 USD. So how do you get strong authentication and encryption, resilience against native code vulnerabilities, frequent and secure updates, defense against well-known web application attack classes, and so on, for $50? While also getting high performance networking like 802.11ac?
The good news is that there are a few design decisions that vendors can take early on in the product development lifecycle to keep costs lower and bugs fewer. In addition to those, for web-enabled things I’d add a requirement to use a modern web application framework that resolves XSS, CSRF, session fixation, auth token leakage, etc. by design. (Here is 1 example of a bestiary of web application attack classes. A successful web application must resolve all that apply.)
For updates, vendors need to use signed updates (with the public key(s) baked into the firmware), and to automate delivery, and to manage the signing keys extremely well. All of this is hard, and so far only a few software vendors have been able to do it.
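The control flow is the easy part: verify first, and only install what verifies. A real updater checks an asymmetric signature (e.g. RSA or Ed25519) against a public key baked into the firmware; Python’s standard library has no signature primitive, so this sketch pins a SHA-256 digest instead, just to show the shape. Every name in it is hypothetical.

```python
import hashlib

# Stand-in for the trust anchor baked into the firmware image.
EXPECTED_DIGEST = hashlib.sha256(b"firmware-image-v1.0.07").hexdigest()

def verify_update(image_bytes, expected_digest):
    """Return True only if the downloaded image matches the pinned digest."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_digest

def install_update(image_bytes, expected_digest):
    if not verify_update(image_bytes, expected_digest):
        raise ValueError("update failed verification; refusing to install")
    return "installed"

assert install_update(b"firmware-image-v1.0.07", EXPECTED_DIGEST) == "installed"
try:
    install_update(b"tampered-image", EXPECTED_DIGEST)
except ValueError:
    pass  # a tampered image must never reach the install step
```

The hard, unglamorous part is everything around this: automating delivery and protecting the signing keys for the product’s whole lifetime.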
There is a long road between where we are now and a secure internet of things. I actually am optimistic that we can make good progress down this road, but it will require engineers and business people to get creative — the sooner the better.