My first post in this series addressed the four critical design factors for IoT project success. These four factors (Trust, Identity, Time, and Chain of Custody) aren't typically top of mind during the prototype phase of an IoT project. After all, a prototype is essentially a "proof of concept" to demonstrate that data can flow and that the "thing" works. Projects typically must reach this level of development to gain approval and funding to move forward.
When in the prototype stage, it doesn’t matter that much if the datapoint timestamps aren’t quite right or if there is an occasional bug, because you’re not yet integrating data from your “thing” into other systems, like an enterprise backend. However, once you are ready to move into production, it’s time to design in the capabilities that will enable your device to operate safely, securely and accurately in a connected environment.
The hardware development platform used to build your prototype may or may not be suitable to translate your device into production. Hardware certification requirements vary by industry and in some, such as oil and gas, rail, and power, the standards required will likely demand a clean design on a more robust development platform. Likewise, the transition from prototype to production is also the time to evaluate if you need a more robust software platform. The first software platform capability that should be evaluated when moving from the prototype stage into production is the foundation of trust.
A Foundation of Trust
In the context of an IoT application, a foundation of trust deals with three primary issues:
- Is the thing communicating with the system that it should be?
- Is the thing really who it claims to be?
- Can the system validate that the thing has not been compromised?
In order to answer these three questions in the affirmative, it is imperative that the right design be used from the start.
Setting the Bar for IoT Connected Devices
Techniques for dealing with these issues have long been well understood in the software world; however, they have been slow to migrate to embedded devices. Before this new world of connected devices, the assumption that physical security limited access to the device made many attack vectors unlikely. In a connected world, this is no longer the case. Fortunately, some large players are starting to enforce requirements that will raise awareness among device manufacturers.
Amazon and Microsoft have begun to elevate the conversation about what an IoT device needs in order to connect safely and securely. With their large reach in the developer community, they have dramatically increased awareness of authentication and authorization for IoT devices. Both Amazon and Microsoft require TLS-encrypted channels for hardware devices to connect to their IoT services. They each offer slightly different options for validating device identity. Before these requirements for service access were established, connection standards had been very ad hoc, and in many cases they still are.
Amazon and Microsoft’s push to move hardware manufacturers in this direction reinforces the idea of trusted communications, helping to ensure that the device is talking to the server it thinks it is talking to. This part of the security equation prevents attackers from snooping on the communication between the device and the cloud.
To build a trusted communication capability, a device must validate that it is talking to the right server. The server also needs to validate that it is talking to the device it expects, and that someone isn’t pretending to be that device. Additionally, the server needs to trust that the device itself is not compromised; for example, a compromised device might be part of a botnet sending spam or viruses across the Internet.
Amazon’s IoT services require device certificates, while Microsoft also offers security tokens as an option. Neither directly requires devices to prove they aren’t compromised. Both hint that the keys should be stored in a trust module in the hardware design, but there is no requirement or mechanism to validate and enforce this behavior by device manufacturers.
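To illustrate what this looks like on the device side, here is a minimal sketch in Python using only the standard library's `ssl` module. It builds a TLS context that validates the server's certificate and hostname, so the device knows it is talking to the service it expects; the commented-out `load_cert_chain` line marks where a device certificate would be loaded for certificate-based identity of the kind AWS IoT requires. The file names are hypothetical.

```python
import ssl

def make_device_tls_context(ca_file=None):
    """TLS context for a device connecting to an IoT service.

    Validates the server's certificate chain and hostname, so the
    device only talks to the server it thinks it is talking to.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy TLS versions
    # For certificate-based device identity, the device would also present
    # its own certificate to the server (mutual TLS). Paths are hypothetical:
    # ctx.load_cert_chain("device-cert.pem", "device-key.pem")
    return ctx
```

Note that this only establishes the encrypted, server-authenticated channel; proving the device's own identity and integrity requires the additional mechanisms discussed below.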
A classic example of why it is so important to validate the downward communications was widely reported in January 2014 as one of the first cyberattacks involving an IoT device. A smart fridge sent out over 750,000 spam emails, in bursts of 100,000 emails at a time, three times a day, with no more than 10 emails sent from any one IP address. With the possibility of millions of devices connecting to the Internet — from wearables to kitchen appliances to large industrial units to cars — millions of potential attack vectors open up, and trusted communication becomes a mandate.
Did You Lock the Front Door? Beware the Spoof
Often, device makers don’t recognize why someone would want to spoof their device because they are removed from the enterprise level and don’t feel the financial impact of these actions. Still, their device will be vulnerable to spoofing attacks if they don’t build in a foundation of trust.
To spoof a device means to impersonate it and call in to the server, causing the server to take actions it should not. For example, suppose a medical device monitors procedures performed, and the enterprise system estimates inventory needs based on the number of procedures completed at the location, automatically shipping more inventory when the site meets the reorder threshold. Simulating one of those devices and reporting that a procedure is complete triggers a response: inventory appears lower than it actually is, and a shipment goes out. The business may not notice until a month or two later when reconciliation happens. In this situation, you can literally steal the goods right off the company’s front steps, because they’re not expecting any new inventory.
Another example might be a commercial farming operation that purchases millions of dollars in grain futures per year to feed their animals. If someone spoofs a device and gains visibility into the load of their various silos, they could potentially game the futures market. The imposter device could gain access to their data, anticipate buys, or manipulate with fake responses and force an early buy at a higher price. In the commodities market, where even a penny change in price can make a difference, the financial impact on the farming enterprise could be substantial.
The home security market is another example where device trust is essential. All those mobile apps and devices that security providers advertise can literally open the door to spoofing. If attackers can mimic the server, they can unlock someone’s front door. They can change a security setting. They can reset a home’s connected thermostat and change the home’s energy use.
Trust Cuts Both Ways
While Amazon and Microsoft have taken good first steps, they have only raised the bar for devices talking to their cloud, and only raised it in a way that the device knows it is talking to the right server and that the server knows the device has valid credentials. In the current AWS and Microsoft IoT services, the server does not guarantee that a device isn’t compromised and sending false data.
An example of sending false data is the infamous Stuxnet attack. In June of 2010, the Stuxnet computer worm infected the software of at least 14 industrial sites in Iran, including a uranium-enrichment plant. The Stuxnet malware was able to compromise the programmable logic controllers (PLCs) and cause the centrifuges to vibrate and damage themselves, thus preventing successful uranium enrichment. The PLC management software could not recognize the hijacked devices, and the workers at the plant observed the devices operating normally.
Stuxnet is an example of a highly sophisticated attack. Most attacks don’t come near this level of complexity or coordination, but the point remains: we need to address both directions of the communication to ensure safe operations and validate all the way down to the trust module on the device.
The next step must address the server’s ability to validate the device, using a challenge-response protocol that the device manufacturer implements with a unique set of encryption keys. The device signs challenges that the server sends out, proving that the server is communicating with the right device. Because these keys are not directly accessible to any malware that might infiltrate the device, the responses cannot be forged. This level of validation reaches into device design and firmware implementation, which may be why it is difficult for companies like Amazon and Microsoft to force compliance.
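A minimal sketch of such a challenge-response exchange, using only Python's standard library: the server issues a random nonce, and the device answers with a keyed hash over it that the server can verify. This is an illustration, not a production design; a real implementation would keep the key inside a hardware trust module and would typically use an asymmetric signature (e.g. ECDSA) rather than the HMAC shown, and the key value here is purely made up.

```python
import hashlib
import hmac
import secrets

# Hypothetical per-device key, provisioned at manufacture. In a real design
# this secret never leaves the device's hardware trust module.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def server_issue_challenge() -> bytes:
    # An unpredictable nonce prevents an attacker from replaying old responses.
    return secrets.token_bytes(32)

def device_answer(challenge: bytes) -> bytes:
    # The device "signs" the challenge with its key; only a device holding
    # the key can produce this value.
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, response)
```

Because each challenge is fresh, a captured response is useless for later replay; an imposter without the key simply cannot answer.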
Microsoft learned long ago that a device’s ability to sign challenges, called challenge-response authentication (CRA), is the only way to prove whether a device is compromised. In the days of Windows 95 and 98, there were viruses but relatively little malware. Over time, attacks became more sophisticated, and by the time Windows XP was released, malware was ubiquitous. When all of this malware started hitting, Microsoft’s operating systems, with their large install base, were the hardest hit. As a result, Microsoft learned a lot of lessons. Malware was observed in the wild pretending to perform firmware updates to fix the hole it got in through, then reporting the new version while the device was still infected. Today, Microsoft is actively pushing CRA as a standard through the Trusted Computing Group.
You Can’t Do Anything Without Trust
It is only when you know that your device is talking to the right server, and the server can verify the device’s identity and know it has not been compromised, that you can start to talk about the actual data you want to share and the things you want to measure. Whatever your device, whatever your industry or application, you need to build every IoT project on this foundation of trusted communication. You can’t reliably do anything without it. And the right time to build this foundation is before your prototype goes into production.