
The Future of the Internet: Balancing Security With Openness in the Internet of Things


by Jonathan Zittrain
July 2015

I wrote a book called The Future of the Internet – And How to Stop It. Its thesis was that our amazing three-decade run of the modern personal computer and Internet had been fueled by the “generative” characteristics of each – but stood vulnerable to security problems brought about by their very successes.

The PC allowed anyone to write and share (or sell) software for it – with the PC and operating system manufacturers having no role in deciding what would and wouldn’t run on their systems. That was unusual for its time or any time: the PC was introduced to a hobbyist community against a backdrop of non-programmable “information appliances” like dedicated word processors.

The Internet

Same for the Internet. Unlike CompuServe, America Online, and Prodigy – the online services designed for the general public – the Internet allowed anyone to communicate with anyone, without any refereeing of the movement of bits or code. Unlike the proprietary counterparts that it soon eclipsed, the Internet has no main menu, no CEO, and no business plan. Anything could be built on top of it without permission of a central authority, and the resulting applications could, and did, surprise us in their reach and popularity. Foremost among them is the World Wide Web, designed by Tim Berners-Lee, a genius physicist working in his spare time, its protocols gifted to the world. (When Sir Tim appeared in the opening ceremony of the 2012 London Summer Olympics, tweeting out “This is for everyone” from the stadium, the network television anchors covering the event had no idea who he was.)

My worry in 2007 was that the openness of the PC to new code from anywhere, and the Internet to new applications and sites designed by anyone, was being increasingly abused. The Apple iPhone had just been introduced, and in its first version it brooked no outside code at all. I saw in the iPhone the return of the information appliance, a harbinger not just of dumb flip phones becoming smart, but of a rebooting of our entire information architecture from open to closed, unowned to owned, and innovative to stable – for the cause of better security.

The iPhone was indeed the beginning of a revolution. What made it most interesting was its second version, which introduced the App Store. The App Store represented a hybrid of the original PC, running outside code, and the information appliance, countenancing none. It put Apple in the position of vetting who could code for its products, long after they left the factory. It allows for great innovation – tens of thousands of apps – while permitting velvet ropes to be strung either by category or individually to exclude certain kinds of programs and services that don’t meet the preferences of Apple, or those who can regulate Apple. And we now see app stores across the gamut of information technology – they are in our phones, our tablets, and yes, our PCs, increasingly as the only practical sources for new code. The result is industry concentration in operating systems, and increased interest by regulators in monitoring and controlling what software is permitted to run – and in turn, what content can circulate. As these architectures are exported to states that don’t embrace the rule of law, the implications for state control become more profound.

The Emerging Internet of Things

This is a future I still want to stop, while still taking seriously the security concerns that have largely prompted this enclosure of technology. Looking ahead, we can see the same dynamics shaping up for the emerging Internet of Things. Imagine an Internet-aware shovel. It may seem pointless at first, but it doesn’t take much to imagine some good applications. Perhaps it can report when it’s being used, so Mom can check to see if the kids have dealt with the icy walk yet. It can sound an alert, personalized to the health profile of its wielder, if his handle-measured heart rate is going too high. (Maybe it can summon an ambulance if the hand grows cold.) Data aggregated across shovels can tell the city where to send the plows, on the logic that those shoveling the most must have the deepest snow. Or perhaps it’s the opposite: where people are too daunted to shovel is where the plows should go.

Will the shovel’s features be determined only by its maker, or will there be an application programming environment made available for it? Will its data telemetry be owned and directable by the user, or proprietary to the maker? Our hypothetical shovel invites us to ask generally: will Things be able to talk to one another across vendors, or only to their makers? Who owns a Thing – the purchaser? Or is it more like a service than a product?
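The owner-versus-maker question can be made concrete with a toy sketch. Everything below is invented for illustration – the class names, the access-grant scheme, the consumers – and no real IoT product or standard is implied. It shows one possible answer to the ownership question: a Thing whose telemetry is directable by its purchaser rather than locked to its manufacturer.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """One telemetry sample from a Thing (names are hypothetical)."""
    metric: str       # e.g. "in_use" or "heart_rate_bpm"
    value: float
    timestamp: float


class ConnectedShovel:
    """A sketch of a user-owned Thing: the purchaser, not the maker,
    decides which consumers may read the shovel's telemetry."""

    def __init__(self) -> None:
        self._log: list[Reading] = []
        self._authorized: set[str] = set()

    def record(self, metric: str, value: float, timestamp: float) -> None:
        self._log.append(Reading(metric, value, timestamp))

    def grant_access(self, consumer: str) -> None:
        # The owner explicitly opts a consumer in; nothing is shared by default.
        self._authorized.add(consumer)

    def readings_for(self, consumer: str) -> list[Reading]:
        # Un-granted parties -- including the maker's own cloud -- see nothing.
        if consumer not in self._authorized:
            return []
        return list(self._log)


shovel = ConnectedShovel()
shovel.record("in_use", 1.0, 1000.0)
shovel.grant_access("city_plow_service")

print(len(shovel.readings_for("city_plow_service")))   # owner opted this in
print(len(shovel.readings_for("shovel_maker_cloud")))  # never granted
```

A maker-owned design would invert this default: telemetry flows to the vendor's service regardless, and the purchaser petitions for access. The code difference is a few lines; the governance difference is the whole debate.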


These questions remind us that so much is yet to be determined in our information ecosystem, and that the distinctions between owned and unowned, generative and sterile, remain as vital as ever. And they should inspire us to reflect on what we mean when we invoke quality. A quality shovel won’t break down with lots of use and it won’t be made of toxic parts. But a quality Internet-enabled shovel? That’s much murkier. To some, security should be paramount – so having the shovel able to talk to the tea kettle only invites trouble, with little upside. To others, quality is optimized when open-ended populations of coders can try a hand at improving the way things (and Things) work. To see the multi-dimensionality of quality in the information space is to understand the breathtaking array of choices and trade-offs, and to begin working through the puzzle of just who should be making and guiding the answers among consumers, producers, regulators, and communities across each that are yet to gel.
