Legal Affairs
January|February 2006

Without a Net

The Internet is vulnerable to viruses so lethal that they could gravely damage the online world—unless we upgrade law and technology now.

By Jonathan Zittrain

ON THE NIGHT OF NOVEMBER 2, 1988, a 23-year-old Cornell University graduate student named Robert Tappan Morris Jr. used the Internet to transmit a small piece of software from a Cornell computer to an MIT one. Embedded in the MIT machine, Morris's software connected surreptitiously to other computers on the Internet and installed and ran copies of itself on those machines, which in turn passed the software on to other computers. The software accessed the machines by applying a set of minor digital parlor tricks, such as guessing user passwords. It turned out that many users' passwords were identical to their corresponding user names, and if not, were found on a list of 432 common passwords conveniently dispatched with Morris's software.
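
Here is a minimal, hypothetical sketch (not Morris's actual program, and with placeholder passwords rather than entries from his 432-word list) of the kind of audit that would have flagged the accounts his worm could guess:

    # A hypothetical sketch, not Morris's code: flag accounts whose password
    # equals the user name or appears on a short list of common words.
    COMMON_PASSWORDS = {"password", "guest", "letmein", "qwerty"}  # placeholder entries

    def weak_accounts(accounts):
        """Return user names a Morris-style guesser would likely crack."""
        flagged = []
        for user, password in accounts.items():
            if password.lower() == user.lower() or password.lower() in COMMON_PASSWORDS:
                flagged.append(user)
        return flagged

    sample = {"alice": "alice", "bob": "letmein", "carol": "x9#Tq!vL2"}
    print(weak_accounts(sample))  # prints ['alice', 'bob']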

In designing and unleashing this software, thought to be the first Internet worm, Morris said he wished to gauge the size of the Internet. At the time, only around 60,000 computers worldwide were linked to the Internet, almost all at government and university research centers. By the next morning, an estimated 1,000 to 6,000 of them had been infected by Morris's software. Although Morris later claimed that he never intended any computer to be infiltrated by the software more than once, in many cases the worm did not successfully check whether a machine was already infected before transmitting itself. As a result, some computers ended up running hundreds or even thousands of copies of the worm, and scores of computer administrators arrived at work the following morning to find their machines operating at a glacial pace.

After an intense day-long flurry of collaborative sleuthing, itself facilitated by e-mail, network administrators around the world discovered what was going on and shared tips on killing the worm. They declared that, going forward, computer administrators challenged by a worm ought to promptly install patches, users ought to pick harder-to-guess passwords, and virus-writers ought to adhere to a code of ethics. Members of Congress asked for a report from the General Accounting Office, and the Department of Defense's research-funding arm sponsored a national Computer Emergency Response Team to track and advise on computer security vulnerabilities.

Morris was convicted in 1990 under the Computer Fraud and Abuse Act and sentenced to three years of probation, 400 hours of community service, and a $10,000 fine. He transferred from Cornell to Harvard to finish his Ph.D., co-founded a dot-com in 1995, earned millions from its sale to Yahoo! in 1998, and now teaches at MIT. The episode was a sobering experience for the technology community. The only thing that stopped the Morris worm from wiping out data on the machines it compromised was the forbearance of Morris himself: he had programmed the worm to propagate, not to destroy.

The Internet was a digital backwater in 1988, and many people might assume that, while computer viruses are still an inconvenience, computers are vastly more secure than they were back in the primitive days when Morris set his worm loose. But although hundreds of millions of personal computers are now connected to the Internet and ostensibly protected by firewalls and antivirus software, our technological infrastructure is in fact less secure than it was in 1988. Because the current computing and networking environment is so sprawling and dynamic, and because its ever-more-powerful building blocks are owned and managed by regular citizens rather than technical experts, our vulnerability has increased substantially with the heightened dependence on the Internet by the public at large. Well-crafted worms and viruses routinely infect vast swaths of Net-connected personal computers. In January 2003, for instance, the Sapphire/Slammer worm attacked a particular kind of Microsoft server and infected 90 percent of those servers—around 120,000 servers in total—within 10 minutes. In August 2003, the "sobig.f" virus managed, within five days of its release, to account for approximately 70 percent of worldwide e-mail traffic; it deposited 23.2 million virus-laden e-mails on AOL's doorstep alone. In May 2004, a version of the Sasser worm infected more than half a million computers in three days. If any of these pieces of malware had been truly "mal"—for example, programmed to erase hard drives or to randomly transpose numbers inside spreadsheets or to add profanity at random intervals to Word documents found on infected computers—nothing would have stood in the way.

In the absence of a fundamental shift in current computing architecture or practices, most of us stand at the mercy of hackers whose predilections to create havoc have so far fallen short of their casually obtained capacities to ruin our PCs. In an era in which an out-of-the-box PC can be compromised within a minute of being connected to the Internet, such self-restraint is a thin reed on which to rest our security. It is plausible that in the next few years, Internet users will experience a September 11 moment—a system-wide infection that does more than create an upward blip in Internet data traffic or cause an ill-tempered PC to be restarted more often than usual.

How might such a crisis unfold? Suppose that a worm is released somewhere in Russia, exploiting security flaws in a commonly used web server and in a web browser found on both Mac and Windows platforms. The worm quickly spreads through two mechanisms. First, it randomly "knocks" on the doors of Internet-connected machines, immediately infecting the vulnerable web servers that answer. Unwitting consumers, using vulnerable web browsers, visit the infected servers, which infect users' computers. Compromised machines are completely open to instruction by the worm, and some worms ask the machines to remain in a holding pattern, awaiting further direction. Computers like this are known, appropriately enough, as "zombies." Imagine that our worm asks its zombies to look for other nearby machines to infect for a day or two and then tells the machines to erase their own hard drives at the stroke of midnight. (A smart virus would naturally adjust for time zones to make sure the collective crash took place at the same time around the globe.)

This is not science fiction. It is merely a reapplication of the template of the Morris episode, a template that has been replicated countless times. The Computer Emergency Response Team Coordination Center, formed in the wake of the Morris worm, took up the task of counting the number of security incidents each year. The increase in incidents since 1997 has been roughly geometric, doubling nearly every year through 2003. CERT/CC announced in 2004 that it would no longer keep track of the figure, since attacks had become so commonplace and widespread as to be indistinguishable from one another.

Combine one well-written worm of the sort that can evade firewalls and antivirus software with one truly malicious worm-writer, and we have the prospect of a networked meltdown that can blight cyberspace and spill over to the real world: no check-in at some airline counters; no overnight deliveries or other forms of package and letter distribution; the inability of payroll software to produce paychecks for millions of workers; the elimination, release, or nefarious alteration of vital records hosted at medical offices, schools, town halls, and other data repositories that cannot afford a full-time IT staff to perform backups and ward off technological demons.

If the Internet does have a September 11 moment, a scared and frustrated public is apt to demand sweeping measures to protect home and business computers—a metaphorical USA Patriot Act for cyberspace. Politicians and vendors will likely hasten to respond to these pressures, and the end result will be a radical change in the technology landscape. The biggest casualty will likely be a fundamental characteristic that has made both the Internet and the PC such powerful phenomena: their "generativity."

The Internet and the PC were built to allow third parties to dream up and implement new uses for the rest of us. Such openness to outside innovation has allowed each, in its relatively brief life span, to be in a perpetual state of transformation and surprise. Many of the features we now find so commonplace as to be inevitable were not anticipated by network operators, hardware manufacturers, and the general public until someone—remarkably often, an amateur—coded a fascinating experimental application and set it loose through a cascade of floppy disks, and later over the Internet. This generative substrate created the Internet as we know it, yet it is almost certain to be swept away in the aftermath of a global Internet meltdown. We must act now to preserve it, or the future of consumer information technology, though safe, will be bleak.

So what is it that we need to preserve?

BEFORE THE INTERNET ESCAPED ITS LAB IN THE MID-1990s, outsiders hungry for connectivity had to obtain it in a very different way. The services offered by network providers like CompuServe, America Online, and Prodigy were entirely proprietary; they alone controlled the services they offered, and subscribers to one could not connect to subscribers of another. In 1983, a home-computer user with a CompuServe subscription could browse an Associated Press news feed, see bulletins from the National Weather Service, or interact with other subscribers through a "CB simulator" chat room, private e-mail, public bulletin board messaging, or rudimentary multiplayer gaming. Each of these activities was commissioned and tailored by CompuServe. Even if a subscriber knew how to program the modified DEC PDP-10 mainframes on which CompuServe itself ran, or if an outside company wanted to develop on those mainframes a new service that might appeal to CompuServe subscribers, making such applications available was impossible without CompuServe's approval. Between 1984 and 1994, as the service grew from 100,000 subscribers to almost 2 million, its core functionalities remained largely unchanged. CompuServe and its peers assumed that nearly all innovation in their services would take place at the center of the network rather than on its fringes, and they were slow to innovate.

The proprietary networks blew it. They thought that they were competing against one another, and their executives were still pondering whether AOL would beat CompuServe when the Internet subsumed them all.

The Net did so in large part because of its "hourglass" architecture and the philosophy behind it. The Internet's designers were principally university and government researchers and employees of information technology companies who worked on the project pro bono. The network design was an hourglass because it was broad at the bottom and top and narrow in the middle. At the bottom was the physical layer—the actual wires or radio waves through which the network would operate. At the top was the applications layer—the uses to which the network would be put. Here, the designers were agnostic; they were interested in all sorts of uses, and had no business model meant to capitalize on the success of one application over another. In the middle was a tightly designed set of protocols, publicly available to all to adopt, that tied together the top and the bottom. Any device could be taught to "speak" Internet Protocol, be assigned a numeric identity, and then be able to communicate directly with any other device on the network, unmediated by any authority.
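
What that narrow waist means in practice can be sketched in a few lines: a program simply asks the network to carry bytes to a numbered endpoint, and the network neither knows nor cares which application those bytes belong to. (The peer name and port below are placeholders, not a real service.)

    # A minimal sketch of life atop the hourglass: the network just moves bytes
    # between endpoints; the application defines what they mean. The peer name
    # and port here are hypothetical placeholders.
    import socket

    with socket.create_connection(("peer.example.net", 8080), timeout=5) as conn:
        conn.sendall(b"any application-defined message\n")  # opaque to the network's middle
        print(conn.recv(4096).decode(errors="replace"))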

Hourglass architecture is extraordinary: It makes network connectivity an invisible background commodity, separating the act of communicating from the applications that can shape and channel this communication, scattering the production of the latter to the four winds. In other words, the Net passes data along, and leaves it to millions of programmers to decide what users might want to do with that data. An open Internet allowed a set of researchers to deploy the World Wide Web and its browsers without any coordination necessary from the Internet's protocol makers or the Internet service providers (ISPs) that operated networks built on those protocols. AOL, CompuServe, and the other proprietary services couldn't compete with that, once third parties came up with and shared user-friendly services like e-mail and web browsing and the software to support them.

Interestingly, a near-parallel development of hourglass architecture had unfolded for the flagship device that would soon connect to the Internet: the PC. The structure of modern personal computer technology has not changed fundamentally since its introduction in the late 1970s. The typical PC vendor of that time sold a minimal base of hardware that could integrate with existing consumer equipment. For example, Apple, Atari, Timex/Sinclair, and Texas Instruments computers could use television sets as video monitors and audio cassette players to store and retrieve both data and software programs. With the addition of a cheap cassette (and later, a diskette), users could run new software written by others, software unavailable when the consumer obtained the computer. PCs could not only be updated to perform new functions, but they often required some form of externally loaded program to be of any use at all.

This was a natural way to make a PC, because PC makers did not presume to know what their customers would actually do with their products. Manufacturers of other consumer technology had no such uncertainty, so they produced devices that were programmed at the factory and frozen in how they could be used. Consider a standard analog adding machine or a digital calculator or even the computer firmware embedded inside Mr. Coffee so that it can begin to brew the minute its owner wants it to. They are all hardware and no software (or, as some might say, their software is inside their hardware). PCs were different. The manufacturer could write software for new purposes after the computer left the factory, and a consumer needed only to know how to load in the cassette, diskette, or cartridge containing the software to enjoy its benefits. More important, PCs were designed to run software written by entities other than the PC manufacturer or firms with which the manufacturer had special arrangements. Early PC manufacturers went so far as to include programming languages on their computers so that users could learn to write software for use on their (and potentially others') machines.

For all these reasons, PCs were genuinely adaptable to any number of undertakings by people with very different backgrounds and goals. Word processing represented a significant leap over typing; dynamically updated spreadsheets were immensely more powerful than static tables of numbers generated through the use of calculators; database software put index cards and more sophisticated paper-based filing systems to shame. Entirely new applications, like video games, pioneered additional kinds of leisure time.

While computer operating systems tended to change relatively slowly as their manufacturers released occasional updates, the applications running on top of these systems evolved and proliferated at amazing speed. The computer and its operating system were a product, not a service, and while the product life might be extended through upgrades, the value in the computer lay in its ability to be a platform for further innovation—running applications from a variety of sources, including the user, and leveraging the embedded power of the operating system for a breathtaking range of tasks. The real action, in short, was not at the core of the computer but on its periphery, or "end."

WHILE A CONSUMER MIGHT FIRST BUY A PC FOR, SAY, e-mail and word processing, it can do an amazing range of other things. Skype provides free Internet telephone calling. Wikipedia offers encyclopedic knowledge on a staggering array of subjects—information generated as a result of a standing invitation to the world at large to create or edit any page of Wikipedia at any time. A handful of people recently invented "podcasting," the practice of pushing audio content to millions of owners of MP3 players, with listeners able to subscribe to their favorite sources and be automatically updated with new broadcasts. Such backwater experiments are often picked up by corporate heavyweights. Apple has embraced podcasting, incorporating it into an iTunes update, and eBay has purchased Skype for $2.6 billion. Such arcs have been repeated frequently over the 10-year history of the mainstream Net, and the innovations aren't over. The second decade of Internet expansion holds out the prospect of citizens acting not merely as consumers of mass-produced content but as contributors to a freewheeling global multilogue, using simple but powerful means like blogging and music- and video-making to remix the cultural foam in which we float.

A profoundly fortuitous convergence of historical factors has led us to today's marvelous status quo, and many of us (with a few well-known exceptions like record company CEOs and cyber-stalkees) have enjoyed the benefits of the generative Internet/PC grid while being at most inconvenienced by its drawbacks. Unfortunately, this quasi-utopia can't last. The explosive growth of the Internet, both in amount of usage and in the breadth of uses to which it can be put, means we now have plenty to lose if our connectivity goes seriously awry. The same generativity that fueled this growth poses the greatest threat to our connectivity. The remarkable speed with which new software from left field can achieve ubiquity means that well-crafted malware from left field can take down Net-connected PCs en masse. In short, our wonderful PCs are fundamentally vulnerable to a massive cyberattack.

To link to the Internet, online consumers increasingly use always-on broadband connections and ever more powerful computers—computers that are therefore capable of creating far more mischief should they be compromised. Many viruses and worms now do more than propagate, even if they fall short of erasing hard drives. Take, for instance, the transmission of spam. It is now commonplace to find viruses that are capable of turning a PC into its own Internet server, sending spam by the thousands or millions to e-mail addresses harvested from the hard disk of the machine itself or from randomized Web searches—all this happening in the background while the PC's owner notices no difference in the machine's behavior.

In an experiment conducted in the fall of 2003, a researcher named Luke Dudney connected to the Internet a PC that simulated running an "open proxy," a condition in which a PC acts to forward Internet traffic from others. Within nine hours, the computer had been found by spammers, who began attempting to send mail through it. In the 66 hours that followed, they requested that Dudney's computer send 229,468 individual messages to 3,360,181 would-be recipients. (Dudney's computer pretended to forward the spam, but threw it away.)
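
A bit of back-of-the-envelope arithmetic (my own, derived from the figures above) shows what that volume means for a single consumer machine:

    # Rates implied by the Dudney experiment's figures; the averages below are
    # simple arithmetic on the numbers reported above.
    messages, recipients, hours = 229_468, 3_360_181, 66
    print(f"{messages / hours:,.0f} messages per hour")               # ~3,477
    print(f"{recipients / hours:,.0f} intended recipients per hour")  # ~50,912
    print(f"{recipients / messages:.1f} recipients per message")      # ~14.6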

A massive set of always-on powerful PCs with high bandwidth run by unskilled users is a phenomenon new to the 21st century. Today's viruses are highly and near-instantly communicable, capable of sweeping through a substantial worldwide target population in a matter of hours. The symptoms may reveal themselves to users instantly, or the virus could spread for a while without demonstrating any symptoms, at the choice of the virus author. Even protected systems can fall prey to a widespread infection, since the propagation of a virus can disrupt network connectivity. Some viruses are programmed to attack specific network destinations by seeking to access them again and again. Such a "distributed denial-of-service" attack can disrupt access to all but the most well-connected and well-defended servers.

If the Internet—more precisely, the set of PCs attached to it—experiences a crisis of this sort, consumers may begin to clamor for the kind of reliability in PCs that they demand of nearly every other appliance, whether a coffeemaker, a television set, a Blackberry, or a mobile phone. This reliability can come only from a clamp on the ability of code to instantly run on PCs and spread to other computers, a clamp applied either by the network or by the PC itself. The infrastructure is already in place to apply such a clamp. Both Apple and Microsoft, recognizing that most PCs these days are Internet-connected, now configure their operating systems to be updated regularly by the companies. This stands to turn vendors of operating-system products into service-providing gatekeepers, possessing the potential to regulate what can and cannot run on a PC. So far, consumers have chafed at clamps that would limit their ability to copy digital books, music, and movies; they are likely to look very differently at those clamps when their PCs are crippled by a worm.

To be effective, a clamp must assume that nearly all executable code is suspect until the operating system manufacturer or some other trusted authority determines otherwise. This creates, in essence, a need for a license to code, one issued not by governments but by private gatekeepers. Like a driver's license, which identifies and certifies its holder, a license to code could identify and certify software authors. It could be granted to a software author as a general form of certification, or it could be granted for individual software programs. Were a licensed software author to create a program that contained a virus, the author's license would be revoked.
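
In rough outline, and purely as a hypothetical sketch rather than a description of any vendor's actual mechanism, such a gatekeeper check could be as blunt as hashing a program and consulting an approved list and a revocation list maintained by the trusted authority:

    # A hypothetical sketch of a "license to code" check: hash the program and
    # consult lists kept by a trusted authority. The list entries are placeholders.
    import hashlib
    from pathlib import Path

    APPROVED = {"<sha256 of a certified program>"}
    REVOKED = {"<sha256 of a program whose license was revoked>"}

    def may_run(program: Path) -> bool:
        digest = hashlib.sha256(program.read_bytes()).hexdigest()
        if digest in REVOKED:
            return False           # certification withdrawn, e.g. after a virus was found
        return digest in APPROVED  # anything not explicitly approved stays suspect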

While licenses to code would likely not be issued by governments, governments could certainly wield an unwelcome influence over their use. Were an approved piece of software to become a conduit for some form of undesirable behavior, a government could demand that a gatekeeper treat the software as it would a virus—revoking its license and perhaps also the license of its author. Different governments could come to different judgments about the software and ask gatekeepers to block its operation, depending on the geographical location of a host PC. What regulator wouldn't be tempted to instruct Microsoft and Apple to decommission Skype if its usage evades certain long distance tariffs? Or to forbid licensing for software like Grokster, which allows users to share copyrighted music and video files peer to peer without going through a central server, if Grokster should fail to include filters to avoid copyright infringement? (Grokster agreed to shut down its website in November 2005, but existing copies of Grokster software in users' hands are unaffected by the move.)

The downside to licensing may not be obvious, but it is enormous. Clamps and licenses managed by self-interested operating-system makers would have a huge impact upon the ability of new applications to be widely disseminated. What might seem like a gated community—offering safety and stability to its residents, and a predictable landlord to complain to when something goes wrong—would actually be a prison, isolating its users and blocking their capacity to try out and adopt new applications. As a result, the true value of these applications would never be fully appreciated, since so few people would be able to use them. Techies using other operating systems would still be able to enjoy generative computing, but the public would no longer automatically be brought along for the ride.

THE MODERN INTERNET IS AT A WATERSHED MOMENT. Its generativity, and that of the PC, has produced extraordinary progress in the development of information technology, which in turn has led to extraordinary progress in the development of forms of creative and political expression. Regulatory authorities have applauded this progress, but many are increasingly concerned by its excesses. To them, the experimentalist spirit that made the most of this generativity seems out of place now that millions of business and home users rely on the Internet and PCs to serve scores of functions vital to everyday life.

The challenge facing those interested in a vibrant global Internet is to maintain that experimentalist spirit in the face of these pressures.

One path leads to two Internets: a new, experimentalist one that would restart the generative cycle among a narrow set of researchers and hackers, and that would be invisible and inaccessible to ordinary consumers; and a mainstream Internet where little new would happen and existing technology firms would lock in and refine existing applications.

Another, more inviting path would try to maintain the fundamental generativity of the existing grid while solving the problems that tend to incite the enemies of the Internet free-for-all. It requires making the grid more secure—perhaps by making some of the activities to which regulators most object more regulable—while continuing to enable the rapid deployment of the sort of amateur programming that has made the Internet such a stunning success.

How might this be achieved? The starting point for preserving generativity in this new computing environment should be to refine the principle of "end-to-end neutrality." This notion, sacred to Internet architects, holds that the Internet's basic purpose is to indiscriminately route packets of data from point A to point Z, and that any added controls or "features" typically should be incorporated only at the edges of the network, not in the middle. Security, encryption, error checking—all these actions should be performed by smart PCs at the "ends" rather than by the network to which they connect. This is meant to preserve the flexibility of the network and maximum choice for its users.

But the problem with end-to-end neutrality on a consumer Internet is that it places too much responsibility on the people least equipped to safeguard our informational grid: PC users. Mainstream users are not well positioned to painstakingly tweak and maintain their own machines against attack, nor are the tools available to them adequate to the task. People can load up on as much antivirus software as they want, but such software does little before a security flaw is uncovered. It offers no protection when a PC user runs a new program that turns out to be malware, or if one of the many always-running "automatic update" agents for various pieces of PC software should be compromised, allowing a hacker to signal all PCs configured for these updates that they should, say, erase their own hard drives.

End-to-end neutrality should be but an ingredient in a new "generativity principle," a rule of thumb that asks that modifications to the PC/Internet grid be made where they will do the least harm to its generative possibilities. Under this principle, it may be more sensible to try to screen out major viruses through ISP-operated network gateways (violating end-to-end neutrality) than through constantly updated PCs, or to ask ISPs to rapidly quarantine machines that have clearly become zombies, operating outside the control of their users. (Zombie machines can be identified when they undertake activities like sending out tens of thousands of pieces of spam; many ISPs refuse to do anything about it when spam recipients complain.) While this type of network screening theoretically opens the door to additional network filtering that is undesirable to creators and others, that risk should be balanced against the risks and limitations when PCs must be operated as services rather than products—the way that they are becoming, in part because of the virus threat. Now that the PC and the Internet are so inextricably intertwined, it is not enough for network engineers to worry only about network openness and to assume that the endpoints can take care of themselves.
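
As a concrete illustration of this kind of coarse screening (a hypothetical sketch, with an assumed threshold rather than any provider's actual policy), a gateway could quarantine customer machines whose outbound mail volume far exceeds anything a person would send by hand:

    # A hypothetical sketch of ISP-side zombie detection: any customer machine
    # opening vastly more outbound mail connections per hour than a human would
    # is a quarantine candidate. The threshold is an illustrative assumption.
    from collections import Counter

    HOURLY_SMTP_LIMIT = 500  # assumed cutoff; zombies send tens of thousands

    def likely_zombies(outbound_smtp_sources):
        """Given source IPs of the past hour's outbound mail connections,
        return those whose volume suggests the PC is no longer under its
        owner's control."""
        counts = Counter(outbound_smtp_sources)
        return {ip for ip, n in counts.items() if n > HOURLY_SMTP_LIMIT}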

Another way to reduce pressure on such institutional and technological gatekeepers is to make direct responsibility more feasible. We can ease the development of technologies by which people can stand behind particular pieces of code—vouching that they have written it, or that they have found it to function properly. To do this right will likely require building on the rudimentary but successful ratings and classification systems of services like eBay or on some advanced distributed spam filters. That would allow users of the Net to quickly share with each other judgments about the reliability of the bits of code that they each encounter, without placing any one entity in the laborious and inherently corrupting position of having to review and rate code.
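
The aggregation step such a system would need is simple; the hard parts, such as identity, signatures, and resistance to ballot-stuffing, are omitted in this hypothetical sketch, which only shows ratings keyed to a program's hash and tallied before a user decides whether to trust unfamiliar code:

    # A hypothetical sketch of distributed vouching: ratings are keyed to a
    # program's hash and tallied. Identity and tamper-resistance are omitted.
    from collections import defaultdict

    ratings = defaultdict(list)  # program hash -> votes (+1 vouched for, -1 reported harmful)

    def vouch(program_hash, vote):
        ratings[program_hash].append(vote)

    def reputation(program_hash):
        votes = ratings[program_hash]
        return sum(votes) / len(votes) if votes else 0.0  # 0.0 means unknown: treat with caution

    vouch("abc123", +1); vouch("abc123", +1); vouch("abc123", -1)
    print(round(reputation("abc123"), 2))  # 0.33: mostly vouched for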

Finally, we must take seriously the delicate prospect of re-engineering the way the PC works. Ideas are beginning to be floated about making PCs that can easily simulate several distinct "virtual machines" at the same time. Users might have a green zone section of the PC that runs only approved and tested applications and a red zone where they can try out new and potentially unreliable services. A crash within the red zone could be isolated from the rest of the PC, and a simple procedure offered for reconstituting a malware-ridden red zone. These ideas, still in their infancy, deserve the same intensive investment and collaborative effort that gave rise to Internet 1.0.
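
The policy side of that split is easy to sketch, even though the hard work lies in the virtual-machine plumbing underneath. In this hypothetical sketch, approved applications are routed to the protected green zone and everything else to a disposable red zone that can be wiped after an infection:

    # A hypothetical sketch of the green-zone/red-zone routing decision; the
    # application names are placeholders, and real isolation would come from
    # virtual-machine software rather than this bookkeeping.
    APPROVED_APPS = {"word-processor", "tax-software"}
    red_zone_apps = set()

    def zone_for(app):
        if app in APPROVED_APPS:
            return "green"
        red_zone_apps.add(app)
        return "red"

    def rebuild_red_zone():
        """The 'simple procedure' for reconstituting a malware-ridden red zone."""
        red_zone_apps.clear()

    print(zone_for("tax-software"))         # green
    print(zone_for("experimental-widget"))  # red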

Collaborative is the key word. What is needed at this point, above all else, is a 21st-century international Manhattan Project that brings together people of good faith in government, academia, and the private sector for the purpose of shoring up the miraculous information technology grid that is too easy to take for granted and whose seeming self-maintenance has lulled us into undue complacency. The group's charter would embrace the ethos of amateur innovation while being clear-eyed about the ways in which the research Internet and hobbyist PC of the 1970s and 1980s are straining under the pressures of serving as the world's information backbone.

The transition to a networking infrastructure that is more secure yet roughly as dynamic as the current one will not be smooth. A decentralized and, more important, exuberantly anarchic Internet does not readily lend itself to collective action. But the danger is real and growing. We can act now to correct the vulnerabilities and ensure that those who wish to contribute to the global information grid can continue to do so without having to occupy the privileged perches of established firms or powerful governments, or conduct themselves outside the law.

Or we can wait for disaster to strike and, in the time it takes to replace today's PCs with a 21st-century Mr. Coffee, lose the Internet as we know it.

Jonathan Zittrain is Professor of Internet Governance and Regulation at Oxford University, the Berkman Visiting Professor for Entrepreneurial Legal Studies at Harvard Law School, and a co-founder of Harvard's Berkman Center for Internet & Society. This article (© 2005 Jonathan Zittrain) is adapted from his forthcoming book, The Future of the Internet—And How to Stop It.
