The internet was supposed to be a utopia. 50 years on, what happened?

IT BEGAN – some would say, as it meant to go on – with an error message. Late on the evening of 29 October 1969, student programmer Charles Kline attempted to send some text from a computer at the University of California, Los Angeles (UCLA), to another at the Stanford Research Institute, more than 500 kilometres up the Californian coast.

“LOGIN”, it was supposed to say. Kline got as far as “LO” before the system crashed. The full message was resent an hour later. What would eventually morph into the largest communications network in human history had made its debut: the internet.

It is fair to say that no one there quite appreciated the full scope of what had happened. “We knew we were creating an important new technology that we expected would be of use to a segment of the population, but we had no idea how truly momentous an event it was,” Leonard Kleinrock, Kline’s supervisor, later said. Fifty years on, we are still only just beginning to come to terms with the consequences.

The Advanced Research Projects Agency Network, or ARPANET, as the internet’s precursor is better known, was an academic project intended to allow computers to share information. Funded by the US Department of Defense, it linked the UCLA and Stanford computers as its first two nodes. By December 1969, two others had been installed: at the University of California, Santa Barbara, and the University of Utah in Salt Lake City.

In 1973, ARPANET went international, connecting via satellite to nodes at the Norwegian Seismic Array in Kjeller near Oslo and University College London. Today, a backbone of fibre-optic cables under sea and land, supplemented by satellite links and lower-tech copper telephone wires, ensures near-global coverage (see “The greatest network the world has ever seen: the global internet map”).

Key features of how the modern internet works were there right from these small beginnings. Crucially, there was no centralised control. ARPANET was a distributed “network of networks”. Information, broken into hundreds or thousands of small packets, travelled from node to node through or between these networks. If one node went offline, the information would find another way through, with each packet basing its trajectory on feedback from previous ones.

This concept, known as packet switching, had been developed in the early 1960s by three independent groups of researchers in the UK and US, including Kleinrock’s team. “It made for a very resilient system,” says Johnny Ryan at tech firm Brave, author of A History of the Internet and the Digital Future. “These packets are blindly going through the network trying to find a quick route.”
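
To make the idea concrete, here is a minimal Python sketch of packet switching – our own simplification, not ARPANET’s actual routing code. A message is chopped into numbered packets, each packet independently finds whatever route is currently available through a toy network (named after the first four nodes, though the links are illustrative), and the receiver reassembles the packets by sequence number. Take a node offline and the packets simply go another way.

from collections import deque

# Toy network using the names of the four ARPANET nodes of late 1969;
# the links here are illustrative, not the historical topology.
NETWORK = {
    "UCLA": ["SRI", "UCSB"],
    "SRI": ["UCLA", "UTAH"],
    "UCSB": ["UCLA", "UTAH"],
    "UTAH": ["SRI", "UCSB"],
}

def find_route(src, dst, offline=frozenset()):
    """Breadth-first search for any route that avoids offline nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in NETWORK[path[-1]]:
            if nxt not in seen and nxt not in offline:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

def send(message, src, dst, packet_size=2, offline=frozenset()):
    # Break the message into small, numbered packets.
    packets = [(seq, message[seq:seq + packet_size])
               for seq in range(0, len(message), packet_size)]
    received = {}
    for seq, data in packets:
        route = find_route(src, dst, offline)
        if route is None:
            raise RuntimeError(f"no route from {src} to {dst}")
        print(f"packet {seq} ({data!r}) via {' -> '.join(route)}")
        received[seq] = data
    # Reassemble by sequence number, whatever order the packets took.
    return "".join(received[seq] for seq in sorted(received))

print(send("LOGIN", "UCLA", "SRI"))
# Knock out an intermediate node and the packets simply route around it.
print(send("LOGIN", "UCLA", "UTAH", offline={"SRI"}))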

Shared communication required a shared language. That came in the form of a set of standards known as TCP/IP – the Transmission Control Protocol and Internet Protocol – first made public by computer scientists Vint Cerf and Bob Kahn in 1974 (see “Internet founder Vint Cerf looks to the next 50 years of his creation”). These covered, among other things, the standard format of data packets and a unified system of addressing so that networks could identify one another. Such IP addresses are still assigned to all networked computers today.
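
Those rules are what every networked machine still speaks. As a rough illustration using Python’s standard socket library – with example.com standing in for any reachable host – the snippet below resolves a name to its IP address and then opens a TCP connection to it, the two halves of TCP/IP in action.

import socket

host = "example.com"   # stand-in host; any reachable server would do

# The Internet Protocol side: every networked machine is reachable
# via a numeric IP address, which DNS resolves from the name.
ip_address = socket.gethostbyname(host)
print(f"{host} resolves to IP address {ip_address}")

# The Transmission Control Protocol side: open a reliable, ordered
# byte stream to that address on port 80 (the conventional web port).
with socket.create_connection((ip_address, 80), timeout=5) as conn:
    local_ip, local_port = conn.getsockname()
    print(f"TCP connection open from {local_ip}:{local_port} "
          f"to {ip_address}:80")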

“That was a breakthrough,” says Wendy Hall, a computer scientist at the University of Southampton, UK. Open and free, TCP/IP enabled anybody to put a computer on the network, and any computer to talk to another. On 1 January 1983, ARPANET adopted it as its standard for “internetworking”, and the modern internet was born.

Initially, it connected just a small bunch of like-minded academics. “It was extremely useful for transferring data and communicating among dispersed groups of scientists,” says Grant Blank at the Oxford Internet Institute in the UK. There was no formal policing, but people rarely misbehaved. As an MIT computing handbook covering network etiquette noted, “Sending electronic mail over the ARPAnet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people.”

Wider still and wider

It was an “anything goes, free-for-all, good-faith approach”, says Ryan – one that has persisted as the internet has grown. “A lot of the roots of issues that exist today come from that period,” says Blank.

The lack of built-in security is one example. “Basically, the default was to trust everyone else,” says Blank. Extending the internet to public use opened it up to fraud and criminal activity. As its use widened, the net’s anonymity, with users identified only by their IP address, also encouraged the spread of misinformation and vitriol.

The widening of access came about through a few pivotal software developments that took advantage of the internet’s open ethos. Chief among them was the World Wide Web. A system of addressing and publishing protocols that allowed documents sitting on different computers to be publicly visible and linked to one another, the web was created by Tim Berners-Lee, then a researcher at the CERN particle physics centre near Geneva, Switzerland, in 1989. Berners-Lee also wrote the first web browser in 1990, and the web was made publicly available in 1991.
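
At its simplest, a web page is a document fetched over the internet that carries the addresses of other documents. The Python sketch below is only an illustration of that idea, not any official tooling: it downloads a page with the standard library and lists the URLs it links to, with example.com used purely as a placeholder.

from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the targets of <a href="..."> tags, i.e. the hyperlinks
    that tie one web document to another."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

url = "https://example.com/"   # placeholder page; any public URL works
with urlopen(url, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(html)
print(f"{url} links to:")
for link in collector.links:
    print(" ", link)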

Although we now use “the internet” and “the web” interchangeably, they aren’t the same thing. “The internet is an infrastructure on which so many things sit,” says Ryan. “The web is just one of them.” Others include email, an early driving force behind many people joining the internet, as well as messaging apps and file-sharing services. As these publicly accessible parts of the internet have grown, so too have parallel, shadier “dark net” services (see “The dark side”).

The rest is modern history. Public web use really took off in the mid-1990s, and with it came the need to organise the available information and make it easily accessible. The development of search engines – especially Google’s PageRank model, whose algorithm ranks a page more highly the more other highly ranked pages link to it – changed the online landscape forever, turning the web into the trove of information it is today.
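
The kernel of PageRank fits in a few lines. The toy Python below is a simplified sketch of the standard power-iteration formulation run on a hypothetical four-page link graph – not Google’s production system – and it captures the basic intuition: a page scores highly when other high-scoring pages link to it.

def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}            # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                      # dangling page: share evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:                                 # pass rank along each link
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph: four pages and who they link to.
web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

for page, score in sorted(pagerank(web).items(), key=lambda x: -x[1]):
    print(f"{page}: {score:.3f}")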

That, as well as the later explosion of social media, paved the way for commercialisation. The sheer number of eyeballs fixed on sites such as Google and Facebook, and the unprecedented ability to gather data about individuals’ likes, preferences and moods and sell it on to advertisers, have made the internet a gold mine for a select few companies. Last year, digital advertising accounted for more than 85 per cent of the $136.8 billion revenue of Google’s parent company, Alphabet.

The rise of powerful business interests marked a shift in direction for the decentralised, permissive guiding ideals of the internet. At the outset, its egalitarian ethos had flattened power and social hierarchies, but the lack of regulation now enables seemingly limitless commercial growth. Profit, once considered antisocial, has become the internet’s raison d’être. Companies that do things well – Alphabet, Amazon, Facebook, Netflix – can achieve vast economies of scale. “You have a winner-takes-all system where a handful of companies can have cascading monopolies,” says Ryan.

With that concentration of power, the internet’s infrastructure has started to centralise, too. The rise of cloud computing, pioneered by companies such as Amazon, means that more information flows via vast server farms where it is stored and processed.

All this suggests a very different next half-century for the internet. “It’s only with regulation, and enforcement of regulation, that you can see this centralising trend reverse in any way,” says Ryan. The internet’s first 50 years have been a story of freewheeling growth, for good and ill. The great question we now face is whether anyone can and should take control of it – and if so how.

The dark side

Right from the beginning, the internet has had its shadowlands: parts of the network deliberately hidden from public view. The original “dark net” comprised nodes on ARPANET that could receive messages but didn’t appear in network lists, acknowledge them or respond to them. Today, perhaps the most prominent example of the dark net is the Tor network, which enables users to disguise their identities and communicate anonymously. An acronym for “the onion router”, Tor involves layers of encryption, analogous to the layers of an onion, that let someone send data without their computer’s unique IP address being revealed.
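
The layering is easier to picture with a toy sketch. The Python below mimics the idea using the third-party cryptography package’s Fernet cipher; it is an illustration of onion-style wrapping, not Tor’s real protocol, and the relay names are invented. The sender wraps the message once per relay, and each relay can strip exactly one layer, learning only the next hop rather than the whole route.

# Toy illustration of onion-style layering, not Tor's real protocol.
# Requires the third-party "cryptography" package (pip install cryptography).
import json

from cryptography.fernet import Fernet

# Three hypothetical relays, each holding its own symmetric key.
ROUTE = ["entry", "middle", "exit"]
keys = {name: Fernet(Fernet.generate_key()) for name in ROUTE}

def wrap(message, route):
    """Encrypt the innermost layer first, so the first relay on the route
    peels the outermost one. Each layer names only the next hop."""
    payload = message.encode("latin-1")
    next_hop = "destination"
    for relay in reversed(route):
        envelope = json.dumps({"next_hop": next_hop,
                               "data": payload.decode("latin-1")})
        payload = keys[relay].encrypt(envelope.encode("latin-1"))
        next_hop = relay
    return payload

def peel(relay, payload):
    """A relay strips exactly one layer: it learns the next hop, nothing more."""
    envelope = json.loads(keys[relay].decrypt(payload).decode("latin-1"))
    return envelope["next_hop"], envelope["data"].encode("latin-1")

onion = wrap("hello, anonymously", ROUTE)
for relay in ROUTE:
    next_hop, onion = peel(relay, onion)
    print(f"{relay} relay forwards the remaining layers towards {next_hop}")
print("delivered:", onion.decode("latin-1"))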

Just as the internet is often confused with the web, the dark net is often muddled with the deep web, the parts of the web that aren’t typically indexed by search engines such as Google. That has many legitimate uses. Indeed, most of us are part of the deep web if we use webmail, a company intranet or a restricted-access social-media profile.

The dark net and Tor are most often associated in people’s minds with illicit trading in commodities like drugs and arms on online markets such as the now-shuttered Silk Road. But the anonymity the dark net affords can also facilitate whistle-blowing and protect users living under authoritarian regimes from censorship – a not inconsiderable boon, given the pressures the internet is under today (see “Tech giants, states or trolls: Who will control tomorrow’s internet?”).