Outline notes for a critical assessment of the social experience of the internet. These were compiled as part of my lectures on digital innovation, delivered to first-year students in the School of Computing at Dublin City University.
In the early days of the World Wide Web, in the early 90s, the founding cohort were hopeful. In a summer 1996 interview with the World Wide Web Journal, in an issue called The Web After Five Years, Tim Berners-Lee was optimistic. He looked forward to greater bi-directionality on the web and less friction in the publishing process. In 1998, Jaron Lanier wrote that, "[t]he Internet has created the most precise mirror of people as a whole that we've yet had ... we can breathe a sigh of relief. We are basically OK". The new century began with optimism about the world wide web.
Still, the past was usually more complex than people remember. In the same issue of the World Wide Web Journal, Berners-Lee discussed the challenges. One was to keep the web open, free from exploitation by one dominant commercial player. Berners-Lee acknowledged this risk but was hopeful that the incentives to keep the web from fragmenting in this way would be enough: that working together would trump fragmentation.
In the beginning, at the turn of the millennium, people were still figuring out how to make money online. And the model that won out - at least until recently - was the surveillance and advertising model. In return for services that cost nothing, or very little, people would consent to sharing their personal data and viewing advertisements.
For example, when Google launched Gmail in 2004, the aim of reaching as many people as possible meant that the model of a free service supported by advertising won out over a paid service. Going a step further, however, Gmail required users to allow access to their personal data, their emails, to enable targeted delivery of advertising. This was controversial at the time, despite Google's assurances that only computer systems would review emails.
And despite Tim Berners-Lee's best hopes, the battle for the Internet is far from settled in favour of a future of openness and egalitarianism. Today, it can be argued that:
Improving behaviour at an individual level and as a society is one goal. This is mostly a structural issue although, as always, personal choice can help. Negative externalities created by corporations must be internalised as costs. Tim Berners-Lee's Contract for the Web is a good statement of intent but nothing less than a technological and social overhaul will have an impact.
Making the internet better is perhaps one of the most important areas for technical innovation for the future, alongside solving the problems of climate change and AI.
Though they compete in different markets, most of the tech giants share at least one business model: surveillance. Technology conglomerates collect information about users from each of their dozens of smaller services, synthesize those data into profiles, and use those profiles to target ads. They also gather information about their competitors through app stores and third-party tracking beacons, then target them for acquisition or destruction.
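The profile-building described above can be caricatured in a few lines. The sketch below merges per-user events from several services into one interest profile; every service name, user ID, and keyword here is invented for illustration, not taken from any real platform.

```python
from collections import defaultdict

# Hypothetical event logs from three services run by one conglomerate.
# Each event is a (user_id, interest_keyword) pair.
search_events = [("u1", "running shoes"), ("u1", "marathon"), ("u2", "guitars")]
video_events = [("u1", "marathon"), ("u2", "guitar lessons")]
email_signals = [("u1", "running shoes"), ("u2", "guitars")]

def build_profiles(*event_streams):
    """Synthesize events from every service into per-user interest counts."""
    profiles = defaultdict(lambda: defaultdict(int))
    for stream in event_streams:
        for user_id, keyword in stream:
            profiles[user_id][keyword] += 1
    return profiles

def top_interest(profiles, user_id):
    """Return the user's most frequent interest -- the ad-targeting signal."""
    interests = profiles[user_id]
    return max(interests, key=interests.get)

profiles = build_profiles(search_events, video_events, email_signals)
print(top_interest(profiles, "u2"))
```

The point of the sketch is the merge itself: no single service's log reveals as much as the combined profile does, which is why consolidation across dozens of smaller services matters commercially.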
“The problem with email now is that the social conventions have gotten very bad,” Buchheit told me once we’d made contact. “There’s a 24/7 culture, where people expect a response. It doesn’t matter that it’s Saturday at 2 a.m. -- people think you’re responding to email. People are no longer going on vacation. People have become slaves to email.” -- Paul Buchheit
The question of new business models for content creators on the Internet is a profound and difficult topic in itself, but it must at least be pointed out that writing professionally and well takes time and that most authors need to be paid to take that time. In this regard, blogging is not writing. For example, it's easy to be loved as a blogger. All you have to do is play to the crowd. Or you can flame the crowd to get attention. Nothing is wrong with either of those activities. What I think of as real writing, however, writing meant to last, is something else. It involves articulating a perspective that is not just reactive to yesterday's moves in a conversation.
And then, shortly after the turn of the century, just when the rest of the world was turning on to Web 2.0, Lanier turned against it. With a broadside in Wired called “One-Half of a Manifesto,” he attacked the idea that “the wisdom of the crowd” would result in ever-upward enlightenment. It was just as likely, he argued, that the crowd would devolve into an online lynch mob.
The fiasco I want to talk about is the World Wide Web, specifically, the advertising-supported, “free as in beer” constellation of social networks, services, and content that represents so much of the present day web industry. I’ve been thinking of this world, one I’ve worked in for over 20 years, as a fiasco since reading a lecture by Maciej Cegłowski, delivered at the Beyond Tellerrand web design conference. Cegłowski is an important and influential programmer and an enviably talented writer. His talk is a patient explanation of how we’ve ended up with surveillance as the default, if not sole, internet business model.
Given the network effect – that Uber only works if everyone is on it – a thousand flowers were never going to bloom. There’s only room for one and it’s a Venus fly trap. The same libertarian spirit also instituted the peculiar economics of the internet: software had to be free, because only that way would it be open (“everyone knew that software would eventually become more important than law, so the prospect of a world running on hidden code was dark and creepy”). Yet that meant programmers wouldn’t be paid: they would create free code and make money by solving problems later.
There’s a lot of dark stuff ....
... [Then] there’s tech “addiction,” the rising worry that adults and kids are getting hooked on smartphones and social networks despite our best efforts to resist the constant desire for a fix. And all over the internet, general fakery abounds — there are millions of fake followers on Twitter and Facebook, fake rehab centers being touted on Google and even fake review sites to sell you a mattress.
So who is the central villain in this story, the driving force behind much of the chaos and disrepute online?
We live in an age of manipulation. An extensive network of commercial surveillance tracks our every move and a fair number of our thoughts. That data is fed into sophisticated artificial intelligence and used by advertisers to hit us with just the right sales pitch, at just the right time, to get us to buy a toothbrush or sign up for a meal kit or donate to a campaign. The technique is called behavioral advertising, and it raises the frightening prospect that we’ve been made the subjects of a highly personalized form of mind control.
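The selection step in behavioural advertising can be sketched as a toy auction: given a profile of inferred interests, score each candidate ad by relevance times advertiser bid and serve the winner. The profile, ads, and weights below are all invented for illustration (the ad names simply echo the examples in the paragraph above).

```python
# Hypothetical behavioural-advertising auction: relevance x bid scoring.
user_profile = {"fitness": 0.9, "cooking": 0.2, "politics": 0.4}

candidate_ads = [
    {"name": "toothbrush", "topic": "fitness", "bid": 0.50},
    {"name": "meal kit", "topic": "cooking", "bid": 2.00},
    {"name": "campaign", "topic": "politics", "bid": 1.00},
]

def select_ad(profile, ads):
    """Rank ads by (inferred interest) x (advertiser bid); serve the top one."""
    def score(ad):
        return profile.get(ad["topic"], 0.0) * ad["bid"]
    return max(ads, key=score)

print(select_ad(user_profile, candidate_ads)["name"])  # prints "toothbrush"
```

Even in this caricature, the surveillance-derived profile, not the user's stated preference, decides what is shown: a lower bid on a closely tracked interest beats a higher bid on a weakly tracked one.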
Across the Atlantic, there is already a model that American reformers could choose to follow. In December, the E.U. and U.K. each proposed sweeping new laws that would force tech companies to make their algorithms more transparent and, eventually, accountable to democratically elected lawmakers.
Even under Biden, one significant roadblock remains: the fact that business as usual is highly profitable for the Big Tech companies. Social media’s appeal is in creating community, Zuboff notes. “But Facebook’s $724 billion market capitalization doesn’t come from connecting us,” she says. “It comes from extracting from our connection.”
Fixes to these problems won’t happen overnight. Phillips, the Syracuse professor, offers a metaphor of the platforms as factories leaking toxic waste into our democracies.
Disinformation and other forms of manipulative, antidemocratic communication have emerged as a problem for Internet policy. While such operations are not limited to electoral politics, efforts to influence and disrupt elections have created significant concerns. Data-driven digital advertising has played a key role in facilitating political manipulation campaigns. Rather than stand-alone incidents, manipulation operations reflect systemic issues within digital advertising markets and infrastructures. Policy responses must include approaches that consider digital advertising platforms and the strategic communications capacities they enable. At their root, these systems are designed to facilitate asymmetrical relationships of influence.
Because if we’re asking whether algorithms are a threat to democracy, the answer is surely yes, they can be – but they don’t have to be. Because our democracies have the power to protect themselves, with rules that make sure algorithms work the way that they should. And in the last few months and years, I think a consensus has been growing that the time has come for us to put these rules into place.
Greek police are due to receive gear that allows for real-time face recognition during police patrols. Despite concerns that the system could seriously affect civil liberties, details about the project are scarce.
I broadly see three sources of dysfunction affecting today’s web:
- Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behaviour, and online harassment.
- System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation.
- Unintended negative consequences of benevolent design, such as the outraged, polarised tone of much online discourse.
The Web was designed to bring people together and make knowledge freely available. It has changed the world for good and improved the lives of billions. Yet, many people are still unable to access its benefits and, for others, the Web comes with too many unacceptable costs.
Everyone has a role to play in safeguarding the future of the Web. The Contract for the Web was created by representatives from over 80 organizations, representing governments, companies and civil society, and sets out commitments to guide digital policy agendas. To achieve the Contract’s goals, governments, companies, civil society and individuals must commit to sustained policy development, advocacy, and implementation of the Contract text.
The problem does not lie in the lack of effective rules to govern and improve the internet. Instead, the problem lies in human nature. Humans are gullible, cognitively lazy and driven by prejudices. This means they easily fall for any nonsense, provided it is presented in an appealing enough fashion. Social networks the world over have realized this and are happy to exploit it for financial gain — or to further the political agenda of their masters.
Under consideration is Section 230 of the Communications Decency Act, a provision originally designed to encourage tech companies to clean up "offensive" online content. At the dawn of the commercial internet, federal lawmakers wanted the internet to be open and free, but they realized that such openness risked noxious activity. In their estimation, tech companies were essential partners in any effort to "clean up the Internet."
The problems of corporate concentration and privacy on the Internet are inextricably linked. A new regime of interoperability can revitalize competition in the space, encourage innovation, and give users more agency over their data; it may also create new risks to user privacy and data security. This paper considers those risks and argues that they are outweighed by the benefits. New interoperability, done correctly, will not just foster competition, it can be a net benefit for user privacy rights.