Dr. Niall McMahon
Table of Contents
If you print these slides, consider using two pages per sheet, although don't worry too much about it!
In a summer 1996 interview with the World Wide Web Journal, in an issue called The Web After Five Years, Tim Berners-Lee was optimistic. He looked forward to greater bi-directionality on the web and less friction in the publishing process.
In 1998, Jaron Lanier wrote in "Taking stock: So, what's changed in the last five years?" (Wired, January 1998, p. 60) that "[t]he Internet has created the most precise mirror of people as a whole that we've yet had ... we can breathe a sigh of relief. We are basically OK". The new century began with optimism about the world wide web.
In the same issue of the World Wide Web Journal, Berners-Lee discussed the challenges. One was to keep the web open, free from exploitation by a single dominant commercial player. Berners-Lee acknowledged this risk but was hopeful that the incentives to keep the web from fragmenting in this way would be enough; that working together would trump fragmentation.
In the beginning, at the turn of the millennium, people were still figuring out how to make money online. The model that won out - at least until recently - was one of surveillance, advertising and social validation. In return for services that cost nothing, or very little, people would consent to sharing their personal data and viewing advertisements.
When Google launched Gmail in 2004, the aim to reach as many people as possible meant that the model of offering a free service supported by advertising won out over a paid service.
As a step further, however, Gmail required users to grant access to their personal data, their emails, to enable targeted delivery of advertising. This was controversial at the time, despite Google's assurances that only computer systems, not people, would review emails.
"I have a new addiction. It is powerful. It is disturbing. It is thefacebook.com". March 25th 2004, The Daily Pennsylvanian. Quoted in Facebook's pitch deck.
Additionally, trust online seems in low supply, with bad actors and misinformation expected as a matter of course.
Tim Berners-Lee, in 2019, wrote that he broadly sees three sources of dysfunction affecting today's web:
Ted Chiang makes a nice point about frames of thought and how "most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it's hard to distinguish the two" and "that we are sort of unable to question capitalism".
Any solution will be a structural one. Perhaps nothing less than a technological and social overhaul will have an impact.
A second lecture, to follow.
I broadly see three sources of dysfunction affecting today's web:
- Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behaviour, and online harassment.
- System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation.
- Unintended negative consequences of benevolent design, such as the outraged and polarised tone and quality of online discourse.
Though they compete in different markets, most of the tech giants share at least one business model: surveillance. Technology conglomerates collect information about users from each of their dozens of smaller services, synthesize those data into profiles, and use those profiles to target ads. They also gather information about their competitors through app stores and third-party tracking beacons, then target them for acquisition or destruction.
"The problem with email now is that the social conventions have gotten very bad," Buchheit told me once we'd made contact. "There's a 24/7 culture, where people expect a response. It doesn't matter that it's Saturday at 2 a.m. - people think you're responding to email. People are no longer going on vacation. People have become slaves to email." -- Paul Buchheit
The question of new business models for content creators on the Internet is a profound and difficult topic in itself, but it must at least be pointed out that writing professionally and well takes time and that most authors need to be paid to take that time. In this regard, blogging is not writing. For example, it's easy to be loved as a blogger. All you have to do is play to the crowd. Or you can flame the crowd to get attention. Nothing is wrong with either of those activities. What I think of as real writing, however, writing meant to last, is something else. It involves articulating a perspective that is not just reactive to yesterday's moves in a conversation.
And then, shortly after the turn of the century, just when the rest of the world was turning on to Web 2.0, Lanier turned against it. With a broadside in Wired called "One-Half of a Manifesto," he attacked the idea that "the wisdom of the crowd" would result in ever-upward enlightenment. It was just as likely, he argued, that the crowd would devolve into an online lynch mob.
The fiasco I want to talk about is the World Wide Web, specifically, the advertising-supported, "free as in beer" constellation of social networks, services, and content that represents so much of the present day web industry. I've been thinking of this world, one I've worked in for over 20 years, as a fiasco since reading a lecture by Maciej Cegłowski, delivered at the Beyond Tellerrand web design conference. Cegłowski is an important and influential programmer and an enviably talented writer. His talk is a patient explanation of how we’ve ended up with surveillance as the default, if not sole, internet business model.
Given the network effect - that Uber only works if everyone is on it - a thousand flowers were never going to bloom. There's only room for one and it's a Venus fly trap. The same libertarian spirit also instituted the peculiar economics of the internet: software had to be free, because only that way would it be open ("everyone knew that software would eventually become more important than law, so the prospect of a world running on hidden code was dark and creepy"). Yet that meant programmers wouldn't be paid: they would create free code and make money by solving problems later.
There's a lot of dark stuff ...
... [Then] there's tech "addiction," the rising worry that adults and kids are getting hooked on smartphones and social networks despite our best efforts to resist the constant desire for a fix. And all over the internet, general fakery abounds - there are millions of fake followers on Twitter and Facebook, fake rehab centers being touted on Google and even fake review sites to sell you a mattress.
So who is the central villain in this story, the driving force behind much of the chaos and disrepute online?
We live in an age of manipulation. An extensive network of commercial surveillance tracks our every move and a fair number of our thoughts. That data is fed into sophisticated artificial intelligence and used by advertisers to hit us with just the right sales pitch, at just the right time, to get us to buy a toothbrush or sign up for a meal kit or donate to a campaign. The technique is called behavioral advertising, and it raises the frightening prospect that we’ve been made the subjects of a highly personalized form of mind control.
Across the Atlantic, there is already a model that American reformers could choose to follow. In December, the E.U. and U.K. each proposed sweeping new laws that would force tech companies to make their algorithms more transparent and, eventually, accountable to democratically elected lawmakers.
Even under Biden, one significant roadblock remains: the fact that business as usual is highly profitable for the Big Tech companies. Social media's appeal is in creating community, Zuboff notes. "But Facebook's $724 billion market capitalization doesn't come from connecting us," she says. "It comes from extracting from our connection."
Fixes to these problems won't happen overnight. Phillips, the Syracuse professor, offers a metaphor of the platforms as factories leaking toxic waste into our democracies.
Disinformation and other forms of manipulative, antidemocratic communication have emerged as a problem for Internet policy. While such operations are not limited to electoral politics, efforts to influence and disrupt elections have created significant concerns. Data-driven digital advertising has played a key role in facilitating political manipulation campaigns. Rather than stand-alone incidents, manipulation operations reflect systemic issues within digital advertising markets and infrastructures. Policy responses must include approaches that consider digital advertising platforms and the strategic communications capacities they enable. At their root, these systems are designed to facilitate asymmetrical relationships of influence.
Because if we’re asking whether algorithms are a threat to democracy, the answer is surely yes, they can be – but they don’t have to be. Because our democracies have the power to protect themselves, with rules that make sure algorithms work the way that they should. And in the last few months and years, I think a consensus has been growing that the time has come for us to put these rules into place.
Greek police are due to receive gear that allows for real-time face recognition during police patrols. Despite concerns that the system could seriously affect civil liberties, details about the project are scarce.