In Focus

Navigating the Google Antitrust Case: Why It Matters for Human Rights, Freedom and Online Safety

By Dr Erica Harper, Head of Research and Policy Studies, Geneva Academy of International Humanitarian Law and Human Rights

As states and the EU attempt to craft a regulatory framework for artificial intelligence (AI), questions remain as to whether legal measures should protect only our right to express our thoughts, beliefs and opinions, or whether that right must also be protected from external manipulation to be fully and meaningfully enjoyed.

In September 2023, a lawsuit filed by the US Department of Justice against Google saw its first day in court. But what is the Google antitrust case about, and why should those of us working in human rights, peace and security be following it?

In the lawsuit, the government’s argument is that Google is leveraging its market power to block competitors from entering the search engine space. Google’s response is that its dominance has nothing to do with unfair practices, but that it simply offers a better product.

If we dig a little deeper, the government’s contention is that consumers’ choice of Internet search provider is a “Hobson’s choice” — a choice that is not actually a choice. Its key piece of evidence is a set of contracts under which Google pays smartphone companies around US$10 billion a year to be the default search engine on their devices. For those who have not read Richard Thaler and Cass Sunstein’s masterful book Nudge, defaults are an incredibly powerful tool because people rarely, if ever, change them.

These arguments are worth unpacking, because the court’s decision will affect all of us, probably more than we realise. Most obviously, the case speaks to core freedoms around the expression of thoughts, beliefs and opinions. But it is also about something new and still emerging: how, through the content we are exposed to, those thoughts, beliefs and opinions can be influenced and, in some cases, manipulated. This raises important questions around the protections offered to users when they enter online spaces. In a free market, these questions are more likely to be interrogated, protections fought for and (for those who want them) alternatives provided. When there is only one choice, the situation is very different.

Today’s online world is ubiquitous. Run-of-the-mill functions — shopping, communicating, working — are increasingly web-based, with old-school options such as walking into a bank increasingly unavailable. This shift is by no means a bad thing. The Internet has transformed access to knowledge, reduced communication costs and boosted workplace productivity. But the mechanism delivering all this is complex, opaque and by no means neutral. Going online is not like walking into a public library, where ideas and information are equally displayed and accessible. Users have to deal with two issues: the content that is available (its nature and veracity) and how they engage with it (or, more accurately, how it engages with them). Competition and regulation — the issues at the heart of the Google antitrust case — are pivotal to both.


Exploring the intricate terrain of online content regulation and accountability

While much of the content available online is sound, some will be misleading, incorrect or dangerous – no surprises here! The authorities are aware of this and employ various strategies to mitigate the risks. States have laws that place limits on free speech by, for example, criminalising content that encourages violence or facilitates terrorism. They may also require a platform operator to remove illegal content within a certain time period or face fines. Such regulation undoubtedly makes being online safer, but it is by no means a foolproof or complete solution. These laws are difficult to enforce, rarely have extraterritorial application, and “workarounds” such as VPNs are increasingly commonplace. Even in the best-case scenario, a lot of harm can be done in a 24-hour period.

Platforms also have their own rules. As a “community”, Google has guidelines applicable to those who wish to be members of that community. For example, its guidelines prohibit impersonating others, posting manipulated media and sharing content that exploits children, and it can ban individuals or groups that do so.

But Google is not a content moderator and is not responsible for what you find when you conduct an online search. Indeed, US courts have made clear that platforms cannot be held liable for their exercise, or non-exercise, of editorial functions. Nor do we necessarily want them to play that role. Think about it: do we really want a corporation deciding where to draw the line between informative, disinformative and manipulative content – especially when that content might concern a global health emergency, climate change or threats such as terrorism?

Here lies the dilemma. Modern society needs the Internet, and it equally wants a marketplace of freely expressed ideas. Disinformation and dangerous content are problems, but assigning responsibility for their prevention, identification and removal is complicated. This means that when users select a search engine, it is a case of “let the buyer beware”. Google’s position is that if you don’t like this, don’t be a member of the community: no one is forcing you to sign up. This is where the antitrust case becomes so important. We are locked into an Internet world that is imperfect and risk-prone. These are precisely the situations in which choice is critical, and Google’s market dominance is hindering the capacity of individuals to demand safer options.


From cookies to AI algorithms

The second issue — the targeting of individuals with content — is more malign, but may have an easier answer. If we return to the library analogy, the challenge is not simply one of distinguishing between correct, incorrect and dangerous material; it is that, unlike in a library, not all material is equally visible to all users. On the Internet, what we search for, like and forward comes together to determine what we are subsequently exposed to.

The technology enabling this is the cookie, which allows content to be targeted in ways that exploit heuristics such as the “exposure bias” and the “bandwagon effect”. Simply put, narrowing or concentrating the content someone is exposed to can alter how they process that content, including by making an idea seem more valid or appealing – and we all know how this can end. Advertising is one thing, but when the same process is harnessed for malign intent, the result can be misinformation campaigns (e.g. around COVID-19), interference in democratic processes (e.g. the Facebook-Cambridge Analytica scandal) or terrorist recruitment.

As with content moderation, the jurisprudence suggests that compelling platforms to exercise greater responsibility will be difficult. Perhaps there are other options, however. Cookie technology is rapidly being replaced by AI algorithms that can direct content faster and more reliably. In parallel, governments — concerned about AI as an “ungoverned space” — are seeking to develop tighter regulation. This is an opportunity to craft rules that prevent AI from being harnessed for coercive ends or to manipulate free thinking. This will not solve the antitrust issue, but it may provide a pathway to overcoming some of the externalities associated with it. Chiefly, it would mean that, irrespective of whether Google remains the dominant search engine provider, content pushing could be limited to safer forms.


Conclusion

In terms of civic freedoms, the Google antitrust case is about as important as it gets. On its face it is about the power of conglomerates in a free-market economy, but it speaks to deeper questions about how freedom of expression and opinion is valued in the digital age. Specifically, should the law protect only our right to express our thoughts, beliefs and opinions, or does that right need to be protected from external manipulation to be fully and meaningfully enjoyed? It is unlikely that the court will speak directly to these questions, but the case, its nuances and its sensitivities are a sure signal of where the winds will blow in future. Human rights experts, regulators and those working in peacebuilding and security would be well served to start thinking about these questions sooner rather than later.

Disclaimer: The views, information and opinions expressed in this publication are the author’s/authors’ own and do not necessarily reflect those of the GCSP or the members of its Foundation Council. The GCSP is not responsible for the accuracy of the information.