In focus: The challenges of artificial intelligence

By Mr Federico Mantellassi, Research and Project Officer, Geneva Centre for Security Policy

Artificial intelligence (AI) is a branch of computer science concerned with enabling computer systems to mimic the problem-solving and decision-making capabilities of the human mind, and thus to complete tasks that would traditionally require human intelligence. Machine learning (ML) is currently the most prominent subset of AI, allowing machines to learn from data without being explicitly programmed to do so. This has enabled algorithms to best humans at games such as chess and to identify objects more accurately than people can. Today’s “Narrow” or “Weak AI” outperforms humans in specific tasks, while “Artificial General Intelligence” or “Strong AI” that could outperform humans in most cognitive tasks remains elusive. AI methods hold great promise in fields ranging from healthcare to climate protection and education, and have the potential to contribute to solving many of today’s problems. However, like all disruptive technologies, AI presents major challenges, and its potential misuses – which could have serious consequences for international security, democracy and society as a whole – are many.
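The distinction between explicit programming and machine learning can be made concrete with a toy sketch. In the hypothetical example below (the task, data and values are invented purely for illustration), the program is never given a rule for classifying items; it derives a decision threshold from labelled examples instead:

```python
# Minimal illustration of the ML idea: instead of hand-coding a rule,
# the program derives one from labelled examples. All data hypothetical.

def learn_threshold(examples):
    """Find the score threshold that correctly classifies the most examples."""
    best_t, best_correct = None, -1
    for t in sorted(score for score, _ in examples):
        correct = sum((score >= t) == label for score, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Labelled training data: (score, is_spam) pairs -- invented values.
training = [(0.2, False), (0.3, False), (0.4, False),
            (0.7, True), (0.8, True), (0.9, True)]

# The rule is learned from the data, not written by a programmer.
threshold = learn_threshold(training)
print(threshold)  # prints 0.7
```

Real ML systems fit millions of parameters rather than a single threshold, but the principle is the same: the behaviour of the system is shaped by the data it is trained on, a point that becomes important in the discussion of bias below.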

Content echo chambers, disinformation and polarisation

Machine learning algorithms form the backbone of the social media platforms we use daily. They collect user data to build detailed profiles, automatically adjusting content and advertising recommendations to maximise time spent on the platform. Because the algorithms behind content recommendations are geared towards maximising engagement, they put a premium on content that is sensational and emotional, and that reinforces users’ existing beliefs, over more neutral, balanced or factual content. Recent research shows that this process helps create content echo chambers, which have the unintended consequence of aiding the spread of disinformation and deepening social polarisation. These dynamics have also been intentionally weaponised by states and organisations through disinformation campaigns aimed at influencing elections or, most recently, at creating “infodemics” around COVID-19. This has serious consequences for democracy and free political debate.
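The core of this dynamic can be sketched in a few lines. The snippet below is a drastic simplification – real platform rankers are vastly more complex, and all titles and scores here are hypothetical – but it shares their objective of ordering content by predicted engagement, which is how emotive material rises to the top regardless of accuracy:

```python
# Highly simplified sketch of an engagement-ranked feed.
# All posts and engagement scores are hypothetical.

posts = [
    {"title": "Balanced policy analysis", "predicted_engagement": 0.21},
    {"title": "Outrage headline", "predicted_engagement": 0.87},
    {"title": "Neutral fact check", "predicted_engagement": 0.15},
    {"title": "Sensational rumour", "predicted_engagement": 0.74},
]

# Rank purely by predicted engagement: sensational content outranks
# neutral or factual content, because accuracy is not in the objective.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["title"])
```

The point of the sketch is that no one needs to intend polarisation for it to occur: it emerges from optimising a single metric, engagement, that happens to correlate with sensational and belief-confirming content.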

Manipulations

AI can be used not only to spread purposely fictitious content, but also to create it. Advances in AI-related technologies such as natural language processing (NLP) and generative adversarial networks (GANs) allow for the creation of extremely credible media, whether written text, images or videos – the latter known as Deepfakes. Current NLP models can be – and increasingly often are – misused to generate misleading news articles or to automate the production of fake content for social media. The capacity to fake any image or audio through realistic Deepfakes could similarly create an environment in which we can no longer trust what we see and hear. Today, Deepfakes are already disproportionately used against women, superimposing their likeness onto pornographic videos. In the future, Deepfakes might increasingly be used to spread disinformation, cause diplomatic incidents, or create political and social scandals, all of which could deeply undermine international security.

Data biases

As we delegate more decisions affecting our lives to algorithms – such as who gets a loan, who gets a job and who receives a longer jail sentence – issues over the quality, representativeness and neutrality of data have arisen. Evidence shows that we build our biases into our algorithms, and that datasets are therefore not neutral, but reflect the inequalities and unfairness of our world. When algorithms are trained on such data, their outcomes often reflect these biases and perpetuate inequality and discrimination on a wider scale. The inherent opacity of AI methods, often called “black boxes”, makes understanding why an algorithm made a certain decision almost impossible. This can hide algorithmic discrimination behind the complexity of the mathematical model, with serious consequences for accountability. If AI models are to control more aspects of our lives, reducing these models’ biases and increasing their transparency and accountability are essential.
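How biased data becomes biased decisions can be shown with a deliberately crude sketch. In the invented example below (the groups, decisions and rule are all hypothetical, and far simpler than any real lending model), a model “trained” on skewed historical loan decisions simply reproduces the skew:

```python
# Toy illustration of bias propagation: a model trained on biased
# historical decisions reproduces the bias. Data entirely invented.

historical = [  # (applicant_group, was_approved) -- hypothetical records
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train_majority_rule(data):
    """'Learn' to predict approval as the majority past outcome per group."""
    rule = {}
    for group in {g for g, _ in data}:
        outcomes = [approved for g, approved in data if g == group]
        rule[group] = outcomes.count(True) > len(outcomes) / 2
    return rule

model = train_majority_rule(historical)
print(model)  # the learned rule mirrors the historical skew: A yes, B no
```

Nothing in the code mentions group membership as a criterion; the discrimination arrives entirely through the training data, which is why data quality and representativeness – not just the algorithm – are at the centre of the accountability debate.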

Surveillance and digital authoritarianism

Governments worldwide are turning to AI surveillance tools to monitor, track and surveil their citizens. While AI-powered surveillance is not inherently unlawful, it is highly susceptible to abuse. AI technologies, especially those that rely on ML and big data, incentivise the mass collection of data. Both governments relying on AI for surveillance and corporations relying on AI for advertisement targeting have an incentive to relentlessly collect and analyse citizens’ or consumers’ data, with dramatic consequences for privacy. Evidence shows that some governments are exploiting these technologies for mass surveillance and to reinforce repression. China’s social credit system is a prime example of the capacity of AI-powered surveillance systems to monitor and shape citizens’ behaviour. The technology’s potential to assist and enable authoritarian crackdowns is most notable in the ongoing Uighur crisis in north-west China. Private technology corporations such as Google and Facebook have built a system of “Surveillance Capitalism” – an “economic system centred around the commodification of personal data with the core purpose of profit making” – leading to widespread corporate surveillance in our democracies.

Geopolitical competition

AI is at the forefront of the major world powers’ efforts to increase their normative spheres of influence, as they seek to export their model of digital governance. China, for example, is using the sale of its AI-powered surveillance technology to export its model of digital authoritarian governance as a viable alternative to democratic governance. The United States is similarly exporting its technology to promote its own national interests and narrative surrounding digital governance. While the development of AI technologies is marked by deep international interconnectedness and an open-source culture, the field of AI has been at the centre of fierce geopolitical competition, notably between China and the United States. Geopolitical competition in this field risks jeopardising efforts to mitigate the key challenges linked to AI. Indeed, fears of “falling behind” in the “AI race” could lead governments and corporations to ignore privacy or ethical considerations in their development of AI technologies, because they might see such considerations as slowing down their innovations relative to those of others.

Powering autonomy in weapon systems and threatening strategic stability

Today, AI and ML increasingly feature as the main technologies powering autonomy in weapon systems (AWS), raising novel security, ethical, legal and political challenges as they enable the delegation of a growing number of a weapon’s critical functions to a computer. This growing autonomy creates true technological surrogates on the battlefield. It has spurred intense international debate, with many countries and organisations calling for an outright ban on the development and use of lethal autonomous weapons systems (LAWS).

The chief concerns with regard to LAWS revolve around predictability, explainability and responsibility. In other words, can we reliably predict the behaviour of LAWS, can we explain the processes that led to a certain action taken by an AWS, and can we attribute legal responsibility if something goes wrong? Because ML will enable these systems to learn from experience, adapt their decision-making accordingly and act in novel, non-programmed ways – and given the relative legal vacuum surrounding this issue – many fear that these requirements cannot be fulfilled and that such systems cannot comply with international humanitarian law. Issues around biases embedded in training data are also central to the discussion. Flagrant mistakes that algorithms can make because of unseen biases in training data would have serious consequences in combat and in life-or-death decisions, for example in discriminating between combatants and civilians. Some also fear that the increased use of AWS will adversely affect strategic stability by favouring offensive postures, incentivising first strikes and increasing the likelihood of arms races.

Looking ahead …

AI’s potential is great, but so are the challenges it presents. While we remain far from artificial general intelligence (AGI), the technology already poses profound challenges to international security and society at large. In recognition of this fact, an ecosystem to address the challenges linked to AI is emerging. Ethical guidelines issued by governments and international organisations, research initiatives addressing algorithmic bias and discrimination, and civil society campaigns raising awareness of the risks posed by AI are among the efforts to mitigate these risks and ensure that potential misuses of the technology are minimised. Such efforts need to be amplified to establish strong normative and legal frameworks around the development and deployment of AI.

About this blog series

The 21st century has ushered in an age of unprecedented technological innovation, much of it for the better. However, as digital technologies occupy an ever greater place in our lives, recent years have shown that they can have unintended security and societal impacts. There is an urgent need to guarantee the safe and globally beneficial development of emerging technologies and to anticipate their potential misuse, malicious use and unforeseen risks. Fortunately, technological risk can be addressed early on, and the unintended negative consequences of technologies can be identified and mitigated by putting ethics and security at the core of technological development. This series of blogs provides insights into the key challenges related to three emerging technologies: artificial intelligence, synthetic biology and neurotechnology. Each blog promotes an “ethics and security by design” approach, and the series is part of the Polymath Initiative, an effort to create a community of scientists able to bridge the gap between the scientific and technological community and the world of policymaking.

Disclaimer: The views, information and opinions expressed in the written publications are the authors’ own and do not necessarily reflect those shared by the Geneva Centre for Security Policy or its employees. The GCSP is not responsible for and may not always verify the accuracy of the information contained in the written publications submitted by a writer.

Federico Mantellassi is a Research and Project Officer for the Global and Emerging Risks cluster at the GCSP. He is also the project coordinator of the GCSP’s Polymath Initiative.