‘Last year saw some astonishing breakthroughs, whose consequences will become clearer and more important.’ Photograph: Getty/Science Photo Library

The Guardian view on the future of AI: great power, great irresponsibility


The world in 2019: The vexed politics of our times has obscured the view ahead. Over the holidays we have been examining some big issues on the horizon. Today, in our final instalment, we look at the spread of artificial intelligence

Looking over the year that has passed, it is a nice question whether human stupidity or artificial intelligence has done more to shape events. Perhaps it is the convergence of the two that we really need to fear.

Artificial intelligence is a term whose meaning constantly recedes. Computers, it turns out, can do things that only the cleverest humans once could. But at the same time they fail at tasks that even the stupidest humans accomplish without conscious difficulty.

At the moment the term is mostly used to refer to machine learning: the techniques that enable computer networks to discover patterns hidden in gigantic quantities of messy, real-world data. It’s something close to what parts of biological brains can do. Artificial intelligence in this sense is what enables self-driving cars, which have to be able to recognise and act appropriately towards their environment. It is what lies behind the eerie skills of face-recognition programs and what makes it possible for personal assistants such as smart speakers in the home to pick out spoken requests and act on them. And, of course, it is what powers the giant advertising and marketing industries in their relentless attempts to map and exploit our cognitive and emotional vulnerabilities.

Changing the game

The Chinese government’s use of machine learning for political repression goes well beyond surveillance cameras. A recent report from a government thinktank praised the software’s power to “predict the development trajectory for internet incidents … pre-emptively intervene in and guide public sentiment to avoid mass online public opinion outbreaks, and improve social governance capabilities”.

Last year saw some astonishing breakthroughs, whose consequences will become clearer and more important. The first was conceptual: Google’s DeepMind subsidiary, which had already shattered expectations of what a computer could achieve at Go, built AlphaZero, a machine that, given nothing but the rules of a board game of that sort, can teach itself to play and then, after two or three days of concentrated learning, beat every human and every other computer player there has ever been.

AlphaZero cannot master just any game, though. It works only for games with “perfect information”, where all the relevant facts are known to all the players. There is nothing in principle hidden on a chessboard – the blunders are all there, waiting to be made, as one grandmaster observed – but it takes a remarkable and, as it turns out, inhuman intelligence to see what’s contained in that simple pattern.

Computers that can teach themselves from scratch, as AlphaZero does, are a significant milestone in the progress of intelligent life on this planet. And there is a rather unnerving sense in which this kind of artificial intelligence seems already alive.

Compared with conventional computer programs, it acts for reasons incomprehensible to the outside world. It can be trained, as a parrot can, by rewarding the desired behaviour; in fact, this describes the whole of its learning process. But it can’t be consciously designed in all its details, in the way that a passenger jet can be. If an airliner crashes, it is in theory possible to reconstruct all the little steps that led to the catastrophe and to understand why each one happened, and how each led to the next. Conventional computer programs can be debugged that way. This is true even when they interact in baroquely complicated ways. But neural networks, the kind of software used in almost everything we call AI, can’t even in principle be debugged that way. We know they work, and can by training encourage them to work better. But in their natural state it is quite impossible to reconstruct the process by which they reach their (largely correct) conclusions.
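
The point can be made concrete even with a toy. The sketch below, written in Python with the numpy library purely for illustration and a world away from the systems DeepMind builds, trains a tiny network by ordinary gradient descent (a stand-in for the reward-driven training described above) to answer one trivial question. What it ends up knowing is nothing but grids of numbers: they produce the right answers, but no chain of reasoning can be read back out of them.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four possible inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # the answers we want rewarded

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer weights: just numbers
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer weights: just numbers

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)        # pass the inputs through the hidden layer
    p = sigmoid(h @ W2 + b2)        # the network's current answers
    # nudge every weight slightly in whichever direction reduces the error;
    # repeated thousands of times, this is the whole of the "learning"
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dp
    b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh
    b1 -= 0.5 * dh.sum(axis=0)

print(np.round(p, 2))   # close to the desired 0, 1, 1, 0
print(W1)               # the "knowledge" itself: an unreadable grid of numbers
```

Scale those grids up by many orders of magnitude and the opacity only deepens.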

Friend or foe?

It is possible to make them represent their reasoning in ways that humans can understand. In fact, in the EU and Britain it may be illegal not to in certain circumstances: the General Data Protection Regulation (GDPR) gives people the right to know on what grounds computer programs make decisions that affect their future, although this has not been tested in practice. This kind of safety check is not just a precaution against the propagation of bias and wrongful discrimination: it’s also needed to make the partnership between humans and their newest tools productive.

One of the least controversial uses of machine learning is in the interpretation of medical data: for some kinds of cancers and other disorders, computers are already better than humans at spotting the dangerous patterns in a scan. But it is possible to train them further, so that they also output a checklist of factors which, taken together, lead to their conclusions, and from which humans can learn. It is unlikely that these are really the features on which the program bases its decisions: there is a growing body of knowledge about how to fool image classifiers with tiny changes invisible to humans, so that a simple schematic picture of a fish can be speckled with dots, at which point it is classified as a cat.

More worryingly, the apparently random defacement of a stop sign can cause a computer vision system to read it as a speed limit sign. Sound files can also be deliberately altered so that speech recognition systems will misinterpret them. With the growing use of voice assistants, this offers obvious openings to criminals. And, while machine learning makes fingerprint recognition possible, it also enables the construction of artificial fingerprints that act as skeleton keys to unlock devices.
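
These attacks all rest on embarrassingly simple arithmetic. The toy sketch below, again in Python with numpy and with its “model”, weights and labels invented for illustration rather than drawn from any real system, shows how a classifier’s decision can be flipped by moving every input by an amount too small to notice, each nudge pushed in the direction that hurts the model most.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=1000)   # stand-in for a trained linear "fish v cat" model's weights
x = rng.normal(size=1000)   # stand-in for an image the model currently classifies

def label(image):           # positive score means "cat", otherwise "fish"
    return "cat" if image @ w > 0 else "fish"

score = x @ w
# Move every input value by the same tiny amount, each in whichever direction
# hurts the model most (the idea behind the "fast gradient sign method"), using
# just enough of a nudge to carry the score across the decision boundary.
eps = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print(label(x), "->", label(x_adv))   # the label flips
print(eps)                            # yet no input value moved by more than this
```

Real vision and speech systems are vastly more complicated than this one-layer caricature, but the weakness being exploited, thousands of imperceptible nudges adding up to one decisive push, is of the same kind.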

Power struggle

The second great development of the last year makes bad outcomes much more likely. This is the much wider availability of powerful software and hardware. Although vast quantities of data and computing power are needed to train most neural nets, once trained a net can run on very cheap and simple hardware. This is often called the democratisation of technology but it is really the anarchisation of it. Democracies have means of enforcing decisions; anarchies have no means even of making them. The spread of these powers to authoritarian governments on the one hand and criminal networks on the other poses a double challenge to liberal democracies. Technology grants us new and almost unimaginable powers but at the same time it takes away some powers, and perhaps some understanding too, that we thought we would always possess.
