2018 Was the Year That Tech Put Limits on AI

As employees and researchers push back, companies including Google and Microsoft are pledging not to use powerful AI technology in certain ways.
An Axon police body camera. The company has said it does not intend to deploy facial recognition on police-worn cameras. CAITLIN O'HARA/The New York Times/Redux

For the past several years, giant tech companies have rapidly ramped up investments in artificial intelligence and machine learning. They’ve competed intensely to hire more AI researchers and used that talent to rush out smarter virtual assistants and more powerful facial recognition. In 2018, some of those companies moved to put guardrails around AI technology.

The most prominent example is Google, which announced constraints on its use of AI after two projects triggered public pushback and an employee revolt.

The internal dissent began after the search company’s work on a Pentagon program called Maven became public. Google contributed to a part of Maven that uses algorithms to highlight objects such as vehicles in drone surveillance imagery, easing the burden on military analysts. Google says its technology was limited to “nonoffensive” uses, but more than 4,500 employees signed a letter calling for the company to withdraw.

In June, Google said it would complete but not renew the Maven contract, which is due to end in 2019. It also released a broad set of principles for its use of AI, including a pledge not to deploy AI systems for use in weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Based in part on those principles, Google in October withdrew from bidding on a Pentagon cloud contract called JEDI.

Google also drew criticism after CEO Sundar Pichai demonstrated a bot called Duplex with a humanlike voice calling staff at a restaurant and hair salon to make reservations. Recipients of the calls did not appear to know they were talking with a piece of software, and the bot didn’t disclose its digital nature. Google later announced it would add disclosures. When WIRED tested Duplex ahead of its recent debut on Google’s Pixel phones, the bot began the conversation with a cheery “I’m Google’s automated booking service.”

The growth of ethical questions around the use of artificial intelligence highlights the field’s rapid and recent success. Not so long ago, AI researchers were mostly focused on trying to get their technology to work well enough to be practical. Now they’ve made image and voice recognition, synthesized voices, fake imagery, and robots such as driverless cars practical enough to be deployed in public. Engineers and researchers once dedicated solely to advancing the technology as quickly as possible are becoming more reflective.

“For the past few years I’ve been obsessed with making sure that everyone can use it a thousand times faster,” Joaquin Candela, Facebook’s director of applied machine learning, said earlier this year. As more teams inside Facebook use the tools, “I started to become very conscious about our potential blind spots,” he said.

That realization is one reason Facebook created an internal group to work on making AI technology ethical and fair. One of its projects is a tool called Fairness Flow that helps engineers check how their code performs for different demographic groups, say men and women. It has been used to tune the company’s system for recommending job ads to people.
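Conceptually, a check like that boils down to computing the same performance metric separately for each demographic group and flagging any gaps. Below is a minimal sketch of that idea in Python; it assumes simple lists of predictions, labels, and group tags, and it is not Facebook’s actual Fairness Flow interface, which the company hasn’t published.

```python
# Illustrative sketch of a per-group performance check, in the spirit of
# tools like Fairness Flow. This is NOT Facebook's actual API; it assumes
# you have predictions, true labels, and a demographic tag per example.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy example: a classifier that does noticeably worse for one group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["men", "men", "men", "women", "women", "women", "women", "men"]
print(accuracy_by_group(preds, labels, groups))
# {'men': 1.0, 'women': 0.5} -- the kind of gap such a tool would surface
```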

A February study of several services that use AI to analyze images of faces illustrates what can happen if companies don’t monitor the performance of their technology. Joy Buolamwini and Timnit Gebru showed that facial-analysis services offered by Microsoft and IBM’s cloud divisions were significantly less accurate for women with darker skin. That bias could have spread broadly because many companies outsource technology to cloud providers. Both Microsoft and IBM scrambled to improve their services, for example by increasing the diversity of their training data.

Perhaps in part because of that study, facial recognition has become the area of AI where tech companies seem the keenest to enact limits. Axon, which makes Tasers and body cameras, has said it does not intend to deploy facial recognition on police-worn cameras, fearing it could encourage hasty decision-making. Earlier this month Microsoft president Brad Smith asked governments to regulate the use of facial recognition technology. Soon after, Google quietly revealed that it doesn’t offer “general purpose” facial recognition to cloud customers, in part because of unresolved technical and policy questions about abuse and harmful effects. Those announcements set the two companies apart from competitor Amazon, which offers facial recognition technology of uncertain quality to US police departments. The company has so far not released specific guidelines on what it considers appropriate uses for AI, although it is a member of industry consortium Partnership on AI, working on the ethics and societal impact of the technology.

The emerging guidelines do not mean companies are significantly reducing their intended uses for AI. Despite its pledge not to renew the Maven contract and its withdrawal from JEDI bidding, Google’s rules still allow the company to work with the military; its principles for where it won’t apply AI are open to interpretation. In December, Google said it would create an external expert advisory group to consider how the company implements its AI principles, but it hasn’t said when the body will be established or how it will operate.

Similarly, Microsoft’s Smith worked with the company’s AI boss, Harry Shum, on a 149-page book of musings on responsibility and technology, published in January. The same month, the company disclosed a contract with US Immigration and Customs Enforcement and touted the technology’s potential to help the agency deploy AI and facial recognition. The project, and its potential use of AI, inspired protests by Microsoft employees, who apparently interpreted the appropriate ethical bounds on technology differently than their leaders did.

Limits on AI may soon be set by regulators, not tech companies, amid signs that lawmakers are becoming more open to the idea. In May, new European Union rules on data protection, known as GDPR, gave consumers new rights to control and learn about how their data is used and processed, which can complicate some AI projects. Activists, scholars, and some lawmakers have shown interest in regulating large technology companies. And in December, France and Canada said they would create an international study group on challenges raised by AI, modeled on the UN’s climate-science panel, the IPCC.

