AI and the Future of Work

Disclaimer: the views expressed in this post are my own and do not necessarily reflect the view of my employer, Accenture.

I participated in a debate on Artificial Intelligence and the future of work last month. The motion was that “A.I. will create more jobs than it destroys” and my teammate, Dr Augustin Chevez, and I were arguing in favour of the motion. I’m happy to report that we won the debate, which was decided based on the number of audience members we managed to persuade.

It was interesting to see what did and did not come up during the debate, which included a Q&A round with audience participation. For example, I was sure there’d be a lot of talk of self-driving cars, but they hardly got a mention. And one thing that did get quite a bit of attention was the notion of the Singularity, an idea which I have difficulty in taking seriously at all.

I find that talk of the Singularity and Superintelligence can add a lot of confusion to discussions of this topic, so in this post I’m laying out how I navigate the question of whether AI will adversely affect employment, based partly on the arguments I presented at the debate.

AI and Automation

In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries. . . . Today’s new industries have comparatively few jobs for the unskilled or semiskilled, just the class of workers whose jobs are being eliminated by automation.

Those words were written in 1961, in Time Magazine. Even then, automation wasn’t new - there had been automation through mechanisation for over a century. But the thinking was, “This time it’s different, and it’s different because of computers.” Today, the thinking is, “This time it’s different, and it’s different because of AI.”

Without committing to a particular definition of AI, let’s at least constrain it somewhat, i.e. to technologies that already exist but that will improve a lot over time. In other words, we’re not postulating any entirely new inventions of the future, such as whole brain emulation. We’re talking about the type of AI that exists, not the type of AI you get in science fiction stories.

Given this constraint, we also need to decide how broadly to define the term. A very broad definition of AI would include anything that uses a computer to do things that previously were done by humans: everything from ATMs, pocket calculators, and automated parking lot barriers all the way to today’s chatbots and online travel booking sites. On the “jobs destroyed” side we can then include parking lot attendants, travel agents, elevator operators, etc. But on the “jobs created” side we’d have to include not only the entire tech industry, but any job that involves working with a computer. Also, a big part of this automation story has already played out, and it hasn’t resulted in mass unemployment.

So it makes sense to use a narrower definition of AI, which makes it possible to claim that “this time it’s different.” What is the reason for the sudden surge of interest in AI? What are the technologies that have really taken off in recent years? Machine Learning (ML) is at the centre of it all. Deep Learning in particular is something you can point to and say “This is new. This we have not seen before.” Except it’s not really all that new: the techniques have been around for decades. What is unprecedented is the amount of data and compute power available to train these systems, so let’s call Deep Learning new at least in that respect.

Is Machine Learning replacing humans?

So what is Deep Learning being used for and whose jobs will it take away? One great success story of Deep Learning is Machine Translation, one of the foundational tasks within AI. Early approaches weren’t very successful, but now the results with Google Translate and similar tools are really quite impressive. And Chinese internet giant Baidu recently announced a real-time ML-based interpreter. So surely translators’ and interpreters’ jobs are in danger? Not according to the US Bureau of Labor Statistics, which forecasts 18% growth in demand for translators and interpreters between now and 2026.

What’s happening here? Well, it turns out that once you reduce the cost of doing something by automating it, demand for that capability goes up, opening up more opportunities for humans to add value. In the case of translators, this means for example translating the specifics of legal contracts, doing localisation work, etc.

This is similar to what happened with ATMs: they reduced the cost of opening a branch, so banks actually opened up more branches and employed more people to do higher value tasks than just cashier work. And in fact the same thing happened in the textile industry in the early 19th century: automation made it cheaper to produce fabric, which drove demand for more product, which led to more jobs in the industry.

What all of this shows is that the interaction between automation and employment is more complicated than you might think. For more on this, read Why Are There Still So Many Jobs? The History and Future of Workplace Automation by David Autor.

Machines that are smarter than humans?

You’ve all seen the exponential progress curves, heard talk of the so-called Singularity. The thinking is that since progress is being made at all in something called Artificial Intelligence, it will necessarily lead to the creation of a machine that is more intelligent than humans. And if that happens, it can of course take all the translation and interpreting jobs, as it will be better at these tasks than humans.

But where progress is being made is in well-defined problem spaces: computer memory, processing speed, ability to store and work with ever larger quantities of data, accuracy of data-driven approaches to classifying objects in images or recognising faces. On the other hand, “solving intelligence” is not a well-defined problem. And so it’s really not clear at all whether progress in the well-defined problem spaces will lead to an “Artificial General Intelligence” (AGI), the term used to refer to this notional machine with at least human-level intelligence.

It’s also not clear whether the ability to understand natural language well enough for a machine to be a better translator than a human is achievable without human-level intelligence. So in the absence of any coherent definition of what AGI is and how we might achieve it, and without a non-AGI solution to the task of language understanding, we can at least say that it is very unlikely that translation will be entirely taken over by machines. The same can be said of any task that requires what we call “common sense.”

Indeed, Baidu’s own CTO has said, in relation to their AI-based interpreter service, that there will continue to be a need for human translators and interpreters, “especially for high stake occasions which require consistent and more precise interpretation.”

Putting people first isn’t just a nicety

Many companies working in the AI space today are at pains to make clear that these technologies are about augmenting human intelligence, not about machine intelligence replacing humans. This may seem like a PR platitude to avoid coming across like they’re in the business of putting people out of a job, but it’s actually more than that. The type of AI that’s being used today, i.e. Machine Learning, doesn’t just augment humans, it requires them.

The first stage where humans are needed is in the labeling of data to train the Machine Learning model. It’s not always an explicit task where the human is knowingly labeling data to be used for ML (as in the case of creating training data for self-driving cars - work that is painstakingly being done by thousands of people in Kenya), and it’s not always as straightforward as annotating a picture of a cat with the word “cat”, but it’s a necessary step, and it has to be done by people who understand the domain in question. If we stick with the translation example, here the “labeling” is the human translation of text from the source language to the target language. If Google or Baidu decided to stop using human translations to train their models and instead fed them their own past translations, people would very quickly lose faith in their translation services. When a snake eats its own tail, so much the worse for the snake.
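As a deliberately tiny sketch of my own (not anything Google or Baidu actually do): in supervised Machine Learning, the “labels” the model learns from are themselves human work, and for translation those labels are human translations.

```python
# Toy illustration: a supervised translation model can only learn from
# (source, target) pairs, and each target sentence here is assumed to
# have been produced by a human translator.
parallel_corpus = [
    ("good morning", "bonjour"),
    ("the cat sat on the mat", "le chat s'est assis sur le tapis"),
]

def make_training_examples(corpus):
    """Turn human-translated pairs into (input, label) training examples."""
    return [{"input": source, "label": human_translation}
            for source, human_translation in corpus]

# Without the human translations there is simply nothing to train on.
for example in make_training_examples(parallel_corpus):
    print(example)
```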

Another fundamental facet of ML, besides its need for copious amounts of human-labeled training data, is that it is inherently uncertain. It produces guesses based on probabilities arising from the data it was trained on, which often need to be verified by a human. When the stakes of a decision aren’t that high - for example, when a mistake just means a poorly targeted ad or the wrong word being predicted in your smartphone autocomplete - there’s no need for human verification. But if a decision significantly affects people’s lives, or for any reason comes with a level of responsibility, no provider of ML systems will take on that responsibility, and so a human has to. Sadly, this is not always understood and there are certainly cases of decisions being entirely automated when they shouldn’t be.
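Here is a minimal sketch of what that verification step can look like in practice (my own illustration; predict() and the threshold are made-up stand-ins): predictions the model isn’t confident about, or that carry real responsibility, get routed to a person rather than acted on automatically.

```python
# Toy human-in-the-loop routing: act on a prediction only when the model's
# confidence clears a threshold chosen by the people who own the decision.
CONFIDENCE_THRESHOLD = 0.90

def predict(item):
    """Stand-in for an ML model: returns a (label, probability) guess."""
    return ("approve", 0.62)  # the model is only 62% sure

def review_by_human(item, suggested_label):
    print(f"Flagged for human review (model suggested {suggested_label!r})")
    return "pending human decision"

def decide(item):
    label, confidence = predict(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                      # low stakes / high confidence: automate
    return review_by_human(item, label)   # otherwise a person takes responsibility

print(decide({"application_id": 123}))
```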

Given that Machine Learning, by definition, requires human supervision, this means that ML-enabled automation is in fact less likely to result in wholesale replacement of people with machines than the type of automation that came before, i.e. automation of routine tasks that could be described with rules (because there’s no uncertainty there, hence no need for supervision). So the argument that this time the automation story is different because of AI really doesn’t hold much water.

The dangers of overestimating AI

Problems arising from automation of decisions that shouldn’t be automated, for example in the criminal justice system, sometimes come down to people thinking that ML-based systems are more “intelligent” than they are. This in turn comes from the many exaggerated claims being made about such systems. While mainstream media reporting is responsible for a lot of this, AI researchers themselves have been known to inflate the significance of their accomplishments. Geoff Hinton, one of the fathers of Deep Learning, proclaimed during a talk at a hospital in 2016 that “They should stop training radiologists now,” because Deep Learning will soon be better than humans at identifying medical problems from images. In fact, there’s been a radiology staffing crisis in the UK for the last few years, and though it’s hard to know whether Hinton’s remarks played a role in that, they certainly can’t have helped.

Hinton was mistaking the task for the job, and he has since walked back his comments somewhat and clarified that the role of the radiologist will change as Deep Learning approaches to medical imaging improve. Well, of course it will - as the tools of a job change, the individual tasks constituting that job must change. A big part of the job of a radiologist, which will not be changing, is the responsibility for making decisions that affect people’s lives. Deep Learning cannot help with that part. These systems are tools that can help speed up decision-making, they are not themselves decision-makers. And better tools are exactly what radiologists need if they’re to have any hope of keeping up with the exponential growth in demand for medical imaging. But even then, better tools will help only if there are enough radiologists in the first place, which there won’t be if the AI overhype discourages people from entering the field.

Making assertions about a whole class of jobs going away in the near future because of AI is downright irresponsible.

Responsible AI talk

Most of the talk around responsible AI is about the responsible development and deployment of AI, and the need for fairness, transparency and accountability. In addition to this, I’d like to advocate for the responsible discussion of AI, and I think everyone can play a part here. During the Q&A round of last month’s debate, someone said “I’ve heard that AI can now tell if you’re a criminal by looking at your face.” They were referring to a paper from a couple of years ago that claimed to have achieved this, which, as I quickly pointed out (and have done elsewhere before), was less a case of Artificial Intelligence than of human stupidity.

When we hear grand claims about AI, each and every one of us is capable of at least questioning whether the claims make sense before we repeat them as fact. How AI is discussed will affect how it is deployed, including whether its limitations (bias, uncertainty, etc.) are adequately accounted for. Adequately accounting for such limitations is, and will continue to be, the job of humans, and there’s no end in sight to the work to be done here.

One of my opponents in the AI debate, who had spent a good deal of time talking about AGI, was asked by an audience member, “If an AGI will be better than humans at everything, won’t it be better at being good too?” My opponent responded that it would, but warned that we mightn’t like the ethical decisions it made, even if they were more correct than ours. He illustrated this claim in a puzzling way, with a scenario where an AGI makes the “correct” ethical decision to kill a healthy person in order to save five others who need organ transplants. I found the example puzzling because precisely that case is typically used as a counterexample to act utilitarianism, the relevant variety of consequentialist ethics, and it is a counterexample precisely because it gives an ethically wrong answer. Yet here it was, being touted as the “ideal” answer that a supposedly perfect AGI would give to an ethical dilemma (and by a professional ethicist, no less). But we already have machines capable of mindless decision-making based on a predefined optimization function, which is exactly what act utilitarianism prescribes. No need to wait for the AGI! Figuring out what to optimize, and how to be fair: that’s the hard part. Only we can do that.
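To underline that last point, here is a deliberately crude sketch of my own (not my opponent’s example): a machine that “decides” by maximising a single predefined objective, say net lives saved, will happily return the transplant answer, because nothing in the objective mentions rights or fairness.

```python
# Toy act-utilitarian "decision-maker": pick whichever option maximises a
# single predefined objective (net lives saved), with no notion of rights.
options = {
    "do nothing": 0,                            # five patients die, nobody is killed
    "take one healthy person's organs": 5 - 1,  # five saved, one killed
}

print(max(options, key=options.get))  # -> the ethically repugnant option, by construction
```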

We humans have got our work cut out for us.
