Competing against machines: how AI is changing the future of work

Can machines outperform human intelligence? And what are the consequences for the future of work? Philip Ross busts some fundamental myths about the capabilities of machine learning

The debate around how machines and artificial intelligence will affect the future of work just keeps getting more and more intense. How can we – as humans – predict the impact these machines will have, when they are increasingly developing intelligence that is beyond human capabilities?

We now accept that machines will automate many routine tasks, and even entire jobs, which in turn will free up time for work that requires human judgement, creativity and empathy. After all, machines can’t mimic these innate human qualities, right?

Understanding machine capabilities

In the 1980s, author Richard Susskind wrote his doctorate on AI and its impact on the law. Together with Professor Phillip Capper, he was part of the vanguard that created the first commercially available AI system in law. Their approach was to sit down with a lawyer, get them to explain their methodology, and capture that knowledge as a set of instructions and rules for the machine to follow.

This approach followed the belief that machines have to copy the way that human beings think and reason in order to outperform them. It is a belief that Richard Susskind’s son, Daniel Susskind, is now challenging 30 years later.

Advances in processing power, data storage capability and algorithm design mean that the distinction between routine tasks (tasks that can be easily explained by humans) and non-routine tasks (the tasks that require human intuition) is blurring. This is not to say machines are starting to think or reason like humans; and maybe that’s just the problem. Are we looking at artificial intelligence through the wrong lens?

Machines cannot think, reason and feel like humans because, simply, they are not human. They can, however, perform tasks by running pattern recognition algorithms over hundreds of thousands of past cases and data points. While this may produce the same conclusion a human would reach, the machine arrives at it in a distinctly non-human way. And whereas 30 years ago machines worked transparently from information fed to them by humans, today their workings are far more opaque.
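The idea of deciding by pattern-matching against past cases, rather than by human-style reasoning, can be sketched in a few lines. This is a minimal illustration only; the case data, features and outcomes below are invented, and real systems operate over vastly larger datasets with far more sophisticated models.

```python
# A toy sketch of decision-by-pattern-recognition: classify a new case by
# majority vote among the most similar past cases (nearest neighbours).
# All data here is invented for illustration.
from collections import Counter
import math

# Past cases: (feature vector, outcome). In practice there would be
# hundreds of thousands of these.
past_cases = [
    ((1.0, 0.2), "approve"),
    ((0.9, 0.3), "approve"),
    ((0.2, 0.9), "reject"),
    ((0.1, 0.8), "reject"),
]

def predict(new_case, k=3):
    """Return the majority outcome of the k past cases closest to new_case."""
    by_distance = sorted(
        past_cases,
        key=lambda case: math.dist(case[0], new_case),
    )
    votes = Counter(outcome for _, outcome in by_distance[:k])
    return votes.most_common(1)[0][0]

print(predict((0.95, 0.25)))  # resembles past "approve" cases
```

Note that nothing here resembles human reasoning: the machine never "understands" the case, it only measures similarity to history — which is also why, at scale, such systems become hard to inspect.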

Busting the myths of machine learning

There is a common misconception that machines can only be as intelligent as the humans who programmed them, but we now know that human intelligence does not represent any sort of finishing line for machine capability. Machines no longer need to be fed human expertise to trump human intelligence. Take the example of world Go champion Lee Sedol against AlphaGo, the machine built by Google’s DeepMind. Go is a Chinese game which reportedly has more possible board positions than there are atoms in the universe. The machine was only programmed with the rules of the game, yet in a spectacular and surprising way it went on to beat Sedol four games to one.

In a world where we cannot use human logic or reasoning to predict machine intelligence, what does this mean for the future of work? Last year McKinsey found that fewer than five per cent of jobs can be fully automated, though individual tasks within most jobs can be automated far more easily. This means that the next generation of workers can take two routes: build machines or compete against them.

‘Man with machine is not necessarily more powerful than man versus machine…’

In a recent TED talk, Daniel Susskind explained that he used the word ‘compete’ very deliberately, where most people would use the word ‘collaborate’. We still hold the belief that man with machine is better than man versus machine, but as I’ve just mentioned, machine learning allows machines to outperform man outright. While it is true that many automated tasks and AI capabilities complement human activity (take the trusty SatNav, for example), in the future there will be a need to compete with machines to perform jobs. This means that the next generation of workers will need a high standard of digital skills.

This may all seem doom and gloom, but just because machines can perform human tasks better than humans doesn’t necessarily mean they should. AI now aids decision-making on whether to grant someone parole; does that mean it should have the same hand in deciding a life sentence? Machines help make accurate medical diagnoses; does that mean they should play the same role in deciding to take someone off life support?

For now, there are some tasks which are fundamentally human in nature.