
When even Elon Musk – normally a champion of humanity’s ability to improve its condition through material progress – backs the robots, we know we’re in trouble. Barely a year ago, back in 2017, he became fear-monger-in-chief of the artificial-intelligence apocalypse. “What’s going to happen,” he said, “is robots will be able to do everything better than us … I mean all of us.”
But a lot can happen in a year. A report just published by PricewaterhouseCoopers should go some way to calming such hysteria. It argues that AI will create slightly more jobs in the UK (7.2 million) than it displaces (7 million). So rather than lamenting an apocalyptic tomorrow in which we are ruled by our robot overlords, a more useful way to think about the future would be to consider how we can interlace the strengths of machines with those of humans.
John Giannandrea, who left his role as Senior Vice President of Engineering at Google to head up AI at Apple, is already doing so. “There’s just a huge amount of unwarranted hype around it right now,” he says, “[much of which is] borderline irresponsible.” We shouldn’t be using AI to match or replace humans, he argues, but to make “machines slightly more intelligent — or slightly less dumb”. He isn’t dismissing the potential of computers to radically alter the way we work; he is simply thinking about the ways they will do so in a more nuanced fashion.
He knows that the better we understand the differences between the way people think and the way in which machines calculate, the better we can assess how to work with them. For example, unlike machines, humans typically lean on a variety of mental rules of thumb that yield narratively plausible, but often logically dubious, judgments. The psychologist and Nobel laureate Daniel Kahneman calls the human mind “a machine for jumping to conclusions”.
Machines using deep-learning algorithms, in contrast, must be trained on thousands of photographs before they can recognise a kitten – and even then, they will have formed no conceptual understanding of cats. Small children can learn what a kitten is from just a few examples. Not so machines. They don’t think like humans, and they can apply their ‘thinking’ only to narrow fields. They cannot, therefore, associate pictures of cats with stories about cats.
But, counterintuitively perhaps, the tasks humans find hard, machines often find easy. Cognitive scientist Alison Gopnik summarises what is known as Moravec’s Paradox: “At first, we thought that the quintessential preoccupations of the officially smart few, like playing chess or proving theorems — the corridas of nerd machismo — would prove to be hardest for computers.” In fact, those tasks turned out to be comparatively easy; it is the things a toddler does without thinking, such as recognising a face or picking up a toy, that machines still find hardest.