It's not the end of the world (Terminator)

When it comes to whipping up AI hysteria, there is a tried-and-tested algorithm — or at least a formula. First, find an inventor or “entrepreneur” behind some “ground-breaking” AI technology. Then, get them to say how “dangerous” and “risky” their software is. Bonus points if you get them to do so in an open letter signed by dozens of fellow “distinguished experts”.
The gold standard for this approach appeared to be set in March, when Elon Musk, Apple co-founder Steve Wozniak and 1,800 concerned researchers signed a letter calling for AI development to be paused. This week, however, 350 scientists — including Geoffrey Hinton, a pioneer of the deep learning that underpins systems such as ChatGPT, and Demis Hassabis, co-founder of DeepMind — decided to up the ante. Mitigating the risk of extinction from AI, they warned, “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
Does this mean you should add “death by ChatGPT” to your list of existential threats to humanity? Not quite. Although a number of prominent researchers are sounding the alarm, many others are sceptical that AI in its current state comes anywhere close to human-like capabilities, let alone superhuman ones. We need to “calm down already”, says robotics expert Rodney Brooks, while Yann LeCun — who shared the Turing Award with Hinton — believes “those systems do not have anywhere close to human level intelligence”.
That isn’t to say that these machine-learning programs aren’t smart. Today’s infamous interfaces are capable of producing new material in response to prompts from users. The most popular — Google’s Bard and OpenAI’s ChatGPT — achieve this by using “Large Language Models” (LLMs), which are trained on enormous amounts of human-generated text, much of it freely available on the Internet. By absorbing more examples than one human could read in a lifetime, refined and guided by human feedback, these generative programs produce highly plausible, human-like text responses.
We know this enables them to provide useful answers to factual questions. But they can also produce false answers, fabricated background material and entertaining genre-crossing inventions. And this doesn’t necessarily make them any less threatening: it’s precisely their human-like plausibility that leads us to ascribe to LLMs human-like qualities that they do not possess.
Deprived of a human world — or indeed any meaningful embodiment through which to interact with the world — LLMs also lack any foundation for understanding language in a human-like sense. Their internal model is a many-dimensional map of probabilities, showing which word is more or less likely to come next, given the words that came before. When, for example, ChatGPT answers a question about the location of Paris, it relies not on any direct experience of the world, but on accumulated data produced by humans.
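To make that idea concrete, here is a deliberately crude sketch — not how ChatGPT is actually built, just an illustration of the “which word tends to follow which” principle. It counts word pairs in a tiny invented corpus (the corpus and the word-level splitting are assumptions for illustration; real LLMs use learned neural networks over subword tokens and condition on far longer context) and turns the counts into probabilities.

```python
# A drastically simplified illustration of next-word probability.
# Real LLMs learn billions of parameters over huge corpora; this toy
# merely counts which word follows which in a tiny invented text.
from collections import Counter, defaultdict

corpus = "paris is the capital of france . paris is a city in france ."

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def next_word_probabilities(prev_word):
    """Turn raw counts into a probability distribution over the next word."""
    counts = follow_counts[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# The "knowledge" about Paris comes entirely from statistics of
# human-written text, not from any experience of the city itself.
print(next_word_probabilities("paris"))   # {'is': 1.0}
print(next_word_probabilities("is"))      # {'the': 0.5, 'a': 0.5}
```

Even in this cartoon version, the point stands: the system reproduces patterns in what humans have written, nothing more.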