AI is upon us, and with its coming society has been abruptly sorted into three groups: (1) the true believers; (2) the small tribe of the indifferent; and (3) the Cassandras or, to use the language of the internet, the ‘doomers’.
To keep this blog short, I will pass over the true believers and move straight on to the Cassandras and their visions of the AI apocalypse. In particular, I want to share what is (at least for me) a relatively new and interesting way of thinking about the downside of the AI take-off (if that is what we are witnessing). So far the more lurid visions have dominated public discourse. Here, for example, is Nick Bostrom, the founding director of the (now defunct) Future of Humanity Institute at Oxford, on how AI might dispose of humanity in a fit of monomaniacal psychopathy:
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
One is reminded of the closing stanza of Eliot's The Hollow Men - ‘this is the way the world ends, not with a bang but a whimper’ - only Eliot was not visionary enough to foresee the accompanying global deluge of stationery. Ironically, from the economist's perspective, what has gone wrong here is not that the AI is behaving irrationally; rather, it is acting in a way that is - at least within the framework of traditional rational choice theory - utterly and uncompromisingly sane. We have been optimised out of existence by a more perfect utility-seeking machine. What would Francis Edgeworth, one of the original geniuses behind rational choice theory, have made of this? After all, it was Edgeworth who first told us that, for the purposes of the ‘economical calculus’, any ‘individual experiencing a unit of pleasure-intensity during a unit of time is to “count for one”’. Surely, then, creating a machine that can continually ‘be realising the maximum energy of pleasure’ would be a marvellous achievement. For utility is utility, whether it sits in the head of a man or runs along the logic switches and neural nets of The Great Stationer.
Anyway, Alan Turing was already thinking about the overthrow of mankind by a super-AI in the early 1950s, when the cutting edge in the field was the McCulloch-Pitts model neuron. Turing never seems to have been much worried by this prospect, even though he thought it inevitable:
"It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon."
Revisiting this famous passage from Turing's 1951 lecture 'Intelligent Machinery: A Heretical Theory', the line that now stands out to me is the one about machines conversing with one another to sharpen their wits. That, apparently, is what happens with chess engines and other game-playing AIs. But the more flexible, omni-competent systems seem to be wholly parasitical on human intelligence. So far, efforts to create ‘synthetic data’ - generated by AIs and used, in turn, to train the same machines - have proved largely ineffective, leading to rapid degradation in performance (a phenomenon sometimes called ‘model collapse’). And so, once all human inputs are drowned out by AI-generated material, one could easily envisage a race down the gradient of stupidity by competing machines to the final limit of absolute idiocy.
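For the curious, the degenerative dynamic can be illustrated with a toy simulation - a deliberate caricature, in which ‘training’ is nothing more than fitting a Gaussian to data and each generation of ‘model’ learns only from the synthetic samples drawn from its predecessor. This says nothing about how real language models are trained; it merely shows how estimation error compounds when a model feeds on its own output:

```python
import random
import statistics

def fit(samples):
    """'Train' a model: estimate mean and spread from the data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    """Produce n 'synthetic' samples from the fitted model."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)

# Generation 0 trains on a small batch of 'human' data from N(0, 1).
data = [rng.gauss(0.0, 1.0) for _ in range(5)]

spreads = []
for generation in range(300):
    mu, sigma = fit(data)
    spreads.append(sigma)
    # Each new generation trains only on the previous one's output.
    data = generate(mu, sigma, 5, rng)

print(f"spread at generation 0:   {spreads[0]:.4f}")
print(f"spread at generation 299: {spreads[-1]:.4g}")
```

With tiny training sets, each generation's estimate of the spread is noisy, and the noise compounds multiplicatively across generations: the fitted distribution drifts and its diversity withers toward zero. The tails - the rare, interesting material - vanish first, which is the statistical shadow of the ‘race down the gradient of stupidity’.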
Which brings me on to the alternative vision of the AI apocalypse I mentioned earlier - courtesy of Alexandre Borovik, Professor of Pure Mathematics at Manchester University:
AI: Replacement Of Robotic Humans With Pseudo-Humanoid Robots
Posted on 3 May 2023 by Alexandre Borovik
I’m afraid that the discussion of Artificial Intelligence ignores the main issue, and this is a global, existential issue, and this is a question of preserving the culture of mankind: very soon AI will be able to replace people in 60%, 70%, 90% of human “intellectual” activities, because these activities have been already purged of most intellectual content. An example: look at roles of people in the over-regulated and dehumanised administrative structures in (British) universities.
About 15 years ago, one of my young colleagues, a talented mathematician, leaving academia and a university career to work in a start-up, told me: “My new task is to make middle-level managers unemployed.”
In short, AI is the replacement of robotic humans with pseudo-humanoid robots (my neologism for “robots pretending to be superhuman”).
The key line is 'preserving the culture of mankind'. We may be facing a fate worse than the one envisaged by Bostrom. Instead of being turned into paperclips, we will be turned into robots by the piece-by-piece replacement of human culture with something inhuman. Borovik calls this the 'replacement of robotic humans with pseudo-humanoid robots' (which suggests that the process started some decades ago). I prefer the language of C. S. Lewis: this would be the abolition of man.
An interesting take on the AI question, and one that perhaps captures a nuance the other "models" of AI often omit - AI requires human inputs to grow; its own data is garbage that cannot be refined further for its own advancement. Perhaps the models we use lack something; perhaps the internal logic of AI will only produce faulty material if fed more of its own logic back; perhaps human creativity has something that AI is simply incapable of replicating; or perhaps language models are limited by language itself.
Could we see mediocre AI replace humans in mediocre jobs, forcing the rest of us to upskill so we can do the truly important jobs? And do we have enough of those to go around?