Introduction
Artificial Intelligence is not intelligent, but it is changing the world. It certainly poses a risk to some - perhaps many - jobs, but does it pose a risk to the human race, as some have claimed?
Some Significant Risks
It is worth noting that AI products can be very successful in a limited domain without any significant progress towards what many AI researchers regard as the actual goal: AGI, or Artificial General Intelligence - intelligence which is not limited to a specific domain or set of domains. An AI might be able to write a plausible essay on the causes of the French Revolution while being useless at Mathematics; an AGI needs to be able to do both.
The introduction of AI poses a number of significant risks. It is hard to say how large any of these risks actually is, in part because they are so obviously risks that some effort will presumably be made toward mitigating them, and in part because awareness of the risks will affect - to some extent - how people use the AI in practice.
Bias. AI models need to be trained on large amounts of data, so they will reliably reproduce the bias and injustice recorded in that data.
Energy. Training an AI requires a great deal of energy and computing power, and each use of it requires a significant amount more. Projections of future energy demand have been revised upwards as a result of anticipated AI use.
Legality. AI models are trained on data, and this often includes copyright material. Many content creators (and content owners) deeply oppose the use of their material in this way, without payment and without credit. There are various legal challenges here, both actual and potential. Some small progress in this area can be seen in the publication of the EU General Purpose AI Code of Practice on 10 July 2025.
Sabotage. Most of the material we use to train AIs comes from the Internet, but an increasing amount of the material on the Internet is itself generated by AIs. Unless the people creating AIs can somehow break this loop, future generations of AI will become more detached from reality, rather than reflecting it more effectively (a toy sketch of this loop appears after this list of risks).
Employment. Some jobs will probably disappear as a result of AI (who needs drivers when cars can drive themselves?) and other jobs will be so enhanced by AI that far fewer people are needed (see, for example, Agentic AI Is Quietly Replacing Developers). In the past, new technology has destroyed some jobs, while creating others; it is not yet clear how many new jobs will be created by AI; in any case, there is always a delay (and much suffering) between the destruction of the old jobs and the creation of the new.
Trust. As people become used to using AI products, they will learn to trust them, so when the AI fails - and it will - we may not spot the mistake before it is too late.
De-skilling. As people get used to using AI products, they will no longer need the same skills, and they will not build up the same experience and expertise as they did when they did the work themselves.
Weapons. AI will inevitably be used in autonomous weapon systems; it is hard to see how we can avoid an AI-powered arms race once the technology is available, and hard to see how any global convention banning the military use of AI could possibly be policed. The USA is testing an AI anti-drone system (November 2024), and the Pentagon is considering whether to deploy it; Wikipedia describes some other current applications. We are unlikely to see any AI system becoming sentient and deciding to destroy the human race, but it is entirely possible that an AI system may initiate a military strike which escalates to full nuclear war: such mistakes have nearly happened on numerous occasions.
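The 'Sabotage' loop above can be illustrated with a toy sketch in Python. This is an invented illustration, not a model of any real AI system: the 'model' is just a fitted Gaussian, retrained each generation on data sampled from the previous generation's model. Because no fresh real-world data ever enters the loop, estimation errors compound.

```python
# Toy sketch of the train-on-your-own-output loop ("model collapse").
# Each generation fits a trivial model (mean and spread of a Gaussian)
# to its training data, then produces the next generation's training
# data by sampling from itself - with no fresh real-world input.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)   # generation 0: real-world data

for generation in range(20):
    mu, sigma = data.mean(), data.std()           # "train" on the current data
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=100)        # next generation trains on output
```

Run it and the fitted mean and spread drift away from the true values (0 and 1), with no mechanism to pull them back, because nothing in the loop is anchored to the original data. Published work on 'model collapse' in real language models reports the same qualitative pattern: successive generations progressively lose the tails of the original distribution.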
Reasonable Expectations
People have been making bold claims about AI for a long time, and some people have made a lot of money from doing this.
In this context, people often talk about the ‘singularity’. There are different definitions of this anticipated event, but it generally refers to the moment when artificial intelligence sufficiently surpasses human intelligence that it becomes able to improve itself. Many in the AI community believe that this will happen, perhaps soon, and will rapidly lead to irreversible changes, but the nature of those changes is debated; some believe it will result in the end of the human race.
People sometimes claim that AI self-improvement is demonstrated by AlphaZero. What it achieved is incredibly impressive, but it does not translate to the real world. We are told that AlphaZero played 44 million games of chess against itself in its first nine hours. But the universe AlphaZero operated within was a computer game, with very limited options, a fixed starting point, a defined end point, and a clear success criterion. It could play itself very rapidly, so quick feedback was straightforward.
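To see why feedback inside a closed game is so cheap, here is a minimal self-play loop in Python. It is a generic illustration, not AlphaZero's actual method (which pairs a deep network with Monte Carlo tree search), and the toy game and policy are invented for the sketch.

```python
# Minimal self-play loop: every step is a pure, instant function call,
# so a complete game - a full feedback cycle - costs microseconds.
# A generic illustration only, NOT AlphaZero's algorithm.
import random

def play_one_game(policy):
    """Play one self-contained game; return its moves and outcome."""
    state, moves = 0, []
    while abs(state) < 3:                    # a defined end point
        move = policy(state)                 # very limited options
        moves.append((state, move))
        state += move
    return moves, 1 if state > 0 else -1     # a clear success criterion

def policy(state):
    return random.choice([-1, 1])            # stand-in for a learned policy

wins = 0
for _ in range(100_000):                     # 100,000 feedback cycles in seconds
    _, outcome = play_one_game(policy)
    wins += outcome == 1
print(f"win rate: {wins / 100_000:.3f}")     # a learner would update the policy here
```

Every ingredient the loop depends on - instant moves, a defined end, an unambiguous score - is a property of the game, not of the world.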
But the aim of AGI is to provide a tool which can operate in the real world, and feedback in the real world is much slower - and rather harder to come by. At present, the AIs we use gain no feedback from the real world: they are generally trained on material scraped from the Internet, which includes Wikipedia and Physics papers on arXiv, but also publications by the Flat Earth Society and Harry Potter fanfiction. As noted, much of the material they are trained on is under copyright, which might throw a legal spanner in the works. None of these AIs can learn; not one can discover anything new by playing itself; the next generation is simply trained on a slightly different set of data, occasionally with a small tweak to the algorithm used to build the model.
Training the models is both time-consuming and expensive - a complex model can cost millions of dollars. But even if you could make it faster and cheaper, and even if you could somehow give it the ability to gain real-world input, you simply cannot speed up the real world to obtain real-world feedback faster. You can run fast simulations, maybe, but sooner or later you need to test those simulations and find out how closely they correspond to reality.
And even then, humans have to be in the loop. In the real world, things cost money; no AI is going to have its own bank account for a while. No AI is going to write a funding bid, employ the necessary people, rent the required offices and laboratories, manage the HR, purchase and configure the equipment and run the experiments, in order to test the accuracy of the simulation.
And Remember
Intelligence is not a thing: we can't define it, and we can't measure it. Intelligence tests only measure your ability to do intelligence tests, and all intelligence tests are culturally specific.
Living organisms display intelligence, and living organisms behave in ways which are fundamentally different from machines. You can explain chemistry in terms of physics - chemical reactions are 'emergent properties' - but you cannot explain biology purely in terms of chemistry. When someone acts out of jealousy or love, you cannot reduce that to, or explain it in terms of, chemical reactions. See our Framework and the article on Real Life for more about this.
However...
It may be suspected that a number of rich people are warning us about the dangers of AI, not because it is a risk to us (as they claim), but because it is a risk to them. AI may well become the latest in a long line of technological disruptors, and people who have made their money out of the current generation of technology may well fear that products empowered by AI might unseat them from their positions of power.
The mega-rich of our generation have made their money by building companies which generate money from computers and the Internet. But, while they might own those companies and make some high-level decisions, they don't actually control them. Nobody does, not even governments: they are too big, and too global. The Politics page puts it this way:
You can stop worrying about the danger of Artificial Intelligence taking over the world: it has already happened. The world is run by organizations - businesses, multinational corporations - which operate according to their own systems and logic. Many of them are legally people but, despite the many people working within them, they are - quite literally - not human. They are inhuman creatures, entities which we have created but cannot control, and they run the world (see What Controls the World?).
AI and Climate Change
AI is a tool which can be used to help us combat climate change, but is it worth the risk? Or does the risk posed by climate change outweigh the risk from AI? It is in some ways analogous to the calculation performed by Robert Oppenheimer (as seen in the recent film): the first atomic bomb might have caused a chain reaction which destroyed the world, but the risk was judged small, and worth taking in order to win the war.
While AI might help improve the technology we need to combat climate change, the amount of energy required to train and run AI systems is significant ("Making an image with generative AI uses as much energy as charging your phone") and rapidly growing. It may help us, but the cost of that help, in pure energy and carbon emission terms, may be greater than the benefit the AI delivers.
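As a rough sanity check on that quoted headline, here is a back-of-envelope calculation. The figures are assumptions for illustration, taken from the study reported to be behind the headline (Luccioni et al., "Power Hungry Processing"), and describe the least efficient image model measured there.

```python
# Back-of-envelope check on the "one image = one phone charge" claim.
# Both figures are illustrative assumptions taken from published reports.
PHONE_CHARGE_KWH = 0.012        # energy for one full smartphone charge
IMAGE_GEN_KWH_PER_1000 = 11.49  # least efficient image model in the study

per_image = IMAGE_GEN_KWH_PER_1000 / 1000
print(f"energy per image: {per_image:.4f} kWh "
      f"= {per_image / PHONE_CHARGE_KWH:.2f} phone charges")
# -> energy per image: 0.0115 kWh = 0.96 phone charges
```

On those figures the headline holds for the worst case measured; more efficient image models reportedly use orders of magnitude less energy, so the per-image cost varies enormously with the model chosen.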
See Also
- The Alison Pickard interview
- CNN: Meta created an AI advisory council that’s composed entirely of White men
- BBC: Scarlett Johansson's AI row has echoes of Silicon Valley's bad old days
- BBC: Business locked in expensive AI 'arms race'
- RTE: Tech companies call for introduction of AI regulations
- Scientific American: AI Chatbots Have Thoroughly Infiltrated Scientific Publishing
- New Scientist: Analogue chips can slash the energy used to run AI models (23 August 2023)
- Guardian: Can AI Help Combat Loneliness?
- MIT Technology Review: What I learned from the UN's "AI for Good" Summit
- The Register: Microsoft's carbon emissions up nearly 30% thanks to AI
- Evening Standard: Bloomberg eyes more data centres as it ramps up AI capability
Comments
A slight diversion to a topic related to AI, which perhaps underlies the craziness - an approach I hadn't heard of before: Accelerationism. Hint: you won't like it one little bit. I have a word for this, but it's not a nice one.
The connection here is that AI is the latest social disrupter from - you guessed it - Silicon Valley. What better way to intentionally destroy the world? And if that isn't scary enough, there is always population collapse to worry about ...
Hi Paul,
Exactly - AI = Stochastic Parrot
AI will inevitably reinforce, amplify and reflect the pre-existing biases that exist in its training data.
The patterns that AI responds to can generalise existing solutions - and it is simply quite extraordinary what can be done in this way. But is it truly extraordinary, and original? Perhaps not. The AI approach to art is the big giveaway to what it does - it treats creativity as a form of directed randomness, throwing stuff out there.
Here are some further AI sceptic links: