Introduction
Artificial Intelligence is not intelligent, but it is changing the world. It certainly poses a risk to some - perhaps many - jobs, but does it pose a risk to the human race, as some have claimed?
Some Significant Risks
One significant risk is that AI models are trained on large amounts of human-generated data, so they will reliably reproduce any bias and injustice recorded in that data.
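To make this concrete, here is a minimal, purely illustrative sketch in Python. The "historical" decisions and the frequency-based "model" are invented for the example; no real system is this simple, but the point - that a model fitted to a biased record replays that record - carries over to models trained on real data.

```python
# Purely illustrative: synthetic "historical" decisions and a toy
# frequency-based "model". Group B was treated worse in the past.
history = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +   # group A: approved 80% of the time
    [("B", 1)] * 50 + [("B", 0)] * 50     # group B: approved 50% of the time
)

# "Training": learn the approval rate per group from the biased record.
approved, seen = {}, {}
for group, outcome in history:
    seen[group] = seen.get(group, 0) + 1
    approved[group] = approved.get(group, 0) + outcome
model = {group: approved[group] / seen[group] for group in seen}

# "Inference": the model faithfully replays the historical disparity.
for group in ("A", "B"):
    print(f"Predicted approval rate for group {group}: {model[group]:.0%}")
```

A real model trained on real data is vastly more sophisticated, but the underlying dynamic is the same: whatever disparity the record contains becomes the pattern the model learns.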
Another significant risk is that people will learn to trust AI, so when it fails - and it will - we may not spot the mistake before it is too late.
One significant concern is about the use of AI in autonomous weapon systems: it is hard to see how we can avoid an AI-powered arms race once the technology is available, and hard to see how any global convention banning the military use of AI could possibly be policed. The USA is testing an AI anti-drone system (November 2024), and the Pentagon is considering whether to deploy it; Wikipedia describes some other current applications. We are unlikely to see any AI system becoming sentient and deciding to destroy the human race, but it is entirely possible that an AI system may initiate a military strike which escalates to full nuclear war: such mistakes have nearly happened on numerous occasions.
However...
One might suspect that a number of rich people are warning us about the dangers of AI not because it is a risk to us (as they claim), but because it is a risk to them. AI may well become the latest in a long line of technological disruptors, and people who have made their money from the current generation of technology may well fear that AI-powered products will unseat them from their positions of power.
The mega-rich of our generation have made their fortunes by building companies which make money from computers and the Internet. But while they may own those companies and make some high-level decisions, they don't actually control them. Nobody does, not even governments: they are too big, and too global. The Politics page puts it this way:
You can stop worrying about the danger of Artificial Intelligence taking over the world: it has already happened. The world is run by organizations - businesses, multinational corporations - which operate according to their own systems and logic. Many of them are legally people but, despite the many people working within them, they are - quite literally - not human. They are inhuman creatures, entities which we have created but cannot control, and they run the world (see What Controls the World?).
AI and Climate Change
AI is a tool which can be used to help us combat climate change, but is it worth the risk? Or does the risk posed by climate change outweigh the risk from AI? The question is in some ways analogous to the calculation made by Robert Oppenheimer (as dramatised in the recent film): the first atomic bomb might set off a chain reaction that destroyed the world, but the risk was judged to be small, and worth taking in order to win the war.
While AI might help improve the technology we need to combat climate change, the amount of energy required to train and run AI systems is significant ("Making an image with generative AI uses as much energy as charging your phone") and rapidly growing. It may help us, but the cost of that help, in pure energy and carbon-emission terms, may outweigh the benefit the AI delivers.
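For a sense of scale, here is a rough back-of-envelope sketch of the kind of arithmetic behind such comparisons. Every figure in it is an assumed placeholder, not a measured value; substitute the numbers from whichever study you are citing.

```python
# Back-of-envelope only: all figures below are assumptions for illustration.
WH_PER_IMAGE = 3.0          # assumed energy per generated image (watt-hours)
WH_PER_PHONE_CHARGE = 12.0  # assumed energy to fully charge a typical phone
IMAGES_PER_DAY = 1_000_000  # assumed daily volume for a popular service

daily_kwh = WH_PER_IMAGE * IMAGES_PER_DAY / 1000
phone_charges = WH_PER_IMAGE * IMAGES_PER_DAY / WH_PER_PHONE_CHARGE

print(f"Energy for {IMAGES_PER_DAY:,} images/day: {daily_kwh:,.0f} kWh")
print(f"Equivalent full phone charges: {phone_charges:,.0f}")
```

Even with modest per-image figures, the daily totals add up quickly, which is the point the quoted headline is making.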
See Also
- CNN: Meta created an AI advisory council that’s composed entirely of White men
- BBC: Scarlett Johansson's AI row has echoes of Silicon Valley's bad old days
- BBC: Business locked in expensive AI 'arms race'
- RTE: Tech companies call for introduction of AI regulations
- Scientific American: AI Chatbots Have Thoroughly Infiltrated Scientific Publishing
- New Scientist: Analogue chips can slash the energy used to run AI models (23 August 2023)
- Guardian: Can AI Help Combat Loneliness?
- MIT Technology Review: What I learned from the UN's "AI for Good" Summit
- The Register: Microsoft's carbon emissions up nearly 30% thanks to AI
- Evening Standard: Bloomberg eyes more data centres as it ramps up AI capability
Comments
A slight diversion to a topic related to AI, which perhaps underlies the craziness - an approach I hadn't heard of before: Accelerationism. Hint: you won't like it one little bit. I have a word for this, but it's not a nice one.
The connection here is that AI is the latest social disruptor from - you guessed it - Silicon Valley. What better way to intentionally destroy the world? And if that isn't scary enough, there is always population collapse to worry about ...
Hi Paul,
Exactly - AI = Stochastic Parrot
AI will inevitably reinforce, amplify and reflect the pre-existing biases in its training data.
The patterns that AI responds to can generalise existing solutions - and it is simply quite extraordinary what can be done in this way. But is it truly extraordinary, and original? Perhaps not.
The AI approach to art is the big giveaway to what it does: it treats creativity as a form of directed randomness, throwing stuff out there.
Here are some further AI sceptic links: