Artificial Intelligence

Introduction

Artificial Intelligence is not intelligent, and it is not (and probably never will be) sentient, but it is important, and it is changing the world.  It certainly poses a risk to some - perhaps many - jobs, but it seems likely that the risks it poses to the human race are seriously misunderstood by many people.

Some Significant Risks

It is worth noting that AI products can be very successful in a limited domain without any significant progress towards what many AI researchers regard as the actual goal: AGI - Artificial General Intelligence, intelligence which is not limited to a specific domain or set of domains.  An AI might be able to write a plausible essay on the causes of the French Revolution while being useless at Mathematics; an AGI needs to be able to do both.

The introduction of AI poses a number of significant risks.  It is hard to say how large any of these risks actually is, in part because they are so obviously risks that some effort will presumably be made toward mitigating them, and in part because awareness of the risks will affect - to some extent - how people use the AI in practice. 

Bias.  AI models need to be trained on large amounts of data, so they will reliably reproduce any bias and injustice which is recorded in the data.

Energy.  It requires a great deal of energy and computing power to train an AI, and a significant amount each time it is used.  Projections of future energy use have been revised upwards as a result of the anticipated use of AI.  The EU General Purpose AI Code of Practice ('GPAI'), issued in 2025, is interested in AI models trained with computing resources that exceed 10^23 FLOPs (equivalent to around 10,000,000,000 decent home computers), and asks for additional commitments from those distributing general-purpose AI models trained using a total computing power of more than 10^25 FLOPs - these are large numbers.
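
To get a feel for those thresholds, here is a rough back-of-envelope calculation; the assumed speed of a 'decent home computer' (around 10^13 operations per second) is an illustrative figure, not one taken from the Code.

    # Back-of-envelope arithmetic for the GPAI thresholds quoted above.
    # The per-machine speed is an assumption for illustration, not an official figure.

    threshold_flops = 1e23               # lower GPAI threshold: total operations used in training
    home_pc_flops_per_second = 1e13      # assumed: a decent home computer at ~10 TFLOP/s

    seconds_on_one_pc = threshold_flops / home_pc_flops_per_second
    years_on_one_pc = seconds_on_one_pc / (60 * 60 * 24 * 365)

    print(f"One PC, flat out: about {years_on_one_pc:,.0f} years")
    print(f"Or about {seconds_on_one_pc:,.0f} PCs each contributing one second of work")
    print(f"The upper threshold (1e25 FLOPs) is {1e25 / threshold_flops:,.0f} times larger still")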

Legality.  AI models are trained on data, and this often includes copyright material.  Many content creators (and content owners) deeply oppose the use of their material in this way, without payment and without credit.  There are various legal challenges here, both actual and potential.  Some small progress in this area can be seen in the publication of the EU General Purpose AI Code of Practice on 10 July 2025.

Sabotage.  Most of the material we use to train AIs comes from the Internet, but an increasing amount of material on the Internet is being generated by AIs.  This is a form of self-sabotage: they are devaluing the very data on which they need to be trained - and some reports suggest that this is happening at an alarming rate.  Unless the people creating AIs can somehow break this loop, future generations of AI, instead of reflecting reality more effectively, will become more detached from it.

Volume.  It's not only material on the Internet that is being generated by AI: academics are being overwhelmed by the number of academic papers being published which are partly or entirely written by AI.  These papers have little or no actual value beyond counting as a publication for the authors and generating money for the publishers, and they inflict significant harm - both on the careers of academics who publish fewer papers because they write their own and do genuine research, and on those who waste time, effort and money trawling through far more papers to find the useful content.

Employment.  Some jobs will probably disappear as a result of AI (who needs drivers when cars can drive themselves?) and other jobs will be so enhanced by AI that far fewer people are needed (for example, see Agentic AI Is Quietly Replacing Developers).  In the past, new technology has destroyed some jobs, while creating others; it is not yet clear how many new jobs will be created by AI; in any case, there is always a delay (and much suffering) between the destruction of the old jobs and the creation of the new.

Trust.  As people become used to using AI products, they will learn to trust them, so when the AI fails - and it will - we may not spot the mistake before it is too late.

De-skilling.  As people get used to using AI products, they will no longer need the same skills, and they will not build up the same experience and expertise as they did when they did the work themselves.  Technology always does this: we don't use mental arithmetic as previous generations did, because we have electronic calculators available whenever we need them.

Weapons.  AI will inevitably be used in autonomous weapon systems; it is hard to see how we can avoid an AI-powered arms race once the technology is available, and hard to see how any global convention banning the military use of AI could possibly be policed.  The USA is testing an AI anti-drone system (November 2024), and the Pentagon is considering whether to deploy it;  Wikipedia describes some other current applications.  We are unlikely to see any AI system becoming sentient and deciding to destroy the human race, but it is entirely possible that an AI system may initiate a military strike which escalates to full nuclear war: such mistakes have nearly happened on numerous occasions. 

Of course, people will have differing views on what is a significant risk.  Using an AI to generate computer code is often considered a low-risk activity these days - we can test the code, right?  But AI is not restricted to the sort of mistakes we anticipate: one developer discovered that the AI he was using had destroyed months of work and deleted both the test and production databases.  An old adage tells us that "To err is human, but to really foul things up you need a computer," and now we know that to really wreck your system you need an AI.
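
One common mitigation is to run AI-generated code only against a disposable copy of the data, and to screen it for obviously destructive commands first.  The sketch below illustrates the idea; the function, the table name and the keyword list are invented for the example, and a real safeguard would need to be far more thorough.

    # A minimal, illustrative safeguard for AI-generated SQL - not a complete solution.
    import re

    DESTRUCTIVE = re.compile(r"\b(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

    def run_generated_sql(statement: str, environment: str) -> None:
        """Run an AI-generated statement only in a sandbox, and never if it looks destructive."""
        if environment != "sandbox":
            raise RuntimeError("AI-generated SQL may only run against the sandbox copy")
        if DESTRUCTIVE.search(statement):
            raise RuntimeError(f"Refusing to run a potentially destructive statement: {statement!r}")
        print(f"(sandbox) would execute: {statement}")

    run_generated_sql("SELECT count(*) FROM orders", "sandbox")   # allowed
    # run_generated_sql("DROP TABLE orders", "production")        # would be refused twice over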

Reasonable Expectations

People have been making bold claims about AI for a long time, and some people have made a lot of money from doing this.

In this context, people often talk about the ‘singularity’.  There are different definitions of this anticipated event, but it generally refers either to the moment when artificial intelligence surpasses human intelligence, or to the point when it becomes able to improve itself, or (because the two are believed to be connected) to both.  Many in the AI community believe that this will happen, perhaps soon, and will rapidly lead to irreversible changes, but the nature of those changes is debated; some believe it will result in the end of the human race.

People sometimes claim that AI self-improvement is demonstrated by AlphaZero.  What it achieved is incredibly impressive, but it does not translate to the real world.  We are told that AlphaZero played 44 million games of chess against itself in its first nine hours.  But the universe AlphaZero operated within was a computer game, with very limited options, a fixed starting point, a defined end point, and a clear success criterion.  It could validly play itself very rapidly, so quick feedback was straightforward to obtain.
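
The toy example below - which is emphatically not AlphaZero, just random play at noughts and crosses - shows why that kind of feedback is so cheap: the whole 'world' is a tiny, perfectly known game, so a huge number of complete games, each ending in an unambiguous win, loss or draw, can be generated in seconds.

    # Self-play in a toy universe: random noughts and crosses, for illustration only.
    import random

    WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def self_play_game():
        board = [None] * 9
        player = "X"
        while True:
            moves = [i for i, v in enumerate(board) if v is None]
            if not moves:
                return "draw"                       # clear, immediate feedback
            board[random.choice(moves)] = player
            if winner(board):
                return player                       # clear, immediate feedback
            player = "O" if player == "X" else "X"

    results = {"X": 0, "O": 0, "draw": 0}
    for _ in range(100_000):                        # a hundred thousand complete games
        results[self_play_game()] += 1
    print(results)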

But the aim of AGI is to provide a tool which can operate in the real world, and feedback in the real world is somewhat slower - and rather harder to come by.  At present, the AIs we use do not gain any feedback from the real world: they are generally trained on material scraped from the Internet, which includes Wikipedia and physics papers on arXiv, but also publications by the Flat Earth Society and Harry Potter fanfiction.  As noted, much of the material they are trained on is under copyright, which might throw a legal spanner in the works.  None of these AIs can learn; not one can discover anything new by playing itself; the next generation is simply trained on a slightly different set of data, occasionally with a small tweak to the algorithm used to build the model.

Training the models is both time-consuming and expensive - a complex model can cost millions of dollars.  As noted above, the EU GPAI is interested in AI models trained with computing resources that exceed 10^23 FLOPs.  But even if you could make training faster and cheaper, and if you could somehow give the model the ability to gain real-world input, you simply cannot speed up the real world to obtain real-world feedback faster.  You can run fast simulations, maybe, but sooner or later you need to test those simulations and find out how closely they correspond to reality.

And even then, humans have to be in the loop.  In the real world, things cost money; no AI is going to have its own bank account for a while.  No AI is going to write a funding bid, employ the necessary people, rent the required offices and laboratories, manage the HR, purchase and configure the equipment and run the experiments, in order to test the accuracy of the simulation.

Where AIs are proving useful is in suggesting and designing promising experiments - they enhance the work of some teams of scientists, and it is reasonable to expect that this benefit will spread to other teams, working in other domains, in the not-too-distant future.

And Remember

Intelligence is not a thing: we can't define it, and we can't measure it.  Intelligence tests only measure your ability to do intelligence tests, and all intelligence tests are culturally specific.  While we can (often) recognise it, we don't know what intelligence is, and we can't increase the intelligence of anything we have made by 'adding more' intelligence.

We can't define intelligence, but we can describe it: intelligence, as we observe it, is an attribute of living organisms which enables them to survive and reproduce more effectively.  Living organisms display intelligence, and living organisms behave in ways which are fundamentally different from machines.  You can explain chemistry in terms of physics - chemical reactions are 'emergent properties' - but you cannot explain biology purely in terms of chemistry.  When someone acts out of jealousy or love, you cannot reduce that to, or explain it in terms of, chemical reactions.  See our Framework for more on the importance of not confusing the different aspects of reality, and the article on Real Life for more about the difference between machines and living organisms.

However...

It may be suspected that a number of rich people are warning us about the dangers of AI, not because it is a risk to us (as they claim), but because it is a risk to them.  AI may well become the latest in a long line of technological disruptors, and people who have made their money out of the current generation of technology may well fear that products empowered by AI might unseat them from their positions of power.

The mega-rich of our generation have made their money by building companies which generate money from computers and the Internet.  But, while they might own those companies and make some high-level decisions, they don't actually control them.  Nobody does, not even governments: they are too big, and too global.  As the Politics page puts it:

You can stop worrying about the danger of Artificial Intelligence taking over the world: it has already happened. The world is run by organizations - businesses, multinational corporations - which operate according to their own systems and logic. Many of them are legally people but, despite the many people working within them, they are - quite literally - not human. They are inhuman creatures, entities which we have created but cannot control, and they run the world (see What Controls the World?).

AI and Climate Change

AI is a tool which can be used to help us combat climate change, but is it worth the risk?  Or does the risk caused by climate change outweigh the risk from AI?  It is in some ways analogous to the calculation performed by Robert Oppenheimer (as seen in the recent film) - the first atomic bomb might cause a chain reaction which destroys the world, but the risk is small, and worth taking in order to win the war.

While AI might help improve the technology we need to combat climate change, the amount of energy required to train and run AI systems is significant ("Making an image with generative AI uses as much energy as charging your phone"), and rapidly growing.  It may help us, but the cost of that help, in pure energy and carbon emission terms, may be greater than the benefit the AI delivers.
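
To make the scale concrete, here is a rough, illustrative calculation built on the comparison quoted above; every figure in it (the energy per phone charge, the number of images generated per day, a household's daily electricity use) is an assumption chosen for the arithmetic, not a measurement.

    # Rough scaling of the 'one image = one phone charge' comparison - assumptions, not data.

    phone_charge_kwh = 0.015                      # assumed energy to fully charge a typical smartphone
    energy_per_image_kwh = phone_charge_kwh       # taking the quoted comparison at face value

    images_per_day = 10_000_000                   # assumed number of images generated worldwide per day
    daily_kwh = images_per_day * energy_per_image_kwh

    household_daily_kwh = 8                       # assumed daily electricity use of one household
    print(f"{daily_kwh:,.0f} kWh per day - roughly {daily_kwh / household_daily_kwh:,.0f} households' worth")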

See Also

 

(Version 3, 23 July 2025)
