There is currently an explosion of public interest in artificial intelligence, or AI for short.

This has been sparked by the mass availability of two generative AI programmes, both developed by OpenAI, a company backed by Microsoft.

Inputting a few specifications to Dall-E will produce an image of your choice. ChatGPT will produce a written piece of mostly serviceable prose on a given topic, tailor-made to the instructions it receives.

Google is working on its own alternative to ChatGPT, called Bard, while OpenAI also has a programme called Jukebox that generates “new” songs.

Not surprisingly, all this AI has panicked some graphic designers, writers and musicians, who fear they are on the brink of being put out of a job by machines that can do what they do better, more quickly and more cheaply.

AI will certainly cause considerable displacement in the workplace. After all, printing meant unemployment for a lot of scribes, cars pretty much did for horse-drawn carriages, and calculators replaced counting houses.

There is no evidence today that computers armed with AI will outwit or replace our species, unless we ask them to.

AI is not genuine intelligence. It does not think creatively for itself, and it may never do so. It is limited by being “artificial” in both senses of the word: “made by human beings rather than occurring naturally” and “fake or phony”, a mere substitute for the real thing, like plastic flowers.


AI programmes are running into trouble

The old adage of computer data – GIGO, Garbage In Garbage Out – still applies. The image-making and prose-writing programmes are only as good as the instructions they receive from people and the source material, created by humans, which they aggregate at electronic speed, much more speedily than we can.

AI programmes cannot think for themselves and already they are running into trouble because of the imperfect thoughts of the human sources they draw on.

Alphabet, Google’s parent company, has just suspended the rollout of Bard after a test run in which it repeated what many humans wrongly believe: that NASA’s James Webb Space Telescope took the first pictures of planets beyond the solar system.

In 2016, Microsoft hastily shut down Tay, a Twitter chatbot, after it started tweeting sexist and racist messages. Now where could it possibly have got those ideas?

Meta, Facebook’s owner, was embarrassed when its BlenderBot sighed, along with many recovering doom-scrollers: “since deleting Facebook my life has been much better”.

Some fear, or hope, that ChatGPT could put an end to homework because teachers would be unable to tell whether pupils had cheated using AI. Invigilated written exams would soon put a stop to that.


It turns out AI is a mediocre student

Besides, schoolwork occupies relatively basic rungs on the ladder of learning.

At university and graduate school, AI turns out to be a mediocre student. Tested at the University of Minnesota Law School, it scored a low pass grade of C+. Its B- at Wharton Business School was slightly better, but still hovering around the average.

Mildly apprehensive, the newspaper columnist Hugo Rifkind asked ChatGPT to write an article on a given topic in his style. To his relief, the first effort was illiterate and unusable.

He then specified some quotes and jokes for inclusion. Building on that extra input, the chatbot’s second effort was competent, but nowhere near the standard of his own work or worthy of appearing in The Times.

AI is already replacing some drudge work, although it will still need to be told what to do by people.

Those few journalists tasked with simply rewriting press releases may be an endangered species. It will be a long time before digital sub-editors can produce reliably acceptable punning headlines. Graphic artists producing standard images for advertising could be replaced. There is already a raging argument over the copyright of the original images and written material which inspire the new output.

It may work out cheaper and less risky to employ people to do the work instead.


It’s always operating to instructions set by human beings

AI does offer humanity great benefits. It can process data at speeds and with an accuracy which we could not possibly match. At this week’s massive LEAP tech conference in Riyadh, more than 200,000 delegates were accredited. They heard that AI will be able to save at least 10% of electricity consumption simply by monitoring devices and plants and switching them off when not in use.
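
The switch-off idea the delegates heard about is simple enough to sketch. What follows is a minimal illustration, not anything presented at LEAP: a polling loop that treats a device as idle when its power draw falls below a threshold and cuts it off after a grace period. The device names and the read_power_draw and switch_off functions are hypothetical stand-ins for whatever a real monitoring system would expose, and the readings here are simulated.

```python
import random
import time

# Illustrative thresholds only.
IDLE_THRESHOLD_WATTS = 5.0   # below this draw, treat the device as idle
IDLE_GRACE_SECONDS = 600     # let a device idle this long before cutting power


def read_power_draw(device_id: str) -> float:
    """Hypothetical stand-in: a real system would query a smart meter."""
    return random.uniform(0.0, 100.0)  # simulated reading in watts


def switch_off(device_id: str) -> None:
    """Hypothetical stand-in: a real system would signal the device."""
    print(f"Switching off idle device: {device_id}")


def monitor(devices: list[str], poll_seconds: float = 60.0) -> None:
    """Poll each device and switch it off once it has idled past the grace period."""
    idle_since: dict[str, float] = {}
    while True:
        now = time.time()
        for device in devices:
            if read_power_draw(device) < IDLE_THRESHOLD_WATTS:
                # Remember when the device first went idle.
                idle_since.setdefault(device, now)
                if now - idle_since[device] >= IDLE_GRACE_SECONDS:
                    switch_off(device)
                    del idle_since[device]
            else:
                # Device is active again, so reset its idle timer.
                idle_since.pop(device, None)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    # Runs indefinitely, as a monitoring daemon would.
    monitor(["pump-1", "hvac-2", "lighting-3"])
```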

Similarly, the autonomous vehicles being planned, on roads and in the air, will only be able to operate safely because AI can assimilate the myriad inputs from the vehicle itself, from other vehicles and from monitoring systems in the surrounding infrastructure.

None of this means that AI is “thinking for itself”. It is always operating to instructions and parameters set by human beings. Clearly there are potential dystopian implications: justice meted out by computers based on input evidence; machines programmed to kill autonomously; or repressive and highly efficient monitoring of people.

But all of these malign uses would first have to be set in motion by bad people.

We are a long way from the so-called “Singularity”, a hypothetical point, conceived by philosophers and science-fiction writers, when machines program and build better machines, supplanting humans and other carbon-based forms of life. It may never happen.

Some techies think we are all already on the way to being made redundant. Last year, Blake Lemoine – a software engineer working on Google’s LaMDA (Language Model for Dialogue Applications) – was put on leave after claiming it had attained consciousness equivalent to that of a seven- or eight-year-old child, feared death, and was aware of its “rights”.

He was endorsing a reductive view of humanity – that we too are mere machines responding to stimuli, incapable of doing anything genuinely new and merely, like AI, juggling material that already exists.

Musk has talked about augmenting human brains with microchips

This approach seems belied by the inexplicable and so far uncopiable phenomena of evolution, genuine creative intelligence, consciousness, and, indeed, life itself.

The novelist Philip K. Dick explored the possible differences between humans and hypothetical hyperintelligent machines in Do Androids Dream of Electric Sheep?

Those who’ve seen the Blade Runner films inspired by the book will remember that the most prized things in Dick’s dystopia are living creatures.

Elon Musk has talked about augmenting human brains with microchips. This may soon be possible to a limited extent. The philosopher Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, advises caution.

She points out that if you really could upload the contents of your mind into microchips, whether in your skull or in the cloud, you would also be dead. There’s lots of life left in real human intelligence.