Fears of a threat to humanity have prompted key players in artificial intelligence to call for the suspension of powerful AI system training.
They have signed an open letter warning of potential risks, saying the race to develop AI systems is out of control.
Elon Musk, CEO of Twitter, is among those who want training of AI systems above a certain capability to be suspended for at least six months.
Steve Wozniak, co-founder of Apple, and some DeepMind researchers also signed on.
OpenAI, the firm behind ChatGPT, recently announced GPT-4 – a cutting-edge technology that has impressed observers with its capacity to perform tasks like answering queries about objects in photos.
The letter, published by the Future of Life Institute and signed by the figures above, asks for development to be paused at that level, warning of the risks that future, more advanced systems could bring.
It states that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."
The Future of Life Institute is a non-profit organization whose stated mission is to "steer transformative technology towards benefitting life and away from extreme large-scale risks."
Mr Musk, who owns Twitter and is chief executive of Tesla, is listed as an external adviser to the organization.
According to the letter, advanced AIs must be developed with care, but "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
The letter warns that AIs could flood information channels with misinformation and automate away jobs.
The letter comes in the wake of a recent analysis by investment bank Goldman Sachs, which said that while AI was likely to boost productivity, millions of jobs could be automated.
Some experts, though, told the BBC that the impact of AI on the labor market was difficult to anticipate.
Outwitted and rendered obsolete
More speculatively, the letter asks: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete [sic] and replace us?"
“AI systems pose significant risks to democracy through weaponized disinformation, to employment through displacement of human skills, and to education through plagiarism and demotivation,” said Stuart Russell, a computer-science professor at the University of California, Berkeley and a signatory to the letter.
And in the future, advanced AIs may pose a "more general threat to human control over our civilization".
"Taking reasonable precautions is a small price to pay to mitigate these risks," Prof Russell said.
But Princeton computer-science professor Arvind Narayanan criticized the letter for focusing on "speculative, futuristic risk, ignoring the version of the problem that is harming people right now".
'Slow down'
In a recent blog post quoted in the letter, OpenAI warned of the risks of developing artificial general intelligence (AGI) recklessly: "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too."
"Coordination among AGI efforts to slow down at critical junctures will likely be important," the firm wrote.
OpenAI has not publicly responded to the letter, but the BBC has asked the company whether it supports the proposal.
Mr Musk was a co-founder of OpenAI, though he resigned from the organization's board several years ago and has been critical of its current direction.
Tesla’s autonomous driving functions, like most similar systems, rely on AI technology.
The letter asks AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".
If such a pause cannot be enacted quickly, governments should step in and institute a moratorium, the letter says.
It also calls for "new and capable regulatory authorities dedicated to AI".
A number of proposals for regulating AI have recently been put forward in the US, UK, and EU, though the UK has ruled out a dedicated AI regulator.