“These are not toys — expanding their capabilities is completely reckless.”
Artificial intelligence companies must be held accountable for the harm they cause, say the godfathers of AI technology. The authors and scientists also warn that developing advanced systems without safety checks is "completely foolhardy."
Powerful artificial intelligence systems threaten social stability — and artificial intelligence companies must be held accountable for the harm caused by their products, a group of senior experts, including two of the technology's godfathers, has warned.
According to The Guardian, Tuesday's remarks came as international politicians, technology companies, scientists and civil society figures prepared to gather at Bletchley Park next week for a summit on artificial intelligence safety.
A co-author of the policy proposals, drawn up by 23 experts, argues that it is "completely foolhardy" to develop ever more powerful artificial intelligence systems before understanding how to make them safe.
"It's time to take advanced artificial intelligence systems seriously," said Stuart Russell, a professor of computer science at the University of California, Berkeley. "These are not toys. Expanding their capabilities before we understand how to make them safe is completely reckless."
The scientist added: “There are more rules for sandwich shops than for artificial intelligence companies.”
The document urges governments to adopt a range of policies, including: dedicating a third of government AI research and development funding, and a third of company AI R&D resources, to the safe and ethical use of these systems; giving independent auditors access to artificial intelligence laboratories; establishing a licensing system for building advanced models; requiring artificial intelligence companies to adopt specific safety measures if dangerous capabilities are found in their models; and holding tech companies liable for foreseeable and preventable harm caused by their artificial intelligence systems.
Other co-authors of the paper include Geoffrey Hinton and Yoshua Bengio, two of the three "godfathers of AI," who received the ACM Turing Award, the computing equivalent of the Nobel Prize, in 2018 for their work on artificial intelligence.
Both are among the 100 guests invited to the summit. Hinton resigned from Google this year to warn about what he called the "existential risk" posed by digital intelligence, while Bengio, a professor of computer science at the University of Montreal, joined thousands of other experts in signing a letter in March calling for a moratorium on giant artificial intelligence experiments.
Other co-authors of the proposals include Yuval Noah Harari, bestselling author of Sapiens; Nobel Prize-winning economist Daniel Kahneman; Sheila McIlraith, professor of artificial intelligence at the University of Toronto; and award-winning Chinese computer scientist Andrew Yao.
The authors warn that poorly designed artificial intelligence systems threaten to "reinforce social injustice, undermine our professions and social stability, enable large-scale criminal or terrorist activity, and weaken our shared understanding of reality, which is fundamental to society."
The scientists warn that existing artificial intelligence systems are already showing signs of worrying capabilities that point toward the emergence of autonomous systems able to plan, pursue goals and "act in the world." They note that GPT-4, the artificial intelligence model that powers the ChatGPT tool developed by US firm OpenAI, can design and carry out chemistry experiments, browse the web and use software tools, including other artificial intelligence models.
"If we build highly advanced autonomous artificial intelligence, we risk creating systems that autonomously pursue unwanted goals," the authors write, adding that "we may not be able to keep them under control."
Other policy recommendations in the document include: mandatory reporting of incidents in which models exhibit alarming behavior; measures to prevent dangerous models from self-replicating; and giving regulators the power to suspend the development of artificial intelligence models that display dangerous behavior.
Next week's safety summit will focus on existential threats posed by artificial intelligence, such as facilitating the development of new biological weapons and escaping human control, The Guardian notes. The UK government is working with others on a statement expected to highlight the scale of the threat posed by "frontier AI," the term for such advanced systems. However, while the summit will outline the risks posed by artificial intelligence and measures to counter the threat, it is not expected to formally establish a global regulator.
Some artificial intelligence experts argue that fears of an existential threat to humans are exaggerated. Yann LeCun, who won the 2018 Turing Award alongside Bengio and Hinton, told the Financial Times that the idea that AI could destroy humanity is "absurd."
However, the authors of the policy paper argue that if advanced autonomous artificial intelligence systems were developed today, the world would not know how to make them safe or how to run safety tests on them. "Even if we did, most countries lack the institutions to prevent abuse and enforce safe practices," they added.