Experts: Unregulated development of AI poses risks to the health and existence of mankind
Experts warn that artificial intelligence poses an existential threat and a risk to the health of millions of people. An article in the scientific publication BMJ Global Health calls for a halt to "the development of self-improving artificial general intelligence" until regulation is introduced.
Artificial intelligence (AI) could harm the health of millions of people and pose an existential threat to humanity, doctors and public health experts have said, calling for a halt to the development of artificial general intelligence until effective regulation is in place.
As The Guardian reports, citing the specialists, artificial intelligence has the potential to revolutionize healthcare by improving the diagnosis of diseases, finding better ways to treat patients, and extending care to more people.
But the rise of artificial intelligence also has the potential to have a negative impact on health, say health professionals from the UK, US, Australia, Costa Rica and Malaysia, writing in BMJ Global Health.
Medical and health risks "include the potential for AI errors to harm patients, data privacy and security concerns, and the use of AI in ways that exacerbate social and health inequalities," they say.
One example of harm, they said, was the use of an AI-driven pulse oximeter that overestimated blood oxygen levels in darker-skinned patients, leading to undertreatment of their hypoxia.
But experts also warned of wider, global threats posed by artificial intelligence to human health and even human existence.
AI could harm the health of millions of people through the social determinants of health, control and manipulation of people, the use of deadly autonomous weapons, and the mental health consequences of mass unemployment if AI-based systems displace large numbers of workers.
“Combined with a rapidly improving ability to distort reality through deepfakes, AI-driven information systems could further undermine democracy by causing a general breakdown in trust or provoking social division and conflict, with ensuing public health consequences,” the authors of the publication argue.
Threats also arise from the job losses that will accompany the widespread adoption of artificial intelligence technology, estimated to run to tens or hundreds of millions over the coming decade.
“While stopping work that is repetitive, dangerous and unpleasant would bring many benefits, we already know that unemployment is strongly associated with adverse health and behavioral outcomes,” the experts said.
“Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is not available or not needed, and we think little about the policies and strategies that would be needed to break the link between unemployment and ill health,” the authors of the article argue.
But the threat posed by a self-improving artificial general intelligence that could theoretically learn and perform the full range of human tasks is pervasive, they suggested: “Now we are striving to create machines that are significantly more intelligent and powerful than ourselves. The potential of such machines to apply this intelligence and power, whether intentionally or not, and in ways that could harm or subjugate humans, is real and must be considered. With the exponential growth of artificial intelligence research and development, the window of opportunity to avoid serious and potentially existential harm is closing.”
“To avoid harm, effective regulation of the development and use of artificial intelligence is needed,” the experts warned. “Until such regulation is in place, there should be a moratorium on the development of self-improving artificial general intelligence.”
Separately in the UK, a coalition of health experts, independent fact-checkers and medical charities called for amendments to the government's upcoming online safety bill to take action against health misinformation: “One of the key ways we can protect the future of our healthcare system is to ensure that Internet companies are clear about how they identify harmful misinformation that appears on their platforms, and have consistent approaches to combating it.”

