
From unemployment to nuclear war: the main threats to artificial intelligence are named

Why so many experts are alarmed by the development of AI

Nuclear war and pandemics: that is what hundreds of leading technology experts from around the world have recently compared the danger of uncontrolled development of artificial intelligence to. The most alarmist-minded experts fear that AI could lead to the death of humanity; a more down-to-earth public fears the disappearance of millions of jobs. What risks come with this new phenomenon, which is steadily conquering more than just virtual space?

Why so many people are alarmed by AI

Recall a film classic, the Terminator franchise. On Judgment Day, August 29, 1997, at 2:14 a.m. Eastern Time, the Skynet artificial intelligence becomes self-aware. After its operators try to shut it down, the AI decides to destroy humanity and launches a nuclear strike on Russia; Russia responds with a strike on the United States. A huge part of humanity perishes, and the survivors are forced to wage war against the forces of Skynet…

Considering that James Cameron shot the first two Terminator films in 1984 and 1991, we can say that by the specified date the "rise of the machines" did not happen. But does that mean the threat of artificial intelligence getting out of control remains pure fantasy?

The other day in the US, a helpline for eating disorders was forced to shut down its AI chatbot "Tessa" because of the bad advice it was giving users.

The National Eating Disorders Association (Neda) had already come under fire in March for laying off four employees who worked on its helpline, which let people call, text, or message volunteers. Those volunteers offered support and resources to people worried about an eating disorder.

The people were replaced by a chatbot, and what happened? Activist Sharon Maxwell posted a story on social media about how "Tessa" offered her "healthy eating tips" and weight-loss recommendations. The chatbot recommended maintaining a calorie deficit of 500 to 1,000 calories per day and weighing herself weekly to track her weight.

“If I had turned to this chatbot when I was suffering from an eating disorder, I would not have received help for my eating disorder. If I had not received help, I would not be alive today,” wrote Maxwell.

Neda itself has reported that people who diet moderately are five times more likely to develop an eating disorder, while those who restrict their eating severely are 18 times more likely to develop one.

“It came to our attention last night that the current version of the Tessa chatbot running the Body Positivity program may have provided information that was harmful and unrelated to the program,” Neda said in a public statement. “We are investigating this immediately and have disabled this program until further notice for a full investigation.”

To develop the chatbot, the National Eating Disorders Association worked with psychology researchers and Cass AI, a company that builds mental-health-focused artificial intelligence chatbots. In a since-removed post on the Neda website about the chatbot, Ellen Fitzsimmons-Craft, a psychologist at Washington University in St. Louis who helped develop it, said "Tessa" was conceived as a way to make eating-disorder prevention more accessible.

“Even though the chatbot was a robot, we thought it could provide some motivation, feedback, and support… and maybe even deliver effective content to our program in a way that people would actually want to participate,” Fitzsimmons-Craft wrote.

In a comment to The Guardian, Neda CEO Liz Thompson said the chatbot was not intended to replace the helpline but was created as a standalone program. Thompson explained that the chatbot does not run on ChatGPT and "is not a highly functional artificial intelligence system."

Be that as it may, the AI managed to do some mischief in this story. Not a nuclear disaster, of course, but unpleasant all the same.

***

Back in April, Elon Musk and a group of AI experts wrote an open letter calling for a six-month pause in the development of new AI systems, warning that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

And such public statements from experts concerned about what is happening keep multiplying.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," urge the signatories of a statement recently published by the Center for AI Safety, a research and advocacy group based in San Francisco. Many of them are themselves involved in AI development: OpenAI CEO Sam Altman; the "godfather" of artificial intelligence Geoffrey Hinton; top managers and researchers from Google DeepMind and Anthropic; Microsoft CTO Kevin Scott; internet security and cryptography pioneer Bruce Schneier; climate advocate Bill McKibben; and even Elon Musk's ex-girlfriend, the singer Grimes.

Artificial intelligence experts argue that society is still far from developing the kind of artificial general intelligence that is the stuff of science fiction. According to them, today's advanced chatbots largely reproduce patterns from the training data they have been "fed" and do not think for themselves.

And yet the flood of hype and investment in the AI industry has prompted calls for regulation now, at the start of the AI era, rather than waiting until major disruptions occur.

This expert statement, CNN notes, follows the viral success of ChatGPT, the OpenAI product that helped intensify the technology industry's arms race over artificial intelligence. In response, a growing number of lawmakers, advocacy groups, and tech insiders are sounding the alarm about the potential of a new generation of AI-powered chatbots to spread misinformation and displace jobs.

Geoffrey Hinton, whose pioneering work helped shape modern artificial intelligence systems, previously told CNN that he decided to leave his position at Google and raise the alarm about this technology after he «suddenly» realized «that these things are getting smarter than us.»

And Center for AI Safety director Dan Hendrycks compared the statement to nuclear scientists "issuing warnings about the very technologies they created."

Hendrycks's list of AI risks includes artificial intelligence being used to create biological weapons deadlier than natural pandemics, and its ability to spread dangerous disinformation on a global scale. Hendrycks also notes that humanity's growing reliance on artificial intelligence could "make the idea of simply 'turning them off' not just disruptive but potentially impossible, risking humanity losing control of our own future."

As artificial intelligence develops, experts warn, it will become ever more capable, probably surpassing the human brain. Military applications raise several problems. While AI could enable smarter decisions and reduce war casualties, in the wrong hands it would create a very uneven playing field between opponents. The country with the most advanced artificial intelligence would dominate.

Another danger is misinformation and the blurring of fact and fiction. Thanks to AI, it will become increasingly difficult to tell what is true and what is false. Countries with the most advanced artificial intelligence could use the technology to mislead their own citizens or those of a hostile country. Politicians could use it to win elections; one country could use it to influence elections in another. The result, experts predict, will be a growing level of mistrust that fuels civil unrest.

Technology, for all its wonderful benefits, could be our undoing, and it will be if we fail to establish adequate safeguards to protect humanity, conclude the experts worried by these uncomfortable prospects.

***

It may be that a "Doomsday" arranged by machine minds on their own is indeed far off (though, amid geopolitical shocks, homo sapiens is quite capable of arranging a nuclear catastrophe itself). But even in everyday life, the most ordinary people can already feel the consequences of the large-scale introduction of sophisticated AI into every sphere of our lives.

According to a report published by Goldman Sachs earlier this year, artificial intelligence could automate, and therefore eliminate, up to 300 million full-time jobs around the world. How many people want to be among those three hundred million?

"The good news," the Goldman Sachs researchers write, "is that worker displacement from automation has historically been offset by the creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-term employment growth."

Maybe so. But a recent UK survey published by Sky News shows that about a quarter of people worry they could lose their jobs to artificial intelligence: 24% of respondents are concerned that generative artificial intelligence (such as ChatGPT) could make their job unnecessary.

Opinium's survey for Prospect, the union that brings together technical experts and other professionals, also shows that 58% of workers believe the government should set rules for the use of artificial intelligence to protect workers' jobs.

"Clearly, there are both risks and benefits to AI, especially in how it is implemented in the workplace and what it means for jobs," said Prospect deputy general secretary Andrew Pakes. "Despite the new skills associated with data and technology, many jobs are precarious, not only because of the threat of artificial intelligence but also because employers introduce new technologies at work without consultation or accountability. Instead of waiting for new problems to arise before acting, the government should engage now with both workers and employers to develop fair new rules for the use of this technology."

Speaking before the United States Congress, OpenAI's Sam Altman urged US lawmakers to rapidly develop rules for AI technology and recommended a licensing-based approach.

The European Union says it hopes to pass legislation by the end of the year that sorts artificial intelligence into four risk-based categories.

China, which has announced ambitious plans to become the world leader in artificial intelligence by 2030 (the consulting group McKinsey estimates that the sector could add about $600 billion annually to the country's GDP by then), has also taken steps to regulate artificial intelligence, passing legislation governing deepfakes and requiring companies to register their algorithms with regulators. Beijing has also proposed strict rules restricting politically sensitive content and requiring developers to obtain approval before releasing technology based on generative artificial intelligence.

AI products, the bill says, will have to reflect "core socialist values" and must not "contain content aimed at subverting state authority."

Beijing argues that deepfakes, AI-generated images and audio that can be stunningly realistic, also pose a "danger to national security and social stability."

Moreover, Xi Jinping and senior officials of the Chinese Communist Party specifically discussed at a meeting of the National Security Commission how to "improve the security management of network data and artificial intelligence."

“We must prepare for the worst and most extreme scenario and be prepared to withstand the severe test of strong winds, choppy waters and even dangerous storms,” the Xinhua news agency said in a statement following the meeting.
