
MOSCOW, April 6, Zakhar Andreev. Artificial intelligence could get out of control. That is what both non-specialists and the heads of top technology companies fear. What threats does the development of neural networks pose, and how can they be countered? Part of the public fears an uprising of the machines, while fifty percent are sure that no such threat exists, a VTsIOM poll of April 5 revealed.
At the same time, people who by their own assessment understand neural networks do concede that problems may arise in the labor market. Thirty-seven percent of respondents agree that AI is capable of replacing people in creative professions; among those who know nothing about neural networks, only ten percent believe this.
A recent survey conducted by the portal Superjob showed that only 17 percent of economically active adult Russians are afraid of the development of artificial intelligence, while 43 percent are not afraid of such a prospect. Forty percent of respondents found it difficult to answer.
«Loss of control over civilization»
The fears are justified, at least according to economists and the leaders of the world's top technology companies. At the end of March, the American bank Goldman Sachs issued a report predicting that artificial intelligence, along with services and applications based on it, will eliminate 300 million jobs worldwide. Professions related to accounting, administrative and office work are at risk; those who do manual labor (such as cleaners and plumbers) will be hurt far less.
But technologists are far more frightened by the possibility that AI will get out of control. SpaceX, Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, the visionary Yuval Noah Harari and about 1,000 other IT industry figures signed an open letter asking to suspend the development of systems more powerful than GPT-4 for at least six months in order to prepare safety protocols.
“Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable,” the letter says. GPT-4, the system named in the letter, was developed by the American company OpenAI. It is able to perform tasks previously inaccessible to such systems: from creating artworks to programming.
“Today, unique services are built on GPT and brought to market, from personal AI advisors in the field of investments to solutions that help doctors make the right diagnosis and choose the best treatment,” says Evgeny Zaramenskikh, professor and department head at the HSE Graduate School of Business.
There are known cases in which ChatGPT's capabilities were used for dubious or outright illegal purposes. The chatbot has passed exams at universities (in both medical and MBA programs), generated keys to activate licensed software, and helped win the lottery. And in Belgium, a man committed suicide, allegedly after a month of communication with a chatbot built on GPT.
In March the fourth version of the program, the latest to date, was released. Studies have shown that it already displays «sparks of reason». Even so, the chatbot is still far from independent thinking.

The head of the IT department at the Moscow City Open College, Daniil Makeev, explains that there are three types of artificial intelligence. All existing systems belong to ANI, narrow AI aimed at performing specific tasks. The next stage is AGI, artificial general intelligence, also known as strong AI, which will be able to solve any problem accessible to the human intellect. The logical continuation of this line is ASI, superintelligence, which, according to the expert, still belongs to the realm of science fiction.
«It may seem that the ChatGPT neural network is the very AI that can develop empathy, learn to think like a person and act independently. But this is not the case,» Makeev emphasizes. «Even though the neural network successfully passes professional exams and understands humor, this is not AGI yet. ChatGPT is not able to improve itself and develop without human help. And most importantly, the neural network cannot think independently.»
But engineers are working on that. The developer Siki Chen, for one, claimed that the next version of the bot, GPT-5, will reach the AGI level, that is, it will match a person in intellectual capabilities. The program is expected to be released in December 2023.
Criminal negligence
«I'm afraid of AGI,» admitted Arram Sabeti, a Silicon Valley investor, adding that he finds it puzzling how dismissive people are of the risks.
He said on social media that he is friends with dozens of artificial intelligence developers, and «almost all of them are worried».
By way of analogy, he described a nuclear project in which half of the engineers think there is at least a ten percent chance of an «extremely serious» catastrophe, while the safety engineers put it above 30 percent. «That's exactly the situation with AGI,» he wrote.

One of the surveys Sabeti cited showed that almost half of 738 machine learning experts put at least a ten percent chance on a «very bad outcome» of AI development.
Another study, which polled 44 AI safety experts, gave a 30 percent chance of a cyber apocalypse, with some respondents estimating it above 50 percent.
Sabeti noted that the chance of losing at Russian roulette is only 17 percent, and recalled Elon Musk's old line: «AI is much more dangerous than nuclear weapons.» Experts disagree on whether the transition of existing AI systems to a new, «independent» level can be stopped. According to some, it can no longer be prevented.
“There is one wonderful rule on the Internet: once something gets there, it stays forever. History offers many examples of attempts to stop progress, and none of them succeeded,” says Vasily Lysov, an expert in artificial intelligence and machine learning and a teacher at the Moscow School of Programmers (MSP).
Other specialists believe that the technological basis for such a transition does not yet exist: to develop into a strong AI, a system must learn to modify its own source code.
«This is exactly what humans do, constantly rewriting the neural connections in the brain. An AI would have to request a modification of its code from the developer, or find a way around this barrier and perform the modification on its own. At the same time, it would need a sufficient level of thinking not to make a mistake while rewriting the code and, for example, lock itself out. We are far from creating such intelligence,» Makeev is sure.
Finally, some experts note that even the very concept of «general AI» is not fully clear. “First, we need a common understanding of what AGI is and whether it is achievable at all. There are no unambiguous criteria yet. We still do not fully understand how memory, thinking and many other biological processes work,” explains Alexandra Murzina, head of the machine learning group at Positive Technologies.
According to Murzina, we «are both close to the goal and at the same time far from it». «The development of technology is not linear, and it is difficult to predict when the next breakthrough on the way to AGI will come,» she said. Even if the emergence of strong AI cannot be prevented, limiting its technological capabilities is more than realistic, experts believe.
“To train neural networks you need very specific computing power and data sets. Without access to technological infrastructure, no system that threatens humanity can be thrown together on a makeshift basis. There are not that many big tech companies in the world, so agreeing on restrictions is quite possible, especially at the state level,” says Sergei Gataullin, Dean of the Faculty of Digital Economy and Mass Communications at MTUCI.
According to experts, people have nothing to worry about as long as they control the material world.
«Any AI solution is software code running on hardware: physically existing servers and powerful computers. The hardware has a shutdown button, and the program has no way to press it on its own. A scenario in which AI-driven robots prevent a person from pressing that button cannot be realized today because robotics is still poorly developed,» Professor Zaramenskikh points out.
True, in theory AI could get around this limitation by becoming a decentralized system running on user devices, that is, on our laptops and smartphones.
«But large smartphone and laptop makers can lock devices remotely; in that case even a decentralized AI would quickly lose a significant part of its computing power,» the expert is convinced.
Several approaches are being considered to control AGI. In addition to restricting access to physical resources, humanity can oppose independent AI with a “friendly” system based on ethical and social norms and restrictions, says Alfiya Latypova, Senior Data Engineer at Kaspersky Lab.
“Many researchers and organizations are developing standards and regulations that could help govern the creation and use of AGI,” she notes. “This may include international agreements on the production and use of AGI, as well as documents defining AI behavior in areas such as medicine, autonomous technology and industry.”
«With understanding for people»
Russian experts consider an existential threat to humanity from AI unlikely. To destroy our species, artificial intelligence would need to build an «industrial pyramid» of its own, gaining access to metals, electricity, oil and other resources, and for that it would have to exclude people from production chains entirely.
“But global industry still depends heavily on manual labor. Before AI could exterminate humanity, the production and functioning of everything in the world would first have to be fully automated. In the foreseeable future that does not look realistic,” says Alexander Zhukov, development director at the software company «Code Format».
Nevertheless, an independent synthetic mind is capable of causing significant harm to people.
«Imagine that it controls the information agenda of different regions of the world and provokes conflicts by manipulating facts. If those facts are partly based on real data, some people will believe them, and the consequences will be unpredictable,» warns Alexander Kobozev, director of the Data fusion direction of the Digital Economy League.
Murzina believes that AGI may not even be the main threat to humanity: people still face new viruses, military conflicts and man-made disasters. «Will AGI really have time to appear and do harm before these other risks do?» the expert asks.
Other experts are optimistic. «In my opinion, artificial general intelligence, or superintelligence, will be intelligent enough to understand why it was assembled and trained,» says Makeev.
In his opinion, AI will take into account the weaknesses of its creators and will therefore treat people «with understanding».

