"We don't need systems that mimic human behavior"
The government is not doing enough to control artificial intelligence, says British-born professor Stuart Russell. According to him, ChatGPT, which has become very popular in recent months, could become part of a super-intelligent machine that cannot be constrained.
One of the leading professors in artificial intelligence said ministers were not doing enough to protect against the dangers posed by superintelligent machines in the future, writes The Guardian.
In his latest contribution to the debate over the safety of the constantly accelerating development of artificial intelligence, Professor Stuart Russell told The Times that ministers are reluctant to regulate the industry, despite fears that the technology could get out of control and threaten the future of humanity.
Russell, a professor at the University of California, Berkeley, and a former adviser to the US and UK governments, admits that he is concerned that ChatGPT, which was released in November, could become part of a super-intelligent machine that will be impossible to constrain.
“How do you manage to maintain dominion over beings more powerful than you, forever?” the professor asks. “If you don’t have an answer, then stop doing the research. It's that simple. The stakes couldn't be higher: if we don't control our own civilization, we won't have a say in whether we continue to exist.”
According to The Guardian, following the release last year of ChatGPT, which has been used for writing prose and has already caused concern among lecturers and educators about its use in universities and schools, the debate about its long-term safety has intensified.
Elon Musk, the founder of Tesla and owner of Twitter, and Apple co-founder Steve Wozniak, along with 1,000 AI experts, wrote a letter warning of an “out-of-control race” in AI labs and calling for a pause in the creation of giant AI systems.
The letter warned that these labs were developing “increasingly powerful digital intelligences that no one, not even their creators, can understand, predict or reliably control.”
There is also concern about the technology's wider application. A House of Lords committee this week heard evidence from Sir Lawrence Freedman, a professor of war studies, who spoke of concerns about how artificial intelligence could be used in future wars.
Professor Russell himself had previously worked for the UN on verification of the nuclear test-ban treaty and was asked to work with Whitehall earlier this year. He said: “The Foreign Office … spoke to many people and they concluded that a loss of control was a plausible and extremely important outcome. And then the government came up with a regulatory approach that says, ‘There's nothing to see here … we'll welcome the AI industry,’ as if it were car manufacturing or something like that. I think we did something wrong from the very beginning: we were so engrossed in the idea of understanding and creating intelligence that we did not think about what this intelligence would be used for.
“Unless its only purpose is to benefit people, you are actually creating a competitor, and that would obviously be stupid. We don't need systems that mimic human behavior… Basically, you are teaching them to set human-like goals and achieve those goals. You can just imagine how disastrous it would be to have really capable systems pursuing that kind of goal,” the expert warns.

