
"Godfather of artificial intelligence" speaks about the dangers of his creation

Scientists warn of “serious risks to society and humanity”

Geoffrey Hinton, known as the "godfather of artificial intelligence," confirmed that he left his position at Google last week so that he could speak about the "dangers" of the technology he helped develop.


Hinton's pioneering work on neural networks has shaped the artificial intelligence systems that power many of today's products, CNN reports. He worked part-time at Google for ten years developing the tech giant's artificial intelligence, but has since developed misgivings about the technology and his role in advancing it.

“I console myself with the usual excuse: if I hadn't done it, someone else would have,” Hinton told The New York Times.

In a tweet on Monday, Hinton said he left Google so he could speak freely about the risks of AI, not out of a desire to criticize Google specifically.

“I left to talk about the dangers of AI without thinking about how it will affect Google,” Hinton tweeted.

Jeff Dean, Chief Scientist at Google, said Hinton “has made fundamental breakthroughs in the field of artificial intelligence” and acknowledged Hinton's “decade of contributions to Google.”

“We remain committed to responsible AI,” Dean said in a statement provided to CNN. “We are constantly learning to understand emerging risks while boldly innovating.”

As CNN notes, Hinton's decision to step down from the company and speak out about the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised concerns about the potential of a new generation of AI-powered chatbots to spread disinformation and displace workers from their jobs.

The wave of attention that ChatGPT attracted at the end of last year helped reignite an arms race among technology companies to develop and deploy similar artificial intelligence tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are also working on similar technologies.

In March, some prominent figures in technology signed a letter calling on AI labs to stop training the most powerful AI systems for at least six months, citing “serious risks to society and humanity.” The letter, published by the Future of Life Institute, a non-profit organization backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams, and build a working website from a hand-drawn sketch.

In an interview with The New York Times, Hinton echoed concerns about the potential of artificial intelligence to eliminate jobs and create a world in which many “can no longer know what is true.” He also pointed to the staggering pace of progress, which has gone far beyond what he and others expected.

“The idea that this stuff could actually get smarter than people was something that few believed,” Hinton said in the interview. “But most people thought that was a long way off. And I thought it was a long way off. I thought it was another 30 to 50 years away, or even more. Obviously, I no longer think that.”

Even before he left Google, Hinton spoke publicly about the potential of artificial intelligence to be both harmful and beneficial.

“I believe that the rapid progress of artificial intelligence will transform society in ways we do not fully understand, and not all consequences will be good,” Hinton said in his 2021 commencement speech at the Indian Institute of Technology Bombay in Mumbai. He noted how artificial intelligence will improve healthcare while also enabling lethal autonomous weapons: “I find this prospect much more immediate and much more terrifying than the prospect of a robot takeover, which I think is very far away.”
