AI threatens to put all teachers out of work
Artificial intelligence is likely to end the traditional classroom, says a leading expert. Professor Stuart Russell says that AI technology could lead to "fewer teachers being hired, perhaps none at all."
One of the world's leading experts on artificial intelligence predicted in an exclusive interview with The Guardian that recent advances in artificial intelligence are likely to bring about the end of traditional schooling.
Professor Stuart Russell, a British computer scientist at the University of California, Berkeley, said ChatGPT-style personalized tutors could greatly enrich education and expand global access by providing personalized learning to every family with a smartphone. According to him, the technology could realistically deliver "most of the material through the end of high school."
“Education is the biggest asset we can look forward to in the next few years,” Russell said before speaking Friday at the United Nations' Artificial Intelligence for Good Global Summit in Geneva. “It should be possible within a few years, maybe by the end of this decade, to provide a fairly high quality education to every child in the world. This potentially transforms the situation.”
However, the expert warned that introducing powerful technology to the education sector also comes with risks, including potential indoctrination.
Stuart Russell cited evidence from studies using human tutors that one-on-one learning can be two to three times more effective than traditional classroom activities, allowing children to receive individual support and be guided by curiosity.
"Oxford and Cambridge don't really use traditional classes … they use tutors, probably because it's more effective," the professor said. "It is literally impossible to do this for every child in the world. There aren't enough adults to go around."
OpenAI is already exploring educational applications, announcing in March a partnership with the educational nonprofit Khan Academy to test a virtual tutor powered by GPT-4.
The prospect could raise "reasonable fears" among teachers and teacher unions that "fewer teachers will be hired – perhaps none at all," Russell said. He predicted that human participation will continue to be important, but could be drastically different from the traditional role of a teacher, potentially including "playground watcher" duties, facilitating more complex collective activities and providing civic and moral education.
“We haven't done the experiments, so we don't know whether an artificial intelligence system will be enough for a child. There is motivation, there is cooperation – it is not just a question of ‘Can I count?’” Russell noted. “It will be important to ensure that the social aspects of childhood are preserved and improved.”
The technology also needs careful risk assessment. “Hopefully the system, if properly designed, won't tell a child how to make a bioweapon. I think that can be dealt with,” Russell said. A more pressing concern, he said, is the potential for the software to be co-opted by authoritarian regimes or other players. “I'm sure the Chinese government hopes [the technology] is more effective at instilling loyalty to the state,” he said. “I suppose we would expect this technology to be more effective than a book or a teacher.”
According to The Guardian, Professor Russell has spent years highlighting the broader existential risks associated with artificial intelligence, signing an open letter in March, along with Elon Musk and others, calling for a pause in the “out of control race” to develop powerful digital minds. According to Russell, the issue has become more urgent with the advent of large language models. “I think of [artificial general intelligence] as a giant magnet in the future,” he said. “The closer we get to it, the stronger the pull becomes. It definitely feels closer than it used to.”
According to him, politicians have been slow to deal with the issue. "I think governments have woken up… now they're running around trying to figure out what to do," he said. "That's good – at least people are paying attention."
However, managing AI systems comes with both regulatory and technical challenges, as even experts don't know how to quantify the risks of losing control of a system. OpenAI announced on Thursday that it will dedicate 20% of its computing power to the problem of steering “potentially superintelligent AI” and preventing it from going rogue.
“In particular with large language models, we really have no idea how they work,” said Russell. “We don't know if they're capable of reasoning or planning. They may have internal goals that they pursue – we don't know what they are.”
Even beyond the direct risks, such systems could have other unpredictable consequences for everything from climate change action to relations with China.
“Hundreds of millions of people, and pretty soon billions, will be in constant contact with these things,” Russell said. “We don’t know in what direction they could change world public opinion and political trends.”
“We could face a massive environmental crisis or a nuclear war and not even understand why it happened,” the scientist added. “That would simply be a consequence of these systems moving public opinion, in whatever direction they take it, in an interconnected way around the world.”