Artificial intelligence called a threat to national security: "Dive bomb on the crowd"


British experts warn of the danger of AI "loitering munitions" in Ukraine

Artificial intelligence poses a threat to national security, the UK's counter-terrorism watchdog warns. The security services fear that the new technology could be used by extremists to groom vulnerable people for their own purposes.


AI creators need to abandon their "technological utopian" mindset, experts urge, amid fears that the new technology could be used by terrorists to exploit "vulnerable people".

Jonathan Hall QC, whose role is to review the adequacy of anti-terrorism legislation, argues that the threat to national security from artificial intelligence is becoming increasingly clear, and that the technology must be designed with terrorists' intentions in mind.

As The Observer writes, Hall said that too much AI development focuses on the technology's potential benefits while neglecting to consider how terrorists might use it to carry out attacks.

"They need some horrible little 15-year-old neo-Nazi in the room with them to figure out what they could do. You have to harden your defenses against what you know people will do with it," Hall said.

The UK government's independent expert on counter-terrorism law has admitted he is growing concerned about the ability of AI-powered chatbots to persuade vulnerable or neurodivergent people to carry out terrorist attacks.

"What worries me is the suggestibility of people when they are immersed in this world and the computer is turned off. The use of language in a national security context matters because, ultimately, language convinces people to do things," the expert says.

Security services are believed to be particularly concerned about the ability of AI chatbots to target children, who are already a growing part of MI5's list of terrorist suspects.

As calls for technology regulation grow following last week's warnings from AI pioneers that the technology could threaten the survival of the human race, Prime Minister Rishi Sunak is expected to raise the issue when he travels to the US on Wednesday to meet President Biden and members of Congress.

In the UK, meanwhile, efforts to address the national security challenges posed by artificial intelligence will be stepped up through a partnership between MI5 and the Alan Turing Institute, the national body for data science and artificial intelligence.

Alexandre Blanchard, a digital ethics researcher in the Institute's Defence and Security Programme, said his work with the security services shows that the UK takes the security challenges posed by artificial intelligence extremely seriously.

"There is a greater willingness among defense and security policy makers to understand what is happening, how actors can use artificial intelligence, and what the threats are," Blanchard says. "There is a real sense of a need to keep abreast of what is happening. Work is underway to understand the risks: what they are now, what they are in the long term, and what they are for next-generation technologies."

Rishi Sunak said last week that Britain wants to become a global center for artificial intelligence and its regulation, insisting that this could bring "huge benefits to the economy and society". Both Blanchard and Hall say the central issue is how people maintain "cognitive autonomy", that is, control, over AI, and how that control is built into the technology.

The potential for AI to rapidly influence vulnerable people alone in their bedrooms is becoming increasingly clear, according to Hall.

Jonathan Hall notes that tech companies need to learn from past complacency: social media has long been a key platform for sharing terrorist content.

Hall added that greater transparency is also needed from AI firms, above all about how many staff and moderators they employ.

"We need absolute clarity on how many people are working on these things and on their moderation," he said. "How many people are actually involved when they say they have guardrails in place? Who checks the guardrails? If you have a company of two people, how much time do they devote to public safety? Probably little or none."

Hall said new laws may also be needed to counter the terrorist threat posed by artificial intelligence and to curb the growing danger of lethal autonomous weapons: devices that use AI to select their own targets.

Hall said: "You're talking about the type of terrorist who wants deniability, who wants to be able to 'fire and forget'. They can literally throw a drone into the air and fly away. No one knows what decision its artificial intelligence will make. It could just dive-bomb a crowd, for example. Do our criminal laws cover this kind of behavior? Terrorism is generally about intent; the intent of a human, not a machine."

Lethal autonomous weapons, or "loitering munitions", have already been seen on the battlefields of Ukraine, raising moral questions about the implications of an autonomous airborne killing machine, The Observer notes.

"AI can learn and adapt by interacting with its environment and improving its behavior," Alexandre Blanchard warns.
