A new Orwellian video released by the Future of Life Institute calls on the United Nations to ban AI-powered drone swarms. Depicting a hypothetical future, the video has been watched more than 3 million times, and even tech entrepreneur Elon Musk seems alarmed.
The events it depicts may be fictional, but the video raises important questions about the future of drone-based weapons and the regulations surrounding them.
“Slaughterbots – if human: kill()” shows what might happen if autonomous weapons are allowed to coolly follow a single directive: if human, kill. In a mock newscast montage, the video shows scenarios that already seem familiar: targeted assassinations, bank robberies, attacks at polling stations, mayhem at nightclubs. But behind them all are passionless, unstoppable machines.
You can check out the video below, but be aware, it’s dark.
In the earlier Slaughterbots video, released in 2017, AI quadcopters designed by the military eventually come under the control of criminals and terrorists. It doesn’t end well, especially for a group of high school students.
Once again, there are assurances from the experts that this kind of technology won’t be released into the wild. And yet… well, Elon Musk says it best.
The FLI and groups like the Red Cross and autonomousweapons.org argue that weapons which kill by algorithm rather than human judgment are immoral and a threat to global security. They want an international prohibition on weapons that use artificial intelligence to identify and kill without human intervention.
Ban AI-powered drone swarms
New Zealand is pushing for a ban; the superpowers are not. It’s the old argument: if one country doesn’t develop the weapons, its rivals will.
But Professor Max Tegmark, co-founder of the FLI and an AI researcher at MIT, told Forbes the argument isn’t valid, since other weapons of mass destruction, such as chemical and biological agents, have been successfully outlawed:
“Bioweapons are also really easy to make, but a powerful combination of stigma and controls have successfully prevented their widespread use,” says Tegmark. “It’s not in the national security interest of the U.S., the U.K. or China for W.M.D.’s to be so cheap that all our adversaries can afford them.
A slaughterbot ban would incentivize legitimate drone manufacturers to vet their large customers, just as many companies do today with export-controlled technology.”
Professor Max Tegmark, co-founder of Future of Life Institute
This is all grim stuff, particularly at this time of year. But the video does end on a slightly less dreary note.
Two combatants about to murder one another are spotted by an autonomous drone. Recognizing the common threat, the two act to save their own lives. It’s only the slightest glimmer of hope in the video, but it’s going to take more than a pair of actors to make a difference.