The United States Defense Department is working on drones which automatically target individual enemy combatants or vehicles with no human input.
A DoD memo on the topic states that the aim of such a drone will be to:
“Detect, Recognize, Classify, Identify (DRCI) and target personnel and ground platforms or other targets of interest.”
The main driver behind this technology is that it will enable faster identification (and presumably killing) of enemies or attackers.
Sounds terrifying. How does it work?
The current generation of Predator drones that the United States deploys in countries such as Afghanistan, Iraq and Pakistan tends to be big (27 ft / 8.22 m long) and flies at altitudes of up to 25,000 feet (7,600 meters). This new breed of UAVs is set to be the same size as consumer drones. They could therefore fly much closer to the ground to get a clearer picture of potential targets.
The small UAVs will use algorithms to recognize particular targets of interest according to a predetermined set of criteria. So, rather than a human making the assessment of who the target is, software would identify the target and (at least at this stage) a human would decide whether or not to engage it.
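To make this concrete, here is a minimal sketch in Python of what such a human-in-the-loop pipeline could look like. Everything in it is hypothetical: the DRCI software is not public, so `detect_objects` is a stand-in for an onboard vision model, and the target classes and confidence threshold are invented for illustration. The point is the division of labor described above: software flags candidates that match predetermined criteria, and a human makes the final engage/no-engage decision.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object flagged in a video frame."""
    label: str         # e.g. "person" or "ground_vehicle"
    confidence: float  # model's classification confidence, 0.0 to 1.0
    location: tuple    # (latitude, longitude) of the detection

def detect_objects(frame) -> list[Detection]:
    """Hypothetical stand-in for a classified onboard detection model."""
    raise NotImplementedError("placeholder; the real DRCI software is not public")

# Predetermined criteria: which classes count as "targets of interest"
# and how confident the model must be before flagging one. Both values
# are invented for this sketch.
TARGETS_OF_INTEREST = {"person", "ground_vehicle"}
CONFIDENCE_THRESHOLD = 0.9

def propose_targets(frame) -> list[Detection]:
    """The software's role: detect, classify, and flag candidates."""
    return [
        d for d in detect_objects(frame)
        if d.label in TARGETS_OF_INTEREST and d.confidence >= CONFIDENCE_THRESHOLD
    ]

def review_candidates(frame) -> None:
    """The human's role (at this stage): approve or reject each candidate."""
    for c in propose_targets(frame):
        print(f"Flagged {c.label} ({c.confidence:.0%} confidence) at {c.location}")
        if input("Engage? [y/N] ").strip().lower() == "y":
            print("Operator authorized engagement.")
        else:
            print("Operator declined; candidate ignored.")
```

Note that the human sits at the very end of the chain: fully autonomous operation would simply replace that `input()` call with another algorithm, which is exactly the step critics fear.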
Why is this a big deal?
Human rights groups have long criticized the use of Predator drones for causing thousands of civilian deaths in multiple countries.
Last year, the Future of Life Institute produced Slaughterbots, a frightening short film that dramatizes the dystopian future that might result from autonomous killer drones.
The FLI has the backing of intellectual heavyweights such as the late Stephen Hawking and Elon Musk. In 2015 the group wrote an ominous open letter to those in the AI community who would work on automated weapon systems. To date the letter has been signed by 3,724 AI/robotics researchers.
Quotes from the letter include:
- “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable.”
- “[Autonomous weapons] will become ubiquitous and cheap for all significant military powers to mass-produce.”
- “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace.”
The letter concludes:
“Starting a military AI arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
The DoD proposal document does not mention a timeline for the development of such weapons. However, given that consumer drones can already autonomously follow athletes as they run, jump, ski or engage in other activities, we can conclude that this technology is not far from fruition. And while this project does not propose fully autonomous drones per se, it is a step in that direction.
Considering the violent consequences that pursuing automated weapons of war may bring to humanity, this is something that should concern us all.