For the most part, current military aerial weapon systems are still controlled by people, but that is set to change with the push for technology that allows such machines to make the final judgment about which targets to attack. One example is the US Army's announcement that it is working on drones that can identify and target people and vehicles with the aid of artificial intelligence.

[OBJECTIVE: Develop a system that can be integrated and deployed in a class 1 or class 2 Unmanned Aerial System (UAS) to automatically Detect, Recognize, Classify, Identify (DRCI) and target personnel and ground platforms or other targets of interest. The system should implement learning algorithms that provide operational flexibility by allowing the target set and DRCI taxonomy to be quickly adjusted and to operate in different environments.]
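To make the "quickly adjusted taxonomy" part of that objective concrete, here is a minimal, hypothetical sketch of the idea: the DRCI label set lives in swappable configuration rather than being hard-coded into the model, so the same detector output can be re-mapped for a new target set. The class names and structure below are illustrative assumptions only; nothing here is taken from the actual solicitation or any real system.

```python
# Hypothetical sketch: an adjustable DRCI taxonomy kept separate from the detector.
# All names and classes are illustrative assumptions, not a real military system.
from dataclasses import dataclass


@dataclass
class DrciTaxonomy:
    """Maps raw detector class IDs to mission-specific DRCI labels."""
    labels: dict[int, str]

    def classify(self, class_id: int) -> str:
        # Unknown detections fall back to "unclassified" rather than a target label.
        return self.labels.get(class_id, "unclassified")


# Swapping the taxonomy changes what the same detector output "means"
# without retraining the underlying model.
mission_a = DrciTaxonomy(labels={0: "person", 1: "ground_vehicle"})
mission_b = DrciTaxonomy(labels={0: "person", 1: "ground_vehicle", 2: "watercraft"})

print(mission_a.classify(2))  # -> "unclassified"
print(mission_b.classify(2))  # -> "watercraft"
```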

The militarization of the artificial intelligence needed to power these unmanned aerial systems raises far-reaching ethical considerations, not to mention legal ramifications. One may argue that whereas warfare involving human input at various levels still requires engagement of sorts, deploying machines and leaving these decisions to them moves firmly into the territory of extermination.

This also extends the horizon of what may be termed “warfare,” since those in the tech and scientific communities may themselves inadvertently become targets.

Military drones, for the most part, are guided via satellite, and a human sensor operator directs missiles towards targets with the help of lasers. That operator must therefore analyze the situation on the ground, including the risk of harming civilians, before a missile is deployed.

The important factors at play here include human judgment, ethics and emotion. The idea behind self-learning algorithms is for the technology to improve at any given task by learning from collected data. At what point in this self-learning process will it be decided that the machine has collected enough data to make accurate, independent decisions about missile deployment?
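One way to see why that question has no purely technical answer: "enough data" is not something the system discovers on its own but a threshold a human picks. The sketch below is a hypothetical illustration, with made-up gate values, showing that both the model-level bar (validation accuracy) and the per-decision bar (confidence) are numbers chosen by a designer rather than by the learning algorithm.

```python
# Hypothetical illustration: the thresholds that would let a machine act on its
# own are arbitrary, human-chosen numbers, not outputs of the learning process.

VALIDATION_ACCURACY_GATE = 0.99      # chosen by a person, not learned
PER_DECISION_CONFIDENCE_GATE = 0.95  # also chosen by a person


def may_act_autonomously(validation_accuracy: float, decision_confidence: float) -> bool:
    """Return True only if both human-chosen thresholds are met."""
    return (validation_accuracy >= VALIDATION_ACCURACY_GATE
            and decision_confidence >= PER_DECISION_CONFIDENCE_GATE)


# A model that is "99% accurate" on past data still says nothing about the
# one civilian present in the next scene it has never encountered.
print(may_act_autonomously(0.991, 0.97))  # True, but only because of where the bar was set
print(may_act_autonomously(0.991, 0.80))  # False: confidence below the human-set bar
```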

The concept also brings to mind extermination on a grand scale: how many civilian casualties will be deemed acceptable while the drone is taking out targets? The recent fatalities in several autonomous vehicle tests show that human input matters a great deal when the chips are down.