Manufacturers, military personnel, and programmers could escape liability in cases involving injuries or deaths caused by fully autonomous machines, or “killer robots,” according to a report by Human Rights Watch.

The report, titled “Mind the Gap: The Lack of Accountability for Killer Robots,” analyzes the roadblocks to implementing even basic personal accountability for the actions of machines powered by artificial intelligence, especially fully autonomous weapons.

Fully autonomous weapons themselves cannot substitute for responsible humans as defendants in any legal proceeding that seeks to achieve deterrence and retribution, the report said.

The lack of accountability under both civil and criminal law means there will be no “retribution for victims,” no “deterrence of future crimes,” and no social condemnation of any responsible party, said Bonnie Docherty, senior Arms Division researcher at Human Rights Watch, and the lead author of the report.

The rapid pace of technological advancement in AI means that fully autonomous weapons would go a step beyond existing remote-controlled drones: they would be able to select and engage targets with minimal human control.

A paper by researchers from the George Washington University and the Allen Institute for Artificial Intelligence titled “Keeping AI Legal” contends that policy makers and academics are raising a growing number of questions about how the legal and moral order can accommodate the large and growing number of machines, robots, and instruments equipped with artificial intelligence.

According to them, machines equipped with artificial intelligence, “such as driverless cars have a measure of autonomy; that is, they make many decisions on their own, well beyond the guidelines their programmers provided. Moreover, these instruments make decisions in very opaque ways, and they are learning instruments whose guidance systems change as they carry out their missions.”

The race among various companies to develop fully autonomous driverless cars illustrates the potential impact of such machines on the existing order. According to the researchers, a police officer in California issued a warning to the passenger of a Google self-driving car because it “impeded traffic by driving too slowly.”

They raised the question of whom the officer should have cited. The passenger? What if there is none? The owner? The programmer? The car’s computer? Was there intent? Who or what should be held liable for the resulting harm? How could the government deter repeat offenses by the same instruments? The effects of AI may be marginal, such as a program that causes one driverless car to crash into another, or significant, such as the fear that smart instruments may rebel against their makers and harm mankind.

AI systems are learning systems that review conditions and adapt accordingly. Using complex algorithms, they respond to environmental inputs independently of real-time human input: they “can figure things out for themselves.”

They may even deviate from, or act in defiance of, the guidelines the original programmers installed in these smart instruments, the paper argues.

For instance, self-driving cars decide when to change speed and how much distance to keep from other cars, and may choose to travel faster than the law allows when they observe that other cars routinely exceed the speed limit. Automatic emergency braking systems, which stop cars without human input in response to perceived dangers, are becoming more common.
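To make that point concrete, the hypothetical Python sketch below, which is not drawn from any actual vehicle software, shows how a simple adaptive speed policy could drift away from its programmed limit once it starts imitating observed traffic. The class, method names, and numbers are illustrative assumptions only.

```python
# Hypothetical sketch of an adaptive speed policy; all names and
# numbers are illustrative, not taken from any real vehicle system.

class AdaptiveSpeedPolicy:
    def __init__(self, posted_limit_kmh: float):
        self.posted_limit_kmh = posted_limit_kmh
        self.observed_speeds: list[float] = []  # speeds of surrounding cars

    def observe_traffic(self, neighbor_speed_kmh: float) -> None:
        """Record the speed of a nearby vehicle (an environmental input)."""
        self.observed_speeds.append(neighbor_speed_kmh)

    def target_speed(self) -> float:
        """Choose a cruising speed.

        The original guideline is "never exceed the posted limit," but the
        learned behavior follows the median of observed traffic, so the car
        may end up driving faster than the law allows.
        """
        if not self.observed_speeds:
            return self.posted_limit_kmh
        median = sorted(self.observed_speeds)[len(self.observed_speeds) // 2]
        # Deviation point: the policy trusts observed behavior over the rule.
        return max(self.posted_limit_kmh, median)


policy = AdaptiveSpeedPolicy(posted_limit_kmh=100.0)
for speed in (112.0, 118.0, 109.0):  # surrounding cars are all speeding
    policy.observe_traffic(speed)
print(policy.target_speed())         # 112.0, above the 100 km/h limit
```

In this toy example no single instruction was broken by a programmer; the deviation emerges from the data the system observes, which is precisely what makes assigning blame difficult.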

Consumers complain of false alarms, of sudden stops that endanger other cars, and of brakes that force the car to continue in a straight line even when the driver tries to steer elsewhere.
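The false-alarm complaint can be illustrated with another small sketch, again using made-up names and thresholds rather than any real braking system: a routine that triggers on a noisy obstacle-distance estimate will inevitably brake, on occasion, when nothing is actually there.

```python
# Hypothetical automatic emergency braking decision; the sensor model
# and threshold are assumptions for illustration, not a real AEB system.
import random

BRAKE_DISTANCE_M = 10.0  # brake if an obstacle appears closer than this

def noisy_distance_reading(true_distance_m: float, noise_std_m: float = 4.0) -> float:
    """Simulate a perception estimate corrupted by sensor noise."""
    return true_distance_m + random.gauss(0.0, noise_std_m)

def should_emergency_brake(estimated_distance_m: float) -> bool:
    """Threshold rule: brake automatically, with no human input."""
    return estimated_distance_m < BRAKE_DISTANCE_M

# The road ahead is actually clear (nearest object 18 m away), yet over
# many readings the noisy estimate occasionally dips below the threshold,
# producing the sudden, unexpected stops drivers complain about.
false_alarms = sum(
    should_emergency_brake(noisy_distance_reading(18.0)) for _ in range(1000)
)
print(f"false alarms in 1000 readings: {false_alarms}")
```

Raising the threshold in a sketch like this would reduce false alarms but delay braking in genuine emergencies, the kind of trade-off that ends up before regulators and courts when something goes wrong.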

With the introduction of regulations governing the use of drones, and the race to develop AI-based systems, it is certainly not premature to consider the accountability questions these machines raise and their effect on humankind.