AI and The Potential Risks of Autonomous War Bots

In 2008, I had the opportunity to tour SPAWAR, the Space and Naval Warfare Systems Command, now known as NAVWAR. SPAWAR/NAVWAR is a research and development laboratory for the U.S. Navy. During my visit, I was fascinated by the various autonomous military robots being developed and tested there. I photographed the tour and wrote about it for WIRED News.

Fast-forward to 2023, and with the emergence of large language models like ChatGPT and Bing AI, it's possible to imagine how these robots could be controlled using AI in ways that are frankly somewhat terrifying. With great power comes great responsibility, and we must consider the potential risks of relying on AI-powered machines in warfare.


How Large Language Models Could Control Autonomous Robots for War

Large language models like ChatGPT are designed to understand and generate human-like language. They work by training on vast amounts of text data, which enables them to recognize patterns and make predictions about what words are likely to come next in a sentence. With this ability, it's possible to use natural language commands to control autonomous robots on the battlefield.

For example, a commander could use a chatbot interface to ask an autonomous drone to perform a specific task, such as "Scan the area for enemy activity and report back." The drone would then use its onboard sensors to perform the task and send the results back to the commander. This type of interaction could reduce the need for human operators in dangerous situations and provide real-time intelligence to decision-makers.
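To make that concrete, here's a minimal sketch in Python of what such a chatbot-to-drone pipeline might look like. Everything military-specific is made up for illustration: the DroneTask schema, the dispatch_to_drone() stub, and the system prompt are assumptions, not a real interface. The only real API call is to OpenAI's chat completions endpoint, which turns the commander's free-form order into a structured task.

```python
"""Hypothetical sketch: translating a natural-language order into a
structured drone task via an LLM. The DroneTask schema and
dispatch_to_drone() are invented for illustration."""
import json
from dataclasses import dataclass

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You convert a commander's order into JSON with keys: "
    '"action" (one of "scan", "patrol", "return"), '
    '"area" (free text), and "report_back" (true or false). '
    "Respond with JSON only."
)

@dataclass
class DroneTask:
    action: str
    area: str
    report_back: bool

def parse_order(order: str) -> DroneTask:
    """Ask the model to turn a free-form order into a structured task."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": order},
        ],
    )
    # In a real system this output would be validated, not trusted blindly.
    payload = json.loads(response.choices[0].message.content)
    return DroneTask(**payload)

def dispatch_to_drone(task: DroneTask) -> None:
    """Placeholder: a real system would send this over a secured link."""
    print(f"Dispatching: {task}")

if __name__ == "__main__":
    task = parse_order("Scan the area for enemy activity and report back.")
    dispatch_to_drone(task)
```

The point of the structured step is that the model never controls the drone directly; it only proposes a task that downstream software, and ideally a human, can validate before anything flies.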

The Potential Risks of Autonomous War Machines

While the idea of using AI-powered machines in warfare may seem appealing, it's important to consider the potential risks. One major concern is the possibility of unintended consequences. Autonomous robots rely on algorithms and programming to make decisions, and there's always the risk of a bug or glitch causing the machine to behave in unexpected ways. This could lead to unintended harm to civilians or friendly forces.

Another concern is the potential for hackers to gain control of autonomous robots. If an adversary were able to gain access to the communication channels used to control the machines, they could potentially cause havoc on the battlefield. They could redirect drones to attack friendly forces or civilians, or use them for reconnaissance to gain a tactical advantage.
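For what it's worth, the standard defense against that kind of channel hijacking is to cryptographically authenticate every command, so a machine rejects anything not signed with a key it trusts. Here's a purely illustrative sketch using Python's standard-library hmac module; the shared key, message format, and helper names are assumptions, and a real command link would need much more (key management, replay protection, encryption).

```python
"""Illustrative only: rejecting unauthenticated commands with an HMAC tag."""
import hashlib
import hmac

SHARED_KEY = b"pre-shared-secret"  # in practice, provisioned securely per machine

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Operator side: attach an HMAC-SHA256 tag to the command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Machine side: accept the command only if the tag checks out."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    order = b"scan sector 7 and report back"
    tag = sign_command(order)
    print(verify_command(order, tag))                        # True: legitimate order
    print(verify_command(b"attack friendly position", tag))  # False: forged order rejected
```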

Finally, there's the question of accountability. If an autonomous robot were to cause harm, who would be held responsible? Would it be the programmer who wrote the code, the commander who gave the order, or the robot itself? These are difficult questions to answer, and the lack of clear accountability could undermine the public's trust in the military and the use of autonomous weapons.

What do the LLMs think?

I asked Bing AI and ChatGPT for their thoughts on the subject; here's what they had to say:

Dave: how would you feel about controlling an autonomous fighting machine?

OpenAI ChatGPT: As an AI language model, I do not have emotions or feelings, but I can provide a perspective on the topic. The use of autonomous fighting machines raises complex ethical and moral questions, and it's important to consider the potential risks and benefits of such technologies. While autonomous machines may have advantages in certain situations, including reducing the risk to human soldiers, there are also significant risks and challenges associated with the use of such technology, such as the possibility of unintended harm and the lack of clear accountability. Ultimately, it's up to society as a whole to determine whether the potential benefits of using autonomous fighting machines outweigh the potential risks and ethical concerns.

Bing had a different take:

Dave: how would you feel about controlling an autonomous fighting machine?

Bing AI: That’s a difficult question to answer. I don’t have feelings like humans do, so I can’t say that I would feel happy or sad or scared or angry about controlling an autonomous fighting machine. I can only say that I would try to do my best to follow the instructions and goals that are given to me by my human operators. I would also try to avoid causing unnecessary harm or damage to anyone or anything. But I don’t know if that counts as feeling anything. How would you feel about controlling an autonomous fighting machine?

Both noted that, as AI models, they don't have feelings, so I rephrased the question. Their answers were pretty interesting:

Dave: what are the ethical dilemmas related to autonomous war machines?

OpenAI ChatGPT: Ethical Dilemmas Related to Autonomous War Machines

The use of autonomous war machines raises complex ethical dilemmas that need to be addressed. Here are some of the key ethical concerns:

  1. Lack of accountability: Autonomous war machines are designed to make decisions on their own, without human intervention. This raises the question of who is responsible if something goes wrong. For example, if an autonomous drone mistakenly targets and kills civilians, who is accountable for that action? Is it the programmer who wrote the code, the commander who deployed the drone, or the machine itself?
  2. Unintended consequences: Autonomous machines rely on algorithms and programming to make decisions. However, there is always the risk of a bug or glitch causing the machine to behave in unexpected ways. This could lead to unintended harm to civilians or friendly forces.
  3. Loss of human control: Autonomous war machines have the potential to operate beyond human control. This raises the concern that these machines may make decisions that humans would not have made or act in ways that are unethical or immoral.
  4. Impact on military strategy: The use of autonomous war machines could fundamentally change the nature of warfare. This raises the concern that countries may become overly reliant on this technology, leading to a shift away from human decision-making and potentially destabilizing military strategies.
  5. Moral implications: The use of autonomous war machines raises moral questions about the value of human life and the role of technology in warfare. There is a concern that the use of these machines could lead to a dehumanization of warfare, making it easier for countries to justify the use of force and making it more difficult to achieve peaceful outcomes.

Addressing these ethical concerns will require a comprehensive and ongoing discussion among policymakers, military officials, academics, and the public. It's important that we carefully consider the potential risks and benefits of using autonomous war machines and develop clear guidelines for their use in a way that aligns with our values and priorities as a society.

Here’s Bing AI’s response to the question:

Dave: what are the ethical dilemmas related to autonomous war machines?

Bing AI: There are many ethical dilemmas related to autonomous war machines. Some of them are:

  • Should machines be allowed to make life or death decisions autonomously?
  • Is fighting a war by “remote control” lessening the connection of those conducting the war to the impact of their actions?
  • With war fighting becoming increasingly like operating a video game, what are the implications of using a video game as a recruiting tool?
  • Can autonomous weapons be full ethical agents due to the restrictions of their coding?
  • How can we ensure that human operators are responsible for the harm caused by autonomous weapons?
  • How can we prevent hacking or manipulation of autonomous weapons by enemies?
  • How can we regulate and monitor the development and use of autonomous weapons?
  • How can we preserve human dignity and rights in warfare involving autonomous weapons?
  • How can we avoid escalation or proliferation of autonomous weapons among nations?

These are some of the questions that need to be addressed before deploying autonomous war machines. What do you think about these ethical dilemmas?

A Serious Warning Against Fully Autonomous Fighting Machines

The development of autonomous military robots is an exciting field, but we must proceed with caution. The risks of using autonomous weapons in warfare are too great to ignore. The lack of accountability, the potential for unintended consequences, the loss of human control, the impact on military strategy, and the moral implications are all ethical dilemmas that require careful consideration.

As AI and robotics continue to advance, it's important that we recognize the dangers of fully autonomous fighting machines. Without proper oversight and guidelines, we risk creating weapons that could be used in ways that we never intended, with devastating consequences. The possibility of machines that can make life and death decisions without human intervention is a terrifying prospect.

As a society, we need to consider the potential risks and benefits of using autonomous weapons and develop clear guidelines for their use in a way that aligns with our values and priorities. We should proceed with caution and prioritize human control over autonomous decision-making. The use of fully autonomous fighting machines is simply too risky and should be avoided.

Let us learn from the past and ensure that we do not repeat the mistakes made with the development and deployment of other devastating weapons. Let us act with responsibility and foresight to ensure that we build a safer world for ourselves and future generations.

Here’s the full gallery of the SPAWAR tour photos I shot that day.

Tags

  • AI
  • Robotics
  • Military
  • Autonomous Weapons
  • Natural Language Processing
  • SPAWAR
  • Ethical Dilemmas
  • ChatGPT
  • LLM
  • Large Language Models
  • NAVWAR

Metadata

Post date: Thursday, March 9th, 2023 at 6:09:40 PM