By Sophia Baerend
April 8, 2026
Introduction
Methods of warfare have advanced rapidly in recent years. In this century alone, military innovation has moved from stealth technology and precision-guided missiles around 2000 to increasingly autonomous AI systems and hypersonic weapons by 2026. This article analyzes the philosophical, ethical, and legal implications of AI use in modern warfare.
Historical Relevance of Autonomous and Automatic Weapons
The evolution of weapons and defense systems traces a shift from routine-based automation to intelligent, autonomous systems. Early examples of automated weapons include guided missiles developed by the United States during World War II, such as the ASM-N-2 Bat, which followed pre-programmed flight paths and operated with only limited sensor feedback (Frazier, 2025).
This technology enabled such weapons to track and strike targets with minimal human intervention. While highly effective and a significant technological advancement for their time, they were tightly constrained in functionality and flexibility.
The transition to AI-enabled systems in the late 20th to early 21st centuries introduced learning, real-time data processing, and more advanced decision-making, enabling weapons to operate at a previously unseen level of autonomy. Modern examples of this progression include the use of unmanned drones and loitering munitions (Beebe, 2025), which are weapons capable of surveillance, target identification, and execution without human interference.
This evolution shows a fundamental transformation in warfare and conflict, with complex computers and algorithms increasingly replacing the roles of soldiers and human operators. While marking a new technological era in how humanity defines war, it raises a slew of strategic and ethical questions.
What Are Autonomous Weapons?
These developments raise the question: what exactly is a fully autonomous weapons system? According to the United Nations Office for Disarmament Affairs, there is no universally agreed-upon definition. The office nonetheless describes such systems as “weapon platforms and technologies that, once activated, can search for, identify, select, and engage targets without further human intervention” (United Nations Office for Disarmament Affairs [UNODA], 2025).
The existence of a weapons system capable of indiscriminate killing without a human controller raises a multitude of ethical questions regarding accountability, civilian harm, and decision-making. To answer such questions, it is crucial to distinguish between types of weapons systems (Tiwari, 2026). An automated system follows a fixed set of instructions and rules for repetitive tasks; for example, Remote Weapons Systems (RWSs) operate according to predetermined rules and are controlled remotely. An autonomous system, by contrast, operates independently, making decisions about its environment using sensor data.
Such weapons include Israel’s largely autonomous Iron Dome defense system, which detects incoming rockets or missiles and calculates whether to intercept based on the damage they could cause if they reached their targets. It can adapt without human intervention, displaying significantly more intelligence than automated systems. Autonomy itself spans five levels (Tonsen, 2025): at level one, the AI acts only as a basic assistant to a human operator; the levels grow increasingly advanced until level five, at which point the system operates fully independently.
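The intercept logic described above can be caricatured as a simple cost-aware rule: engage only when the projected impact point threatens something worth protecting. The sketch below is a deliberately simplified illustration, not Iron Dome's actual algorithm; all names (`Track`, `PROTECTED_CELLS`, `should_intercept`) and values are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A radar track of an incoming projectile (all values hypothetical)."""
    predicted_impact: tuple  # (x, y) grid cell of the projected impact point
    speed_m_s: float

# Hypothetical map of protected zones: grid cells containing people or assets.
PROTECTED_CELLS = {(3, 4), (3, 5), (4, 4)}

def should_intercept(track: Track) -> bool:
    """Engage only if the projected impact threatens a protected cell.
    Projectiles headed for open ground are ignored, conserving interceptors."""
    return track.predicted_impact in PROTECTED_CELLS

# A rocket projected to land on a populated cell triggers an intercept:
print(should_intercept(Track(predicted_impact=(3, 4), speed_m_s=800.0)))  # True
# One headed for open ground does not:
print(should_intercept(Track(predicted_impact=(9, 9), speed_m_s=800.0)))  # False
```

Even this toy rule shows where the ethical weight sits: the set of "protected" locations and the decision threshold are chosen by humans in advance, but each individual engagement decision is made by the machine.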
The Relevance of Philosophy in Conflict
An ethical analysis of autonomous weapons is based on established philosophical principles that help assess both the justification for war and conduct in conflict. Just War Theory (Carnegie Council, 2023), developed over centuries of reflection on war, is an ethical doctrine holding that war can be morally justifiable if strict criteria are met regarding why it is fought (jus ad bellum) and how it is fought (jus in bello).
From a philosophical standpoint, there is a contrast between deontological and consequentialist ethics (Osimen et al., 2024), which highlights a major source of tension in evaluating AI-operated weapons: should actions be judged by adherence to moral rules or by their outcomes? Deontology holds that an outcome is only as moral as the process used to reach it; consequentialism holds that actions are justified so long as their outcomes are ethically good.

These theories map readily onto both sides of the argument over autonomous weaponry. A deontologist could argue that AI-controlled weapons are always ethically wrong to use, since deploying them means accepting in advance the potential deaths of innocent civilians. A consequentialist could counter that their use is not immoral, as their speed and efficiency may ultimately save more lives than they take.

This contrast raises fundamental questions about responsibility and accountability, especially whether machines can be considered capable of making ethical decisions or whether such responsibility should remain human. The growing willingness to hand life-or-death decisions to algorithmic systems complicates traditional notions of responsibility, highlighting an emerging accountability gap in modern conflict.
Ethical Implications of Autonomous Weaponry
Beyond the deontology–consequentialism debate, autonomous weapons raise further ethical issues that deviate from, and in some cases contradict, the norms that have become commonplace in warfare and international law. One of the most prominent concerns is accountability: who should be held responsible when autonomous systems cause unintended harm (Adamson, 2026)? Because the decision-making process may involve several actors (software developers, military personnel, commanding authorities, etc.), an accountability gap can arise when it becomes unclear who is morally or legally responsible for the system’s actions.
Another issue involves civilian protection and discrimination (Docherty, 2025), a key aspect of international humanitarian law (commonly known as ‘the law of war’), which requires militaries to distinguish between military targets and civilians. AI systems may struggle with the complexity of real-world situations, and biases in training data and programming may further increase the risk of misidentification.
Questions regarding proportionality and decision-making also arise, as ethical warfare requires assessing whether a proposed military action is justified by the collateral damage it could cause. This assessment is closely connected to the broader debate around “meaningful human control” (Marijan, 2024), a concept discussed by organizations such as the UN in the context of lethal autonomous weaponry.
The Legal Aspects of Non-Human Controlled Conflict
Finally, the rapid development of AI-powered military technology raises fears of escalation and arms-race dynamics: as countries compete to develop increasingly autonomous and intelligent capabilities, the threshold for initiating conflict could fall, creating an ever more unstable security environment.
From a legal and regulatory perspective, views on autonomous weapon systems are largely grounded in International Humanitarian Law (Davison, 2018), which governs armed conflict and includes the principles of proportionality and military necessity. A significant question in recent debates is whether the legal frameworks currently in place are sufficient to address the distinctly 21st-century issues posed by AI-operated weapons, or whether newer, more specific regulations are needed. Humanitarian law applies to all weapons systems, but its enforcement becomes more complicated when decisions previously made by humans are delegated to algorithms.
International organizations, such as the UN, have facilitated discussions on these issues through mechanisms like the Convention on Certain Conventional Weapons (UN Office for Disarmament Affairs, 2025), where countries and experts examine the potential implications of emerging technologies. Increasingly, discussions are focusing on Autonomous Weapons Systems (AWS), with some countries and advocacy groups calling for preemptive bans or strict regulations (Chengeta, n.d.). Others argue for continued technological development under preexisting legal standards. The existence of such discussions shows the tension between innovation, military ambition, and the need to uphold the established legal and ethical standards of warfare.
Conclusion
In conclusion, the fast-paced development of autonomous weapons represents a significant shift in how war is fought, blending advanced technology with profound ethical and legal challenges. Such systems offer speed and precision that human operators cannot match while potentially reducing risk to soldiers. However, they simultaneously raise questions about legal accountability, moral decision-making, and the protection of civilians, prompting us to adjust existing ethical and legal frameworks as control shifts from human to machine. As autonomous weaponry continues to develop, the need for clear regulation and meaningful human oversight becomes ever more urgent.
Ultimately, the future of conflict depends not only on technological capabilities but also on humanity’s ability to control them, ensuring such advancements remain aligned with our fundamental principles. Automating the life-or-death decision-making process does not remove the human element of conflict or war. Yet how and when life is taken should perhaps not be decided solely by an algorithmic machine.
Reference List
Adamson, L. (2026, March 9). Legal accountability for AI-driven autonomous weapons. Lieber Institute West Point. https://lieber.westpoint.edu/legal-accountability-ai-driven-autonomous-weapons/
Beebe, E. (2025, August 22). Loitering munitions 101: What they are and why they matter. IDGA. https://www.idga.org/command-and-control/articles/loitering-munitions-101-what-they-are-why-they-matter
Carnegie Council for Ethics in International Affairs. (2023). Just war. https://www.carnegiecouncil.org/explore-engage/key-terms/just-war
Chengeta, T. (n.d.). Is the Convention on Conventional Weapons the appropriate framework to produce a new law on autonomous weapon systems? https://researchonline.ljmu.ac.uk/id/eprint/19201/1/Is%20the%20CCW%20the%20appropriate%20framework%20to%20produce%20new%20law%20on%20AWS.pdf
Davison, N. (2018). A legal perspective: Autonomous weapon systems under international humanitarian law. ICRC. https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf
Docherty, B. (2025, April 28). A hazard to human rights. Human Rights Watch. https://www.hrw.org/report/2025/04/28/a-hazard-to-human-rights/autonomous-weapons-systems-and-digital-decision-making
Frazier, A. (2025, October 18). From World War II to drone wars: How America’s first guided weapons launched the Space Age. Military.com. https://www.military.com/feature/2025/10/17/world-war-ii-drone-wars-how-americas-first-guided-weapons-launched-space-age.html
Marijan, B. (2024, December 9). Meaningful human control and AI-enabled warfare. Project Ploughshares. https://ploughshares.ca/meaningful-human-control-and-ai-enabled-warfare/
O’Hanlon, M. (2018, September). Forecasting change in military technology, 2020-2040. Brookings. https://www.brookings.edu/articles/forecasting-change-in-military-technology-2020-2040/
Osimen, G. U., Newo, O., & Fulani, O. M. (2024). Artificial intelligence and arms control in modern warfare. Cogent Social Sciences, 10(1). https://doi.org/10.1080/23311886.2024.2407514
Tiwari, A. (2026). Automated systems vs autonomous. Scribd. https://www.scribd.com/document/951259004/Automated-Systems-vs-Autonomous
Tonsen, N. (2025, October 22). The 5 levels of AI autonomy: From co-pilots to AI agents. Turian. https://www.turian.ai/blog/the-5-levels-of-ai-autonomy
United Nations Office for Disarmament Affairs. (2025). The Convention on Certain Conventional Weapons. https://disarmament.unoda.org/en/our-work/conventional-arms/convention-certain-conventional-weapons
United Nations Office for Disarmament Affairs. (2025). Lethal autonomous weapon systems. https://disarmament.unoda.org/en/our-work/emerging-challenges/lethal-autonomous-weapon-systems
Disclaimer of liability:
While we are transparent about all sources used in this article and have double-checked the information given, we make no claims about its completeness, accuracy, or reliability. If you notice a mistake or misleading phrasing, please contact centuria-sa@hhs.nl.
This article also contains links to third-party websites, which are provided solely for the reader’s convenience and do not imply endorsement of their contents.