Maya Khachab
“Autonomous weapons are not a work of science fiction from a distant dystopian future. They are an immediate cause of humanitarian concern and demand an urgent, international political response,” said Neil Davison, senior scientific and policy adviser at the International Committee of the Red Cross.
Houses and buildings destroyed by Israeli strikes in Jabalia, in the northern Gaza Strip (Creative edit: India Today, via Reuters)
Lethal autonomous weapon systems (LAWS) are defined as “weapon systems using sensors and algorithms to independently identify and destroy targets without human control.” This independence raises grave concerns about accountability and the ethical implications of their use.
Civil society organizations are actively condemning autonomous weapons, often called “killer robots,” and advocating for global restrictions. This condemnation aligns with the United Nations’ growing concern: in December 2023, the UN General Assembly adopted Resolution 78/241, emphasizing the humanitarian, legal, and ethical issues posed by these technologies. Despite international pressure, the U.S. remains a primary supporter of autonomous weapons, investing billions in their development and deployment.
The U.S. Department of Defense maintains that autonomous weapons still allow operators to exercise appropriate human judgment over the use of force. However, critics argue that delegating life-and-death decisions to machines dehumanizes violence, reducing human lives to mere numbers. Although the department claims these systems minimize collateral damage by striking only military objectives, mounting evidence suggests that such technology can have catastrophic humanitarian consequences, especially in densely populated areas.
Nowhere is this more evident than in the ongoing genocide of Palestinians in the Gaza Strip. The Israeli military, with U.S. support, claims to target Hamas militants in its bombing campaigns. However, reports from humanitarian organizations and independent observers indicate that these operations have used “disproportionate force,” killing tens of thousands of civilians, destroying infrastructure on a wide scale, and eroding Palestinian civil society. In July 2024, the British medical journal The Lancet estimated that, once indirect deaths are counted, the toll in Gaza could reach 186,000 or more, making it one of the deadliest military campaigns in modern history.
Israel, one of the largest recipients of U.S. military aid, has deployed automated weapon systems such as robo-snipers, loitering “suicide” drones, and drone swarms. Since October 2023, the U.S. has provided Israel with $17.9 billion in military aid, enabling the use of these technologies across conflict zones.
Over the past year, the IDF has employed advanced AI systems, including “Lavender,” to generate extensive “kill lists” with minimal human oversight. Lavender marked tens of thousands of Gazans, including children, as suspects, raising significant ethical concerns. Another system, called “Where’s Daddy?”, tracks the individuals Lavender identifies and flags when they enter their family homes, where they are then targeted, often with their families present. An Israeli intelligence officer explained, “The IDF bombed targets in homes without hesitation as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
In 2019, computer scientist and AI expert Stuart Russell published Human Compatible, which examines the dangers of autonomous weapons, including their potential to aid ethnic cleansing and to target minorities under authoritarian regimes. Russell argues that these weapons can commit mass killings without human supervision, allowing a government to scale violence to unprecedented levels: “You can do a million times more killing by buying a million times more weapons because the weapons are autonomous. Unlike remotely piloted drones, they don’t need individual supervision to do their work” (Russell, 112).
Likewise, former Israeli Army Chief of Staff Aviv Kochavi said, “You see, in the past, there were times in Gaza where we would create 50 targets per year. And here, the machine produced 100 targets in one day.” Russell suggests that autonomous weapons could select victims at massive scale by age, gender, skin color, uniform, or facial recognition (Russell, 110). This capacity allows them to be turned against ethnic or religious groups, posing ethical questions about their use in large-scale, targeted violence. Russell warns that autonomous weapons are “weapons of mass destruction” that can “eliminate only those who might threaten an occupying force,” making them, for that grim purpose, more effective than nuclear or conventional weapons (Russell, 110). Such technology could allow authoritarian regimes to target vulnerable populations without scrutiny, making its proliferation all the more dangerous. Russell’s analysis shows that AI-driven weapons must be regulated to prevent oppression and genocide.
On April 5, 2024, the UN Human Rights Council adopted a resolution explicitly criticizing Israel’s use of artificial intelligence in military decision-making as potentially contributing to international crimes. Ten days later, a group of United Nations experts issued a press release deploring Israel’s use of AI-directed military operations in occupied Gaza, which have taken an unprecedented toll on the civilian population, housing, essential services, and infrastructure. Under the Leahy Law, the U.S. Department of State and Department of Defense are prohibited from providing military assistance to foreign security force units that violate human rights with impunity. The United States should align itself with humanitarian organizations, and with its own law, by suspending military aid to Israel for using LAWS in ways that violate human rights.
Congress could enact legislation banning the development, production, sale, and use of fully autonomous weapons, requiring human oversight to preserve accountability. Internationally, the U.S. could spearhead a coalition for a binding treaty prohibiting LAWS, working through the UN’s Group of Governmental Experts on LAWS and with NATO to create and enforce global standards. Transparency measures should require defense contractors and military branches to disclose AI applications in weapons, promoting both domestic and international accountability. Expanded export controls, implemented through frameworks like the International Traffic in Arms Regulations (ITAR), could further limit the proliferation of these weapons, especially to nations with poor human rights records or ongoing conflicts.
Congress must acknowledge the ongoing genocide in Gaza, immediately halt military and financial aid to Israel, and establish an arms embargo. The deployment of LAWS has driven the death toll ever higher, underscoring the need for urgent action. The U.S. should lead in establishing a global framework for responsible AI governance, setting ethical standards to prevent further tragedies in Gaza and in future conflicts. By prioritizing responsible disarmament, the U.S. can curb the spread and use of autonomous weapon technologies worldwide.