Departing from widely accepted principles of warfare such as discrimination and proportionality poses moral dangers and has prompted urgent calls for a moratorium on using AI in conflicts.
Last month, +972 magazine reported that the Israel Defense Forces (IDF) used Lavender, an artificial intelligence-powered targeting system, to identify over 30,000 targets and generate a “kill list” in Gaza. In November 2023, it detailed the IDF’s use of Gospel, another AI-powered system, to target facilities linked to militants.
Though still awaiting verification, these reports have raised concerns over states’ reliance on military AI as civilian casualties rise in Gaza. The IDF denied using AI to select bombing targets, dismissing the reports as exaggerations. Nonetheless, given the IDF’s prior assertions about precise targeting, these reports support the claim that Israel is increasingly using military AI in the conflict.
Using military AI is now a global trend, and what is happening in Gaza serves as a warning that we are entering a new era of AI-assisted warfare, in which targeting and killing with minimal human input is increasingly the norm. Without vigilance and updated regulations, the risks posed by military AI will likely escalate even further in future warfare.
A surge in civilian casualties in Gaza signifies an unprecedented degree of harm suffered by the civilian population. Since October 2023, the total Palestinian death toll has risen to 33,843, with 66 percent being women and children. According to data compiled by Yagil Levy, an Israeli political scientist, noncombatants accounted for a staggering 60 percent of casualties in the latest aerial attacks.
The high civilian casualty rate has raised concerns about the humanitarian situation reaching a breaking point. United Nations Secretary-General Antonio Guterres said he was “deeply troubled” by certain claims in media reports. Some human rights experts have warned that the systematic destruction of civilian facilities could constitute war crimes, with some going so far as to characterize it as “AI-assisted genocide”.
Critics have cautioned that the reports remain unproven. But, at its worst, AI becomes a technological justification for the widespread killing of innocent people.
Israel has been developing AI-supported targeting systems since 2019. With such “revolutionary changes”, the IDF deployed two new AI systems in recent military attacks, making autonomous warfare a reality today.