AI in the Crosshairs: Deciding Life and Death in the Gaza Conflict

  • Collected by J. Doe
  • Read Time: ~4 Mins

The AI leaves the soldier 20 seconds for a life-or-death decision about another human being.

The latest ceasefire in Gaza has crumbled, and the Israeli army is once again shelling the area. Palestinian reports indicate that over 400 people, including numerous children, died in the first night alone after fighting resumed. The question keeps resurfacing: Why are so many civilians suffering in Gaza? Is it because Israel indiscriminately bombards everything and everyone without care or distinction?

The opposite may be the case, experts say - and precisely because of the use of artificial intelligence (AI).

The military's use of AI is nothing new, says Frank Sauer, a professor at the Bundeswehr University in Munich. "AI offers the military the same benefits it offers civilian life: it streamlines operations. It is used across many areas, such as logistics or administrative tasks," he states. But Israel's military does not use AI only for administrative chores. With its help, the army identifies targets for attack. The programs are called "Lavender," "Gospel," and "Where is Daddy?", as revealed by Oscar-winning journalist Yuval Abraham through his research and anonymous sources within the Israeli military.

A Score-based System Decides Life and Death

"Lavender" is designed to detect human targets, while "Gospel" recognizes structures that could be used by the enemy. "Where is Daddy?" locates individuals. This information was obtained from Abraham's research. "The Gaza Strip is meticulously mapped and monitored. The Israeli military possesses a vast array of data points pertaining to each resident, including their whereabouts, interactions, and frequency," explains Sauer. Lavender uses this data to generate a score that predicts, with a probability, whether the individual is a member of Hamas.

In densely populated Gaza, this produced thousands of targets in rapid succession, especially at the start of the war, making meaningful human review nearly impossible. On average, soldiers had just 20 seconds to decide on the next attack, according to Abraham. "The issue isn't so much that AI analyzes information and identifies targets," says Laura Bruun of the peace research institute SIPRI. "The problem is the speed at which this happens, which diminishes the significance of the human decision." A frequently observed effect compounds this: people tend to trust the machine more than their own judgment, a tendency known as "automation bias." Yet according to Abraham, the AI's error rate was approximately ten percent.
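To get a feel for the scale these figures imply, here is a minimal back-of-the-envelope sketch. The number of AI-flagged targets used below is a purely hypothetical assumption for illustration; only the roughly ten percent error rate and the 20-second review window come from Abraham's reporting as cited above.

```python
# Illustrative back-of-the-envelope calculation only.
# The flagged-target count is a hypothetical assumption; the ~10% error
# rate and the ~20-second review window are the figures Abraham reports.

flagged_targets = 10_000          # hypothetical number of AI-flagged targets
error_rate = 0.10                 # ~10% of flags are wrong, per Abraham
review_seconds_per_target = 20    # average human review time, per Abraham

misidentified = flagged_targets * error_rate
total_review_hours = flagged_targets * review_seconds_per_target / 3600

print(f"Potentially misidentified people: {misidentified:.0f}")
print(f"Total human review time at 20 s each: {total_review_hours:.0f} hours")
```

Even under these assumed numbers, a ten percent error rate translates into hundreds or thousands of wrongly flagged people, while reviewing each flag for only 20 seconds leaves little room for the human judgment Bruun describes.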

Incompatible with International Law

"Something appears to have gone awry in this blend of man and machine within the Israeli military's use of AI," suggests Sauer. While the targets in Gaza were reviewed, it was merely a cursory assessment. For instance, audio recordings were only screened to determine if the voice was male and adult. Bruun and Sauer concur that the standards for approving targets were too lenient. Abraham reports that the Israeli military has established a quota for air strikes - 15 to 20 civilian deaths are acceptable if successfully eliminating a low-ranking Hamas member.

Furthermore, Israel appears to have targeted individuals primarily in their homes. The AI tool "Where is Daddy?" alerted the relevant units as soon as a target entered their residence. The building was then attacked, often at night, presumably to ensure the target had not left before the airstrike. Those responsible presumably knew the consequences of this strategy. The program's name suggests who was often hit: fathers - and their families.

Such an approach is hardly compatible with international humanitarian law, asserts political scientist Frank Sauer. "You can't just bomb a house because you suspect a Hamas member might be inside, without considering who else might be in it." Israel argues that AI helps it adhere to international humanitarian law - the more precise the targeting, the fewer civilian casualties. However, Sauer asks, "Is it truly possible to harness this potential without falling into the pitfalls that AI presents?"

In the end, humans must determine if a target designated by AI is legitimate, states Laura Bruun. "In target acquisition, the question is: who is a combatant and who is not? But that's not always straightforward. What if a fighter surrenders or is injured? It's extremely difficult - if not impossible - for a machine or AI to make such judgments." Therefore, as Sauer puts it: "The Israeli military's use of AI reveals a flaw in the collaboration between man and machine."

Will the Israeli military continue to rely heavily on AI in its renewed offensive on Gaza? Sauer considers it plausible that AI will remain a key component. Journalist Yuval Abraham even suggests that Israel is developing a ChatGPT-like program to monitor the occupied territories. The AI could be asked about specific individuals - and the computer would then indicate whether that person is a potential "troublemaker."

This sounds like science fiction. For now.

