Gaza and the Collective Political Costs of Algorithmic Warfare

Since Israel began its military campaigns against Gaza, Lebanon, and the West Bank after the terrorist attacks of October 7, 2023, both academia and the media have intensely debated the Israel Defence Forces’ (IDF) use of so-called artificial intelligence-enabled decision-support systems (AI-DSS) in its combat operations. Controversies have mainly revolved around the staggering death toll in Gaza, which at the time of writing stands at nearly 50,000 by conservative estimates, the vast majority of them civilians. Focusing on the systems’ possible contribution to this immense loss of life, as most scholarly interrogations of the AI-DSS in use have done, has been and remains of utmost importance. In this post, however, I appraise some of the less immediate, larger societal costs caused by AI-supported systems like those deployed by the IDF. In particular, the piece attends to how such systems, because of the pervasive surveillance without which they cannot function, inherently impede political agency and thereby ultimately violate the collective right to self-determination of populations living within their area of deployment.

Improved Civilian Protection as the Promise of AI-DSS

Despite its obvious shortcomings in sparing civilian lives in Gaza, to date the IDF has remained steadfast in its commitment to the narrative that AI-DSS “make the intelligence process more accurate and more effective”, enabling Israel “to minimize civilian casualties”. This rationalisation of AI-based technologies is all too familiar, and in no way limited to the IDF. States frequently echo the expectation that such systems will be capable of assisting armed forces in ensuring better compliance with international humanitarian law (IHL) and in particular its core principles of distinction, proportionality, and precautions in attack, as laid down in Additional Protocol I to the Geneva Conventions (AP I) and as reflected in customary law. In the autumn of 2024 – when both the casualty figures in Gaza and the IDF’s routine deployment of AI-DSS had been common knowledge for months – two high-level diplomatic communiqués, the “Blueprint for Action” of the “Responsible AI in the Military Domain” (REAIM) Summit and a UN General Assembly resolution, professed the purported advantages of AI in improving the protection of civilians and civilian objects in armed conflict. This narrative, which continues to gain traction, has prompted some scholars to conclude that not employing AI in warfare “would be irresponsible and unethical”.

Through such ritual invocation of expected improvements in precision and accuracy, the IHL targeting rules have come to function as a justificatory rhetorical framework for the continuing entrenchment of all-encompassing surveillance practices in conflict zones. For whether or not any of the claims about enhanced IHL adherence and the protection of civilian lives ever hold up to scrutiny, such optimistic statements fail to account for how many of the AI-DSS in use today actually operate. That is a problem because it deflects attention from the larger societal costs arising from the deployment of such systems. Despite what the promotional materials of some of the best-known AI-DSS sometimes imply, the primary task of such systems is frequently not simple object classification that allows a military commander, for example, to accurately distinguish an enemy tank from a civilian bus. Instead, as Elke Schwarz has recently pointed out, their principal purpose is often to “find” and generate new targets: they discover “suspicious” behavioural patterns or other signifiers in aggregated volumes of data that mark individuals as targetable.

The Link between AI-DSS and Perpetual Surveillance

One system that has received a lot of media attention is “Lavender”. This AI-DSS is based on a machine-learning algorithm that reportedly plays a major part in the generation of the IDF’s target banks by predicting the likelihood of a person belonging to an armed group in Gaza. Such predictions are generated through an analysis of data synthesised and aggregated from various sources including the person’s family tree or mobile phone data, as well as behaviour monitored both on- and offline. Unlike classification algorithms that are merely supposed to facilitate the targeting process by distinguishing objects located on the battlefield, Lavender’s target discovery function would not be achievable through the occasional collection and labelling of appropriate training data or even through synthetically generated input data during training. By definition, such systems cannot function without massive surveillance infrastructures that constantly monitor the target population because this is the only way they can detect anomalous behavioural patterns that might indicate adversarial behaviour calling for security interventions, including lethal action. Relatedly, more recent reports have uncovered the development of yet another AI-enabled system, a “ChatGPT-like tool” for Israeli security forces that purportedly aims to predict “who the next terrorist will be”. To this end, the tool utilises the same varied data sources acquired through existing surveillance infrastructures to generate detailed information about every Palestinian in the occupied territories.
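None of these systems’ technical details are public, but the basic logic described in the reporting (aggregating multi-source features about a person into a likelihood score that feeds a target bank) can be illustrated with a deliberately simplified sketch. Everything in the snippet below, including the feature names, weights, normalisation, and threshold, is a hypothetical placeholder for illustration, not a claim about how Lavender or any comparable system actually works.

```python
# Illustrative sketch only: a toy scoring model of the kind the reporting
# describes (multi-source surveillance features -> membership likelihood
# -> target bank). All feature names, weights, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class PersonProfile:
    # Hypothetical features aggregated from pervasive surveillance
    calls_to_flagged_numbers: int      # mobile-phone metadata
    flagged_family_links: int          # "family tree" / social-graph data
    visits_to_flagged_locations: int   # movement patterns
    online_signals: int                # monitored online behaviour


# Hand-set weights standing in for a trained model's learned parameters
WEIGHTS = {
    "calls_to_flagged_numbers": 0.35,
    "flagged_family_links": 0.25,
    "visits_to_flagged_locations": 0.25,
    "online_signals": 0.15,
}


def membership_score(p: PersonProfile) -> float:
    """Return a 0-1 'likelihood' score from aggregated surveillance features."""
    raw = sum(WEIGHTS[name] * getattr(p, name) for name in WEIGHTS)
    return min(raw / 10.0, 1.0)  # crude normalisation, purely for illustration


def build_target_bank(population: list[PersonProfile], threshold: float = 0.7):
    """Everyone whose score clears an arbitrary threshold enters the 'bank'."""
    return [p for p in population if membership_score(p) >= threshold]
```

Even in this toy form, the structural point is visible: the score exists only because every resident is continuously profiled across multiple data sources, and whoever crosses an arbitrary threshold enters the “bank”, whether or not they have any connection to hostilities.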

Scholarly and policy publications increasingly recognise the obvious privacy implications these practices entail. But as is often the case in the legal and ethical assessment of AI systems, the focus has mostly been on the avoidance of harmful biases in the input datasets and on the explainability of a system’s predictive output, usually with the ultimate objective of enabling some variation of “meaningful human control” over the system; conversely, almost no attention has been paid to the principle of data minimisation, or in other words to the question of restricting the underlying surveillance practices themselves. This can be explained by the inherent tension at work: if the training and operational feeding of AI-DSS with massive quantities of contextually relevant (i.e., pertaining to the area of deployment) and time-sensitive (i.e., up-to-date) data is the only way to reliably identify legitimate targets and increase accuracy, and thus actually to contribute to greater IHL compliance – accepting this narrative for the sake of argument – then the continuous and sustained collection of ever more data through constant multi-source surveillance becomes imperative. Invoking the right to privacy can be of only limited help in this scenario.

The Obstruction of Political Agency and Self-Determination

More importantly, and as I explain in more detail in my article “Self-Determination in the Age of Algorithmic Warfare” – recently published as part of a special issue of the European Journal of Legal Studies – the focus on the individual right to privacy obscures the collective dimension of the extensive surveillance practices in the context of AI-DSS. To understand this perhaps less than intuitive connection, it is first necessary to conceive of a people’s right to self-determination not merely as an enforceable claim to a material-legal outcome in terms of political status (“internal” or “external” self-determination, e.g. autonomy or independence) but as also entailing a collective right to political action in furtherance of these ends. Without this corresponding procedural component, which provides for the right to form a collective political will, a people’s right to “freely determine their political status” (Article 1(1) ICCPR) would be rendered effectively meaningless (see, more explicitly in this regard, Article 20(2) of the African Charter on Human and Peoples’ Rights).

This understanding of self-determination raises the question of what conditions must exist for a people to be able to exercise the right through the collective formation of a political will aimed at determining their political status. As I explore in my article, one possible answer lies in a nuanced reading of the concept of spontaneity as developed in the thought of Rosa Luxemburg and Hannah Arendt. According to Luxemburg, spontaneous political action is a necessary catalyst for engendering the circumstances that enable political transformation and thus genuine self-determination. Expanding on this claim, Arendt insisted that “all political freedom would forfeit its best and deepest meaning without this freedom of spontaneity”. The work of the two authors reveals the capacity for spontaneous initiative as the condition of possibility for an emancipatory politics of change and thereby demonstrates its intrinsic link to the collective actualisation of the right to self-determination. Simply put, the right is sufficiently realised only if the right holders are in a position to exercise their political agency spontaneously.

The capacity for spontaneous political action is directly and structurally undermined by the mechanics of AI-DSS such as “Lavender”. Seeking to generate targets by identifying individuals who exhibit “anomalous” behaviour or otherwise “suspicious” patterns, such algorithmic systems can only produce predictions based on an analysis of past data. Inherently, such a system “freezes the past” in the form in which it is represented within the dataset fed to the system to build its internal model, a dataset that was itself produced through constant surveillance of the target population. It follows that these AI-DSS proceed from the baseline assumption that human behaviour remains essentially constant, that is, consistent with whatever patterns and frequencies have been detected in the input dataset. By its very nature, any genuinely spontaneous political action, which, as Arendt reminds us, “always happens against the overwhelming odds of statistical laws and their probability”, initiates something new by disrupting the preconceived course of events. Inevitably, such action will not find representation in the dataset and will thus be registered by the AI-DSS as an anomaly, raising suspicion. For this reason, a data subject acting in spontaneous and therefore unexpected ways is likely to end up in a target bank – with lethal consequences. This prospect renders any political act fraught with unpredictable risk.
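To make the mechanism concrete, consider a minimal, purely illustrative sketch: an anomaly detector fitted to historical behavioural data will, by construction, flag as anomalous any behaviour that departs from the recorded past, which is exactly what spontaneous collective action looks like in the data. The choice of model (scikit-learn’s IsolationForest) and the synthetic “routine” features below are my own assumptions for the purpose of illustration, not a description of any fielded system.

```python
# Minimal illustration: behaviour that breaks with recorded patterns is,
# by construction, what an anomaly detector flags. The model choice and the
# synthetic "daily routine" features are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical surveillance data, e.g. [daily movements, calls made, new contacts],
# drawn from a stable routine -- the "frozen past" the model learns from.
past_behaviour = rng.normal(loc=[5.0, 10.0, 1.0],
                            scale=[1.0, 2.0, 0.5],
                            size=(1000, 3))

detector = IsolationForest(random_state=0).fit(past_behaviour)

routine_day = np.array([[5.0, 10.0, 1.0]])   # matches the learned pattern
protest_day = np.array([[25.0, 60.0, 40.0]])  # spontaneous mass gathering

print(detector.predict(routine_day))  # expected: [1]  -> treated as "normal"
print(detector.predict(protest_day))  # expected: [-1] -> flagged as anomalous
```

The point is not the particular model but its logic: anything absent from the training distribution is, by definition, suspicious to it.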

Under conditions of constant algorithmic surveillance feeding AI-DSS, spontaneous political action becomes practically impossible. Following the procedural understanding advanced here, the right to self-determination under international law is directly implicated. Ultimately, therefore, the prevailing narrative that supports the increased use of AI-DSS to enhance IHL compliance rationalises and enables the continuous suppression of collective political practice, the essence of self-determination, in Palestine.

The Implications Beyond Gaza

As far as the situation in Gaza is concerned, the violation of the right to self-determination through the use of AI-DSS and other systems of algorithmic surveillance can be regarded as an outgrowth of, and to some extent predicated on, the infrastructures of occupation that have long been in place and that facilitate the continuation of illegal foreign control over the Palestinian body politic. In that sense, one may argue that such structures of domination are a precondition for AI-enabled systems to exert the described effects on the political rights of the population under surveillance. If that is true, then a comparable situation can perhaps only be found in Xinjiang, where the Chinese Communist Party has long utilised artificial intelligence to persecute the Uyghur community. But the evolution of Israel’s practices is itself intelligible only within the larger context of the security practices that emerged during the post-9/11 “war on terror”, with its ongoing trend towards the further “datafication of counter-terrorism”. Mass surveillance programmes such as the U.S. National Security Agency’s “XKeyscore” or “Skynet”, with their own versions of anomaly detection, foreshadowed the more immediate and consequential implications for political agency that Lavender now exemplifies.

The argument advanced here is thus not contingent on situations of direct and intentional domination. The same considerations would in principle apply if a state employed such AI-DSS for any other security purpose, as long as the systems depend for their operation on sweeping surveillance practices. What is specific about the situation in Gaza, as far as I can tell, is Israel’s explicit rationalisation of the use of these types of AI-DSS, and thus of permanent surveillance, in the purported interest of increased IHL adherence – a logic that other actors have not, so far, rejected on principle. On the contrary, other states and scholars actively reinforce and amplify this dangerous narrative.

Concluding Remark

The ritual invocation of purported gains in effectiveness, precision, and accuracy, supposedly to help protect civilians, normalises surveillance infrastructures that are detrimental to spontaneous political action. As I have argued, with these practices Israel continuously curtails Palestinians’ collective political agency and, thereby, violates their right to self-determination. At the same time, it is unlikely that this development will remain limited to Gaza, or even to the military context. Tech CEOs, and the politicians who venerate them, are already dreaming of a future in which all citizens everywhere will always “be on their best behaviour” thanks to permanent and all-encompassing algorithmic surveillance. The “Palestine Laboratory” of algorithmic warfare has surely started paving the way for this coming dystopia.
