In today's highly corporatised world, technologies touted as innovative can also be wholly unoriginal, especially when used by an occupying power as a tool to maintain authority over civilians. Smart Shooter's latest technology, an artificial intelligence-powered remote crowd dispersal weapon, has been installed at a checkpoint on al-Shuhada Street (or Martyrs' Street) in Hebron. It is one of many artificial intelligence (AI) weapons and invasive surveillance technologies employed by Israel against Palestinian civilians that amplify existing systematic state violence and violate Israel's obligations under international human rights and humanitarian law.

AI-powered weapons

Smart Shooter is an Israeli company that develops weapons guided by AI-based image processing, marketed under the infamous motto "one shot, one hit". This week, Israel installed a remote weapon—with the ability to fire stun grenades, tear gas and sponge-tipped bullets—at a checkpoint crossed by over 200 Palestinians daily, including children on their way to school. Although the weapon is not programmed to fire live bullets, sponge-tipped bullets have caused serious injuries in previous incidents.


This remote weapon uses AI-based image processing to hit targets deemed to be "disrupting order". While al-Shuhada Street has witnessed many demonstrations and clashes over the years, quelling disruptions with image-processing technology is very likely to cause unintended harm to women and children crossing the checkpoint, given the margin of error associated with such systems.

This is not the first time Israel has tested prototype AI technologies on Palestinians before refining and exporting them abroad. During last year's "Guardian of the Walls" operation on Gaza, Israel engaged in what it called "the first AI war", in which the Israel Defence Forces (IDF) relied heavily on advanced technologies and machine learning. For starters, Unit 8200 of Israel's Intelligence Corps reportedly devised algorithms for combat drone programmes called "Alchemist" and "Gospel", which used geographical, human, and signal intelligence to generate target recommendations for troops and military officials, and to pinpoint strike targets. Employing such predictive policing technologies has "unthinkable scope, speed and intrusiveness", according to Rohan Talbot, Advocacy and Campaigns Manager at Medical Aid for Palestinians (MAP). Drones have also been used to launch tear gas at demonstrators and fire live ammunition from a distance, using Smart Shooter's "SMASH Dragon" armed drone system. The recently unveiled SMASH technology eliminates both static and moving targets with extreme precision using assault rifles, sniper rifles, 40mm and other ammunition.

Invasive surveillance technologies

AI-powered weapons are among many technological advancements that have been incorporated into the IDF's efforts to create what Israeli officials have dubbed a "smart city" and a "frictionless" occupation. In recent years, Israel has installed closed-circuit television (CCTV) facial recognition technology on most roadblocks in Hebron, while additional CCTVs have been installed to cover 95% of public areas in occupied East Jerusalem. Some cameras are even said to capture images of the inside of civilian houses, forcing some women to sleep in their hijabs and leaving children feeling consistently unsafe and unable to play outside, thereby massively invading civilians' right to privacy and further restricting their freedom of movement.

Israeli soldiers have also been incentivised with prizes to take as many photos as possible of Palestinian civilians without their consent; these are then uploaded onto a database called "Blue Wolf" and cross-referenced with other photos. Blue Wolf is a live automatic facial recognition system used to monitor and identify Palestinians from a distance, detect patterns from their faces, and alert IDF officials to any "suspicious activity". The technology is reportedly capable of searching over 100 million faces in a matter of seconds, and the information collected is stored for security and military use. "White Wolf" is a similar application used in the West Bank to scan Palestinians' identification cards before they enter settlements to work, and to store their data.

These invasive surveillance technologies come as no surprise after the NSO Group's Pegasus spyware scandal, which involved software that can be installed remotely on a target's phone through "zero-click attacks". Amnesty International's investigation linked Pegasus to a list of approximately 50,000 potential targets, including Palestinian rights activists and journalists. The pandemic has further exacerbated this situation: an application created under the pretence of protecting public health forces users to give the company access to their calls and photos.

Most recently, Project Nimbus, a $1.2 billion cloud computing system built by Google and Amazon, centralises Israel’s surveillance technologies into an all-encompassing cloud solution that can be used for “facial detection, automated image categorisation, object tracking and sentiment analysis”. The latter, which involves assessing the emotional content of images, speech, and writing, is an increasingly controversial form of machine learning that has been called “invasive and pseudoscientific”.


The ability to discover people’s identities without their knowledge on such a massive scale not only invades Palestinian civilians’ privacy, but also restricts their freedom of movement, assembly, association, and expression. These technologies further reflect the systematic, inherent discrimination within Israeli strategies, given that they are used solely to target Palestinians and gather their personal data.

Israel has also developed mass surveillance technologies to detect aerial threats, such as its aerial reconnaissance balloon. This aerostat system, one of the largest in the world, is used to surveil and detect aerial threats at long ranges and high altitudes and to provide early warning against them. Israel's air defences also include the Iron Dome, which detects and shoots down short-range rockets and drones; the David's Sling system, designed to intercept tactical ballistic missiles and cruise missiles; and the Arrow system, which intercepts ballistic missiles outside the Earth's atmosphere.

Israel’s obligations under international law 

Israel's new artificial intelligence strategy, which is to be implemented through a centralised AI department in the country's military, is particularly dangerous and violates Israel's obligations under international law. Here's why:

As an occupying power that has ratified the Fourth Geneva Convention, Israel is bound by international humanitarian law. Although the Convention makes no mention of modern AI and its limitations, Article 27 outlines an occupying power's obligation to protect civilians under its occupation through "respect for their persons, their honour, their family rights", and asserts that protected persons shall "at all times be humanely treated" and that protections be applied "without adverse distinction based on race, religion or political opinion". The right to privacy is therefore guaranteed by the Convention, and any limitations on it must satisfy the principles of legality, legitimate aim, necessity, and proportionality.

Although Israel has a duty to "restore and ensure public order and civil life" under Article 43 of the Hague Regulations, this duty must be balanced against its obligation to treat the population humanely and to ensure the fulfilment of their rights. In this case, high-level AI-enabled surveillance and intrusion technologies, as well as movement restrictions, diminish civil life and undermine the honour and humane treatment of civilians under Israeli occupation.

As for Israel's obligations under international human rights law, it has ratified the International Covenant on Civil and Political Rights (ICCPR) and is therefore bound by Article 17, which stipulates that "no one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence". The European Court of Human Rights has clarified that while CCTV does not necessarily interfere with individuals' right to privacy, the systematic recording of data—even in a public context—may give rise to privacy considerations.

Consequently, the automatic collection and storage of data on individuals in public spaces constitutes a breach of their right to privacy. Israel argues that these invasive automated technologies minimise civilians' physical interactions with IDF soldiers and are a positive step towards a "frictionless occupation". What this strategy actually promotes, however, is the state's invisible, efficient attempt to perpetuate deeply entrenched patterns of violence and control, while absolving itself of its responsibilities as an occupying power, as has been the case since its disengagement from Gaza in 2005.