Human Wrongs Watch
CIVICUS discusses the military use of artificial intelligence (AI) in Gaza with Dima Samaro, a Palestinian lawyer and researcher, and director of Skyline International for Human Rights, a civil society organisation (CSO) that defends digital freedoms and human rights in the Middle East and North Africa.
Dima serves on multiple boards focused on civic space and surveillance issues, including Innovation for Change’s MENA Hub, the Surveillance in the Majority World Network and the VUKA! Solidarity Coalition, and volunteers with Resilience Pathways to help Palestinian CSOs counter Israeli efforts to restrict civic space and manipulate public narratives.
Gaza has become a testing ground for AI-powered warfare. Israel deploys systems such as The Gospel and Lavender that produce thousands of strike recommendations based on alleged links to Hamas.
Meanwhile, facial recognition technology controls aid distribution and tracks displaced civilians. These tools operate without legal oversight or transparency, creating dangerous accountability gaps.
As private companies develop and profit from this technology, Gaza exposes the grave dangers of unregulated AI warfare and its potential for normalising automated violence.
What AI tools are being deployed in Gaza?
Israel is using experimental AI systems on an unprecedented scale in Gaza, making real-time life-or-death decisions against a besieged civilian population.
The technology strips warfare of its humanity. In Nuseirat refugee camp, residents reported hearing the cries of infants and women broadcast by Israeli quadcopters, which then opened fire directly on those who responded.
The surveillance apparatus is equally invasive. During forced evacuations from northern to southern Gaza, civilians must submit to facial recognition and biometric scans to pass military checkpoints.
AI-equipped ‘smart cameras’ monitor hospitals such as Al-Shifa in real time during raids. Constant biometric scanning leaves people feeling hunted, reducing them to targets and inflicting deep psychological trauma.
The impacts extend beyond surveillance. In Jabalia refugee camp, explosive robots systematically destroy homes and kill civilians, blocking rescue efforts and burying survivors under rubble.
United Nations (UN) experts describe these attacks as ‘domicide’ – the deliberate destruction of civilian homes.
Technology no longer merely enables violence; it helps automate the genocide. Israel has integrated AI into its military kill chain, using systems such as The Gospel, Lavender and Where’s Daddy to generate kill lists, geolocate targets and assign strikes.
Lavender alone reportedly marked over 37,000 Palestinians for assassination based on flawed metadata and biased algorithms.
These systems remove meaningful human oversight, leading to mass civilian casualties under a secretive, unaccountable targeting regime.
Most information about these technologies comes from Israeli whistleblowers and western investigative journalists. In Gaza, over 230 journalists have been killed since October 2023, many deliberately targeted in drone strikes. This has allowed experimental warfare to continue largely hidden from global scrutiny.
MORE:
What legal protections exist against military AI?
What’s needed for effective accountability?
Why does this matter globally?
*SOURCE: CIVICUS. Go to ORIGINAL: https://lens.civicus.org/interview/the-lesson-from-gaza-is-clear-when-ai-powered-machines-control-who-lives-human-rights-die/
2025 Human Wrongs Watch