Project Liberty Substack
Will AI technology and autonomous weapons mean fewer deaths? Or will AI, programmed with its own ethics and optimized for speed, lead to more casualties in war?
As explored in part one of our series on AI and war in last week’s newsletter, AI is permeating every aspect of military operations—from identifying strike targets in Iran to mass surveillance in Gaza.
But AI is not a weapon itself. Instead, it operates as underlying technological infrastructure across all aspects of military operations. This makes regulating and governing AI in war more challenging, and it raises ethical questions about the specific circumstances of its use. Of course, this is true of AI outside of military use cases as well: AI is moving from the application layer to the infrastructure layer, powering every aspect of the internet as we know it.
In this week’s newsletter, we examine the legal and ethical implications of AI’s use in war.
