What are the ethics of AI at war?

Project Liberty Substack

Will AI technology and autonomous weapons mean fewer deaths? Or will AI, programmed with its own ethics and optimized for speed, lead to more casualties in war?

As explored in last week’s newsletter, in part one of our series on AI and war, AI is permeating every aspect of military operations—from identifying targets to strike in Iran to mass surveillance in Gaza.

But AI is not a weapon itself. Instead, it operates as underlying technological infrastructure across all aspects of military operations. This makes regulating and governing AI in war more challenging, and it raises ethical questions about the specific circumstances of its use. Of course, this is true of AI outside military use cases as well: AI is moving from the application layer to the infrastructure layer, powering every aspect of the internet as we know it.

In this week’s newsletter, we examine the legal and ethical implications of AI’s use in war.

Discuss

OnAir membership is required. The lead Moderator for the discussions is Matthew Kovacev. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on this news piece.
