AI House Davos 2025

A Matter of Life and Death: AI in Military Decision-Making

Moderated by: Gilles Carbonnier

Tuesday - Power & Responsibility

Video ID: RplWo-KwjKQ

Executive Summary

The panel examined the complex and pressing issue of AI in military decision-making, particularly the risks of delegating life-and-death decisions to machines. Panelists emphasized that international humanitarian law applies to all weapons, including AI-powered systems, and stressed the importance of human oversight and control. A key point of contention was whether existing legal frameworks are adequate or whether new treaties are needed to regulate autonomous weapons.

While acknowledging the potential benefits of AI in conflict situations, such as debunking disinformation and enhancing humanitarian efforts, panelists highlighted the ethical challenges and the potential for unintended consequences. The discussion underscored the responsibility of tech companies to consider the human rights impacts of their products and the need for international cooperation to address the challenges posed by AI in warfare. A novel idea presented by Kenneth Cukier, encoding the Geneva Conventions into military equipment, sparked considerable interest and debate: the proposal aims to ensure that AI systems are programmed to adhere to the principles of international humanitarian law, potentially mitigating the risk of unintended harm to civilians.

The panel concluded with a call for urgent action on the ethical and legal challenges posed by AI in warfare, emphasizing the need for ongoing dialogue among governments, tech companies, and civil society. That urgency reflects the rapid pace of technological development and the growing investment in AI-powered weapons systems, which point to a future in which AI plays an ever larger role in shaping the nature of conflict.

Panelists

Alain Berset
Secretary General, Council of Europe
  • International treaty on AI is ambitious but necessary to develop a common democratic vocabulary for governing transformative technologies and their impact on human rights.
  • Regulation alone is not enough; development and regulation must proceed in parallel.
  • There is a risk that private actors gain access to poorly developed AI technologies.
  • The treaty is a first step toward safeguarding the interaction between AI and human beings.
Isabel Ebert
Leads the B-Tech project, UN Human Rights
  • Engages with tech companies on human rights, focusing on the dual use of technologies.
  • Identified risks of AI, including physical/psychological harm, discrimination, privacy breaches, and manipulation of thought.
  • Advocates for co-developing guidance with companies while upholding human rights standards.
  • Considering a new work stream within B-Tech specifically for AI use cases in the military domain to develop practical guidelines.
Laurent Gisel
Head of the Arms and Conduct of Hostilities Unit, ICRC
  • International humanitarian law applies to all weapons, including those of today and tomorrow.
  • AI in the military domain brings more autonomy and less human control, raising legal, humanitarian, and ethical concerns.
  • Calls for a new treaty to regulate autonomous weapon systems, prohibiting unpredictable systems and those designed to target persons (anti-personnel systems).
  • Recommends regulating the remaining autonomous weapons, in particular by limiting their targets to objects that are military objectives by nature.
Jean-Marc Rickli
Head of Global and Emerging Risks, GCSP
  • Emerging technologies like AI are becoming indicators of global power, with companies at the forefront of this race.
  • The democratization of capabilities means that technologies formerly in the hands of states are now accessible to non-state actors.
  • Increasingly, technology should be considered an actor in itself, changing the rules of the game.
  • Cooperation is needed to prevent AI and other technologies from falling into the wrong hands, focusing on existential risks.

Key Insights

✓ Consensus Points

  • International humanitarian law applies to the use of all weapons, including AI-powered systems.
  • Human oversight and control are essential to ensure that AI is used responsibly and ethically in the military domain.
  • Tech companies have a responsibility to consider the potential human rights impacts of their products and services.
  • International cooperation is needed to address the challenges posed by AI in warfare.
  • AI can be used for both beneficial and harmful purposes; the focus should be on mitigating the risks while maximizing the benefits.

⚡ Controversial Points

  • The extent to which AI can be trusted to make decisions in complex and unpredictable conflict situations.
  • Whether existing international law is sufficient to address the challenges posed by AI in warfare, or whether new treaties and regulations are needed.
  • The feasibility and desirability of encoding ethical principles and legal rules into AI systems.
  • The appropriate level of autonomy for weapons systems, and the potential for unintended consequences.
  • The balance between promoting innovation in AI and mitigating the risks of its misuse.

🔮 Future Outlook

  • Increased automation in warfare, with AI playing a greater role in decision-making and targeting.
  • The development of increasingly autonomous weapons systems, raising concerns about the potential for unintended consequences and escalation.
  • The democratization of AI capabilities, making them accessible to a wider range of actors, including non-state groups.
  • The potential for AI to be used for both offensive and defensive purposes, leading to an arms race in AI technology.
  • The need for ongoing dialogue and collaboration between governments, tech companies, and civil society to address the ethical and legal challenges posed by AI in warfare.

💡 Novel Insights

  • Kenneth Cukier's proposal to encode the Geneva Conventions into military equipment (see the illustrative sketch after this list).
  • The idea of requiring defense contractors to develop dual- or triple-use goods (civilian, military, and IHL-compliant).
  • The framing of AI as a potential tool for debunking fake news and disinformation in conflict situations.
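
To make the encoding idea concrete, below is a minimal, purely illustrative sketch of what a machine-readable IHL gate might look like. Every class, field, and threshold here is a hypothetical placeholder, not a description of any real system or of what Cukier proposed in detail; it hard-codes only two of the many applicable rules (distinction and proportionality). The difficulty of quantifying inputs such as expected civilian harm is precisely why the proposal was debated.

```python
# Hypothetical sketch: a pre-engagement compliance gate encoding two core
# IHL principles (distinction and proportionality) as hard constraints.
# All names, fields, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class TargetClass(Enum):
    MILITARY_OBJECTIVE_BY_NATURE = "military_objective_by_nature"
    CIVILIAN_OBJECT = "civilian_object"
    PERSON = "person"
    UNKNOWN = "unknown"


@dataclass
class EngagementRequest:
    target_class: TargetClass        # classifier output for the target
    classifier_confidence: float     # 0.0-1.0 confidence in that label
    expected_civilian_harm: float    # estimated incidental harm (notional units)
    military_advantage: float        # anticipated concrete advantage (same units)


def ihl_gate(req: EngagementRequest) -> tuple[bool, str]:
    """Return (permitted, reason). Blocks engagement unless every
    encoded rule is satisfied; any doubt defaults to refusal."""
    # Distinction: only objects that are military objectives by nature
    # may be engaged (mirrors the ICRC recommendation summarized above).
    if req.target_class != TargetClass.MILITARY_OBJECTIVE_BY_NATURE:
        return False, "refused: target is not a military objective by nature"
    # In case of doubt, presume protected status (cf. AP I, Art. 52(3)).
    if req.classifier_confidence < 0.95:  # threshold is an assumption
        return False, "refused: insufficient confidence; doubt favors protection"
    # Proportionality: expected incidental harm must not be excessive
    # relative to the anticipated military advantage.
    if req.expected_civilian_harm > req.military_advantage:
        return False, "refused: expected incidental harm is excessive"
    return True, "permitted: escalate to human operator for final decision"
```

Note the fail-closed design: any rule violation or residual doubt blocks the engagement, and even the "permit" path defers to a human operator, consistent with the panel's emphasis on human oversight and control.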