AI House Davos 2025

AI We Can Trust: Collective Systems for Global Safety

Moderated by: Ayisha Piotti

Tuesday - Power & Responsibility

Video ID: DKcFw5RS6pU


Executive Summary

The panel discussion centered on the critical need to build trustworthy AI systems that prioritize global safety. Panelists emphasized the importance of establishing clear standards and regulations, while acknowledging the challenges posed by the rapid advancement of AI technology and by geopolitical divergences. A key debate revolved around whether regulation or standards should come first, with perspectives differing by jurisdiction.

Transparency emerged as a central theme: panelists advocated open-source initiatives and collaborative development to foster trust and accountability. The discussion also addressed the challenge of deepfakes and the need for solutions that ensure content authenticity.

Panelists highlighted the importance of collaboration among industry, academia, and international organizations to advance AI safety, and stressed the need for diverse representation in standards bodies, including experts from fields beyond technology. The ecological impact of AI was also raised, underscoring the need for sustainable AI development practices. The discussion concluded with a call for continued effort on the global challenges of AI governance, emphasizing the importance of staying positive and working collectively to ensure that AI benefits humanity as a whole. The panelists agreed that a multi-faceted approach combining technical solutions, ethical frameworks, and international cooperation is essential to navigate the complex landscape of AI safety and trust.

Panelists

Agata Ferretti
AI Alliance Lead for Europe, IBM Research
  • The AI Alliance is a nonprofit organization founded by IBM, Meta, and others to build, advocate for, and adopt open-source AI.
  • Transparency is fundamental to the AI Alliance's mission, allowing for assessment and improvement of technology safety.
  • The AI Alliance works on authentic AI, data, and open models, focusing on collaborative development.
  • Open-source solutions can be adopted by enterprises and institutions, helping counterbalance the concentration of power created by closed-source options.
Andreas Krause
Professor, ETH Zürich
  • AI systems should be safe, reliable, fair, and transparent, as outlined in responsible AI frameworks and laws like the EU AI Act.
  • Formal assurances, similar to those in controls engineering, are needed for AI systems, but are challenging due to complexity.
  • Basic research is crucial to make AI systems more reliable, focusing on areas like safe reinforcement learning.
  • Current systems rely on assessments, tests, and benchmarks to evaluate capabilities and compliance.
Kirk Bresniker
HPE Fellow, Vice President and Chief Architect, HPE Labs
  • HPE is evaluating AI in product, process, and partnership, driven by a human rights footprint assessment.
  • HPE's responsible AI ethics design principles focus on being human-focused, privacy-preserving, inclusive, responsible, and robust.
  • Hazards-based safety engineering principles should be applied to AI systems to guarantee public safety.
  • Quality is fitness for use as defined by the end customer, requiring dialogue and empathy.
Silvio Dulinsky
Deputy Secretary General, ISO
  • ISO brings together 170 national standards bodies to develop international standards through technical experts.
  • Standards organizations develop roadmaps to address challenges and establish best practices for industries and governments.
  • Countries can adopt international standards as national standards, which governments can then use as the basis for regulations.
  • Standards support regulators by defining principles while leaving technical details to technical standards that can evolve faster.

Main Discussion Points

Key Insights

✓ Consensus Points

  • AI systems should be safe, reliable, fair, and transparent.
  • Collaboration between industry, academia, and international organizations is crucial for advancing AI safety.
  • Transparency is essential for building trust in AI systems.
  • Diversity and inclusion are important in standard-making bodies.
  • The ecological impact of AI needs to be addressed.

⚡ Controversial Points

  • The question of whether regulation or standards should come first in AI governance, with differing views based on jurisdiction (EU vs. North America vs. China).
  • The balance between open-source and closed-source AI models, with concerns about the safety and control of open-source models.
  • The extent to which AI systems can be made fully transparent and explainable, with differing views on the feasibility of full explainability and the acceptability of black-box models.

🔮 Future Outlook

  • AI will continue to advance rapidly, requiring ongoing efforts to develop regulations and standards that keep pace.
  • The path to AI safety will involve negotiation, litigation, legislation, and innovation, iterated across industry verticals and jurisdictions.
  • Attestation and provenance will become increasingly important for ensuring the trustworthiness of AI systems (a minimal signing sketch follows this list).
  • The international community will need to work together to address the global challenges of AI governance, despite geopolitical divergences.
  • AI can serve as a positive force driving sustainable development forward.
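
To make the attestation-and-provenance point concrete, here is a minimal sketch of a signed provenance record attached to a piece of content. It is illustrative only: the function names (`attach_provenance`, `verify_provenance`) and the shared-secret HMAC key are assumptions invented for the example; a production system would typically use asymmetric keys and a certificate chain rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical creator key; in practice this would be an asymmetric
# keypair managed by a trusted signing service, not a shared secret.
CREATOR_KEY = b"example-secret-key"

def attach_provenance(content: bytes, creator: str) -> dict:
    """Bundle content with a signed provenance record."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

if __name__ == "__main__":
    article = b"Human-written analysis of the panel."
    record = attach_provenance(article, creator="newsroom@example.org")
    print(verify_provenance(article, record))            # True: intact content
    print(verify_provenance(b"tampered text", record))   # False: hash mismatch
```

Verification fails if either the content or the record is altered, which is the core property a provenance chain needs.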

💡 Novel Insights

  • The idea of watermarking human-generated content instead of AI-generated content to address the issue of deepfakes.
  • The application of hazards-based safety engineering principles, traditionally used in transportation and nuclear industries, to AI systems.
  • The concept of physically unclonable functions as an unforgeable anchor for attestation in AI systems (see the sketch after this list).
  • The recognition that quality is fitness for use as defined by the end customer, requiring dialogue and empathy in AI development.
  • The need to consider the total impact of achieving a particular AI outcome, including the resources consumed.
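
The physically-unclonable-function idea can also be sketched in software. A real PUF derives its challenge responses from uncontrollable manufacturing variation in silicon, so the "secret" physically cannot be read out or copied; the class below merely simulates that with a hidden random value, and the names (`SimulatedPUF`, `enroll`, `attest`) are hypothetical, not from any panelist's implementation.

```python
import hashlib
import os
import secrets

class SimulatedPUF:
    """Software stand-in for a physically unclonable function."""

    def __init__(self) -> None:
        # Stands in for manufacturing variation; in real hardware this
        # value cannot be extracted or cloned.
        self._physical_secret = os.urandom(32)

    def respond(self, challenge: bytes) -> bytes:
        # Deterministic per device, unpredictable across devices.
        return hashlib.sha256(self._physical_secret + challenge).digest()

def enroll(puf: SimulatedPUF, n_pairs: int = 4) -> list[tuple[bytes, bytes]]:
    """Record challenge-response pairs in a trusted setting (e.g. at the factory)."""
    pairs = []
    for _ in range(n_pairs):
        challenge = secrets.token_bytes(16)
        pairs.append((challenge, puf.respond(challenge)))
    return pairs

def attest(puf: SimulatedPUF, pairs: list[tuple[bytes, bytes]]) -> bool:
    """A verifier replays a stored challenge; only the genuine device matches."""
    challenge, expected = secrets.choice(pairs)
    return secrets.compare_digest(puf.respond(challenge), expected)

if __name__ == "__main__":
    device = SimulatedPUF()
    crp_table = enroll(device)
    print("genuine device:", attest(device, crp_table))         # True
    print("clone attempt:", attest(SimulatedPUF(), crp_table))  # False
```

Because a clone cannot reproduce the original device's responses, the stored challenge-response pairs act as the non-forgeable attestation anchor the panel described.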