
G42's Frontier AI Safety Framework

G42’s Frontier AI Safety Framework is a comprehensive set of protocols designed to ensure the safe and responsible development, deployment, and management of advanced AI technologies. Built on industry best practices and developed in collaboration with SaferAI and METR, the framework provides multi-layered safeguards against the risks associated with high-capability AI systems while maximizing their societal benefits.

This page provides a high-level summary of the Frontier AI Safety Framework, but there’s much more to explore. For an in-depth understanding of the protocols, evaluations, and phased implementation plan, you can view the complete document by clicking below:

Download Report

 

G42's Core AI Principles

As a leader in AI innovation, G42 is committed to developing AI systems in line with its core principles of fairness, reliability, safety, privacy, security, and inclusiveness, upholding societal values in every project.

Fairness
Reliability
Safety
Privacy
Security
Inclusiveness

Frontier Capability Thresholds

G42 defines "Frontier Capability Thresholds" as critical points where an AI system's capabilities require enhanced safeguards due to elevated risks. Evaluations are conducted throughout the model lifecycle to assess and monitor potential hazardous capabilities, such as:

  • Biological Threats: potential misuse in biological weapon development
  • Offensive Cybersecurity: enabling automated cyberattacks
  • Autonomous Operation and Advanced Manipulation: potential future additions for high-impact use cases
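
As a purely illustrative sketch of the threshold idea, and not part of G42's published framework, the Python snippet below compares hypothetical evaluation scores against assumed trigger points. The domain names, scores, and threshold values are all invented for this example.

    # Hypothetical frontier-capability threshold check.
    # Domains, scores, and trigger values are assumptions for illustration only.
    CAPABILITY_THRESHOLDS = {
        "biological_threats": 0.30,        # assumed trigger point, not a real G42 value
        "offensive_cybersecurity": 0.40,
    }

    def exceeded_thresholds(eval_scores: dict) -> list:
        """Return the capability domains whose evaluation scores cross a trigger point."""
        return [
            domain
            for domain, trigger in CAPABILITY_THRESHOLDS.items()
            if eval_scores.get(domain, 0.0) >= trigger
        ]

    # A model scoring high on offensive-cyber evaluations would require
    # escalated deployment and security mitigations before release.
    print(exceeded_thresholds({"biological_threats": 0.10, "offensive_cybersecurity": 0.55}))
    # -> ['offensive_cybersecurity']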

Deployment Mitigation Levels (DML)

DMLs are structured safeguards that escalate based on the model's risk profile, protecting against misuse through preventive measures:

Level 1: Basic safeguards for minimal risks
Level 2: Real-time monitoring, prompt filtering, and behavioral anomaly detection
Level 3: Advanced safeguards including red-teaming, phased rollouts, and adversarial testing
Level 4: Maximum safety protocols for high-stakes models
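
To make the escalation concrete, the sketch below restates the tiers above as a lookup from an assessed risk level to the required safeguards. It is a hypothetical illustration, not G42's implementation, and it assumes (which the framework summary does not state) that higher levels include all lower-level measures.

    # Hypothetical mapping of Deployment Mitigation Levels to the safeguards
    # summarized above; an illustration only, not an actual G42 artifact.
    DML_SAFEGUARDS = {
        1: ["basic safeguards"],
        2: ["real-time monitoring", "prompt filtering", "behavioral anomaly detection"],
        3: ["red-teaming", "phased rollouts", "adversarial testing"],
        4: ["maximum safety protocols"],
    }

    def required_safeguards(dml: int) -> list:
        """Return safeguards for a given DML, assuming levels are cumulative."""
        return [s for level in range(1, dml + 1) for s in DML_SAFEGUARDS[level]]

    print(required_safeguards(3))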

Security Mitigation Levels (SML)

SMLs ensure robust protection against theft of model weights, sensitive data breaches, and unauthorized access. Security measures include:

Level 1 (Basic / Open): No significant security risks, with potential for open-source release
Level 2 (Protected / Controlled): Access controls, red-teaming, and adversarial simulations
Level 3 (Encrypted / Zero-Trust): Encryption, multi-party access controls, and zero-trust architecture
Level 4 (Maximum / Fortress): Maximum security measures
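
Read the same way, the SML tiers can be treated as a gate on how model weights are handled. The sketch below is hypothetical; the only rule taken from the tiers above is that open-source release is contemplated solely at Level 1.

    # Hypothetical gate on model-weight handling by Security Mitigation Level.
    # Control descriptions paraphrase the tiers above; the gating logic is an
    # illustrative assumption.
    SML_CONTROLS = {
        1: "basic controls; open-source release may be considered",
        2: "access controls, red-teaming, adversarial simulations",
        3: "encryption, multi-party access controls, zero-trust architecture",
        4: "maximum security measures",
    }

    def may_release_openly(sml: int) -> bool:
        """Open release is only contemplated when no significant security risk exists."""
        return sml == 1

    for level, controls in SML_CONTROLS.items():
        print(f"SML {level}: {controls} (open release: {may_release_openly(level)})")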

Governance and Compliance

A dedicated Frontier AI Governance Board oversees safety protocols, compliance, and incident response. Its responsibilities include:

  • Regularly evaluating AI models and their compliance with safety standards
  • Investigating and mitigating non-compliance incidents
  • Documenting design decisions, risk assessments, and safety measures

Phased Implementation Plan

The Framework will be rolled out in three phases:

Phase 1 (first 6 months): Initial setup of governance structures, foundational security protocols, and capability definitions
Phase 2 (6-12 months): Scaling operational safeguards, red-teaming exercises, and monitoring
Phase 3 (12+ months): Continuous improvements, adaptive safety technology integration, and external collaboration

External Collaboration and Transparency

G42 actively collaborates with regulatory bodies, academic institutions, and industry experts to enhance safety and promote global AI standards. These efforts include:

  • Public disclosure of the Framework and annual transparency reports
  • Sharing threat intelligence with industry partners
  • Engaging in public safety initiatives and cross-sector collaboration

Conclusion

The G42 Frontier AI Safety Framework sets a high standard for responsible AI innovation. By embedding adaptive safeguards, G42 ensures its frontier AI models are developed and deployed safely, protecting societal interests while fostering public trust.

With continuous governance, security, and collaboration, G42 demonstrates leadership in balancing technological progress with ethical responsibility. This proactive stance positions G42 to keep innovating with confidence and to promote AI's positive impact on society.
