G42 has introduced its Frontier AI Safety Framework, establishing a multi-layered governance system to address risks associated with the development and deployment of advanced AI models.
The framework outlines protocols for risk assessment, independent oversight, and deployment safeguards designed to detect and mitigate threats from emerging AI capabilities, including cybersecurity vulnerabilities, biological misuse, and autonomous decision-making risks.
The framework will be governed by the G42 Frontier AI Governance Board, led by key executives, including Dr Andrew Jackson, Chief Responsible AI Officer, and Alexander Trafton, Head of Technology Risk. The board will oversee model compliance, safety protocols, and incident response measures.
As part of its oversight mechanisms, G42 will conduct regular internal audits and annual external reviews to monitor compliance. The company plans to publish transparency reports detailing key findings and risk assessments to enhance accountability.
Risk thresholds and mitigation measures
G42 has defined capability thresholds that will trigger specific safeguards. If an AI model approaches a threshold in a designated risk area, such as enabling cybersecurity breaches or biological misuse, G42 will modify system behaviours, limit deployment, or introduce additional restrictions.
The framework was developed in collaboration with external AI risk experts, including METR and SaferAI, who provided guidance on mitigation strategies.
X-Risks Leaderboard
G42 launched the X-Risks Leaderboard, an open evaluation platform to measure real-world AI model risks across cybersecurity, chemistry, and biology. Built on G42’s Safety Evaluation Suite at Inception, the platform allows AI models to be tested for vulnerabilities under operational conditions.
G42 is expanding its partnerships with major global tech players, including Microsoft, NVIDIA, AMD, Cerebras, and Qualcomm, to foster collaboration on AI risk mitigation. Peng Xiao, Group CEO of G42, emphasised the company’s commitment to innovation with safeguards, stating, “This framework reflects our commitment to AI safety, ensuring that innovation moves forward with the right safeguards in place.”
The company also pledged to share threat intelligence with industry stakeholders and engage with regulators and policymakers globally to support the development of international AI safety standards.
