
Nations agree to develop shared risk thresholds for AI as Seoul summit closes

Twenty-seven nations, including the UK and US, as well as the EU, have signed up to the new agreement.

Twenty-seven nations and the EU have signed an agreement to create shared risk thresholds around the development of AI (Dominic Lipinski/PA)

Twenty-seven nations and the European Union have signed a new agreement to create shared risk thresholds around the development of artificial intelligence (AI) to close the Seoul summit on the safety of the technology.

The agreement will see the countries develop internationally recognised thresholds for AI model capabilities, beyond which a model would be considered to pose a severe risk without appropriate mitigations.

That risk could include the potential for AI to help malicious actors acquire or use chemical and biological weapons, or for the technology to evade human oversight through deception.

The agreement, known as the Seoul Ministerial Statement, was signed at the conclusion of the AI Seoul Summit in South Korea, which the UK co-hosted.

Alongside the UK and South Korea, the United States, France, and the UAE were among those to sign up to the agreement. However, China, which was involved in the summit talks, did not sign the statement.


Technology Secretary Michelle Donelan said: “It has been a productive two days of discussions which the UK and the Republic of Korea have built upon the ‘Bletchley Effect’ following our inaugural AI Safety Summit which I spearheaded six months ago. 

“The agreements we have reached in Seoul mark the beginning of phase two of our AI safety agenda, in which the world takes concrete steps to become more resilient to the risks of AI and begins a deepening of our understanding of the science that will underpin a shared approach to AI safety in the future. 

“For companies, it is about establishing thresholds of risk beyond which they won’t release their models.

“For countries, we will collaborate to set thresholds where risks become severe. The UK will continue to play the leading role on the global stage to advance these conversations.”

As part of the agreement, the signatories have now set the target of developing the risk proposals alongside AI companies, civil society and academia, so that they can be discussed at the AI Action Summit, which is due to be hosted by France in 2025.

The announcement follows agreements reached on the first day of the summit, when 16 leading AI companies from around the world committed to publishing safety frameworks setting out how they will approach specific risks around AI, and 10 nations and the EU agreed to create an international network of AI safety institutes to share research and other data.