
Major tech companies, including Microsoft, Amazon, and OpenAI, came together in a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit on Tuesday.

Under the agreement, companies from various countries, including the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates, will make voluntary commitments to ensure the safe development of their most advanced AI models.

Where they have not done so already, AI model makers agreed to publish safety frameworks laying out how they’ll measure the risks posed by their frontier models, such as the potential for misuse of the technology by bad actors.

These frameworks will include “red lines” for the tech firms that define the kinds of risks associated with frontier AI systems, which would be considered “intolerable.” These risks include but aren’t limited to automated cyberattacks and the threat of bioweapons.

To respond to such extreme circumstances, companies said they plan to implement a “kill switch” that would cease the development of their AI models if they can’t guarantee mitigation of these risks.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Rishi Sunak, the U.K.’s prime minister, said in a statement Tuesday.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” he added.

The pact on Tuesday expands on a previous set of commitments made by companies involved in the development of generative AI software last November.

The companies have agreed to take input on these thresholds from “trusted actors,” including their home governments as appropriate, before releasing them ahead of the next planned AI summit — the AI Action Summit in France — in early 2025.

The commitments agreed to on Tuesday apply only to so-called frontier models. This term refers to the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.

Ever since ChatGPT was first introduced to the world in November 2022, regulators and tech leaders have become increasingly worried about the risks surrounding advanced AI systems capable of generating text and visual content on par with, or better than, humans.

The European Union has sought to clamp down on unfettered AI development with the creation of its AI Act, which was approved by the EU Council on Tuesday.

The U.K., however, hasn’t proposed formal laws for AI, instead opting for a “light-touch” approach to regulation in which existing regulators apply current laws to the technology.

The government recently said it would consider legislating for frontier models in the future but has not committed to a timeline for introducing formal laws.