Artificial Intelligence Safety Summit


Ministers and representatives from various countries have gathered at Bletchley Park, UK, for a summit on the safety of artificial intelligence.

The document, whose signatories include Brazil and Chile, emphasizes the “urgent need to collectively understand and manage the risks” of AI.

Given the growing potential of models like ChatGPT, the Bletchley statement shows that the signatories are coming together to identify the risks and highlight the technology's opportunities.

Two further international summits on AI will follow: one in South Korea in six months and another in France in a year.

In conjunction with the meeting, US Vice President Kamala Harris will announce the creation of an artificial intelligence safety institute in Washington during a speech in London.

That institute, similar to the one announced by the United Kingdom, will bring together experts to establish "warnings" and evaluate the most advanced AI models in order to "identify and mitigate" risks, according to the White House.

Generative AI systems, capable of producing text, sounds or images in a few seconds, have progressed exponentially in recent years, and the next generation of these models is expected to appear this summer.

They offer great promise in the fields of medicine and education, but they could also destabilize societies, enable the manufacture of weapons, or escape human control, the British government has warned.

Among the attendees are Ursula von der Leyen, President of the European Commission; Antonio Guterres, Secretary General of the UN; Giorgia Meloni, head of the Italian government; and Elon Musk, the American billionaire and business figure, among others.

“Our goal is to establish a framework for better understanding (…) and at least have an independent arbitrator who can look at what AI companies are doing and raise the alarm if something is of concern,” Musk told the press.

The hope for this summit is to reach an initial international consensus on how to understand advanced AI.

In an open letter published on Tuesday, several of the "founding fathers" of this technology, such as Yoshua Bengio and Geoffrey Hinton, advocated "developing and ratifying an international treaty on AI" to reduce the "potentially catastrophic" risks that advanced systems pose to humanity.

The challenge is to implement safeguards without hindering the innovation of AI labs and tech giants.

The EU and the United States, unlike the United Kingdom, have chosen the path of regulation.

Last week, several companies, including OpenAI, Meta (Facebook) and DeepMind (Google), agreed to make public some of their AI safety policies at the request of the UK.

In an open letter addressed to Rishi Sunak, around a hundred international organizations, experts and activists lamented that the summit is being held "behind closed doors", dominated by tech giants and with limited access for civil society.