On Wednesday, UK Prime Minister Rishi Sunak announced that the nation will host “the first major global summit on AI safety” this autumn. The summit aims to bring together “key countries, leading tech companies, and researchers” to evaluate and monitor risks from artificial intelligence.
Over the past year, the perceived rapid pace of progress in machine learning has fostered concerns about whether government regulation can keep up. Those worries were recently amplified when some AI experts likened the potential threats posed by AI to those of pandemics or nuclear weapons. “AI” has also become an extremely buzzy term in business. Amid that buzz, the UK government wants to step in and take a leadership role in the field.
“Breakthroughs from AI continue to improve our lives—from enabling paralysed people to walk to discovering superbug-killing antibiotics,” the UK government said in a press release. “But the development of AI is extraordinarily fast moving and this pace of change requires agile leadership. That is why the UK is taking action, because we have a global duty to ensure this technology is developed and adopted safely and responsibly.”
The somewhat vague news release did not specify a date, organizational format, or venue for the summit.
In the lead-up to the prospective summit, Sunak recently conducted talks with various industry leaders, including CEOs from AI labs like OpenAI, DeepMind, and Anthropic. The UK says that the upcoming event is set to build on recent discussions about AI safety held at the G7, OECD, and Global Partnership on AI. Additionally, the first briefing of the UN Security Council on the impact of AI on international peace and security is planned for July.
“No one country can do this alone,” Sunak said in the news release. “This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.”
Who will be invited?
In its summit announcement, the UK government did not formally reveal a list of invitees. But the press release enthusiastically discusses the companies OpenAI, DeepMind, Anthropic, Palantir, Microsoft, and Faculty as examples of machine learning-adjacent businesses that have offices in the UK. It also quotes executives from these companies, including Alexander Karp, the co-founder and CEO of Palantir:
“We are proud to extend our partnership with the United Kingdom, where we employ nearly a quarter of our global workforce. London is a magnet for the best software engineering talent in the world, and it is the natural choice as the hub for our European efforts to develop the most effective and ethical artificial intelligence software solutions available.”
This list of potential summit participants has already drawn criticism from some quarters. Rachel Coldicutt, who runs the London-based equity and social justice research consultancy Careful Industries, tweeted, “This press release for the UK AI Safety Summit features DeepMind, Anthropic, Palantir, Microsoft and Faculty and not a single voice from civil society or academia, and no one with lived experience of algorithmic harms.”
Of the companies mentioned, Palantir, in particular, has been the subject of controversy, with its close ties to the military and defense sectors raising questions about the potential misuse of AI. The company’s technology has been deployed in law enforcement and surveillance, leading to privacy and civil liberties concerns.
Dr. Sasha Luccioni of Hugging Face expressed similar concerns on Twitter: “The ‘first major global summit on AI safety’ featuring companies like Palantir (military/defense) and DeepMind (X-risk/AGI)—very promising for addressing the real and current risks of AI (like its use in the military).” Her sarcasm underlines the perceived disconnect between the summit’s proclaimed safety objectives and the activities of some of the businesses touted in the announcement.
After several open letters warning about AI risk this year from big names in tech, “AI safety” has become a touchy topic, with much debate over whether machine learning systems pose an existential risk to humanity. Simultaneously, AI ethics proponents like Luccioni, whom we have interviewed previously on this topic, feel that not enough attention is being paid to harmful applications of AI that already exist today.
While the UK government emphasizes popular talking points about AI risk in its news release, these critiques from AI ethics proponents highlight a growing demand for more comprehensive and diverse perspectives at events that seek to shape government regulation of AI. The global AI community will no doubt be watching closely to see whether the UK-hosted event can deliver an inclusive and productive discussion about the risks of AI that reaches beyond the usual photo-ops with the new titans of industry.