In 1950, Alan Turing, the gifted British mathematician and code-breaker, published an academic paper. His aim, he wrote, was to consider the question, “Can machines think?”
His answer runs to almost 12,000 words. But it ends succinctly: “We can only see a short distance ahead,” Mr. Turing wrote, “but we can see plenty there that needs to be done.”
More than seven decades on, that sentiment sums up the mood of many policymakers, researchers and tech leaders attending Britain’s A.I. Safety Summit on Wednesday, which Prime Minister Rishi Sunak hopes will position the country as a leader in the global race to harness and regulate artificial intelligence.
On Wednesday morning, his government released a document called “The Bletchley Declaration,” signed by representatives from the 28 countries attending the event, including the United States and China, which warned of the dangers posed by the most advanced “frontier” A.I. systems.
“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these A.I. models,” the declaration said.
“Many risks arising from A.I. are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible A.I.”
The document fell short, however, of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea and a third in France in a year.
Governments have scrambled to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways.
Future generations of A.I. systems could accelerate the diagnosis of disease, help combat climate change and streamline manufacturing processes, but also present significant dangers in terms of job losses, disinformation and national security. A British government report last week warned that advanced A.I. systems “may help bad actors perform cyberattacks, run disinformation campaigns and design biological or chemical weapons.”
Mr. Sunak promoted this week’s event, which gathers governments, companies, researchers and civil society groups, as a chance to start developing global safety standards.
The two-day summit in Britain is at Bletchley Park, a countryside estate 50 miles north of London, where Mr. Turing helped crack the Enigma code used by the Nazis during World War II. Considered one of the birthplaces of modern computing, the location is a conscious nod to the prime minister’s hopes that Britain could be at the center of another world-leading initiative.
Bletchley is “evocative in that it captures a very defining moment in time, where great leadership was required from government but also a moment when computing was front and center,” said Ian Hogarth, a tech entrepreneur and investor who was appointed by Mr. Sunak to lead the government’s task force on A.I. risk, and who helped organize the summit. “We need to come together and agree on a wise way forward.”
With Elon Musk and other tech executives in the audience, King Charles III delivered a video address in the opening session, recorded at Buckingham Palace before he departed for a state visit to Kenya this week. “We are witnessing one of the greatest technological leaps in the history of human endeavor,” he said. “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”
Vice President Kamala Harris and Gina Raimondo, the secretary of commerce, were taking part in meetings on behalf of the United States.
Wu Zhaohui, China’s vice minister of science and technology, told attendees that Beijing was willing to “enhance dialogue and communication” with other countries about A.I. safety. China is developing its own initiative for A.I. governance, he said, adding that the technology is “uncertain, unexplainable and lacks transparency.”
In a speech on Friday, Mr. Sunak addressed criticism he had received from China hawks over the attendance of a delegation from Beijing. “Yes — we’ve invited China,” he said. “I know there are some who will say they should have been excluded. But there can be no serious strategy for A.I. without at least trying to engage all of the world’s leading A.I. powers.”
With development of leading A.I. systems concentrated in the United States and a small number of other countries, some attendees said regulations must account for the technology’s impact globally. Rajeev Chandrasekhar, a minister of technology representing India, said policies must be set by a “coalition of nations rather than just one country or two countries.”
“By allowing innovation to get ahead of regulation, we open ourselves to the toxicity and misinformation and weaponization that we see on the internet today, represented by social media,” he said.
Executives from leading technology and A.I. companies, including Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI and Tencent, were attending the conference. Also sending representatives were a number of civil society groups, among them Britain’s Ada Lovelace Institute and the Algorithmic Justice League, a nonprofit in Massachusetts.
In a surprise move, Mr. Sunak announced on Monday that he would take part in a live interview with Mr. Musk on his social media platform X after the summit ends on Thursday.
Some analysts argue that the conference will be heavier on symbolism than substance, with a number of key political leaders absent, including President Biden, President Emmanuel Macron of France and Chancellor Olaf Scholz of Germany.
And many governments are moving forward with their own laws and regulations. Mr. Biden announced an executive order this week requiring A.I. companies to assess national security risks before releasing their technology to the public. The European Union’s A.I. Act, which could be finalized within weeks, represents a far-reaching attempt to protect citizens from harm. China is also cracking down on how A.I. is used, including censoring chatbots.
Britain, home to many universities where artificial intelligence research is conducted, has taken a more hands-off approach. The government believes that existing laws and regulations are sufficient for now, though it has announced a new A.I. Safety Institute that will evaluate and test new models.
Mr. Hogarth, whose team has negotiated early access to the models of several large A.I. companies to research their safety, said he believed that Britain could play an important role in figuring out how governments could “capture the benefits of these technologies as well as putting guardrails around them.”
In his speech last week, Mr. Sunak affirmed that Britain’s approach to the potential risks of the technology is “not to rush to regulate.”
“How can we write laws that make sense for something we don’t yet fully understand?” he said.