5 Notes from the Big Paris A.I. Summit

World leaders, tech moguls and assorted hangers-on (including yours truly) are gathered in Paris this week for the Artificial Intelligence Action Summit, a conference co-hosted by Emmanuel Macron, the French president, and Narendra Modi, India’s prime minister, to discuss a host of A.I.-related issues.

The leaders of three American A.I. companies — Sam Altman of OpenAI, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind — are here, as are a flock of prominent A.I. executives, academic researchers and civil society groups. (Vice President JD Vance, who is leading the U.S. delegation, is expected to appear on Tuesday.)

Between bites of pain au chocolat, here’s some of what I’m seeing so far:

The backdrop for the A.I. summit is that Europe — which passed tough laws on data privacy and social media over the last decade, and had a head start on regulating A.I. with the European Union’s A.I. Act — appears to be having second thoughts.

Mr. Macron, who this week announced $112.5 billion in private investment in the French A.I. ecosystem, has been especially wary of falling behind. He has become a cheerleader for Mistral, a French A.I. start-up, and has argued against “punitive” regulation that could make the country’s tech sector less competitive.

Tech companies (and their lobbyists) appreciate the assist. But it’s probably too late to stop the A.I. Act, which is slated to take effect in stages over the next year. And several American A.I. executives told me they still considered Europe a hard place to do business compared with other big markets, such as India, where regulation is comparatively lax.

The Paris A.I. summit is actually the third in a series of global A.I. summits. The first two — held in Britain in 2023 and in South Korea last year — were much more focused on the potential risks and harms of advanced A.I. systems, up to and including human extinction.

But in Paris, the doomers have been sidelined in favor of a sunnier, more optimistic vision of the technology’s potential. Panelists and speakers were invited to talk up A.I.’s ability to accelerate progress in areas like medicine and climate science, and gloomier talks about A.I. takeover risks were mostly relegated to unofficial side events. And a leaked draft of the official summit statement, which was expected to be signed by some of the attending nations, was panned by A.I. safety groups for paying too little attention to catastrophic risks.

Partly, that reflects a deliberate decision by Mr. Macron and his lieutenants to play up the positive side of A.I. (One of them, Anne Bouverot, a special envoy to the summit, took aim at the “exaggerated fears” of people focused on A.I. safety during her opening remarks on Monday.) But it also reflects a larger shift within the A.I. industry, which seems to be realizing that it’s easier to get policymakers excited about A.I. progress if they’re not worried it’s going to kill them.

Like all A.I. events over the past month, the Paris summit has been buzzing with conversation about DeepSeek, the Chinese A.I. start-up that stunned the world with its powerful reasoning model, reportedly built for a fraction of the cost of leading American models.

In addition to lighting a fire under America’s A.I. giants, DeepSeek has given new hope to smaller A.I. outfits in Europe and elsewhere that had counted themselves out of the race. By using more efficient training techniques and clever engineering hacks to build its models, DeepSeek proved that you might need only tens of millions of dollars — rather than hundreds of billions — to keep pace on the A.I. frontier.

“DeepSeek has shown that all countries can be part of A.I., which wasn’t obvious before,” Clément Delangue, the French-born chief executive of Hugging Face, an A.I. development company, told me.

Now, Mr. Delangue said, “the whole world is playing catch-up.”

The most popular guessing game of the week has been what the Trump administration’s posture on A.I. will be.

The new administration has made a few moves on A.I. so far, such as repealing the Biden White House’s executive order that laid out a testing program for powerful A.I. models. But it hasn’t yet articulated a full agenda for the technology.

Some people here are hopeful that Elon Musk — one of the president’s top advisers and a man who both runs an A.I. company and has expressed fears about powerful A.I. run amok — will persuade Mr. Trump to take a more cautious approach.

Others believe that the venture capitalists and so-called A.I. accelerationists in Mr. Trump’s orbit, such as the investor Marc Andreessen, will persuade him to leave the A.I. industry alone and tear up any regulations that could slow it down.

Mr. Vance may tip the administration’s hand on Tuesday, during his summit address. But no one here is expecting stability any time soon. (One A.I. executive characterized the Trump administration to me as “high variance,” which is A.I.-speak for “chaotic.”)

The biggest surprise of the Paris summit, for me, has been that policymakers can’t seem to grasp how soon powerful A.I. systems could arrive, or how disruptive they could be.

Mr. Hassabis, of Google DeepMind, said during an event at the company’s Paris office on Sunday that A.G.I. — artificial general intelligence, an A.I. system that matches or exceeds human abilities across many domains — could arrive within five years. (Mr. Amodei, of Anthropic, and Mr. Altman, of OpenAI, have predicted its arrival even sooner, possibly within the next year or two.)

Even if you apply a discount to the predictions made by tech C.E.O.s, the discussions I’ve heard in Paris have lacked the urgency you’d expect if powerful A.I. really is around the corner.

The policy wonks here are big on fuzzy concepts like “multi-stakeholder engagement” and “innovation-enabling frameworks.” But few are thinking seriously about what would happen if smarter-than-human A.I. systems were to arrive in a matter of months, or asking the right follow-up questions.

What would it mean for workers if powerful A.I. agents capable of replacing millions of white-collar jobs were not a far-off fantasy but an imminent reality? What kinds of regulations would be necessary in a world where A.I. systems were capable of recursive self-improvement, or carrying out autonomous cyberattacks? And if you’re an A.G.I. optimist, how should institutions get ready for rapid improvements in areas like scientific research and drug discovery?

I don’t mean to pile on the policymakers, many of whom are doing their best to keep pace with A.I. progress. Technology moves at one speed; institutions move at another. And it’s possible that industry leaders are way off in their A.G.I. predictions, or that new obstacles to A.I. improvement will emerge.

But at times this week, listening to policymakers discuss how to govern A.I. systems that are already several years old — using regulations that are likely to be outdated soon after they’re written — I’ve been struck by how different these time scales are. It feels, at times, like watching policymakers on horseback, struggling to install seatbelts on a passing Lamborghini.

I’m not sure what to do about this. It’s not as if industry leaders are being vague or unclear about their intentions to build A.G.I., or their intuition that it’s going to happen very soon. But if the summit in Paris is any indication, something is getting lost in translation.
