The world’s most valuable and dominant internet companies are based in the US, but the nation’s unproductive lawmakers and business-friendly courts have effectively outsourced the regulation of tech giants to the EU. That has given tremendous power to Didier Reynders, the European commissioner for justice, who is in charge of crafting and enforcing laws that apply across the 27-nation bloc. After nearly four years on the job, he’s tired of hearing big talk from the US with little action.
Ahead of his latest round of biannual meetings with US officials, including attorney general Merrick Garland in Washington, DC, tomorrow, Reynders told WIRED why the US needs to finally step up, where a probe into ChatGPT is headed, and why he made contentious comments about one of the world’s most prominent privacy activists. His bicoastal tour began with a Waymo robotaxi ride through San Francisco (he gave it a rave review) and includes meetings with Google and California’s privacy czar.
On the Costs of US Inaction
It’s been five years since the EU’s stringent privacy law, the GDPR, went into effect, giving Europeans new rights to protect and control their data. Reynders has heard a series of proposals for how the US could follow suit, including from Meta CEO Mark Zuckerberg and other tech executives, Facebook whistleblowers, and members of Congress and federal officials. But he says there has been no “real follow up.”
Although the US Federal Trade Commission has reached settlements with tech companies requiring diligence with user data under threat of fines, Reynders is circumspect about their power. “I’m not saying that this is nothing,” he says, but they lack the bite of laws that open the way to more painful fines or lawsuits. “Enforcement is of the essence,” Reynders says. “And that’s the discussion that we have with US authorities.”
Now Reynders fears history is repeating with AI regulation, leaving this powerful category of technology unchecked. Tech leaders such as Sam Altman, CEO of ChatGPT developer OpenAI, say they want new safeguards, but American lawmakers seem unlikely to pass new laws.
“If you have a common approach in the US and EU, we have the capacity to put in place an international standard,” Reynders says. But if the EU’s forthcoming AI Act isn’t matched with US rules for AI, it will be more difficult to ask tech giants to be in full compliance and change how the industry operates. “If you’re doing that alone, like for the GDPR, that takes some time and it slowly spreads to other continents,” he says. “With real action on the US side, together, it will be easier.”
On ChatGPT’s Data-Gobbling and Policy-Lobbying
ChatGPT is in the crosshairs of both privacy and AI-specific regulatory efforts.
OpenAI in April updated its privacy options and disclosures after Italy’s data protection authority temporarily blocked ChatGPT, but the conclusions of a full investigation into the company’s GDPR compliance are due by October, the country’s regulator says. And an EU-wide data protection task force expects by year’s end to hand down common principles for all member nations on dealing with ChatGPT, Reynders says. All that could force OpenAI to make further adjustments to its chatbot’s data collection and retention.
More broadly, while OpenAI’s Altman has supported calls for new rules governing AI systems, he has also expressed concern about overregulation. In May, headlines thundered that he had threatened to pull services from the EU. Altman has said his comments were taken out of context and that he does want to help define policy.
Reynders says Altman has significant business incentive to make nice with the EU, which has about 100 million more people than the US. “We have asked to have all the major actors in the discussions,” Reynders says. “We want to know their concerns and to see if we will solve that in legislation.” He insists that OpenAI shouldn’t fear new AI rules. “I’ve seen the origin of OpenAI. It’s quite the same idea—to develop new technologies, but for the good,” Reynders says.