Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation

The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of government, he has repeatedly spoken of the need for global AI regulation.

But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests.

In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January.

In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems—including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2—to be “high risk,” a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight.

That argument brought OpenAI in line with Microsoft, which has invested $13 billion into the AI lab, and Google, both of which have previously lobbied E.U. officials in favor of loosening the Act’s regulatory burden on large AI providers. Both companies have argued that the burden for complying with the Act’s most stringent requirements should be on companies that explicitly set out to apply an AI to a high-risk use case—not on the (often larger) companies that build general purpose AI systems.

“By itself, GPT-3 is not a high-risk system,” said OpenAI in a previously unpublished seven-page document that it sent to E.U. Commission and Council officials in September 2022, titled OpenAI White Paper on the European Union’s Artificial Intelligence Act. “But [it] possesses capabilities that can potentially be employed in high risk use cases.”

TIME is publishing the White Paper in full alongside this story.

These lobbying efforts by OpenAI in Europe have not previously been reported, though Altman has recently become more vocal about the legislation. In May, he told reporters in London that OpenAI could decide to “cease operating” in Europe if it deemed itself unable to comply with the regulation, of which he said he had “a lot” of criticisms. He later walked back the warning, saying his company had no plans to leave and intends to cooperate with the E.U.

Still, OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called “foundation models,” or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments. OpenAI supported the late introduction of “foundation models” as a separate category in the Act, a company spokesperson told TIME.

Back in September 2022, however, this apparent compromise had yet to be struck. In one section of the White Paper OpenAI shared with European officials at the time, the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.” OpenAI said in the White Paper that this amendment would mean its models could “inadvertently” be considered high risk, and recommended scrapping the amendment. The company argued that it would be sufficient to instead rely on another part of the Act, which mandates that AI providers sufficiently label AI-generated content and make clear to users that they are interacting with an AI system.

The amendment that OpenAI took issue with was not included in the final text of the AI Act approved by the European Parliament in June. “They got what they asked for,” says Sarah Chander, a senior policy advisor at European Digital Rights and an expert on the Act, who reviewed the OpenAI White Paper at TIME’s request. The document, she says, “shows that OpenAI, like many Big Tech companies, have used the argument of utility and public benefit of AI to mask their financial interest in watering down the regulation.”

In a statement to TIME, an OpenAI spokesperson said: “At the request of policymakers in the E.U., in September 2022 we provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience. Since then, the [AI Act] has evolved substantially and we’ve spoken publicly about the technology’s advancing capabilities and adoption. We continue to engage with policymakers and support the E.U.’s goal of ensuring AI tools are built, deployed and used safely now and in the future.”

In June 2022, three months before sending over the White Paper, three OpenAI staff members met with European Commission officials for the first time in Brussels. “OpenAI wanted the Commission to clarify the risk framework and know how they could help,” states an official record of the meeting kept by the Commission and obtained via a freedom of information request. “They were concerned that general purpose AI systems would be included as high-risk systems and worried that more systems, by default, would be categorized as high-risk.” The message officials took away from that meeting was that OpenAI—like other Big Tech companies—was afraid of “overregulation” that could impact AI innovation, according to a European Commission source with direct knowledge of the engagement, who asked for anonymity because they were not authorized to speak publicly. OpenAI staffers said in the meeting they were aware of the risks and doing all they could to mitigate them, the source said, but the staffers did not explicitly say that, as a result of their efforts, OpenAI should be subject to less stringent regulations. Nor did they say what type of regulation they would like to see. “OpenAI did not tell us what good regulation should look like,” the person said.

The White Paper appears to be OpenAI’s way of providing that input. In one section of the document, OpenAI described at length the policies and safety mechanisms that it uses to prevent its generative AI tools from being misused, including prohibiting the generation of images of specific individuals, informing users they are interacting with an AI, and developing tools to detect whether an image is AI-generated. After outlining these measures, OpenAI appeared to suggest that these safety measures should be enough to prevent its systems from being considered “high risk.”

“We believe our approach to mitigating risks arising from the general purpose nature of our systems is industry-leading,” the section of the White Paper says. “Despite measures such as those previously outlined, we are concerned that proposed language around general purpose systems may inadvertently result in all our general purpose AI systems being captured [as high risk] by default.”

One expert who reviewed the OpenAI White Paper at TIME’s request was unimpressed. “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.”

In other sections of the White Paper, OpenAI argues for amendments to the Act that would allow AI providers to quickly update their systems for safety reasons, without having to undergo a potentially lengthy assessment by E.U. officials first.

The company also argued for carve-outs that would allow certain uses of generative AI in education and employment, sectors that the first draft of the Act suggested should be considered blanket “high risk” use cases for AI. OpenAI argued, for example, that the ability of an AI system to draft job descriptions should not be considered a “high risk” use case, nor should the use of an AI in an educational setting to draft exam questions for human curation. After OpenAI shared these concerns last September, an exemption was added to the Act that “very much meets the wishes of OpenAI to remove from scope systems that do not have a material impact on, or that merely aid in, [human] decision making,” according to Access Now’s Leufer.

OpenAI has continued to engage with European officials working on the AI Act. In a later meeting, on March 31 of this year, OpenAI carried out a demonstration of ChatGPT’s safety features, according to an official record of the meeting kept by the European Commission, obtained via a freedom of information request. An OpenAI staff member explained during the meeting that “learning by operating”—the company’s parlance for releasing AI models into the world and adjusting them based on public usage—“is of great importance.”

OpenAI also told officials during the meeting that “instructions to AI can be adjusted in such a way that it refuses to share for example information on how to create dangerous substances,” according to the record. This is not always the case. Researchers have demonstrated that ChatGPT can, with the right coaxing, be vulnerable to a type of exploit known as a jailbreak, where specific prompts can cause it to bypass its safety filters and comply with instructions to, for example, write phishing emails or return recipes for dangerous substances.
