Google shared AI knowledge with the world — until ChatGPT caught up

In February, Jeff Dean, Google’s longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world.

For years, Dean had run his department like a university, encouraging researchers to publish academic papers prolifically; the team had pushed out nearly 500 studies since 2019, according to Google Research’s website.

But the launch of OpenAI’s groundbreaking ChatGPT three months earlier had changed things. The San Francisco start-up kept up with Google by reading the team’s scientific papers, Dean said at the quarterly meeting for the company’s research division. Indeed, transformers — a foundational part of the latest AI tech and the T in ChatGPT — originated in a Google study.

Things had to change. Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products, Dean said, according to two people with knowledge of the meeting, who spoke on the condition of anonymity to share private information.

The policy change is part of a larger shift inside Google. Long considered the leader in AI, the tech giant has lurched into defensive mode — first to fend off a fleet of nimble AI competitors, and now to protect its core search business, stock price, and, potentially, its future, which executives have said is intertwined with AI.

In op-eds, podcasts and TV appearances, Google CEO Sundar Pichai has urged caution on AI. “On a societal scale, it can cause a lot of harm,” he warned on “60 Minutes” in April, describing how the technology could supercharge the creation of fake images and videos.

But in recent months, Google has overhauled its AI operations with the goal of launching products quickly, according to interviews with 11 current and former Google employees, most of whom spoke on the condition of anonymity to share private information.

It has lowered the bar for launching experimental AI tools to smaller groups and is developing a new set of evaluation metrics and priorities in areas like fairness. It also merged Google Brain, an organization run by Dean and shaped by researchers’ interests, with DeepMind, a rival AI unit with a singular, top-down focus, to “accelerate our progress in AI,” Pichai wrote in an announcement. The new division will be run not by Dean but by Demis Hassabis, CEO of DeepMind, a group seen by some as having a fresher, more hard-charging brand.

At a conference earlier this week, Hassabis said AI was potentially closer to achieving human-level intelligence than most other AI experts have predicted. “We could be just a few years, maybe … a decade away,” he said.

Google’s acceleration comes as a cacophony of voices — including notable company alumni and industry veterans — are calling for AI developers to slow down, warning that the tech is developing faster than even its inventors anticipated. Geoffrey Hinton, one of the pioneers of AI tech, who joined Google in 2013 and recently left the company, has since gone on a media blitz warning about the dangers of supersmart AI escaping human control. Pichai, along with the CEOs of OpenAI and Microsoft, will meet with White House officials on Thursday, part of the administration’s ongoing effort to signal progress amid public concern, as regulators around the world discuss new rules for the technology.

Meanwhile, an AI arms race is continuing without oversight, and companies’ concerns of appearing reckless may erode in the face of competition.

“It’s not that they were cautious; they weren’t willing to undermine their existing revenue streams and business models,” said DeepMind co-founder Mustafa Suleyman, who left Google in 2022 and this week launched Pi, a personalized AI from his new start-up Inflection AI. “It’s only when there is a real external threat that they then start waking up.”

Pichai has stressed that Google’s efforts to speed up do not mean cutting corners. “The pace of progress is now faster than ever before,” he wrote in the merger announcement. “To ensure the bold and responsible development of general AI, we’re creating a unit that will help us build more capable systems more safely and responsibly.”

One former Google AI researcher described the shift as Google going from “peacetime” to “wartime.” Publishing research broadly helps grow the overall field, Brian Kihoon Lee, a Google Brain researcher who was cut as part of the company’s massive layoffs in January, wrote in an April blog post. But once things get more competitive, the calculus changes.

“In wartime mode, it also matters how much your competitors’ slice of the pie is growing,” Lee said. He declined to comment further for this story.

“In 2018, we established an internal governance structure and a comprehensive review process — with hundreds of reviews across product areas so far — and we have continued to apply that process to AI-based technologies we launch externally,” Google spokesperson Brian Gabriel said. “Responsible AI remains a top priority at the company.”

Pichai and other executives have increasingly begun talking about the prospect of AI tech matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once-fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and had been embraced by DeepMind, but was avoided by Google’s top brass.

To Google employees, this accelerated approach is a mixed blessing. The need for additional approval before publishing relevant AI research could mean researchers get “scooped” on their discoveries in the lightning-fast world of generative AI. Some worry the policy could be used to quietly squash controversial papers, like a 2020 study about the harms of large language models co-authored by the leads of Google’s Ethical AI team, Timnit Gebru and Margaret Mitchell.

But others acknowledge Google has lost many of its top AI researchers in the last year to start-ups seen as cutting edge. Some of this exodus stemmed from frustration that Google wasn’t making seemingly obvious moves, like incorporating chatbots into search, stymied by concerns about legal and reputational harms.

On the live stream of the quarterly meeting, Dean’s announcement got a favorable response, with employees sharing upbeat emoji in the hope that the pivot would help Google win back the upper hand. “OpenAI was beating us at our own game,” said one employee who attended the meeting.

For some researchers, Dean’s announcement at the quarterly meeting was the first they were hearing about the restrictions on publishing research. But for those working on large language models, a technology core to chatbots, things had gotten stricter since Google executives first issued a “Code Red” to focus on AI in December, after ChatGPT became an instant phenomenon.

Getting approval for papers could require repeated, intense reviews with senior staffers, according to one former researcher. Many scientists had gone to work at Google with the promise of being able to keep participating in the wider conversation in their field; another round of researchers left because of the restrictions on publishing.

Shifting standards for determining when an AI product is ready to launch have triggered unease. Google’s decision to release its artificial intelligence chatbot Bard and to lower the bar on some test scores for experimental AI products sparked internal backlash, according to a report in Bloomberg.

But other employees feel Google has done a thoughtful job of trying to establish standards around this emerging field. In early 2023, Google shared a list of about 20 policy priorities around Bard developed by two AI teams: the Responsible Innovation team and Responsible AI. One employee called the rules “reasonably clear and relatively robust.”

Others had less faith in the scores to begin with and found the exercise largely performative. They felt the public would be better served by external transparency, like documenting what is inside the training data or opening up the model to outside experts.

Consumers are just beginning to learn about the risks and limitations of large language models, like the AI’s tendency to make up facts. But El Mahdi El Mhamdi, a senior Google Research scientist who resigned in February over the company’s lack of transparency on AI ethics, said tech companies may have been using this technology to train other systems in ways that can be challenging even for employees to track.

When he uses Google Translate and YouTube, “I already see the volatility and instability that could only be explained by the use of” these models and data sets, El Mhamdi said.

Many companies have already demonstrated the issues with moving fast and launching unfinished tools to large audiences.

“To say, ‘Hey, here’s this magical system that can do anything you want,’ and then users start to use it in ways that they don’t anticipate, I think that is pretty bad,” said Stanford professor Percy Liang, adding that the small print disclaimers on ChatGPT don’t make its limitations clear.

It’s important to rigorously evaluate the technology’s capabilities, he added. Liang recently co-authored a paper examining AI search tools like the new Bing; it found that only about 70 percent of the citations those tools provided were correct.

Google has poured money into developing AI tech for years. In the early 2010s it began buying AI start-ups, incorporating their tech into its ever-growing suite of products. In 2013, it brought on Hinton, the AI software pioneer whose scientific work helped form the bedrock for the current dominant crop of technologies. A year later, it bought DeepMind, founded by Hassabis, another leading AI researcher, for $625 million.

Soon after being named CEO of Google, Pichai declared that Google would become an “AI first” company, integrating the tech into all of its products. Over the years, Google’s AI research teams developed breakthroughs and tools that would benefit the whole industry. They invented “transformers” — a new type of AI model that could digest larger data sets. The tech became the foundation for the “large language models” that now dominate the conversation around AI — including OpenAI’s GPT-3 and GPT-4.

Despite these steady advancements, it was ChatGPT — built by the smaller upstart OpenAI — that triggered a wave of broader fascination and excitement in AI. Founded to provide a counterweight to Big Tech companies’ takeover of the field, OpenAI faced less scrutiny than its bigger rivals and was more willing to put its most powerful AI models into the hands of regular people.

Meanwhile, Google has been careful to label its chatbot Bard and its other new AI products as “experiments.” But for a company with billions of users, even small-scale experiments affect millions of people, and it’s likely much of the world will come into contact with generative AI through Google tools. The company’s sheer size means its shift to launching new AI products faster is triggering concerns from a broad range of regulators, AI researchers and business leaders.

The invitation to Thursday’s White House meeting reminded the chief executives that President Biden had already “made clear our expectation that companies like yours must make sure their products are safe before making them available to the public.”
