
Elon Musk Calls to Stop New AI for 6 Months, Fearing Risks to Society

  • An open letter—signed by Elon Musk and over 1,000 others with knowledge, power, and influence in the tech space—calls for the halt to all “giant AI experiments” for six months.
  • Anything more powerful than OpenAI’s GPT-4 is deemed too risky for society.
  • Human-competitive AI is becoming a more real concern by the day.

The risks artificial intelligence poses to society were once distant hypotheticals. But it's no secret that the technology is now developing fast enough to outpace efforts to mitigate those risks. The guardrails are off.


Elon Musk and over 1,000 others came together to sign an open letter stating that they believe those risks are imminent if we don't slow our creation of powerful AI systems. The backers, who include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and pioneering AI researchers Yoshua Bengio and Stuart Russell, joined the Future of Life Institute, the letter's organizer, which Reuters reports is largely funded by the Musk Foundation, Founders Pledge, and the Silicon Valley Community Foundation.

And there’s urgency here. The group is calling for a six-month pause in all “giant AI experiments.”


In the letter, the signers asked for a six-month pause in the development of powerful AI systems, defined as anything more potent than OpenAI’s GPT-4.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here.”


Saying that AI can represent a “profound change in the history of life on Earth,” the letter’s backers say there isn’t a level of planning and management happening currently that matches this potential, especially as AI labs continue an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

As AI systems grow ever more capable of matching human abilities on general tasks, the letter asks a series of "should we" questions: should we let machines flood our information channels with propaganda, automate away our jobs, develop nonhuman minds that could eventually replace us, or risk losing control of our civilization in the rush to build ever better neural networks?


But, as expected, not everyone agrees. OpenAI CEO Sam Altman hasn't signed the letter, and AI researcher Johanna Björklund of Umeå University told Reuters the concern over AI is overblown. "These kinds of statements are meant to raise hype," Björklund says. "It's meant to get people worried. I don't think there's a need to pull the handbrake."

OpenAI has said that at some point it may be important to get an independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of new models.
