Tech leaders urge a pause in the 'out-of-control' artificial intelligence race
Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?
That's the conclusion of a group of prominent computer scientists and other tech industry notables, such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a 6-month pause to consider the risks.
Their petition, published Wednesday, is a response to San Francisco startup OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT that helped spark a race among tech giants Microsoft and Google to unveil similar applications.
What do they say?
The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.
It says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter says. "This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
A number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said "will avoid heavy-handed legislation which could stifle innovation." Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules.
Who signed it?
The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include the Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Others who joined include Wozniak, former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.
Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI's existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, maker of the AI image generator Stable Diffusion that partners with Amazon and competes with OpenAI's similar generator known as DALL-E.
What's the response?
OpenAI, Microsoft and Google didn't respond to requests for comment Wednesday, but the letter already has plenty of skeptics.
"A pause is a good idea, but the letter is vague and doesn't take the regulatory problems seriously," says James Grimmelmann, a Cornell University professor of digital and information law. "It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars."
Is this AI hysteria?
While the letter raises the specter of nefarious AI far more intelligent than anything that actually exists, it's not "superhuman" AI that some of the signatories are worried about. Impressive as it is, a tool such as ChatGPT is simply a text generator that predicts which words will answer the prompt it was given, based on what it has learned from ingesting huge troves of written works.
Gary Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he disagrees with others who are worried about the near-term prospect of intelligent machines so smart they can improve themselves beyond humanity's control. What he's more worried about is "mediocre AI" that's widely deployed, including by criminals or terrorists to trick people or spread dangerous misinformation.
"Current technology already poses enormous risks that we are ill-prepared for," Marcus wrote. "With future technology, things could well get worse."