OpenAI CEO Sam Altman says Elon Musk-backed letter calling for AI pause wasn't 'optimal way to address it'

OpenAI CEO Sam Altman said during an MIT event Thursday that he disagrees with a portion of the letter signed by Elon Musk and others calling for a pause on some AI development.

OpenAI CEO Sam Altman says that a letter signed by Twitter CEO Elon Musk and others in the technology community calling for a pause on "giant AI experiments" wasn't the right way to address the issue.

The letter, signed in March by Musk, Steve Wozniak and other tech leaders, asked AI developers to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

During a virtual appearance at the Massachusetts Institute of Technology on Thursday, Altman addressed the letter.

"There's parts of the thrust that I really agree with," Altman said, adding that his team spent more than six months after completing the training of ChatGPT 4 to study safety components before it was released. 


"So that, I totally agree with," Altman said, speaking to the safety component of the letter. "I think moving with caution and increasing rigor for safety issues is really important, the letter, I don't think is the optimal way to address it."

The letter was published by the Future of Life Institute and signed by more than 1,000 people.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter states, arguing that the pause should be used to develop safety protocols.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter states.


The letter's signatories said, however, that they were not calling for a halt to AI development overall but for "stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

In an interview with "Tucker Carlson Tonight" airing Monday night, Musk said that AI has the potential to be very destructive.

"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," Musk said. "In the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction."

Fox News' Chris Pandolfo contributed to this report.
