OpenAI’s AI Project Q* Poses No Threat to Humanity, Researchers Say


Has OpenAI invented an AI technology with the potential to threaten humanity? From some of the recent headlines, you might be inclined to think so.

Reuters and The Information first reported last week that several OpenAI staff members had flagged the prowess and potential danger of an internal research project known as Q*. According to the reporting, the project could solve certain math problems and, the staffers suggested, might build toward an elusive technical breakthrough.

There’s now debate as to whether OpenAI’s board ever received such a letter. But framing aside, Q* might not be as monumental or threatening as it sounds. It might not even be new.

AI researchers including Yann LeCun, Meta’s chief AI scientist, were immediately skeptical that Q* was anything more than an extension of existing work at OpenAI — and other AI research labs besides. In a post on X, Rick Lamers, who writes the Substack newsletter Coding with Intelligence, pointed to an MIT guest lecture OpenAI co-founder John Schulman gave seven years ago during which he described a mathematical function called Q*.

Several researchers believe the Q in the name Q* refers to Q-learning, an AI technique that helps a model learn and improve at a particular task by taking specific correct actions. Researchers say the asterisk, meanwhile, could be a reference to A*, a classic algorithm for finding the least-cost path between nodes in a graph.
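The core of Q-learning fits in a few lines. The sketch below is purely illustrative (nothing here comes from OpenAI's Q*): an agent learns action values in a tiny five-state corridor where reaching the rightmost state pays a reward.

```python
# Minimal tabular Q-learning sketch (illustrative only; not OpenAI's Q*).
# An agent learns action values Q[s][a] in a 5-state corridor where
# reaching the rightmost state pays a reward of 1.
import random

random.seed(0)
N_STATES = 5
ACTIONS = [0, 1]            # 0 = move left, 1 = move right
alpha, gamma = 0.5, 0.9     # learning rate, discount factor

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Environment: move left or right; the right end is terminal, reward 1."""
    nxt = max(0, state - 1) if action == 0 else state + 1
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for _ in range(500):        # training episodes
    s, done = 0, False
    while not done:
        # Q-learning is off-policy, so even a purely random behavior
        # policy lets the learned greedy policy converge.
        a = random.choice(ACTIONS)
        s2, r, done = step(s, a)
        # Core update: nudge Q toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "right" is valued above "left" in every non-terminal state.
```

The update rule in the loop body is the whole technique: everything else in modern deep Q-learning is machinery for approximating that same table with a neural network.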

Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at human level… in 2014. A* has its origins in an academic paper published in 1968. And researchers at UC Irvine several years ago explored improving A* with Q-learning — which might be exactly what OpenAI’s now pursuing.
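A* itself is compact enough to show in full. The sketch below finds the shortest path on a small grid using a Manhattan-distance heuristic, in the spirit of the 1968 formulation; the grid and function are this article's invention, not anyone's production code.

```python
# Minimal A* search sketch (illustrative): shortest path on a grid,
# guided by an admissible Manhattan-distance heuristic.
import heapq

def a_star(grid, start, goal):
    """Return the shortest path length from start to goal, or None.
    grid is a list of rows; 0 = open cell, 1 = wall."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6: the wall forces a detour
```

The heuristic is what distinguishes A* from plain Dijkstra search: because it never overestimates the remaining distance, A* explores far fewer nodes while still guaranteeing the optimal path.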

Nathan Lambert, a research scientist at the Allen Institute for AI, believes that Q* is connected to approaches in AI mostly [for] studying high school math problems — not destroying humanity.

“OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models,” Lambert said, “but what remains to be seen is how better math abilities do anything other than make [OpenAI’s AI-powered chatbot] ChatGPT a better code assistant.”

Mark Riedl, a computer science professor at Georgia Tech, was similarly critical of Reuters’ and The Information’s reporting on Q* — and the broader media narrative around OpenAI and its quest toward artificial general intelligence (i.e. AI that can perform any task as well as a human can). Reuters implied that Q* could be a step toward AGI, but researchers, including Riedl, dispute this.

“There’s no evidence that suggests that large language models [like ChatGPT] or any other technology under development at OpenAI are on a path to AGI or any of the doom scenarios,” Riedl told TechCrunch. “OpenAI itself has at best been a ‘fast follower,’ having taken existing ideas … and found ways to scale them up. While OpenAI hires top-rate researchers, much of what they’ve done can be done by researchers at other organizations. It could also be done if OpenAI researchers were at a different organization.”

Riedl didn’t guess at whether Q* might entail Q-learning or A*. But if it involved either — or a combination of the two — it’d be consistent with the current trends in AI research, he said.

“These are all ideas being actively pursued by other researchers across academia and industry, with dozens of papers on these topics in the last six months or more,” Riedl added. “It’s unlikely that researchers at OpenAI have had ideas that have not also been had by the substantial number of researchers also pursuing advances in AI.”

That’s not to say that Q* — which reportedly had the involvement of Ilya Sutskever, OpenAI’s chief scientist — won’t move the needle forward.

Lamers asserts that, if Q* uses some of the techniques described in a paper published by OpenAI researchers in May, it could significantly increase the capabilities of language models. Based on the paper, OpenAI might’ve discovered a way to control the reasoning chains of language models, Lamers says — enabling them to guide models to follow more desirable and logically sound paths to reach outcomes.

“This would make it less likely that models follow ‘foreign to human thinking’ and spurious patterns to reach malicious or wrong conclusions,” Lamers said. “I think this is actually a win for OpenAI in terms of alignment … Most AI researchers agree we need better ways to train these large models, such that they can more efficiently consume information.”
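The idea behind process reward models can be caricatured in a few lines: score each intermediate step of candidate reasoning chains, not just the final answer, and prefer the chain whose steps check out. The scorer below is a deliberately crude stand-in of this article's own making — a real process reward model is a learned neural network, and nothing here reflects OpenAI's actual method.

```python
# Toy caricature of process-reward-guided selection (hypothetical scorer;
# not OpenAI's method). Each intermediate step of a candidate reasoning
# chain is scored, and the chain whose steps all check out wins.

def step_reward(step):
    """Stand-in for a learned process reward model: 1.0 if the step's
    claimed equality actually holds, else 0.0. (eval is fine for this
    toy with fixed inputs; never use it on untrusted text.)"""
    lhs, rhs = step.split("=")
    try:
        return 1.0 if eval(lhs) == float(rhs) else 0.0
    except Exception:
        return 0.0

def chain_score(chain):
    # Average reward over every intermediate step, not just the answer.
    return sum(step_reward(s) for s in chain) / len(chain)

# Two candidate chains for evaluating 2 + 3 * 4.
chains = [
    ["3*4 = 12", "2+12 = 14"],   # every step is sound
    ["3*4 = 12", "2+12 = 15"],   # final step is wrong
]

best = max(chains, key=chain_score)
print(best)  # ['3*4 = 12', '2+12 = 14']
```

The point of step-level scoring is exactly the one Lamers makes: a chain that reaches a wrong conclusion via a plausible-looking path gets penalized at the step where it goes off the rails, rather than only at the end.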

But whatever comes of Q*, neither it nor the relatively simple math problems it solves will spell doom for humanity.

Tanvi Shah
Tanvi Shah is an expert author at The Reportify who explores the exciting world of artificial intelligence (AI). With a passion for AI advancements, Tanvi shares exciting news, breakthroughs, and applications in the Artificial Intelligence category. She can be reached at tanvi@thereportify.com for any inquiries or further information.
