Biden’s AI Executive Order Faces Criticism Over Safety and Security Flaws

As the gears start turning to implement President Joe Biden’s immense executive order on AI, questions have been percolating in the tech world: Yes, it’s long and sweeping, but does it focus on the right things?

Two computer science professors — Swarat Chaudhuri of The University of Texas at Austin and Armando Solar-Lezama of MIT — wrote us with their concerns about flaws in the order that might hinder our ability to improve safety and cybersecurity in an increasingly AI-driven world.

A year to the day after ChatGPT launched, we invited them to elaborate on their concerns with the White House approach to AI in a guest essay.

The Biden administration’s AI executive order sets new standards for the safety and security of artificial intelligence, and specifically calls out security risks from foundation models, the general-purpose statistical models trained on massive datasets that power AI systems like ChatGPT and DALL-E.

As researchers, we agree the safety and security concerns around these models are real.

But the approach in the executive order has the potential to make those risks worse, by focusing on the wrong things and closing off access to the people trying to fix the problem.

Large foundation models have shown an astounding ability to generate code, text and images, and the executive order considers scenarios where such models — like the AI villain in last summer’s Mission: Impossible — create deadly weapons, perform cyberattacks and evade human oversight. The order’s response is to impose a set of reporting requirements on foundation models whose training takes more than a certain (very large) amount of computing power.

The specific focus on the risks of the largest models, though well-intentioned, is flawed in three major ways. First, it’s inadequate: by focusing on large foundation models, it overlooks the havoc smaller models can wreak. Second, it’s unnecessary: we can build targeted mechanisms for protecting ourselves from the bad applications. Third, it represents a regulatory creep that could, in the long run, end up favoring a few large Silicon Valley companies at the expense of broader AI innovation.

FraudGPT, a malicious AI service already available on the dark web, is a good illustration of the shortcomings of the Biden approach. Think of FraudGPT as an evil cousin of ChatGPT: While ChatGPT has built-in safety guardrails, FraudGPT excels at writing malicious code that forms the basis of cyberattacks.

To build a system like FraudGPT, you would start with a general-purpose foundation model and then fine-tune it using additional data — in this case, malicious code downloaded from seedy corners of the internet. The foundation model itself doesn’t have to be a regulation-triggering behemoth. You could build a significantly more powerful FraudGPT completely under the radar of Biden’s executive order. This doesn’t make FraudGPT benign.
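
For readers who want to see what that process looks like in practice, here is a minimal, benign sketch of ordinary fine-tuning using the open-source Hugging Face libraries. The small base model ("gpt2") and the training file ("domain_corpus.txt") are illustrative placeholders, not anything from the essay; the point is simply that adapting a modest model to a new domain takes a few dozen lines of standard code on a single machine.

```python
# Illustrative sketch only: standard fine-tuning of a small causal language
# model with Hugging Face "transformers" and "datasets". Model name and data
# file are placeholders; any small text corpus would work the same way.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "gpt2"  # a small base model, nowhere near any reporting threshold
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Load a plain-text corpus for the target domain (placeholder file name).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # adapts the base model to the new domain
```

Nothing in this recipe depends on the scale of the base model, which is why a compute threshold is a poor proxy for how a model will ultimately be used.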

Just because one can build models like FraudGPT and sneak them under the reporting threshold doesn’t mean that cybersecurity is a lost cause, however. In fact, AI technology may offer a way to strengthen our software infrastructure.

Most cyberattacks work by exploiting bugs in the programs being attacked. Indeed, the world’s software systems are, to an embarrassing degree, full of bugs. If we could make our software more robust overall, the threat posed by rogue AIs like FraudGPT — or by human hackers — could be minimized.

This may sound like a tall order, but the same technologies that make rogue AIs such a threat can also help create secure software. There’s an entire sub-area of computer science called formal verification that focuses on methods to mathematically prove a program is bug-free. Historically, formal verification has been too labor-intensive and expensive to be broadly deployed — but new foundation-model-based techniques for automatically solving mathematical problems can bring down those costs.
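
To give a flavor of what formal verification means in practice, here is a minimal sketch using the Z3 SMT solver’s Python bindings (the z3-solver package); the tiny absolute-value program is our own illustrative example, not one from the essay. Instead of testing a handful of inputs, the solver searches for any integer input that violates the property and reports that none exists.

```python
# Illustrative sketch: proving a tiny property of a program with the Z3 solver.
# We model "abs_x = x if x >= 0 else -x" symbolically and ask Z3 whether any
# integer x makes the result negative. "unsat" means no counterexample exists,
# i.e. the property holds for every integer input.
from z3 import If, Int, Not, Solver, unsat

x = Int("x")
abs_x = If(x >= 0, x, -x)  # symbolic version of a branching absolute-value computation

solver = Solver()
solver.add(Not(abs_x >= 0))  # try to find an input where the property fails

if solver.check() == unsat:
    print("Proved: the result is non-negative for every integer x.")
else:
    print("Counterexample found:", solver.model())
```

Scaling this style of proof from toy examples to real codebases is where the labor costs come in, and that tedious proof search is exactly the kind of work foundation models may help automate.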

To its credit, the executive order does acknowledge the potential of AI technology to help build secure software. This is consistent with other positive aspects of the order, which call for addressing specific problems such as algorithmic discrimination and the potential risks posed by AI in healthcare.

By contrast, the order’s requirements on large foundation models do not respond to a specific harm. Instead, they respond to a narrative that focuses on potential existential dangers posed by foundation models, and on how a model is created rather than how it is used.

Focusing too tightly on the big foundation models also poses a different kind of security risk.

The current AI revolution was built on decades of decentralized, open academic research and open-source software development. And solving difficult, open-ended problems like AI safety or security also requires an open exchange of ideas.

Tight regulations around the most powerful AI models could, however, shut this off and leave the keys to the AI kingdom in the hands of a few Silicon Valley companies.

Over the past year, companies like OpenAI and Anthropic have feverishly warned the world about the risks of foundation models while developing those very models themselves. The subtext is that they alone can be trusted to safeguard foundation model technology.

Looking ahead, it’s reasonable to worry that the modest reporting requirements in the executive order may morph into the sort of licensing requirements for AI work that OpenAI CEO Sam Altman called for last summer. Especially as new ways to train models with limited resources emerge, and as the price of computing goes down, such regulations could start hurting the outsiders — the researchers, small companies, and other independent organizations whose work will be necessary to keep a fast-moving technology in check.

As you wade through the barrage of assessments of Henry Kissinger’s legacy (he died this week, age 100), it’s worth remembering his late-life interest in AI.

In 2021, the former statesman co-authored, with Google mogul Eric Schmidt and the computer scientist Daniel Huttenlocher, a book modestly titled The Age of AI: And Our Human Future, warning that AI could disrupt civilization and would require global responses.

Although it wasn’t always kindly received — Kevin Roose, in the Times, called it cursory and shallow in places, with many puzzlingly vague recommendations — Kissinger did not let go of the subject. He recorded lengthy videos on AI, and this spring, at a sprightly 99, proclaimed in a Wall Street Journal op-ed that generative AI presented challenges on a scale not experienced since the beginning of the Enlightenment, an observation that gave the U.S. business elite a wake-up call.

As recently as last month, Kissinger co-wrote an essay in Foreign Affairs, “The Path to AI Arms Control,” with Harvard’s Graham Allison.

It’s hard to know exactly what Kissinger wrote himself, or what motivated this final intellectual chapter — we did email one of his co-authors, who didn’t respond by press time. (He was reportedly introduced to the topic by Eric Schmidt at a Bilderberg conference.) But it’s not hard to imagine that as a persuasive, unorthodox thinker often accused of inhumanity, Kissinger saw an alien new thought process that was even more unorthodox, even less human, potentially even more persuasive — and he wanted people to know it was time to worry. — Stephen Heuser
