AI lobbying at the U.S. federal level is intensifying in the midst of a continued generative AI boom and an election year that could significantly influence future AI regulation.
New data from OpenSecrets, a nonprofit that tracks and publishes metrics on campaign financing and lobbying, shows the number of groups lobbying the federal government on AI-related issues grew from 459 in all of 2023 to 556 in the first half of 2024. At the same time, top AI startups have ramped up their own lobbying initiatives, OpenSecrets data reveals.
ChatGPT maker OpenAI has dramatically increased its lobbying expenditures, spending $800,000 in the first six months of 2024 compared to $260,000 in all of 2023. The company has also expanded its roster of outside lobbyists from three consultants last year to around 15 in the first half of 2024. In March, OpenAI retained former Republican Senator Norm Coleman to advocate on research and development issues. Prominent law firms, including Akin Gump Strauss Hauer & Feld and DLA Piper, have also registered lobbyists for OpenAI, according to OpenSecrets.
OpenAI rival Anthropic is on pace to spend half a million dollars on lobbying this year, having invested $250,000 in its five-lobbyist team so far in 2024. Cohere, which builds custom generative AI models for enterprise customers, increased its lobbying spend from $70,000 in all of 2023 to $120,000 in the first half of this year.
The leading presidential nominees have staked out divergent positions on AI regulation, offering different paths for the industry's future. Adding to the urgency, antitrust scrutiny of AI deals and partnerships has prompted major players like Microsoft and Amazon to navigate the space carefully.
The surge in lobbying by AI startups underscores the high stakes surrounding the future of AI regulation. With an election year intensifying the pressure, companies are investing heavily to influence policy decisions that could shape the industry for years to come. As federal and state governments grapple with how to regulate this rapidly evolving technology, the actions taken in the coming months will have far-reaching implications for both AI developers and the broader public.