
AI Risks Move to the Forefront as S&P 500 Firms Sound the Alarm in SEC Filings
In an age where artificial intelligence is touted as the next frontier of business innovation, a surprising shift is taking place behind the scenes of America’s corporate elite. According to a new report by The Autonomy Institute, a staggering 75% of S&P 500 companies have updated their official risk disclosures to include concerns about AI. This move, reflected in recent Form 10-K filings submitted to the U.S. Securities and Exchange Commission (SEC), suggests that while AI is publicly promoted as a boon for business, it is also privately regarded as a growing threat.
AI Is Now a Material Risk for Big Business
The report reveals that across every sector—from tech and finance to manufacturing and health care—companies are flagging AI-related challenges. In total, 57 major firms have warned investors they may never see a return on their massive artificial intelligence investments, citing unclear ROI and rapidly evolving compliance concerns.
“This new analysis sheds light on how artificial intelligence is really viewed at the highest levels of business—not just as a miracle technology, but as a potentially destabilizing force,” said Will Stronge, CEO of The Autonomy Institute.
The findings are derived from an exhaustive review of public 10-K filings for the fiscal year ending in 2024. These filings are legally mandated disclosures outlining financial health, operational risks, and legal liabilities—and any change in language is often a harbinger of shifting corporate priorities.
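The kind of review described above (scanning 10-K risk-factor text for AI-related language) can be illustrated with a minimal sketch. This is not The Autonomy Institute's actual methodology; the keyword list and sample excerpt below are hypothetical, chosen only to show the idea:

```python
import re

# Hypothetical search terms -- the report's actual term list is not disclosed here.
AI_TERMS = [
    r"artificial intelligence",
    r"\bAI\b",
    r"machine learning",
    r"generative model",
    r"deepfake",
]

_PATTERN = re.compile("|".join(AI_TERMS), re.IGNORECASE)

def flags_ai_risk(risk_factor_text: str) -> bool:
    """Return True if the risk-factor text mentions any AI-related term."""
    return bool(_PATTERN.search(risk_factor_text))

# Fabricated risk-factor excerpt, for illustration only.
sample = ("We face increasingly automated and coordinated cyberattacks, "
          "including those leveraging generative artificial intelligence.")
print(flags_ai_risk(sample))  # True
```

In practice an analysis like the report's would run such a filter over the "Item 1A. Risk Factors" section of each filing (retrievable from SEC EDGAR) and tally how many companies trigger each category.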
Key AI Risks Identified by S&P 500 Firms
- Cybersecurity Vulnerabilities
  - 193 companies now explicitly link AI to the risk of cyberattacks, including the use of generative models for phishing, malware creation, and deepfakes.
  - Salesforce warned of “increasingly automated and coordinated attacks” powered by artificial intelligence.
- Deepfake Technology
  - Mentions of deepfakes in risk disclosures have doubled. These include concerns about reputation damage, fraud, and impersonation of executives.
- Regulatory Uncertainty
  - Many filings referenced the EU AI Act and U.S. regulatory actions. The lack of clarity around future compliance has been flagged as a legal and financial risk.
  - Some companies report ongoing investigations in high-risk domains like automotive AI.
- Vendor Dependence & IP Risk
  - Firms relying on third-party models (e.g., OpenAI, Anthropic) note they may have limited rights to audit or verify these models.
  - GE Healthcare highlighted concern over lack of transparency in LLM operations.
- ROI and Business Sustainability
  - 11% of the S&P 500 disclosed that they may never recoup the money spent on AI.
  - Several firms cautioned that current investment levels are unsustainable without clearer monetization pathways.
Why Companies Are Cautiously Sounding the Alarm
Despite widespread enthusiasm for AI, particularly following the explosion of tools like ChatGPT and agentic AI platforms, firms are beginning to recalibrate expectations. Internally, leadership teams are weighing the potential for operational improvements against significant downsides—particularly when AI introduces legal liability or creates dependencies on opaque third-party tools.
The Autonomy Institute’s research notes that risk disclosures are not necessarily predictive but are considered “early signals” of what boards and legal teams are preparing for. Many of these risks mirror concerns in the public sector, from ethical data use and algorithmic bias to national security issues related to AI weaponization.

Divergence Between Public Hype and Private Concerns
One striking theme in the report is the discrepancy between public messaging and formal filings. While press releases, earnings calls, and marketing materials often celebrate AI capabilities, SEC disclosures paint a more sobering picture.
“This isn’t about skepticism,” said Stronge. “It’s about realism. These firms are saying, ‘Yes, AI has potential—but the path forward is messy, risky, and far from guaranteed.’”
Interestingly, the concerns corporations voice center on legal exposure, cybersecurity, and strategic risk rather than on societal issues like job displacement or AI ethics, which dominate the broader public debate.
FAQs
Q: What is Form 10-K?
A: It’s an annual report filed by publicly traded U.S. companies that includes detailed financial statements, risk factors, and disclosures required by the SEC.
Q: Why are AI risks suddenly appearing in SEC filings?
A: As AI adoption has surged since late 2022, so have its associated legal, ethical, and operational challenges. These risks are now material enough to warrant investor disclosure.
Q: Are these companies reducing their AI investments?
A: Not yet. Most are diversifying their AI vendors or building internal models, but very few are pulling back. However, they’re preparing for long-term turbulence.
Q: What are “agentic AI” systems?
A: These are advanced AI models designed to operate with autonomy—taking actions based on goals rather than following direct commands. They present both promise and heightened risk.
Q: How does the EU AI Act affect U.S. companies?
A: Firms with global operations must now consider new compliance burdens, including fines and reporting obligations, especially for “high-risk” AI systems.
A Cautious Future for Corporate AI
The findings from The Autonomy Institute offer a glimpse into the complex dance between innovation and liability in corporate AI adoption. While tech giants and financial firms race to integrate intelligent systems, they are also hedging their bets—legally, financially, and strategically.
In the months ahead, investors, regulators, and customers alike will be watching closely—not just how these companies use AI, but how they contain its risks.