IBM Report Reveals AI-Driven Breaches Cost U.S. Firms Millions
According to the newly released IBM “Cost of a Data Breach 2025” report, 13% of the 600 organizations surveyed experienced security breaches directly involving their own AI systems or applications. Alarmingly, 97% of those incidents lacked fundamental access control protections.
The report also warns of a growing trend: cybercriminals are repurposing AI to exploit vulnerabilities. One out of every six breaches featured attackers deploying AI—most commonly to craft realistic phishing emails and deepfake impersonations.
Unauthorized AI tools, referred to as “shadow AI,” emerged as an even greater threat. These systems, deployed by employees without formal approval, were implicated in 20% of the reported breaches and added an average of $670,000 to the total loss of each incident: breaches involving shadow AI averaged $4.74 million in damages, significantly more than the $4.07 million reported when such tools were absent.
Recent real-world examples underscore the risks. In 2023, a single misconfigured Azure sharing link in a Microsoft AI research repository inadvertently exposed 38 terabytes of internal company data, including over 30,000 Microsoft Teams messages.
That same year, Samsung imposed a temporary ban on generative AI platforms after engineers uploaded confidential chip designs into ChatGPT, creating a major potential leak of proprietary information.
AI vendors themselves are not immune. In March 2023, a bug in OpenAI’s ChatGPT allowed brief exposure of some users’ billing addresses and partial payment card data.
Despite mounting red flags, a vast majority—87%—of companies surveyed still lack formal governance frameworks or risk mitigation protocols for AI usage. This is especially concerning given that supply chain vulnerabilities now account for nearly one-third of AI-related breaches.
Experts say the first line of defense must be identity security. That includes enforcing strict credential and key management, frequent key rotation, and encryption of all training and prompt data used by AI models.
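The report does not prescribe an implementation, but the kind of versioned key rotation experts recommend can be sketched in a few lines of Python. This is an illustrative sketch only: the `KeyRing` class, the 30-day window, and the use of HMAC tags stand in for what a production system would delegate to a cloud key-management service with authenticated encryption.

```python
import hashlib
import hmac
import secrets
from datetime import datetime, timedelta, timezone

class KeyRing:
    """Illustrative key-rotation sketch. A real deployment would use a
    managed KMS and authenticated encryption, not hand-rolled HMACs."""

    def __init__(self, rotation_period: timedelta = timedelta(days=30)):
        self.rotation_period = rotation_period
        self.keys: dict[int, bytes] = {}   # version -> key material
        self.current_version = 0
        self.rotated_at = datetime.now(timezone.utc)
        self.rotate()

    def rotate(self) -> None:
        """Generate a fresh 256-bit key and make it the active key."""
        self.current_version += 1
        self.keys[self.current_version] = secrets.token_bytes(32)
        self.rotated_at = datetime.now(timezone.utc)

    def rotate_if_due(self) -> None:
        if datetime.now(timezone.utc) - self.rotated_at >= self.rotation_period:
            self.rotate()

    def sign(self, data: bytes) -> tuple[int, str]:
        """Tag data with the current key; the version number lets old
        tags stay verifiable until their key is retired."""
        self.rotate_if_due()
        key = self.keys[self.current_version]
        return self.current_version, hmac.new(key, data, hashlib.sha256).hexdigest()

    def verify(self, data: bytes, version: int, tag: str) -> bool:
        key = self.keys.get(version)
        if key is None:  # key already retired: tag no longer trusted
            return False
        expected = hmac.new(key, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)
```

The version number attached to each tag is what makes rotation practical: new data is always protected with the newest key, while older keys can be retained briefly for verification and then retired, invalidating anything they protected.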
Regular “AI health checks” involving both business and cybersecurity leaders can flag unauthorized deployments early. Additionally, automated threat detection systems are becoming essential for lean security teams needing to separate credible threats from noise.
The report delivers a stark conclusion: “Security AI and automation lower costs, while shadow AI raises them.” Organizations with robust, mature safeguards saw their breach expenses cut by nearly 40%.
With the average cost of a U.S. data breach now reaching $10.22 million, and regulators in both Washington and Brussels working on legislation to curb AI-related risks, analysts stress that boards have a direct financial imperative to secure every AI system. From chat interfaces to training notebooks, every component should be protected using multifactor authentication, expiring access links, and ongoing audit mechanisms before the next wave of AI arrives.
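The “expiring access links” analysts recommend rest on a simple principle, the same one that would have contained the misconfigured Azure sharing link described above: the URL carries a signed expiry timestamp that the server checks before serving anything. A stdlib-only sketch follows; the secret constant, link format, and function names are illustrative assumptions, not taken from any particular product.

```python
import hashlib
import hmac
import time

# Illustrative only: a real service would load this from a secrets
# manager and rotate it regularly.
SECRET = b"rotate-me-regularly"

def make_expiring_link(path: str, ttl_seconds: int = 3600) -> str:
    """Append an expiry timestamp and an HMAC signature to a path."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def check_link(link: str) -> bool:
    """Reject a link whose signature is invalid or whose expiry has passed."""
    base, _, sig = link.rpartition("&sig=")
    expected = hmac.new(SECRET, base.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # tampered path or expiry
    expires = int(base.rpartition("expires=")[2])
    return time.time() < expires
```

Because the signature covers both the path and the expiry, a recipient can neither extend the deadline nor point the link at a different resource, so a leaked URL goes stale on its own rather than exposing data indefinitely.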
