AI Vulnerability Tsunami: Key Findings from a New Survey
In an age where the speed of software development reigns supreme, Artificial Intelligence (AI) coding tools have arrived as the saviors of productivity. However, a recent survey conducted by Sapio Research for Aikido Security has sounded a serious alarm regarding the security of this high-speed code. The survey, which polled 450 IT professionals across the U.S. and Europe, indicates that 69% of organizations have discovered vulnerabilities in code generated by AI tools—a figure that is shocking on its own. Worse yet, 20% of these organizations reported that these vulnerabilities have resulted in a serious security incident.
This data clearly illustrates how quickly AI is permeating the underlying layers of our operational software. According to the survey, an average of 24% of the code currently running in organizations’ production environments has been generated using AI tools. This rapid adoption has left 92% of respondents worried to some degree about vulnerabilities in AI-generated code, with 25% seriously concerned. This is no laughing matter: insecure code is rapidly becoming the foundation of our digital world, and it demands attention.
Code Review Quality Dips Due to Alert Fatigue
Another critical finding of the survey points to the questionable quality of existing code review processes. Software engineers spend an average of 6.1 hours each week checking and triaging security tool alerts. The truly painful part? 72% of that time is wasted on false positives.
Imagine a developer, pressured to deliver new features on time, being forced to spend hours chasing leads that amount to nothing. The result? Nearly two-thirds of respondents (65%) confessed that their teams, suffering from “alert fatigue,” either bypass security checks, delay fixes, or simply dismiss findings. This means the defensive walls against insecure AI-generated code are essentially full of holes. Mike Wilkes, CISO at Aikido Security, rightly questions this situation. He notes, “regardless of how code was written, the pressure application development teams are under to deliver new features on time means that best DevSecOps practices will continue to be bypassed.” He adds the sad truth that, in the name of expediency, many organizations are only paying lip service to best DevSecOps practices.
The Fog of Accountability: Developer, Security Team, or AI?
When flawed code enters the production environment, one of the biggest questions arises: Who is to blame? The survey reveals that accountability is strangely ambiguous. Over half of respondents (53%) blame security teams for not discovering the vulnerabilities. In contrast, 45% blame the developer, and 42% blame the person who merged the code into the main branch.
This confusion over accountability poses a significant managerial and cultural challenge. When everyone is unclear, no one assumes full ownership of security. If the AI writes the code, the developer reviews it, and the security team’s tool dismisses it due to false positives, who is ultimately responsible for a serious breach? This ambiguous situation, unfortunately, benefits no one but the attackers.
Optimism for the Future and the “Technical Debt” Dilemma
Despite all these concerns, IT professionals remain optimistic about the potential of AI. A full 96% of respondents believe that AI will eventually be able to write secure code. However, only 21% think this goal can be achieved without human oversight. In another notable prediction, 90% expect AI to replace the need for humans to conduct penetration testing.
But for now, the reality is different. Even though 79% of organizations are relying more on AI to help fix vulnerabilities, an equal percentage also noted that remediating critical vulnerabilities still takes longer than a single day, and every organization is grappling with a significant backlog of issues that need to be addressed.
As Mike Wilkes explains, AI is merely exposing flaws in the software development process that have long existed. He warns, “The challenge is that at the current rate and scale applications are being built and deployed in the AI era, it’s now only a matter of time before these flaws manifest themselves in a way that creates major disruptions.”
His conclusion raises an important philosophical question: Do these disruptions warrant revisiting how software is constructed, or should they simply be viewed as the “cost of doing business” in a multi-trillion dollar global economy that relies on flawed software?
The final piece of advice is this: with luck, the code being generated by AI coding tools today is the worst it will ever be, as further advances are made. Until then, however, savvy DevOps teams would be wise to closely monitor the amount of “technical debt” piling up as application developers lean more heavily on AI with each passing day. Leaving that growing security trash heap for the AI itself to clean up would be a cruel joke.
Source: devops.com