Mike Vizard presents insights from a survey of IT professionals, revealing widespread security vulnerabilities in AI-generated code and discussing the implications for DevOps and security teams.

Survey Reveals Security Risks in AI-Generated Code

Author: Mike Vizard

Overview

A recent survey of 450 IT professionals across the U.S. and Europe, conducted by Sapio Research on behalf of Aikido Security, highlights a significant rise in vulnerabilities in code generated by artificial intelligence (AI) tools. Key findings point to organizational concern about AI’s impact on code security, challenges in DevSecOps practices, and a shifting attitude toward AI’s role in fixing vulnerabilities.

Key Findings

  • Prevalence of AI-Generated Code:
    • On average, 24% of production code has been generated by AI tools.
  • Security Vulnerabilities:
    • 69% of organizations report having discovered vulnerabilities in AI-generated code.
    • 20% have experienced a serious incident tied to such vulnerabilities.
  • Organizational Concern:
    • 92% of respondents expressed concern about vulnerabilities from AI-generated code.
    • 25% are seriously concerned.
  • DevSecOps Practice Challenges:
    • Teams dedicate roughly 6.1 hours weekly to reviewing security alerts, with 72% of this time spent on false positives.
    • 65% of teams avoid security checks, delay fixes, or dismiss findings, largely due to alert fatigue.
    • Responsibility for security lapses is ambiguous: 53% of respondents blame security teams, 45% blame developers, and 42% blame whoever merged the code.
  • AI’s Role in Remediation:
    • 79% of organizations are increasingly relying on AI to help fix vulnerabilities.
    • Nonetheless, remediating critical vulnerabilities often takes more than a day, and most organizations maintain a backlog of unresolved issues.
  • Perspective on AI and Future Security:
    • 96% believe AI will eventually write secure code, though only 21% expect this without human oversight.
    • 90% expect AI to eventually replace human penetration testing.
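The alert-review figures above imply a concrete weekly cost per team. As a quick back-of-the-envelope check using only the survey's own numbers (6.1 hours reviewing alerts, 72% of that time on false positives):

```python
# Back-of-the-envelope check based on the survey figures:
# teams spend 6.1 hours/week reviewing security alerts,
# and 72% of that time goes to false positives.
hours_reviewing = 6.1
false_positive_share = 0.72

wasted_hours = hours_reviewing * false_positive_share
productive_hours = hours_reviewing - wasted_hours

print(f"Hours lost to false positives per week: {wasted_hours:.1f}")   # 4.4
print(f"Hours spent on real findings per week: {productive_hours:.1f}")  # 1.7
```

In other words, roughly 4.4 of the 6.1 weekly triage hours produce no actionable finding, which helps explain the alert fatigue the survey links to skipped checks and delayed fixes.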

Implications

Mike Wilkes, CISO at Aikido Security, notes that development teams frequently compromise on DevSecOps best practices to meet feature delivery timelines. Alert fatigue leads teams to bypass security checks, while AI is simultaneously a source of vulnerabilities and an increasingly relied-upon tool for remediation. The survey suggests that existing flaws in code review and security practices are being amplified by the rapid adoption of AI coding tools, underscoring the need for robust technical oversight and disciplined DevSecOps methodologies.

Conclusion

The rapid growth in the use of AI for code generation carries both promise and risk. While most IT professionals are optimistic that AI will eventually produce secure code, current practices require organizations to pay close attention to technical debt, alert fatigue, and the imperative for human oversight in the security review process.

This post appeared first on “DevOps Blog”.