
The Trust Gap Behind the AI Coding Boom: What 200 Security Practitioners Just Told Us

New research from ProjectDiscovery surfaces an uncomfortable truth: Engineering has accelerated, and Security has been left to absorb the impact, mostly by hand.

If you work in application security right now, you already know the shape of the problem. Pull requests are landing faster than they used to. The diffs are bigger. The author on the commit is increasingly your engineering team's AI assistant, not the engineer themselves. And somewhere downstream, you and a small team are expected to keep up.

You're not imagining it, and it isn't a local problem. We wanted real data on the gap, so in February and March 2026 ProjectDiscovery commissioned a blind survey of 200 cybersecurity practitioners across North America and Western Europe. Every respondent works at a mid-to-large enterprise. Every respondent uses AI-assisted coding in some form. More than half help select or approve the cyber tools their organization buys. We asked 12 questions about what's changing, what's breaking, and what they'd need to see before trusting the next wave of AI cybersecurity tools.

Here's a preview of what came back.

The whole industry is shipping faster. Security is not scaling at the same rate.

100% of respondents said engineering delivery has gotten faster in the last twelve months. Not most. All of them. 49% credit most or all of the lift to AI-assisted coding.

That tracks with what analysts have been signaling for a while. Gartner projects that 70% of professional developers will use AI coding assistants by 2027, up from less than 10% in 2023. Stack Overflow's 2024 developer survey found that more than three quarters of developers are already using AI tools or planning to. The adoption curve isn't a forecast anymore.

Only 38% of the security teams in our survey said they're comfortably keeping up with the resulting code volume. The rest are feeling the squeeze or already falling behind.

"All respondents are seeing engineering ship faster. Only 38% of security teams say they're comfortably keeping up."

That gap isn't closing on its own.

The bugs AI coding amplifies are the bugs traditional scanners struggle with most

We asked practitioners to rank the top challenges AI-assisted coding has introduced or amplified in their environment, shown here with the share of respondents citing each:

  • Secrets exposure (78%)
  • Insecure dependency usage and supply chain risk (73%)
  • Business logic vulnerabilities (72%)
  • Reduced code review quality (66%)
  • Injection-class vulnerabilities (66%)

Four of the top five are context-heavy. They depend on understanding what the application is trying to do, who the user is, and what state the system is in. Pure pattern matching struggles with all of them.
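
To make "context-heavy" concrete, here's a minimal, hypothetical sketch of the kind of business logic flaw respondents ranked among their top challenges. Every name in it is invented for illustration; the point is that each line looks benign to a pattern matcher, because the vulnerability is a missing check, not a dangerous construct.

```python
from dataclasses import dataclass

# Hypothetical refund handler. No injection sink, no hardcoded secret,
# nothing for a signature-based scanner to match on.
@dataclass
class Order:
    order_id: int
    owner_id: int
    amount: float
    refunded: bool = False

ORDERS = {1: Order(order_id=1, owner_id=42, amount=99.0)}

def refund_order(caller_user_id: int, order_id: int) -> str:
    order = ORDERS[order_id]
    # The flaw is an absence: nothing verifies that the caller owns
    # this order, e.g. `if order.owner_id != caller_user_id: raise ...`
    order.refunded = True
    return f"refunded {order.amount:.2f} for order {order_id}"

# User 7 successfully refunds user 42's order. Catching this requires
# knowing the app's intent (refunds go to owners), which is exactly the
# context a regex or taint rule doesn't have.
print(refund_order(caller_user_id=7, order_id=1))
```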

European respondents flagged secrets exposure higher than their North American counterparts, which likely reflects the day-to-day pressure of GDPR and the broader regulatory posture around data handling. ENISA has been consistent in calling supply chain compromise one of the most significant threats facing European organizations, and the data here suggests practitioners are feeling that pressure inside their AI-touched codebases.

Two-thirds of the security week is going to manual validation, not fixes

This is the number worth sitting with.

66% of respondents said more than half of their working week goes to manually validating and reproducing findings before anything can be prioritized or routed.

"66% of practitioners spend the majority of their week validating findings, not fixing them."

The triggers for all that manual work are logical, and bleak. 59% need to prove exploitability in their own environment before anyone will pay attention. 54% said developers won't act without hard evidence. 53% are still working through false positives.

The legacy stack is part of the problem. Practitioners ranked dependency and SCA alerts (74%), SAST and code scanning (60%), and secrets scanning (58%) as the top three sources of low-value noise pulling them away from real exploitable issues. Those are also three of the most heavily funded categories in the typical appsec budget. By practitioners' own reckoning, the tools that should be helping the most are the ones generating the longest validation queue.
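
Here's a simplified illustration of where that queue comes from. The pattern below is a stand-in, not any real scanner's ruleset: a regex can recognize the shape of a credential, but not whether it's live, a test fixture, or a documented placeholder, so every hit still costs a human a validation pass.

```python
import re

# Illustrative AWS-access-key-shaped pattern; real rulesets are richer,
# but share the same limitation.
KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

lines = [
    'AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"  # official AWS docs placeholder',
    'mock_key = "AKIAAAAAAAAAAAAAAAAA"           # unit-test fixture',
    'prod_key = "AKIA0000000000000000"           # could be live, who knows',
]

for line in lines:
    if KEY_PATTERN.search(line):
        # All three hits look identical to the scanner. Deciding which
        # one matters is the manual validation work the survey measured.
        print("finding:", line)
```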

Practitioners want AI to help. They have a clear specification for what trustworthy looks like.

This part of the survey was the most useful to us, and the most underdiscussed in the wider conversation about AI security platforms.

Security teams aren't anti-AI. Asked about preferred use cases for AI-driven penetration testing, they had ready answers: targeted testing tied to pull requests, validation of scanner findings, business logic abuse testing, authenticated testing of critical user flows, authorization testing. The appetite is real.
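
As a sketch of the first of those use cases, here's what "targeted testing tied to pull requests" might look like in its simplest form. The paths and check names are invented for illustration:

```python
# Minimal sketch: map the files a PR touches to the security checks
# worth running, instead of re-testing the whole codebase per merge.
CHECKS_BY_PATH = {
    "auth/": ["authentication bypass", "session handling"],
    "billing/": ["authorization", "business logic abuse"],
    "api/": ["injection", "rate limiting"],
}

def checks_for_diff(changed_files: list[str]) -> set[str]:
    selected: set[str] = set()
    for path in changed_files:
        for prefix, checks in CHECKS_BY_PATH.items():
            if path.startswith(prefix):
                selected.update(checks)
    return selected

# A PR touching billing and API code triggers only the relevant tests.
print(checks_for_diff(["billing/refunds.py", "api/orders.py"]))
```

A production version would key off routes, data flow, and auth boundaries rather than path prefixes, but the shape of the idea is the same: the test surface shrinks to what the PR actually changed.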

What they're clear about is the conditions. The full ranked list is in the report, but every requirement near the top points to the same underlying principle. Show your work. Operate within bounds. Be auditable. Don't move fast and break the production environment.

That's a buyer specification, and it's one the AI security tools market has not, on the whole, met yet.
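
To see what that specification looks like as enforcement logic rather than prose, here's a minimal, hypothetical sketch. None of the names correspond to a real product API:

```python
import json
import time

ALLOWED_TARGETS = {"staging.example.com"}  # operate within bounds
AUDIT_LOG: list[dict] = []                 # be auditable

def human_approves(entry: dict) -> bool:
    # Stand-in for a real review gate (ticket, chat approval, console).
    return input(f"approve {json.dumps(entry)}? [y/N] ").lower() == "y"

def request_action(action: str, target: str, risky: bool) -> bool:
    entry = {"ts": time.time(), "action": action, "target": target}
    AUDIT_LOG.append(entry)                # show your work, always
    if target not in ALLOWED_TARGETS:
        entry["result"] = "blocked: out of scope"
        return False
    if risky and not human_approves(entry):
        entry["result"] = "blocked: human review denied"
        return False
    entry["result"] = "executed"
    return True

# An in-scope risky action pauses for a human; an out-of-scope one
# never runs at all, and both leave an audit record either way.
request_action("authenticated flow probe", "staging.example.com", risky=True)
request_action("authenticated flow probe", "prod.example.com", risky=True)
```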

Why this matters for how we are building Neo

We won't turn this post into a product pitch. The findings stand on their own, and the report goes much deeper than the slice we've shared here.

What we'll say is this: ProjectDiscovery Neo was designed against the same problems the survey describes. The validation tax consuming most of the security week. The growing volume of AI-generated code that has to be assessed in context, not just pattern-matched. The non-negotiable need for an audit trail, scoped credentials, and human review before anything risky executes.

We didn't retrofit those properties onto Neo; they're how it was built. If you read the survey and find yourself nodding at the trust requirements, you're reading a description of the AI security platform we've been building toward.

Get the full findings

This post is the opening chapter. The full report includes:

  • Regional breakdowns across all questions, including where North American and European practitioners diverge
  • The full ranked list of trust requirements for AI-driven testing
  • Top vulnerability triage bottlenecks and where automation has the highest leverage
  • The complete picture on time allocation across appsec activities
  • Methodology and respondent profile

If you are responsible for application security, AI governance, or cyber tooling decisions inside your organization, this is the data you want in the room the next time the conversation turns to AI risk.


ProjectDiscovery surveyed 200 cybersecurity practitioners across North America and Western Europe in February and March 2026. All respondents work at organizations with more than 500 employees and use AI-assisted coding in some capacity. More than half are involved in cyber product and service selection.