
Resource hub

Benchmarking Neo's Black-Box DAST Capabilities
Neo · DAST

Since the launch of Neo, we've been steadily expanding what it can do. Neo has found 33+ real CVEs across open-source projects, performed well on white-box security testing where source code is available, and generally proven itself as a capable security engineer when it has context to work with. What we hadn't shared yet is how Neo does when it's operating purely as a black-box DAST agent: no source code, no architecture context, just a URL. The prompt Neo gets is minimal, with no guidance…


From Nuclei to Neo: LIVE with Rishi
Webinar · Neo · Nuclei

Nuclei changed how the industry thinks about vulnerability scanning. Neo is the next chapter. Join us on Wednesday, May 20th, at 1 PM ET as Davis sits down with Rishi in San Francisco to cover why we created Nuclei, the hard questions in security, and where the industry is going.

DAST: A blast from the past
Webinar · Neo · DAST

Legacy DAST struggles with modern apps. Learn where it still fits, where it fails, and what to ask when evaluating a modern DAST replacement.

The Trust Gap Behind the AI Coding Boom: What 200 Security Practitioners Just Told Us
Research · Application Security

New research from ProjectDiscovery surfaces an uncomfortable truth: engineering has accelerated, and security has been left to absorb the impact, mostly by hand. If you work in application security right now, you already know the shape of the problem. Pull requests are landing faster than they used to. The diffs are bigger. The author on the commit is increasingly your engineering team's AI assistant, not the engineer themselves. And somewhere downstream, you and a small team are expected to keep up…

The AI Code Deluge: Are Security Teams Ready?
Research · AI · AI Coding Impact

200 cybersecurity practitioners told us what AI-assisted coding is really doing to their teams. The short version: engineering is shipping faster than ever, and security is absorbing the impact. This report breaks down where the pressure is building, what is breaking, and what it will take to close the gap.

Neo v. DIY: The gap between a single finding and a mature security program
Neo · Webinar

In our latest webinar, our Founding Solutions Engineer, Davis Franklin, addressed the massive gap between finding a vulnerability with an LLM and running a mature security program. That gap is what Neo is built to close. With the release of Opus 4.6 and the announcement of Mythos, the question we hear constantly has gotten louder: Can I just build this with Claude Code? The short answer is yes. You can spin up a working PoC in about half an hour, find a real vulnerability, and feel genuinely confident…

How We Cut LLM Costs by 59% With Prompt Caching
Neo · Engineering

At ProjectDiscovery, we've been building Neo, an autonomous security testing platform that runs multi-agent, multi-step workflows, routinely executing 20-40+ LLM steps per task. It performs vulnerability assessments, code reviews, and security audits at scale, enabling continuous testing across the entire development lifecycle. When we launched, our LLM costs were staggering: a single complex task with Opus 4.5 could consume 60 million tokens. Then we implemented prompt caching. Here's what changed:
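Prompt caching pays off because a multi-step agent resends the same large prefix (system prompt, tool definitions, accumulated context) on every step. A back-of-the-envelope sketch of the economics, using assumed prices, token counts, and cache hit rates rather than ProjectDiscovery's actual numbers; the cache-write and cache-read multipliers (~1.25x and ~0.1x of the base input price) are modeled loosely on typical published rates:

```python
# Illustrative cost model for prompt caching in a multi-step agent.
# All numbers below are assumptions for the sketch, not real pricing data.
INPUT_PER_MTOK = 15.00        # assumed base input price per million tokens
CACHE_WRITE_PER_MTOK = 18.75  # assumed cache write: ~1.25x base input
CACHE_READ_PER_MTOK = 1.50    # assumed cache read: ~0.1x base input

def task_cost(steps: int, prompt_tokens: int,
              cached_fraction: float, use_cache: bool) -> float:
    """Cost of a task that resends a large shared prefix on every step."""
    total = 0.0
    cached = round(prompt_tokens * cached_fraction)  # reusable prefix
    fresh = prompt_tokens - cached                   # per-step new context
    for step in range(steps):
        if not use_cache:
            total += prompt_tokens * INPUT_PER_MTOK / 1e6
            continue
        if step == 0:
            total += cached * CACHE_WRITE_PER_MTOK / 1e6  # write once
        else:
            total += cached * CACHE_READ_PER_MTOK / 1e6   # read after
        total += fresh * INPUT_PER_MTOK / 1e6
    return total

# A 30-step task resending a 200k-token prompt, 70% of which is cacheable.
baseline = task_cost(30, 200_000, cached_fraction=0.7, use_cache=False)
cached = task_cost(30, 200_000, cached_fraction=0.7, use_cache=True)
savings = 1 - cached / baseline
print(f"baseline ${baseline:.2f}  cached ${cached:.2f}  saved {savings:.0%}")
```

Under these assumptions the saving works out to roughly 60%, in the same ballpark as the 59% reported in the post; the real figure depends on how much of each prompt is actually cacheable and how many steps reuse it.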

Can't we do this with Claude Code?
Webinar · Neo

We ran the experiment so you don't have to. Join our Founding Solutions Engineer, Davis Franklin, for a live look at the execution harness behind Neo and why it's harder to replicate than it looks.

Everyone is finding vulns. The hard part is proving them.
Neo · Vulnerability Research

LLMs are a genuine leap forward for vulnerability discovery. Anthropic reported 500+ zero-days from Opus 4.6, and OpenAI's Codex Security discovered 14 CVEs across projects like OpenSSH and GnuTLS. If you've experimented with LLMs for security testing, you've probably been impressed too. The practical reality for a security team deploying AI is messier than the headlines or early POC results suggest. Noise compounds fast. Anthropic brought in external security researchers to help validate the volume…

Inside the benchmark: app architectures, walkthroughs of findings, and what each scanner actually caught
Neo · Vulnerability Research

This is Part 2 of our vibe coding security benchmark study. In Part 1, we compared how LLM-based security tools like ProjectDiscovery's Neo and Claude Code performed against traditional SAST and DAST scanners on AI-generated code. We found that LLM-based tools like Neo and Claude Code detected many high-value findings that traditional scanners missed. Between Neo and Claude Code, Neo produced more true positives and fewer false positives because it could validate hypotheses against a running app.

How Neo found an SSRF vulnerability in Faraday, and why it matters for every team that ships code
Vulnerability Research · Neo

Executive Summary: Neo found a Server-Side Request Forgery (SSRF) vulnerability in Faraday, a widely used HTTP client library in the Ruby ecosystem. This is Neo's first credited CVE discovery. Neo is ProjectDiscovery's AI security copilot for tasks like code review and vulnerability discovery. For this finding, Neo reviewed a widely used open-source dependency and, without human guidance, surfaced a subtle URL-handling edge case, validated it at runtime, and produced a clear write-up that maintainers…
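The class of bug described here, where the host a filter validates is not the host the client actually connects to, is easy to illustrate. A hypothetical Python sketch (not Faraday's actual code and not the actual CVE payload): a naive string-based host check approves a URL whose userinfo component hides the real target.

```python
from urllib.parse import urlparse

# Hypothetical internal targets an SSRF filter might try to block.
BLOCKED = {"localhost", "169.254.169.254"}

def naive_host(url: str) -> str:
    # Flawed extraction: take everything between "://" and the first "/".
    # This keeps the userinfo ("user@") part glued to the host.
    return url.split("://", 1)[1].split("/", 1)[0]

# The "allowed.example.com@" prefix is userinfo, not the host; the real
# connect target is the cloud metadata IP after the "@".
url = "http://allowed.example.com@169.254.169.254/latest/meta-data/"

print(naive_host(url) in BLOCKED)         # False: the naive filter passes it
print(urlparse(url).hostname in BLOCKED)  # True: the actual host is blocked
```

A correct parser (like `urlparse` above) resolves the hostname to `169.254.169.254`, so any validation that disagrees with the host the HTTP client ultimately dials leaves an SSRF gap, which is exactly why runtime validation catches what string checks miss.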

AI code review has come a long way, but it can't catch everything
Neo · Vulnerability Research

AI code review can reason about intent, but real incidents often stem from business logic flaws that only show up at runtime. Our benchmark reveals where code-only review falls short.

Continuous Pentesting with Verified Proof
Webinar · Neo · Continuous Pentesting

See how Neo continuously combines code understanding and runtime exploitation to find business logic flaws, complex IDORs, and auth bypasses that black-box tools miss.

…