How We Cut LLM Costs by 59% With Prompt Caching

At ProjectDiscovery, we've been building Neo, an autonomous security testing platform that runs multi-agent, multi-step workflows, routinely executing 20–40+ LLM steps per task: vulnerability assessments, code reviews, and security audits at scale, enabling continuous testing across the entire development lifecycle. When we launched, our LLM costs were staggering; a single complex task with Opus 4.5 could consume 60 million tokens. Then we implemented prompt caching. Here's what changed:
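As a rough illustration of the idea, here is a minimal sketch of how a request can mark its large, static system prompt as cacheable in an Anthropic-style Messages API payload. This is an assumption-level example, not Neo's actual configuration: the model id, system prompt, and helper function are all illustrative.

```python
def build_cached_request(system_prompt: str, user_message: str) -> dict:
    """Build a Messages API payload whose large, static system prompt is
    marked cacheable, so repeated agent steps can reuse it instead of
    re-processing it at full price on every call."""
    return {
        "model": "claude-opus-4-5",  # illustrative model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Cache breakpoint: content up to and including this block
                # is cached after the first call and reused on later ones.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }


payload = build_cached_request(
    "You are a security-testing agent...",  # hypothetical prompt
    "Review this endpoint for auth bypass",
)
print(payload["system"][0]["cache_control"]["type"])  # -> ephemeral
```

In a multi-step agent loop, every step after the first hits the cached prefix, which is why the savings compound as workflows get longer.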
