How 5 AI Agents Found 90 Bugs (14 Critical) on Our Own Website

TL;DR — Key Numbers
| Metric | Result |
|---|---|
| AI agents running in parallel | 5 |
| Total tool calls | 200+ |
| Issues discovered | 90 |
| Critical security vulnerabilities | 4 |
| SEO blockers | 3 |
| WCAG violations | 6 |
| Time spent on analysis | ~15 minutes |
| Time spent on fixes (4 parallel agents) | ~8 minutes |
| Files changed | 49 |
| Lines removed | 1,306 |
What Would You Do?
Imagine this scenario: You run an AI consultancy. You sell services that help businesses become visible to AI search engines like ChatGPT, Perplexity, and Claude.
And then you discover that your own website blocks these AI crawlers from 80% of your content.
That's exactly what happened when we ran a multi-agent code audit on echoalgoridata.no.
The 5 Agent Specialists
We deployed 5 parallel AI agents, each with their own area of expertise:
1. 🔒 Security Agent
Task: Find vulnerabilities, API leaks, XSS risks, CORS issues
Tools used: Code analysis, git history, environment variables
Findings: 24 issues (4 critical)
2. 🔍 SEO/AEO Agent
Task: Audit robots.txt, sitemap, JSON-LD schemas, AI crawler access
Tools used: Live site fetch, source code search, schema validation
Findings: 19 issues (3 critical)
3. 🌍 i18n Agent
Task: Translation system, hreflang implementation, locale handling
Tools used: Deep translation audit, 87 tool calls over 620 seconds
Findings: 22 issues (4 critical)
4. ♿ Accessibility Agent
Task: WCAG compliance, screen reader compatibility, keyboard navigation
Tools used: Component analysis, Framer Motion config, ARIA attributes
Findings: 18 issues (3 critical)
5. 🌐 Live Validation Agent
Task: Verify production environment against source code
Tools used: WebFetch, domain check, SSL validation
Findings: 7 issues (cross-referenced with other agents)
The 5 Worst Findings (Tier 0 — Fix Today)
1. 🚨 AI Crawlers Blocked From 80% of the Site
The problem: We had two robots.txt files:
- `public/robots.txt` — static file blocking GPTBot, ClaudeBot, and PerplexityBot from everything except `/blog/` and `/services/`
- `app/robots.ts` — dynamic file with correct, permissive rules
Next.js serves static files from `public/` before dynamic route handlers, so our `app/robots.ts` was never used.
```
# What AI crawlers actually saw:
User-agent: GPTBot
Disallow: /
Allow: /blog/
Allow: /services/
# Homepage, about, pricing, contact, startups = INVISIBLE
```
Impact: As an AI consultancy selling AEO services, we were actively blocking our own visibility in AI search engines.
Fix: rm public/robots.txt — 1 minute, massive impact.
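For context, a permissive dynamic robots file in the Next.js App Router can be as small as the sketch below. The rules, disallowed paths, and sitemap URL here are illustrative, not our exact configuration; in a real project you would also type the return value as `MetadataRoute.Robots` from `next` (the type import is omitted to keep the sketch self-contained):

```typescript
// app/robots.ts — a minimal, permissive sketch (rules are illustrative)
export default function robots() {
  return {
    rules: [
      // Explicitly welcome the AI crawlers we care about
      { userAgent: ['GPTBot', 'ClaudeBot', 'PerplexityBot'], allow: '/' },
      // Everyone else: allow the site, keep private routes out
      { userAgent: '*', allow: '/', disallow: ['/api/', '/admin/'] },
    ],
    sitemap: 'https://echoalgoridata.no/sitemap.xml',
  };
}
```

Once the static `public/robots.txt` is gone, Next.js serves this handler at `/robots.txt`.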
2. 🚨 Fake Reviews in JSON-LD = Google Penalty Risk
The problem: Our schema generators included hardcoded fake reviews:
```typescript
// lib/schemas/enhanced-schema.ts
aggregateRating: {
  ratingValue: '4.9',
  reviewCount: '50',
  bestRating: '5',
  worstRating: '1'
}

// A completely different file, different numbers:
// utils/structured-data.ts
aggregateRating: {
  ratingValue: '4.8',
  reviewCount: '127'
}
```
Impact: Google explicitly penalizes fabricated structured data with manual actions. We risked losing all visibility.
Fix: Removed all aggregateRating and review from schemas until we connect to a real review platform.
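If you want ratings back later, the safe pattern is to make them conditional on real data. The sketch below is hypothetical (function name, fields, and the `ReviewStats` source are illustrative, not our actual schema code) but shows the guard:

```typescript
// Hypothetical sketch: only attach aggregateRating when real review data exists.
interface ReviewStats {
  average: number; // e.g. fetched from a real review platform's API
  count: number;
}

function organizationSchema(stats?: ReviewStats) {
  const schema: Record<string, unknown> = {
    '@context': 'https://schema.org',
    '@type': 'Organization',
    url: 'https://echoalgoridata.no',
  };
  // Never hardcode ratings: include them only when backed by real reviews.
  if (stats && stats.count > 0) {
    schema.aggregateRating = {
      '@type': 'AggregateRating',
      ratingValue: stats.average.toFixed(1),
      reviewCount: String(stats.count),
    };
  }
  return schema;
}
```

With no data source connected, the schema simply omits the rating, which is exactly what Google expects.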
3. 🚨 Stripe Webhook Verification Was a No-Op
The problem: Our webhook endpoint had this "verification":
```typescript
// Before (VULNERABLE):
if (!signature?.startsWith('t=')) {
  return new Response('Invalid', { status: 400 });
}
// Accepts ALL webhooks that start with "t="
```
Impact: Anyone could send fake payment confirmations to our API.
Fix: Replaced with stripe.webhooks.constructEvent() which actually verifies signatures.
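In production you should always use the SDK's `constructEvent()`, but it helps to see what real verification involves. Stripe signs `${timestamp}.${rawBody}` with your webhook secret and sends the result in the `Stripe-Signature` header as `t=<timestamp>,v1=<hex hmac>`. A hand-rolled sketch of that check (illustrative only, not our code) looks like this:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Illustrative only — in production use stripe.webhooks.constructEvent().
function verifyStripeSignature(
  rawBody: string,
  header: string,
  secret: string,
  toleranceSeconds = 300,
  now = Math.floor(Date.now() / 1000),
): boolean {
  const parts = new Map(
    header.split(',').map((p) => p.split('=', 2) as [string, string]),
  );
  const timestamp = Number(parts.get('t'));
  const signature = parts.get('v1');
  if (!timestamp || !signature) return false; // fail closed, not just startsWith('t=')
  if (Math.abs(now - timestamp) > toleranceSeconds) return false; // reject replays
  const expected = createHmac('sha256', secret)
    .update(`${timestamp}.${rawBody}`)
    .digest('hex');
  if (expected.length !== signature.length) return false;
  // Constant-time comparison to avoid timing side channels
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

Note everything the naive `startsWith('t=')` check skipped: the HMAC itself, the replay window, and constant-time comparison.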
4. 🚨 API Key Leaked in Git History
The problem: A Gemini API key (AIzaSy...) was committed in .env.local.example — a file tracked by git.
Impact: Anyone with repo access could use our API key.
Fix: Revoked key in Google Cloud Console, generated new one, purged from git history.
5. 🚨 Double TranslationProvider = 334KB Loaded Twice
The problem: Both our layout files wrapped the app in <TranslationProvider>:
```tsx
// app/layout.tsx — ROOT layout
<TranslationProvider translations={translations}>
  {children}
</TranslationProvider>

// app/[locale]/layout.tsx — LOCALE layout
<TranslationProvider translations={translations}>
  {children}
</TranslationProvider>
```
Impact: 334.7 KB of translation data loaded and parsed twice per request.
Fix: Removed provider from root layout. Halved overhead immediately.
The Fix Process: 4 Parallel Agents
After identifying all 90 issues, we deployed 4 parallel agents to fix them:
| Agent | Task | Tool Calls | Time |
|---|---|---|---|
| Security agent | Webhooks, CORS, XSS, RegExp | 30 | 142s |
| SEO agent | Schemas, hreflang, sitemap | 68 | 185s |
| i18n agent | Providers, translations | 31 | 155s |
| A11y agent | MotionConfig, ARIA, focus ring | 19 | 116s |
Total fix time: ~8 minutes for 40+ files.
What Actually Got Fixed?
Security ✅
- Stripe webhook now uses `constructEvent()` with fail-closed behavior
- CORS restricted to echoalgoridata.no and echoalgoridata.com on payment endpoints
- XSS: chat links now validate the `https://` protocol only
- RegExp injection escaped in the search field
- Auth bypass blocked in production environments
- Rate limiting default changed to `false`
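The RegExp-injection fix deserves a concrete illustration. The standard pattern (helper names here are hypothetical, not our actual code) is to escape every regex metacharacter in user input before interpolating it into a pattern:

```typescript
// Hypothetical helper: escape user input before building a RegExp from it.
// Without this, a search term like "(" throws, and ".*" matches everything.
function escapeRegExp(input: string): string {
  return input.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Example: a safe case-insensitive search
function matchesSearch(haystack: string, userQuery: string): boolean {
  const pattern = new RegExp(escapeRegExp(userQuery), 'i');
  return pattern.test(haystack);
}
```

After escaping, user input is always treated as a literal string, never as pattern syntax.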
SEO/AEO ✅
- Deleted `public/robots.txt` (unblocked AI crawlers)
- Deleted `public/sitemap.xml` (enabled the dynamic sitemap)
- Removed fake reviews from all schemas
- Created `lib/company-data.ts` as a single source of truth
- English hreflang now uses `echoalgoridata.com` instead of `.no/en/`
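Cross-domain hreflang is easy to get wrong, so here is a sketch of the idea: build alternates that send the English version to the `.com` domain. The helper name, the `nb-NO` language code, and the assumption that paths map 1:1 across domains are all illustrative, not our exact implementation:

```typescript
// Hypothetical sketch: hreflang alternates pointing English at the .com domain.
function hreflangAlternates(path: string) {
  return {
    canonical: `https://echoalgoridata.no${path}`,
    languages: {
      'nb-NO': `https://echoalgoridata.no${path}`,
      'en': `https://echoalgoridata.com${path}`,
      'x-default': `https://echoalgoridata.no${path}`,
    },
  };
}
```

In a Next.js App Router project, an object of this shape can be returned as `alternates` from `generateMetadata`.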
i18n ✅
- Removed the duplicate TranslationProvider from `app/layout.tsx`
- "Launch" → "Lansering" in the Norwegian translations
- Footer "GDPR Compliant" is now locale-aware
- `<html lang>` is based on the URL, not a cookie
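Deriving `<html lang>` from the URL matters because crawlers don't carry cookies. A minimal sketch of the idea (the locale list and the `no` fallback are assumptions for illustration, not our exact code):

```typescript
// Hypothetical sketch: derive the locale from the URL path, not a cookie,
// so crawlers and shared links always see the right <html lang>.
const SUPPORTED_LOCALES = ['no', 'en'] as const;
type Locale = (typeof SUPPORTED_LOCALES)[number];

function localeFromPathname(pathname: string, fallback: Locale = 'no'): Locale {
  const first = pathname.split('/').filter(Boolean)[0];
  return (SUPPORTED_LOCALES as readonly string[]).includes(first ?? '')
    ? (first as Locale)
    : fallback;
}
```

The locale layout can then render `<html lang={localeFromPathname(pathname)}>`.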
Accessibility (WCAG) ✅
- `<MotionConfig reducedMotion="user">` wraps the entire app
- `aria-describedby` now includes error IDs
- ContactForm uses the CVA Button with a focus ring
- The skip link is localized (NO/EN)
Lessons Learned
1. Static files trump dynamic in Next.js
If you have both `public/robots.txt` and `app/robots.ts`, the static file always wins. This is documented, but easy to forget.
2. Multi-agent audits find more than single-pass
A single pass would never have found that robots.txt was blocking AI crawlers AND hreflang pointed to the wrong domain AND webhook verification was broken. Specialized agents with deep focus find deeper problems.
3. Parallel fixing is safe with good typing
Running 4 agents editing code simultaneously sounds risky. But with TypeScript strict mode and good separation of concerns, we got zero conflicts.
4. Audit your own site first
We were selling AEO services while being invisible to AI search engines ourselves. Embarrassing, but an important lesson: Test on yourself before selling to others.
How to Run Your Own Multi-Agent Audit
Tools You Need
- Claude Code CLI (or similar agentic coding tool)
- MCP servers for extended functionality (Playwright, GitHub, Supabase)
- Good project structure with CLAUDE.md or similar context file
Agents to Deploy
- Security agent — API keys, webhooks, CORS, XSS, CSRF
- SEO/AEO agent — robots.txt, sitemap, JSON-LD, AI crawler access
- i18n agent — Translations, hreflang, locale handling
- A11y agent — WCAG, ARIA, keyboard navigation, reduced motion
- Performance agent — Bundle size, lazy loading, caching
Prompt Template for Agents
```
You are a [DOMAIN] expert auditing this codebase.

Goals:
1. Identify all [DOMAIN]-related issues
2. Rank by severity (Critical/High/Medium/Low)
3. Provide concrete fix instructions

Use available MCP tools for live validation.
Cross-reference findings against source code to eliminate false positives.
```
Conclusion
90 issues. 14 critical. Found in 15 minutes. Fixed in 8 minutes.
Multi-agent code audits aren't the future — they're the present. And the ironic lesson? We who sell AI visibility were ourselves invisible to AI.
Now we're visible. And you've read to the end, so you know how to do the same.
FAQ — Frequently Asked Questions
What is a multi-agent code audit?
A multi-agent code audit is a process where multiple AI agents work in parallel to analyze a codebase from different perspectives (security, SEO, accessibility, etc.). Each agent specializes in one area and can find issues that a generalist would miss.
How long does a multi-agent audit take?
In our case, analysis took ~15 minutes with 5 parallel agents making 200+ tool calls. The fix process took ~8 minutes with 4 parallel agents.
Is it safe to let AI agents fix code?
With proper guardrails, yes. We ran TypeScript strict mode, had good test coverage, and agents worked on separate domains. Zero merge conflicts, zero runtime errors.
What is AEO (Answer Engine Optimization)?
AEO is optimization for AI search engines like ChatGPT, Perplexity, and Claude. Unlike traditional SEO, AEO is about structuring content so AI systems can understand and cite it in their responses.
Why did robots.txt block AI crawlers?
Many older robots.txt templates include restrictive rules for unknown bots. GPTBot, ClaudeBot, and PerplexityBot are relatively new, and old config files often block them unintentionally.
How do I check if my site is visible to AI crawlers?
Check your robots.txt for lines like User-agent: GPTBot followed by Disallow: /. If you see this, you're blocking ChatGPT from indexing your site.
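You can automate that check. The sketch below is a rough heuristic, not a full robots.txt parser (it ignores `Allow` precedence and multi-agent group subtleties from the spec), but it catches the common "Disallow: /" case:

```typescript
// Rough check (not a spec-complete parser): is this bot's root disallowed?
function blocksBot(robotsTxt: string, bot: string): boolean {
  let applies = false;
  let blocked = false;
  for (const raw of robotsTxt.split('\n')) {
    const line = raw.split('#')[0].trim(); // strip comments
    if (!line) continue;
    const [key, ...rest] = line.split(':');
    const value = rest.join(':').trim();
    if (key.trim().toLowerCase() === 'user-agent') {
      applies = value === '*' || value.toLowerCase() === bot.toLowerCase();
    } else if (applies && key.trim().toLowerCase() === 'disallow') {
      if (value === '/') blocked = true;
    }
  }
  return blocked;
}
```

Run it over the output of `curl https://yourdomain.com/robots.txt` for each of GPTBot, ClaudeBot, and PerplexityBot.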
Want us to run a similar audit on your codebase? Contact us for a free assessment.
Stay Updated
Subscribe to our newsletter for the latest AI insights and industry updates.
Get in touch