Why your AI-built MVP scored 12 on security: a case study
A walk-through of a real CodeClanker scan on an anonymized AI-built repo that returned 12/100 on security. Every finding, the dependency or file that proves it, and what the fix actually looks like.
The repo: a single-page React app, scaffolded with Create React App, intended as the frontend for a no-code automation product. The founder built it with Cursor over a weekend. The demo works. They were two days from inviting their first beta users.
Overall score: 16 / 100. Security: 12 / 100. Here is why, line by line.
Every score below is real output from a CodeClanker scan. The summaries are quoted directly. The hard findings are deterministic — meaning we did not infer them; we found them with real tooling against the real file tree.
The headline finding
axios@1.10.0 carries 17 CVEs including 3 high-severity (GHSA-3p68-rc4w-qgx5, GHSA-3w6x-2g7m-8v23, GHSA-43fc-jf86-j433).
Three things to notice:
- The CVE IDs are real and verifiable. Plug any of those GHSA IDs into github.com/advisories and you will see a CVSS score, a description, and the affected versions. CodeClanker did not invent them — it pulled them from OSV.dev's public vulnerability database.
- This dependency was left unpinned by the LLM. When the AI wrote `"axios": "^1.10.0"` in `package.json`, that meant "1.10.0 or any later 1.x release," and 1.10.0 is what got installed. It happened to land in CVE territory; newer 1.x patches exist, the project just never updated.
- Three of these are high-severity. One of them (GHSA-3p68-rc4w-qgx5) is a server-side request forgery vulnerability. If your app passes any user-supplied URL to `axios`, an attacker can pivot.
The fix: `npm install axios@latest`, run `npm audit` to verify, commit. Total time: 5 minutes. Cost of skipping: a high-severity SSRF in a customer-facing app.
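Updating axios is necessary but not sufficient for the SSRF angle: if the app ever passes a user-supplied URL to `axios`, the request target should be validated first. A minimal sketch of a host-allowlist guard — `ALLOWED_HOSTS` and `isSafeTarget` are illustrative names, not code from the scanned repo:

```javascript
// Hypothetical guard: only allow outbound requests to an explicit
// allowlist of hosts, over HTTPS. Anything else is rejected before
// it ever reaches axios.
const ALLOWED_HOSTS = new Set(["api.example.com", "cdn.example.com"]);

function isSafeTarget(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl); // throws on anything that is not an absolute URL
  } catch {
    return false;
  }
  if (url.protocol !== "https:") return false; // no http:, file:, etc.
  return ALLOWED_HOSTS.has(url.hostname);
}
```

Call `isSafeTarget(userUrl)` before `axios.get(userUrl)`; a production version would also resolve DNS and block private IP ranges, but even this cheap check closes the obvious pivot.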
Why the score is 12, not 25
CodeClanker uses an explicit rubric for security. Two of the rules that fired here:
- Any package with 5+ CVEs OR any critical-severity CVE → security score below 10. axios@1.10.0 has 17 CVEs. That alone caps the score under 10.
- Floor adjustments: the score landed at 12 rather than lower because, while the deterministic check tripped "any package with 5+ CVEs," no committed secret or env file was found; either of those would have driven it lower still.
If the founder had also committed a `.env` file, the score would have been a 5. If a service-role Supabase key had been in the bundle, also a 5. CodeClanker is not "creative" with these scores — they follow rules.
The other findings, ranked
High: No CI workflow detected
The repo has no .github/workflows/ directory. Meaning: no automated linting, no automated tests, no automated build verification, no automated dependency audit. The founder runs everything manually before pushing — sometimes.
Effect on related dimensions: CI absence is a multi-dimension penalty. It caps devops at 25 and qa_testing at 20. The QA score for this repo is 9.
The fix: a 30-line GitHub Actions YAML, copy-pasted from our checklist. Stop merging code that does not type-check.
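That workflow can be sketched roughly like this. The script names assume CRA defaults plus an optional `lint` script (`--if-present` skips it if the script does not exist); adjust to match the actual `package.json`:

```yaml
name: ci
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint --if-present
      # CRA's test runner defaults to watch mode; this flag runs it once.
      - run: npm test -- --watchAll=false
      - run: npm run build
      # Fail the build on known high-severity vulnerabilities.
      - run: npm audit --audit-level=high
```

With this in `.github/workflows/ci.yml`, the axios finding above would have been a red X on a pull request instead of a surprise in a scan.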
High: No error tracking, no structured logging
No error tracking SDK, no structured logging, no custom metrics; production failures will be undebuggable.
The repo has no Sentry, no Bugsnag, no Rollbar, no LogRocket. No console.log structure either — just unstructured console.error calls scattered through the code.
Result: when the app breaks for a paying user, the founder finds out via email, possibly hours later. Reproducing the bug requires asking the customer to "open dev tools and send me a screenshot."
The fix: 20 minutes to wire Sentry. One initialization line. Done. Now every uncaught exception lands in a dashboard with stack trace and breadcrumbs the moment it happens.
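Alongside an error tracker, even a ten-line structured logger beats scattered `console.error` calls. A minimal sketch — the `logEvent` helper is illustrative, not Sentry's API:

```javascript
// Minimal structured logger: one JSON object per line, so a log
// aggregator can filter on fields instead of grepping free text.
function logEvent(level, message, context = {}) {
  const entry = {
    ts: new Date().toISOString(),
    level,
    message,
    ...context, // arbitrary fields: userId, requestId, etc.
  };
  console.log(JSON.stringify(entry));
  return entry;
}
```

The Sentry side is similarly small: per the `@sentry/react` docs, a single `Sentry.init({ dsn: "..." })` call at app startup is the one initialization line the report refers to.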
Medium: No license declared
The repo's package.json has no license field, and no LICENSE file at the root. GitHub reports the repo's license as "Unknown."
Implication: legally, the code is "All Rights Reserved." Anyone who uses or contributes is in a gray zone. If the founder ever wants to take outside investment, the lawyer will flag this in due diligence.
The fix: two minutes. Add "license": "MIT" to package.json and create a LICENSE file with the MIT template. Or pick another permissive license. Just declare it.
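The resulting `package.json` field, if MIT is the pick (surrounding fields are placeholders):

```json
{
  "name": "your-app",
  "private": true,
  "license": "MIT"
}
```

GitHub will detect the `LICENSE` file automatically and replace "Unknown" with "MIT" on the repo page.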
Medium: Default Create React App template, no project-specific docs
The README is the unmodified CRA-generated template — "This project was bootstrapped with Create React App. You can run npm start..." The founder never overwrote it. There is no architecture description, no contributing guide, no environment-setup doc.
Implication: anyone who joins the project (a contractor, a future hire, future-self in six months) has zero context to start from. They need to read the code to learn what the project is.
The fix: 30 minutes. Replace the README with: one paragraph of "what this is," an environment-setup section, and a list of the major files. Done.
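A skeleton along those lines — section names and file paths are suggestions, not a required format:

```markdown
# <project name>

One paragraph: what this app is, who it is for, and what state it is in.

## Setup

1. `npm install`
2. Copy `.env.example` to `.env` and fill in the values.
3. `npm start`

## Key files

- `src/App.js`: top-level component and routing
- `src/api/`: all network calls live here
```

Thirty minutes of writing this saves every future reader an hour of code archaeology.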
What this scan did not find (and why that matters)
The deterministic checks reported:
- 0 secrets in committed files. Good. No AWS keys, no Stripe keys, no JWTs, no DB credentials.
- 0 committed env files. Good. `.env` is properly in `.gitignore`.
- 1 vulnerable dependency. Just axios. Not 5, not 17 separate packages — just one.
This matters because it tells us this founder is not careless. They followed env-var hygiene basics. They did not paste secrets into the repo. They just did not know to update axios.
That is the typical pattern. AI-built MVPs are not 100% disasters. They are partial disasters — usually one or two genuinely bad findings plus a long tail of "missing best practices."
What the scan looked like end-to-end
For transparency, here is what CodeClanker actually did to produce this report:
- Parsed the GitHub URL into `owner/repo`.
- Made two parallel GitHub API calls: one for repo metadata, one for the recursive file tree (no clone, no execution).
- Pulled the contents of up to 20 key files: `package.json`, `tsconfig.json`, `Dockerfile`, `README.md`, deployment configs, etc.
- Ran the deterministic checks: a regex secret scan over every fetched file, an OSV.dev batch query for the declared dependencies, license classification, and committed env file detection.
- Sent the deterministic findings + repo metadata + file tree (first 120 paths) + key file contents to a large language model with an explicit scoring rubric.
- Parsed the JSON response, returned the report.
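The OSV.dev step is one batch POST to its public `querybatch` endpoint. A sketch of building that payload from a `package.json` dependency map — `buildOsvBatchQuery` is an illustrative helper, and the version handling is simplified (a real scanner would resolve versions from the lockfile):

```javascript
// Build the request body for OSV.dev's batch query API
// (POST https://api.osv.dev/v1/querybatch), which accepts a list of
// { package: { ecosystem, name }, version } queries.
function buildOsvBatchQuery(dependencies) {
  return {
    queries: Object.entries(dependencies).map(([name, range]) => ({
      package: { ecosystem: "npm", name },
      // Strip a leading ^ or ~ to get the minimum declared version.
      version: range.replace(/^[\^~]/, ""),
    })),
  };
}
```

For this repo, the entry `{ axios: "^1.10.0" }` becomes a query for `axios` at `1.10.0`, which is exactly where the 17 CVEs came back from.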
Total time: ~60 seconds. Total external API cost: ~$0.005 in tokens. Total founder time saved: a week of figuring out why their first paying customer's session keeps timing out, three months from now.
The pattern
This score is not unusual. The first batch of repos we scanned for the public landing page averaged below 25 overall. Each one had a different specific failure pattern, but the categories were the same: outdated deps, no CI, no observability, no license, default README.
None of these are exotic problems. None require a senior engineering team to fix. They require a list — which is exactly what CodeClanker exists to produce.
What scored 80+ in this scan: nothing. What scored 50+: nothing. The highest dimension was architecture at 12 (the project has a clear single-purpose structure). That is the optimistic finding.
The founder's reaction, after seeing the report: "OK, I see the list. Most of these are an afternoon. The axios one I would have shipped without knowing." That is the value of a scan — making a one-line fix you would have missed visible.
Where would your repo land?
One repo URL. 60 seconds. Real findings, not generic advice.
Run a free scan →