
Common security gaps in AI-built apps

The same ten security holes show up in nearly every repo built primarily with Cursor, Lovable, Bolt, v0, or Claude Code. Each is fixable in an afternoon. Each becomes a real incident when you skip it.

Published 2026-05-08 · 11 min read

AI assistants are great at generating features. They are not great at the security hygiene that ships with those features in production-grade code: rate limiting, input validation, secret rotation, dependency hygiene. Those things look like overhead until they are not.

This is the list we keep finding in real scans. None of these are exotic. All of them are common because the AI did not spontaneously volunteer the fix.

1. Secrets committed to git

Severity: critical

What it looks like: a .env, .env.local, or .env.production file pushed to the repo, often in the very first commit. Or hardcoded keys in config.js or src/utils/api.js.

Why AI tools do this: when you ask the LLM to "set up the OpenAI integration," it writes const apiKey = "sk-..." directly because that is the simplest correct-looking code. It rarely volunteers the env-var warning: hardcoding is the shortest completion that satisfies the prompt.

The fix:

  1. Add .env* to .gitignore immediately (keep .env.example tracked with a !.env.example line).
  2. If you already pushed a real secret, rotate it now, then rewrite history with git filter-repo or accept that the secret is permanently public.
  3. Use process.env.OPENAI_API_KEY with a clear failure if missing.
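Step 3 can be a one-function guard at startup. A minimal sketch, assuming a Node app that reads the key once at boot:

```javascript
// requireEnv: read a required secret from the environment and fail fast
// at startup instead of failing on the first API call.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set. Add it to your environment, not your code.`);
  }
  return value;
}

// const apiKey = requireEnv('OPENAI_API_KEY');
```

Failing at boot turns a silent misconfiguration into an obvious one-line error in your deploy logs.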

2. Outdated dependencies with known CVEs

Severity: critical

What it looks like: axios@1.6.0, next@13.4.0, express@4.17.1 — versions that were current six months ago and now carry public advisories. We see axios with 17 CVEs in nearly every CRA / Vite repo we scan.

Why AI tools do this: training cutoff. The LLM remembers the version that was current when it was trained. Even with web search, the model often picks the version it knows over the latest.

The fix: run npm audit fix regularly. Better, schedule it: GitHub's Dependabot is free, takes five minutes to set up, and opens PRs automatically. Even better, add a CI step that fails the build on any critical advisory using npm audit --audit-level=high. Plug your repo into OSV.dev's API or just trust the scan we already do for free at codeclanker.com.

3. No rate limiting on public endpoints

Severity: critical

What it looks like: a /api/chat endpoint that calls OpenAI for every request, with no auth and no rate limit. One scraper or bored teenager later, your monthly OpenAI bill is $4,000.

Why AI tools do this: rate limiting is plumbing. The LLM was asked to wire up the chat feature, not to protect it. express-rate-limit is two lines of code — but you have to know to ask for them.

The fix:

import rateLimit from 'express-rate-limit';

app.use('/api/', rateLimit({
  windowMs: 60 * 1000,
  max: 30,
  standardHeaders: true,
  legacyHeaders: false,
}));

If you are using Vercel or Cloudflare, use their built-in rate limiting at the edge. "Anyone can hit my endpoint as fast as they want" is a default no human would write on purpose.

4. CORS configured as *

Severity: high

What it looks like: app.use(cors()) with no options, or res.setHeader('Access-Control-Allow-Origin', '*') in production.

Why AI tools do this: wide-open CORS makes "it works in dev" easier. The LLM was solving the CORS error you complained about, not threat-modeling.

The fix: explicit allowlist of your real origins.

const ALLOWED = ['https://yourapp.com', 'https://www.yourapp.com'];
app.use(cors({
  origin: (origin, cb) => {
    if (!origin || ALLOWED.includes(origin)) cb(null, true);
    else cb(new Error('Not allowed by CORS'));
  },
  credentials: true,
}));

5. SQL or NoSQL injection through unparameterized queries

Severity: critical

What it looks like: string concatenation building a query — db.query(`SELECT * FROM users WHERE email = '${email}'`) — instead of parameterized queries.

Why AI tools do this: the unparameterized version is shorter and "looks cleaner." Schools have been teaching against this for 20 years and the LLM still does it half the time.

The fix: use the parameterized form your driver provides. With pg: db.query('SELECT * FROM users WHERE email = $1', [email]). With Prisma, Drizzle, or any ORM, this is the default — use the ORM and never write raw SQL with template strings.

6. JWT with no expiry, weak secret, or stored in localStorage

Severity: high

What it looks like: jwt.sign(payload, 'secret123') with no expiresIn, then storing the token in localStorage where any XSS can grab it.

Why AI tools do this: the simplest auth tutorial does exactly this. The LLM learned from those tutorials.

The fix: set a short expiry, load the secret from the environment, and keep the token out of localStorage.

  1. Sign with jwt.sign(payload, process.env.JWT_SECRET, { expiresIn: '15m' }) and pair it with a refresh-token flow.
  2. Generate a long random secret (openssl rand -base64 32) — never ship 'secret123'.
  3. Deliver the token in an httpOnly, Secure, SameSite cookie so injected scripts cannot read it.

7. Service-role / admin keys exposed to the client

Severity: critical

What it looks like: Supabase service-role key bundled into the client JS. Stripe secret key in a Next.js NEXT_PUBLIC_ env var. Postgres admin URL in the frontend.

Why AI tools do this: the LLM does not always understand which key is server-only. Supabase has both anon (public, safe to expose) and service_role (full database access, bypasses row-level security). The LLM picks whichever one appeared in the example it remembered.

The fix: any env var prefixed NEXT_PUBLIC_, VITE_, REACT_APP_, or PUBLIC_ ends up in the client bundle. Service role keys, Stripe secret keys, and database admin URLs must never have those prefixes. Audit your env vars: anything that says "secret" or "service-role" should be server-only.
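That audit can be scripted. A hypothetical helper — the prefix and keyword lists below are assumptions you should extend for your framework:

```javascript
// Flag env var names that look like secrets but carry a client-exposed prefix.
const PUBLIC_PREFIXES = ['NEXT_PUBLIC_', 'VITE_', 'REACT_APP_', 'PUBLIC_'];
const SECRET_HINTS = ['SECRET', 'SERVICE_ROLE', 'PRIVATE', 'PASSWORD'];

function findLeakedSecrets(envNames) {
  return envNames.filter(
    (name) =>
      PUBLIC_PREFIXES.some((p) => name.startsWith(p)) &&
      SECRET_HINTS.some((h) => name.toUpperCase().includes(h))
  );
}

// findLeakedSecrets(Object.keys(process.env));
```

Run it against your .env files in CI and fail the build on any hit.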

8. No input validation on user-controlled data

Severity: high

What it looks like: app.post('/api/users', (req, res) => { db.insert(req.body); }). Whatever the user sends becomes a database row.

Why AI tools do this: validation is a multi-line schema. The LLM was generating the happy path, not the threat model.

The fix: validate every input at the boundary. Use zod, yup, or similar:

import { z } from 'zod';

const CreateUser = z.object({
  email: z.string().email().max(254),
  name: z.string().min(1).max(100),
  role: z.literal('user').default('user'),  // never let the client grant itself a role
});

app.post('/api/users', (req, res) => {
  const parsed = CreateUser.safeParse(req.body);
  if (!parsed.success) return res.status(400).json(parsed.error.flatten());
  db.insert(parsed.data);
});

This single pattern catches injection, prototype pollution, mass assignment, and oversized payloads in one place.

9. Missing security headers

Severity: medium

What it looks like: no Content-Security-Policy, no X-Frame-Options, no X-Content-Type-Options, no Strict-Transport-Security. The browser is left to default behaviors that let your app get framed, MIME-sniffed, or downgraded.

The fix: use helmet on Node, or your framework's headers config.

import helmet from 'helmet';
app.use(helmet());

Then test what you got at securityheaders.com. Aim for an A. The default Helmet config is reasonable for most apps; tweak only the Content-Security-Policy if you have inline scripts or third-party domains to allow.

10. Logging that leaks sensitive data

Severity: medium

What it looks like: console.log('User signed up:', userObject) where userObject contains the password hash, the API key, or the full request body. Then those logs end up in a third-party log aggregator your free tier did not promise to scrub.

Why AI tools do this: "log everything for debugging" is a normal dev habit that becomes a liability the moment logs leave your machine.

The fix: use a structured logger that has a redaction config — pino's redact option is great. List the field paths to redact: ['password', 'apiKey', 'authorization', 'req.body.password']. Test the redaction works in dev before relying on it in prod.
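To show what that redaction does, here is a hypothetical standalone version of the same idea — pino applies this transformation for you via its redact option, so treat this only as an illustration of the behavior:

```javascript
// Replace listed top-level fields with a placeholder before logging.
const REDACT_FIELDS = ['password', 'apiKey', 'authorization'];

function redact(obj) {
  const copy = { ...obj };
  for (const field of REDACT_FIELDS) {
    if (field in copy) copy[field] = '[REDACTED]';
  }
  return copy;
}

// console.log('User signed up:', redact(userObject));
```

Note this sketch only handles top-level fields; pino's redact accepts nested paths like 'req.body.password', which is why the real library is the better choice.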


The pattern

Every item on this list has the same shape: the AI generated something that works in the demo, and the production-grade hardening was an opt-in question nobody asked. None of these are bugs in the AI. They are gaps in the prompt.

The good news: each one is a one-day fix. The bad news: nobody is going to do it for you, and the cost of skipping any of them ranges from "embarrassing" to "company-ending."

If you want to know which of these are actually present in your repo right now, run a free scan. CodeClanker checks 1, 2, and 7 deterministically (real tooling, no guessing) and uses an LLM to check the rest based on what your config and code actually contain.

How does your repo score on security?

Free scan. We name every issue with the file or dependency that proves it.

Run a free scan →