openclaw/openclaw

43/100
// permanent record of scan from 2026-05-09 · stack: TypeScript · Node.js · pnpm · GitHub Actions · Docker · CodeQL · Discord · Slack · Telegram · Matrix · iMessage · WhatsApp · Signal
// permalink — this URL always shows scan oTOB9waY from 2026-05-09, even if a newer public scan exists for this repo.

Nine dimensions

DevOps
38
Rich CI/CD with CodeQL, multi-platform builds, and Docker confirmed, but no deployment config for self-hosted customers.
Security
22
No secrets found and zero declared dependencies mean no direct CVE surface, but no dependency lock file was confirmed, creating indirect supply chain risk.
Cost & infra
36
Docker images build for multiple architectures, and the self-hosted model implies customer-controlled compute costs, but no cost controls are visible.
QA & testing
24
5274 test files claimed but no test directories visible in file tree; CI runs but test quality and coverage are unproven without source evidence.
Performance
27
No caching, profiling, load tests, or database indices visible in file tree despite multi-platform runtime requirements.
Architecture
32
Plugin system, agent skills, and channel extensions suggest sound design, but no backups/migrations/rollback strategy documented.
Code quality
28
Complex multi-agent architecture with plugins/extensions, but no source files fetched to verify lint rules, type coverage, or architectural enforcement.
Observability
18
No error tracking system detected despite complex multi-platform architecture handling Discord/Slack/Telegram channels.
Maintainability
31
MIT license and structured agent skills exist, but 7671 open issues (2.1% of stars) signal an unsustainable maintenance load.

Top findings (AI)

critical

7671 Open Issues — Maintenance Collapse

7,671 open issues against 370k stars mean that 2.1% of stargazers have filed an issue. At this ratio, a two-person engineering team will spend all its cycles triaging, not shipping. No issue SLA or moderation policy is visible in .github/ISSUE_TEMPLATE/. This is a maintenance debt spiral, not a healthy open-source project.
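
No triage automation is visible either. As a hedged sketch of the usual first step toward an issue SLA, a stale-issue workflow like the following could cap the backlog (the filename, cadence, and label names are assumptions, not taken from the repo):

```yaml
# Hypothetical .github/workflows/stale.yml — not present in the scanned repo.
name: Triage stale issues
on:
  schedule:
    - cron: "0 3 * * *"   # nightly sweep
permissions:
  issues: write
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-issue-stale: 60   # warn after 60 days of silence
          days-before-issue-close: 14   # close 14 days after the warning
          stale-issue-label: "stale"
          exempt-issue-labels: "confirmed,security"
```

Automation like this does not fix the ratio, but it makes the open-issue count reflect active work rather than unbounded debt.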

critical

No Error Tracking Infrastructure

Error tracking was confirmed absent. The application handles AI model responses across 20+ channel integrations (Discord, Slack, Telegram, etc.) with agent execution loops. Without Sentry, DataDog, or an equivalent, runtime failures in production customer environments are invisible: a single LLM response parsing error silently drops user requests.
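
To make the failure mode concrete: a throwing channel handler drops the message with no trace. A minimal sketch of a capture wrapper, assuming a hypothetical `reportError` sink (a real deployment would forward to Sentry or DataDog instead of an in-memory array):

```typescript
type ErrorReport = { channel: string; message: string; error: string; at: string };

// Hypothetical in-memory sink; stands in for Sentry/DataDog in this sketch.
const reports: ErrorReport[] = [];

function reportError(channel: string, message: string, err: unknown): void {
  reports.push({
    channel,
    message,
    error: err instanceof Error ? err.message : String(err),
    at: new Date().toISOString(),
  });
}

// Wrap a channel handler so failures are reported instead of silently dropped.
function withErrorTracking(
  channel: string,
  handler: (msg: string) => Promise<string>,
): (msg: string) => Promise<string | undefined> {
  return async (msg) => {
    try {
      return await handler(msg);
    } catch (err) {
      reportError(channel, msg, err);
      return undefined; // degrade gracefully, but leave an audit trail
    }
  };
}

// Illustrative handler: throws on malformed LLM output.
const parseLlmReply = async (raw: string): Promise<string> => {
  const parsed = JSON.parse(raw); // throws SyntaxError on malformed JSON
  return parsed.text;
};

const tracked = withErrorTracking("discord", parseLlmReply);
```

With the wrapper in place, a malformed model reply produces a report entry instead of a vanished user request.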

high

Zero Dependencies Declared in Scan

OSV scan ran against 0 declared dependencies. This means one of three things: (a) no package.json exists, (b) dependencies are managed but were not scanned, or (c) the scan was malformed. For a TypeScript project with agent skills, plugins, and extensions, zero dependencies is implausible and creates unquantified supply chain risk.
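
One hedged mitigation, assuming the project uses pnpm as its stack suggests: a CI step that fails fast when the lockfile OSV needs is absent, so the zero-dependency result can never pass silently (step name and wording are illustrative):

```yaml
# Hypothetical CI guard — fails the build if OSV has no dependency surface.
- name: Require pnpm lockfile
  run: |
    if [ ! -f pnpm-lock.yaml ]; then
      echo "pnpm-lock.yaml missing: dependency scan has no input" >&2
      exit 1
    fi
```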

high

No Deployment Config for Self-Hosted Customers

.github/workflows/docker-release.yml builds images, but no vercel.json, fly.toml, or cloud-native config is visible. For a 'personal AI assistant' product, self-hosted customers need Helm charts, docker-compose, or systemd unit files. Without these, onboarding paying customers requires undocumented manual deployment steps.
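
The smallest artifact that would close this gap is a docker-compose file. A hedged sketch follows; the image name, port, and volume layout are assumptions, not taken from the repo:

```yaml
# Hypothetical docker-compose.yml for self-hosted customers.
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest  # assumed image name
    restart: unless-stopped
    env_file: .env            # API keys for channel integrations
    ports:
      - "3000:3000"           # assumed service port
    volumes:
      - openclaw-data:/data   # agent state survives container restarts
volumes:
  openclaw-data:
```

Even a sketch like this turns "undocumented manual deployment" into `docker compose up -d`.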

medium

No Structured Logging Framework

No logging library or log level configuration visible. The agent execution layer handles LLM tool calls, file system access, and 20+ channel integrations. Without structured logs (pino/winston), debugging production issues across platforms is archaeology, not engineering.
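
pino or winston would be the idiomatic fix; as a dependency-free sketch of what structured output buys, here is a minimal JSON-lines logger (the service name and field names are illustrative):

```typescript
type Level = "debug" | "info" | "warn" | "error";

// Minimal JSON-lines logger in the spirit of pino: one parseable object
// per event, with level and channel as queryable fields.
function makeLogger(base: Record<string, unknown>) {
  const emit = (level: Level, msg: string, fields: Record<string, unknown> = {}): string =>
    JSON.stringify({ level, time: Date.now(), ...base, ...fields, msg });
  return {
    info: (msg: string, f?: Record<string, unknown>) => emit("info", msg, f),
    error: (msg: string, f?: Record<string, unknown>) => emit("error", msg, f),
  };
}

// Illustrative usage: every line is grep-able and machine-parseable.
const log = makeLogger({ service: "openclaw-agent" });
const line = log.error("tool call failed", { channel: "slack", tool: "fs.read" });
```

Each emitted line can be filtered by `channel` or `level` across all 20+ platforms, which is exactly what free-form console output cannot do.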

medium

CodeQL Only — No Unit/Integration Test Evidence

5,274 test files are claimed, but no test/ directory appears in the file tree. CodeQL is static analysis, not functional testing. For a multi-agent system where LLM outputs drive downstream actions, only runtime integration tests can validate behavior. Their absence from the tree suggests the tests may be framework-generated or nonexistent.
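
A hedged sketch of the kind of runtime check the tree lacks: validating that malformed LLM output is rejected before it drives a downstream action. The guard function and tool allowlist below are hypothetical, not from the repo:

```typescript
// Hypothetical guard: an LLM tool call must match an allowlisted shape
// before execution — the behavior only a runtime test, not CodeQL, can verify.
type ToolCall = { tool: "read_file" | "send_message"; args: Record<string, string> };

function parseToolCall(raw: string): ToolCall | null {
  try {
    const v = JSON.parse(raw);
    if (v?.tool !== "read_file" && v?.tool !== "send_message") return null;
    if (typeof v.args !== "object" || v.args === null) return null;
    return v as ToolCall;
  } catch {
    return null; // malformed JSON never reaches a downstream action
  }
}
```

An integration suite would assert that garbage, unknown tools, and missing arguments all return null, and that only allowlisted calls pass through.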
