openclaw/openclaw
Results shown are from scan oTOB9waY (2026-05-09), even if a newer public scan exists for this repo.
Top findings (AI)
7,671 Open Issues — Maintenance Collapse
7,671 open issues against 370k stars is the equivalent of 2.1% of stargazers filing an issue. At this ratio, a two-person engineering team will spend all of its cycles triaging rather than shipping. No issue SLA or moderation policy is visible in .github/ISSUE_TEMPLATE/. This is a maintenance debt spiral, not a healthy open-source project.
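An issue SLA can be encoded directly in the repository. The sketch below is a hypothetical GitHub Actions workflow using actions/stale; the thresholds, label, and message are assumptions, not settings found in the repo.

```yaml
# .github/workflows/stale.yml — hypothetical sketch, not present in the repo
name: Close stale issues
on:
  schedule:
    - cron: "0 3 * * *"   # run once a day

permissions:
  issues: write

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-stale: 60          # assumed SLA: flag after 60 days of inactivity
          days-before-close: 14          # close 14 days after being flagged
          stale-issue-label: "stale"
          stale-issue-message: >
            This issue has been inactive for 60 days and will be closed
            in 14 days unless there is new activity.
```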
No Error Tracking Infrastructure
The scan confirms error tracking is absent (error tracking: false). The application handles AI model responses across 20+ channel integrations (Discord, Slack, Telegram, etc.) with agent execution loops. Without Sentry, DataDog, or an equivalent, runtime failures in production customer environments are invisible: a single LLM response-parsing error silently drops a user request.
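As a minimal sketch of what error tracking could look like, assuming a Node/TypeScript runtime and Sentry (the DSN, the handler, and the agent entry point below are placeholders, not code from the repo):

```typescript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,   // placeholder DSN, injected per deployment
  tracesSampleRate: 0.1,         // sample 10% of transactions
});

// Hypothetical stand-in for the repo's real agent entry point.
async function dispatchToAgent(channel: string, payload: unknown): Promise<void> {
  if (payload == null) throw new Error(`empty payload on ${channel}`);
  // ... real agent execution would happen here
}

// Wraps channel message handling so failures are reported instead of vanishing.
async function handleIncomingMessage(channel: string, payload: unknown): Promise<void> {
  try {
    await dispatchToAgent(channel, payload);
  } catch (err) {
    // Without this, a parsing or model failure silently drops the user's request.
    Sentry.captureException(err, { tags: { channel } });
    throw err;
  }
}

void handleIncomingMessage("discord", { text: "hello" });
```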
Zero Dependencies Declared in Scan
The OSV scan ran against zero declared dependencies. That means one of three things: (a) no package.json exists, (b) dependencies are managed but not scanned, or (c) the scan output was malformed. For a TypeScript project with agent skills, plugins, and extensions, a dependency-free manifest is implausible, so the supply chain risk is simply unquantified.
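For the OSV scan to have anything to check, dependencies need to be declared in a manifest and resolved in a lockfile. The package.json below is a hypothetical sketch; the package names and versions are illustrative, not taken from the repo.

```json
{
  "name": "openclaw",
  "private": true,
  "dependencies": {
    "discord.js": "^14.0.0",
    "pino": "^9.0.0"
  },
  "devDependencies": {
    "typescript": "^5.4.0",
    "vitest": "^2.0.0"
  }
}
```

With a package-lock.json (or pnpm-lock.yaml) committed alongside it, osv-scanner can resolve exact versions and report known vulnerabilities against them.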
No Deployment Config for Self-Hosted Customers
.github/workflows/docker-release.yml builds images, but no vercel.json, fly.toml, or other cloud-native config is visible. For a 'personal AI assistant' product, self-hosted customers need Helm charts, a docker-compose file, or systemd unit files. Without these, onboarding paying customers requires undocumented manual deployment steps.
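A hypothetical docker-compose.yml for self-hosted customers might look like the sketch below; the image path, port, environment variable, and volume are all assumptions, not values taken from the docker-release workflow.

```yaml
# Hypothetical sketch for self-hosted deployments; names and ports are assumed.
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest   # assumed registry path
    restart: unless-stopped
    ports:
      - "8080:8080"                           # assumed service port
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}       # assumed provider credential
    volumes:
      - openclaw-data:/data                   # assumed persistent state path

volumes:
  openclaw-data:
```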
No Structured Logging Framework
No logging library or log-level configuration is visible. The agent execution layer handles LLM tool calls, file-system access, and 20+ channel integrations. Without structured logs (pino/winston), debugging production issues across platforms is archaeology, not engineering.
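A structured-logging sketch using pino, assuming a Node/TypeScript runtime; the field names (channel, tool, durationMs) are illustrative, not the repo's schema:

```typescript
// Hypothetical sketch: one pino logger, child loggers per channel integration.
import pino from "pino";

const logger = pino({
  level: process.env.LOG_LEVEL ?? "info",   // log level driven by configuration
});

// A child logger stamps the channel on every line, so cross-platform
// debugging becomes a query instead of archaeology.
const discordLog = logger.child({ channel: "discord" });

discordLog.info({ tool: "file_read", durationMs: 42 }, "tool call completed");
discordLog.error({ err: new Error("model response parse failure") }, "dropping request");
```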
CodeQL Only — No Unit/Integration Test Evidence
5,274 test files are claimed, but no test/ directory appears in the file tree. CodeQL is static analysis, not functional testing. For a multi-agent system where LLM outputs drive downstream actions, only runtime integration tests can validate behavior; their absence suggests the tests are framework-generated or nonexistent.
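As an illustration of the kind of runtime check the scan could not find, here is a minimal vitest sketch; runAgentTurn is a hypothetical stand-in for whatever entry point the agent actually exports:

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical stand-in for the repo's agent entry point; the real export is unknown.
async function runAgentTurn(input: { channel: string; text: string }): Promise<{ reply: string }> {
  return { reply: `echo: ${input.text}` }; // stub so the sketch runs standalone
}

describe("agent execution loop", () => {
  it("returns a reply instead of silently dropping the request", async () => {
    const result = await runAgentTurn({ channel: "telegram", text: "ping" });
    expect(result.reply).toContain("ping");
  });
});
```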