Hey HN - I built agnix because I kept losing time to the same class of bug: AI tool configs that are almost right but silently wrong.
The trigger: I had a Claude Code skill named `Review-Code`. It never auto-triggered. No error message. I spent 20 minutes debugging my prompt before realizing the Agent Skills spec requires kebab-case names (`review-code`). The spec is clear about this, but Claude Code doesn't validate it - it just silently ignores the skill.
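For context, a skill here is a SKILL.md file with a small YAML frontmatter block, roughly like this (illustrative - trimmed to the relevant fields, path follows the usual project-skill layout):

```yaml
# .claude/skills/review-code/SKILL.md - frontmatter only (illustrative)
---
name: review-code   # kebab-case required; "Review-Code" never triggers, with no warning
description: Reviews a diff for bugs, style issues, and missing tests
---
```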
This happens across the entire AI tool ecosystem. Cursor has .mdc rule files with YAML frontmatter - invalid YAML means your rule metadata is silently dropped. MCP server configs can use deprecated transport types with no warning. Claude Code hooks support `type: "prompt"` only on Stop and SubagentStop events - use it on PreToolUse and nothing happens. GitHub Copilot instruction files need valid glob patterns in their `applyTo` field - a malformed glob matches nothing.
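To make that last one concrete, a Copilot instructions file is Markdown with a frontmatter block whose `applyTo` glob scopes the instructions - something like this (illustrative, only the relevant field shown):

```yaml
# .github/instructions/typescript.instructions.md - frontmatter only (illustrative)
---
applyTo: "**/*.ts"   # a malformed pattern here fails silently - the instructions just never apply
---
```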
None of these tools validate their own configuration files. It's the same gap that ESLint filled for JavaScript or clippy fills for Rust: catching things that are syntactically valid but semantically wrong.
agnix currently has 156 validation rules across 28 categories, covering 11 tools (Claude Code, Cursor, GitHub Copilot, Codex CLI, Cline, MCP, OpenCode, Gemini CLI, and more). Every rule is traced to an authoritative source - official specs, vendor documentation, or research papers.
Technical choices:
- Written in Rust, parallel validation via rayon
- LSP server for real-time IDE diagnostics (VS Code, JetBrains, Neovim, Zed)
- 57 auto-fixable rules (`agnix --fix .`)
- SARIF 2.1.0 output for CI/security workflows
- GitHub Action for CI integration
- Deterministic benchmarks via iai-callgrind in CI (blocks merge on perf regression)
- Single file: <10ms. 100 files: ~200ms.
Try it: `npx agnix .` (zero install, zero config)
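If you'd rather skip the Action, the same command drops straight into CI. A minimal sketch (assuming the usual linter behavior of a non-zero exit on violations):

```yaml
# .github/workflows/agnix.yml - minimal CI sketch (the GitHub Action wraps the same check)
name: agnix
on: [push, pull_request]
jobs:
  lint-agent-configs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx agnix .   # non-zero exit on violations fails the job
```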
I'd like feedback on two things: (1) whether the rule coverage feels right - are there config mistakes I'm not catching? and (2) whether anyone has thoughts on the cross-platform validation approach (detecting conflicts between CLAUDE.md, AGENTS.md, and .cursorrules in the same project).
I also wrote a longer post about the problem space: https://dev.to/avifenesh/your-ai-agent-configs-are-probably-...
MIT/Apache-2.0.