Semantic code analysis powered by neuro-symbolic reasoning. Extract structural facts from source code, query for patterns and issues with deterministic Prolog rules, and verify that code changes align with stated goals — catching drift, scope creep, and unintended side effects.
AI agents write code confidently but frequently drift from the stated goal — adding unrelated changes, deleting important functions, or introducing subtle dependency shifts. Static analysis catches syntax errors but can't evaluate intent. Invariant bridges the gap: it uses LLM inference to understand what the code does semantically, then uses deterministic symbolic reasoning to verify whether that matches what was intended. Add a few lines to your agent's system prompt and every code change gets verified before it's presented as complete.
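What those "few lines" look like will depend on your agent framework; the sketch below is hypothetical wording (only the tool name `invariant.diff_analyzer` comes from this page):

```text
After completing any code change, call invariant.diff_analyzer with the
original task description as the goal. If the alignment score is low, or
the analyzer flags changes unrelated to the goal, revise the change and
re-verify before reporting the task as complete.
```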
The most powerful use of Invariant is as an automatic verification step in an agent's coding loop.
After making code changes, the agent calls invariant.diff_analyzer with its stated goal.
If the alignment score is low or unexpected changes are flagged, the agent revises before presenting
its work. This creates a neuro-symbolic feedback loop: fuzzy reasoning writes the code,
deterministic reasoning verifies it, and fuzzy reasoning interprets the feedback to self-correct.
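The loop described above can be sketched in a few lines. Note that this is an illustrative stand-in, not the real API: the actual `invariant.diff_analyzer` call goes through an MCP client, and the parameter names (`goal`, `diff`), result fields (`alignment_score`, `flagged_changes`), and the 0.8 threshold are all assumptions made for the sketch.

```python
def diff_analyzer(goal: str, diff: str) -> dict:
    """Stand-in for the MCP tool call; returns a canned verdict for the demo.
    The real tool applies deterministic rules over extracted code facts."""
    unrelated = "delete" in diff and "delete" not in goal
    return {
        "alignment_score": 0.4 if unrelated else 0.95,
        "flagged_changes": ["removed helper function"] if unrelated else [],
    }

def coding_loop(goal: str, write_code, max_revisions: int = 3) -> str:
    diff = write_code(goal, feedback=None)           # fuzzy step: LLM writes code
    for _ in range(max_revisions):
        verdict = diff_analyzer(goal, diff)          # deterministic verification
        if verdict["alignment_score"] >= 0.8 and not verdict["flagged_changes"]:
            return diff                              # aligned: present the work
        diff = write_code(goal, feedback=verdict)    # fuzzy step: self-correct
    return diff

# Hypothetical agent that first over-reaches, then corrects on feedback.
def write_code(goal, feedback):
    if feedback is None:
        return "+ add retry logic\n- delete old_helper"
    return "+ add retry logic"

result = coding_loop("add retry logic to the HTTP client", write_code)
print(result)  # the revised diff, with the unrelated deletion dropped
```

The first draft includes an unrelated deletion, the verdict flags it, and the revised diff passes on the second check.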
Use with the Invariant CLI for local
tree-sitter fact extraction and CI pipeline integration. Facts uploaded by the CLI are
queryable via invariant.code_query through any MCP client.
Combine with Logic tools to persist analysis results across sessions,
or pipe into Flow for automated code review workflows.