
Independent evidence layer for AI infrastructure

Audit AI routes before production trusts them.

TokenAuditor turns gateway, model-route, and fallback claims into local, reproducible evidence.

Transparent to users. Fair to suppliers. We stand only with evidence.

Verified Gateway Review

Supplier support route

Claimed route: LiteLLM support triage / GPT-4 class
Observed signals: latency drift, premium claim, fallback not disclosed
Risk state: Probe
Disclosure state: Incomplete
Next action: generate scoped probe plan before rollout

Claims do not fail on tone; they are reviewed against route, disclosure, drift, and reproducibility.

Transparent to users

Every audit states what was sampled, what was redacted, and what stayed local.

Fair to suppliers

Public findings carry scope, confidence, limits, and room for supplier response.

Local-first evidence

Route inspection begins beside the workflow, before secrets or raw prompts leave the machine.

Verified Gateway

What a gateway claim has to survive.

Before benchmark claims, rankings, or price promises, the route itself has to hold under review.

01

Model identity consistency

Does the route behave like the model the supplier claims to be serving?

02

Fallback disclosure

Do buyers know when premium claims are backed by silent downgrade logic?

03

Tool-call boundary integrity

Can a route preserve agent instructions without injecting or reshaping tool behavior?

Evidence Bundle

A claim becomes useful when the evidence is portable.

TokenAuditor produces a memo that buyers, operators, and suppliers can inspect without pretending one anomaly proves everything.

Sample evidence memo (local draft)

Scope: support-triage route review for a shared-key gateway
Claim: supplier states GPT-4-class service with premium handling
Signals: fallback omission, latency drift, route alias mismatch
Sampling: 1% baseline; probe recommended after disclosure gap
Decision: probe before broad production trust
Confidence: moderate, pending baseline comparison window
Limitations: no raw prompt collection, no active probe executed yet

Review posture

current phase

Evidence first. Product surface second.

At this stage, TokenAuditor should read like a serious review standard, not a claim playground. The evidence path matters more than interactive flourish.

Disclosure state: supplier-facing review before public accusation
Recommended next action: expand evidence collection and repeatability before broader workflow demos.
  • Keep the homepage focused on trust boundaries, evidence standards, and audit posture.
  • Move interactive claim testing to a later product page once the public thesis is more established.
  • Let the sample memo carry the narrative instead of asking visitors to role-play the workflow.

Fair Audit Protocol

Audit rules that protect both sides of the trust boundary.

TokenAuditor does not ask buyers to trust a black box in order to measure another black box.

  • 100% metadata visibility
  • 1% baseline sample
  • 5% deep-audit ceiling
  • 0% secret collection
  • Consent before active probes
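The limits above can be read as a simple policy gate. The sketch below encodes them in TypeScript; the names (AuditPolicy, canDeepAudit, canProbe) are illustrative assumptions, not TokenAuditor's actual API.

```typescript
// Hypothetical encoding of the published audit limits as a policy gate.
interface AuditPolicy {
  metadataVisibility: number; // fraction of traffic with visible metadata
  baselineSampleRate: number; // passive baseline sampling rate
  deepAuditCeiling: number;   // hard upper bound on content-level sampling
  collectSecrets: boolean;    // always false: secret-blind by default
  probeConsent: boolean;      // flips true only after operator approval
}

const policy: AuditPolicy = {
  metadataVisibility: 1.0,  // 100% metadata visibility
  baselineSampleRate: 0.01, // 1% baseline sample
  deepAuditCeiling: 0.05,   // 5% deep-audit ceiling
  collectSecrets: false,    // 0% secret collection
  probeConsent: false,      // consent before active probes
};

// A deep audit is allowed only within the ceiling and never touches secrets.
function canDeepAudit(p: AuditPolicy, requestedRate: number): boolean {
  return requestedRate <= p.deepAuditCeiling && !p.collectSecrets;
}

// Active probes require explicit consent, regardless of sampling rates.
function canProbe(p: AuditPolicy): boolean {
  return p.probeConsent;
}
```

The point of the sketch: the ceilings are checked before any content-level work starts, and probe consent is a separate switch rather than a side effect of sampling.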

Users see the audit boundary.

Route, model, token, retry, latency, and schema signals are visible before any content-level review.
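As a sketch, a metadata-only trace record of that kind might look like the following. The field names are assumptions for illustration, not TokenAuditor's actual schema; the deliberate omissions are the point.

```typescript
// Illustrative shape of a metadata-only trace record.
interface RouteTraceMetadata {
  route: string;         // gateway route or alias that handled the call
  claimedModel: string;  // model the supplier claims to serve
  servedModel?: string;  // model observed in response metadata, if exposed
  promptTokens: number;
  completionTokens: number;
  retries: number;
  latencyMs: number;
  schemaValid: boolean;  // did the response match the expected schema?
  // Deliberately absent: prompt text, completion text, API key values.
}

// Example entry: everything here is reviewable without content access.
const sample: RouteTraceMetadata = {
  route: "support-triage",
  claimedModel: "gpt-4-class",
  promptTokens: 412,
  completionTokens: 138,
  retries: 1,
  latencyMs: 2840,
  schemaValid: true,
};
```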

Suppliers see the basis of judgment.

Scope, sample size, confidence, limitations, and response status stay attached to public-facing conclusions.

Public claims wait for reproducible evidence.

No definitive fraud label from one anomaly, no paid path to a passing result, no hidden commercial influence in scores.

Search and AI Retrieval

What Google and buyers should be able to understand quickly.

TokenAuditor should be easy to summarize accurately: what it is, what it audits today, and what it will not do.

What is TokenAuditor.ai?

TokenAuditor.ai is an MCP-first evidence layer for AI infrastructure. It audits gateway integrity, model-route claims, fallback disclosure, and tool-call boundaries before production trust sets in.

What does TokenAuditor MCP audit today?

Today the MCP focuses on route discovery, redacted trace review, model identity comparison, degradation-window checks, policy gating, and local evidence memos for AI gateways, routers, and aggregators.

Does TokenAuditor collect prompts or secrets?

No. TokenAuditor is local-first and secret-blind by default. It does not require API key values, does not upload raw prompts, and does not run active probes without explicit approval.

Secondary entry

Install the MCP locally once the thesis is clear.

TokenAuditor begins as a local stdio MCP server for route discovery, redacted trace review, policy gating, and local evidence capture.

npm

Run locally

npx -y tokenauditor

Use this for direct local runs or MCP clients that start servers through a command.

Codex

Client setup

[mcp_servers.tokenauditor]
command = "npx"
args = ["-y", "tokenauditor"]

Read the full setup guide for Codex, Claude Desktop, and Cursor.
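For Claude Desktop, the equivalent entry uses the standard mcpServers block in claude_desktop_config.json. This is a sketch mirroring the Codex setup; confirm the details against the full setup guide.

```json
{
  "mcpServers": {
    "tokenauditor": {
      "command": "npx",
      "args": ["-y", "tokenauditor"]
    }
  }
}
```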

Boundary

Local-first by default

No default API key reading. No secret upload. No active probes unless the operator approves the probe plan.
