API Reference
Post-quantum AI compliance infrastructure. Every AI decision signed in microseconds, anchored permanently on Hedera. Built for EU AI Act (Articles 9–15, 17, 26, 49, 72, 73, Annexes IV, XI, XII), SR 11-7, NIST AI RMF, TX TRAIGA, CO AI Act, ISO 42001, FINRA, SEC Rule 17a-4, and MiFID II.
API Keys
Include your API key in every request via the x-api-key header.
```
# Every request requires this header
x-api-key: your-api-key-here
```

Get your API key by emailing Scott@Rubric-Protocol.com or from the dashboard.
Federation Endpoints
Connect to the nearest node for lowest latency. All nodes are interoperable.
| Region | Base URL | Location |
|---|---|---|
| US | https://rubric-protocol.com/verify | Virginia |
| EU | https://eu.rubric-protocol.com/verify | Frankfurt |
| CA | https://ca.rubric-protocol.com/verify | Toronto |
| SG | https://sg.rubric-protocol.com/verify | Singapore |
| JP | https://jp.rubric-protocol.com/verify | Tokyo |
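A client can resolve a region code to its base URL with a small lookup. A minimal sketch, with region codes and URLs taken from the table above; the helper itself is illustrative:

```python
# Federation node base URLs, keyed by region code (from the table above).
FEDERATION_NODES = {
    "us": "https://rubric-protocol.com/verify",
    "eu": "https://eu.rubric-protocol.com/verify",
    "ca": "https://ca.rubric-protocol.com/verify",
    "sg": "https://sg.rubric-protocol.com/verify",
    "jp": "https://jp.rubric-protocol.com/verify",
}

def base_url(region: str = "us") -> str:
    """Return the base URL for a federation region, defaulting to US."""
    try:
        return FEDERATION_NODES[region.lower()]
    except KeyError:
        raise ValueError(
            f"unknown region {region!r}; expected one of {sorted(FEDERATION_NODES)}"
        )
```

Pinning the region in one place makes it easy to switch nodes (e.g. for data-residency reasons) without touching call sites.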
Limits by Tier
Rate limits vary by subscription tier; exceeding your tier's limit returns 429 Too Many Requests. See full pricing →
API Explorer
Make real API calls against the Rubric federation. Enter your API key and try any endpoint instantly.
POST /v1/attest
Standard attestation. Signed with ML-DSA-65, queued for Merkle batching and HCS anchoring. Returns immediately.
Request Body
| Field | Type | Description |
|---|---|---|
| sourceId | string | Required. AI system identifier. Max 256 chars. |
| data | object | Required. Decision payload. Hashed before signing. |
| sessionId | string | Optional. Links to a session. Required for Article 12(3)(a). |
| modelId | string | Optional. Model identifier. Required for Article 12(3)(b). |
| modelVersion | string | Optional. Model version. Required for Article 12(3)(b). |
| confidenceScore | number | Optional. 0.0–1.0. Article 13 transparency. |
| riskLevel | string | Optional. One of low/medium/high/critical. Article 9. |
| environment | string | Optional. One of production/staging/test. Article 17. |
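Catching constraint violations client-side avoids a round trip for malformed requests. A minimal validation sketch, with field names and ranges taken from the table above; the validator itself is illustrative:

```python
# Allowed enum values, from the request-body field table.
VALID_RISK_LEVELS = {"low", "medium", "high", "critical"}
VALID_ENVIRONMENTS = {"production", "staging", "test"}

def validate_attest_body(body: dict) -> list[str]:
    """Return a list of validation errors (empty if the body looks valid)."""
    errors = []
    if not isinstance(body.get("sourceId"), str) or not body["sourceId"]:
        errors.append("sourceId is required and must be a string")
    elif len(body["sourceId"]) > 256:
        errors.append("sourceId exceeds 256 chars")
    if not isinstance(body.get("data"), dict):
        errors.append("data is required and must be an object")
    score = body.get("confidenceScore")
    if score is not None and not (0.0 <= score <= 1.0):
        errors.append("confidenceScore must be between 0.0 and 1.0")
    if body.get("riskLevel") not in VALID_RISK_LEVELS | {None}:
        errors.append("riskLevel must be low/medium/high/critical")
    if body.get("environment") not in VALID_ENVIRONMENTS | {None}:
        errors.append("environment must be production/staging/test")
    return errors
```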
```shell
curl -X POST https://rubric-protocol.com/verify/v1/attest \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "sourceId": "my-model",
    "data": {"decision": "approved", "confidence": 0.94},
    "modelId": "gpt-4o",
    "modelVersion": "2024-11-01",
    "confidenceScore": 0.94,
    "riskLevel": "medium",
    "environment": "production"
  }'
```
```javascript
const res = await fetch('https://rubric-protocol.com/verify/v1/attest', {
  method: 'POST',
  headers: {
    'x-api-key': process.env.RUBRIC_API_KEY,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    sourceId: 'my-model',
    data: { decision: 'approved' },
    modelId: 'gpt-4o',
    environment: 'production'
  })
});
const proof = await res.json();
```
```python
import requests

proof = requests.post(
    'https://rubric-protocol.com/verify/v1/attest',
    headers={'x-api-key': 'YOUR_API_KEY'},
    json={
        'sourceId': 'my-model',
        'data': {'decision': 'approved'},
        'modelId': 'gpt-4o',
        'environment': 'production'
    }
).json()
```
Response

```json
{
  "success": true,
  "status": "pending",
  "attestationId": "5d5b203a-...",
  "submittedAt": "2026-03-21T12:00:00.000Z",
  "topic": "0.0.10416909"
}
```

Response Codes

Requests missing the required sourceId or data are rejected.

GET /v1/status/:attestationId
Poll anchoring status. Once confirmed the attestation has an immutable HCS consensus timestamp.
| Status | Meaning |
|---|---|
| pending | Signed and queued, awaiting Merkle batch flush |
| anchored | Merkle root submitted to HCS |
| confirmed | HCS consensus achieved — immutable timestamp assigned |
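The status progression above lends itself to a simple polling loop. A minimal sketch, assuming the status response carries a status field with the values from the table; the delay schedule and timeout are illustrative:

```python
import time

def backoff_delays(initial: float = 1.0, factor: float = 2.0, cap: float = 10.0):
    """Yield capped, exponentially growing poll delays: 1, 2, 4, 8, 10, 10, ..."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

def wait_for_confirmation(attestation_id: str, api_key: str,
                          base_url: str = "https://rubric-protocol.com/verify",
                          timeout_s: float = 60.0) -> dict:
    """Poll GET /v1/status/:attestationId until the attestation is confirmed."""
    import requests  # third-party; pip install requests
    deadline = time.monotonic() + timeout_s
    for delay in backoff_delays():
        resp = requests.get(f"{base_url}/v1/status/{attestation_id}",
                            headers={"x-api-key": api_key}, timeout=10)
        resp.raise_for_status()
        status = resp.json()
        if status.get("status") == "confirmed":
            return status
        if time.monotonic() + delay > deadline:
            break
        time.sleep(delay)
    raise TimeoutError(f"attestation {attestation_id} not confirmed within {timeout_s}s")
```

Backoff keeps polling well under the tier rate limits while the Merkle batch flush and HCS consensus complete.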
GET /v1/verify/:attestationId
Cryptographically verify an attestation. Returns ML-DSA-65 signature verification result and per-article compliance status. Any third party — including regulators — can independently verify.
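A caller that needs to gate on the result can separate signature validity from per-article gaps. A minimal sketch, with field names taken from the example response; the helper itself is illustrative:

```python
def verification_summary(resp: dict) -> tuple[bool, list[str]]:
    """Return (cryptographically_verified, failing_articles) from a /v1/verify response."""
    # Both the overall verification and the ML-DSA-65 signature check must pass.
    ok = bool(resp.get("verified")) and bool(resp.get("sigValid"))
    # Collect any compliance articles reported as not satisfied.
    failing = sorted(
        article for article, passed in resp.get("complianceStatus", {}).items()
        if not passed
    )
    return ok, failing
```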
```json
{
  "verified": true,
  "sigValid": true,
  "attestationId": "5d5b203a-...",
  "seqNum": 1847,
  "securityLevel": "ML-DSA-65",
  "confirmedAt": "2026-03-21T12:00:34Z",
  "complianceStatus": {
    "article12_1": true,
    "article12_2": true,
    "article12_3b": true,
    "article14": false
  }
}
```

See /v1/attest for EU AI Act Annex IV, SR 11-7, NIST AI RMF, and state law compliance coverage.

POST /v1/session/start
Article 12(3)(a) — Record the start of an AI system use period.
```json
{
  "sourceId": "my-model",
  "purpose": "credit-decisioning",
  "environment": "production"
}
```

Response

```json
{
  "success": true,
  "sessionId": "9326da14-...",
  "startTime": "2026-03-21T20:29:10Z",
  "attestationId": "4a4511ae-..."
}
```

POST /v1/session/end
Article 12(3)(a) — Close a session, recording end timestamp and outcome.
```json
{
  "sessionId": "9326da14-...",
  "sourceId": "my-model",
  "outcome": "approved",
  "decisionCount": 1
}
```

POST /v1/human-override
Articles 12(3)(d) + 14 — Attest a human reviewer overriding an AI decision. reviewerId is required.
```json
{
  "sourceId": "my-model",
  "originalAttestationId": "4a4511ae-...",
  "reviewerId": "officer-jane",
  "overrideDecision": "rejected",
  "overrideReason": "Manual review identified risk"
}
```

POST /v1/incident
Article 73 — File a serious incident report. Severity critical or fatal triggers a 2-day regulatory notification window; serious triggers a 15-day window.
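The severity-to-deadline mapping can be encoded as a simple lookup. A minimal sketch; only the severity names and day counts come from the text above, the helper itself is illustrative:

```python
from datetime import datetime, timedelta, timezone

# Regulatory notification windows per Article 73, as described above.
NOTIFICATION_DAYS = {"critical": 2, "fatal": 2, "serious": 15}

def notification_deadline(severity: str, detected_at: datetime) -> datetime:
    """Return the latest regulator-notification time for an incident."""
    try:
        days = NOTIFICATION_DAYS[severity]
    except KeyError:
        raise ValueError(f"unknown severity {severity!r}")
    return detected_at + timedelta(days=days)
```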
```json
{
  "sourceId": "my-model",
  "severity": "serious",
  "description": "Bias detected in Q1",
  "incidentPeriodStart": "2026-01-01T00:00:00Z",
  "incidentPeriodEnd": "2026-03-31T23:59:59Z",
  "reportedBy": "compliance-officer"
}
```

POST /v1/retention-lock
Article 26(5) — Lock records for regulatory retention. Minimum 1,825 days (5 years) for EU AI Act.
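A client might enforce the 5-year floor before submitting a lock. A minimal sketch; the 1,825-day minimum comes from the text above, the builder itself is illustrative:

```python
EU_AI_ACT_MIN_DAYS = 1825  # 5 years, per EU AI Act Article 26(5)

def build_retention_lock(source_id: str, days: int, locked_by: str,
                         regulatory_basis: str = "EU AI Act Article 26(5)") -> dict:
    """Build a /v1/retention-lock body, rejecting periods below the EU AI Act floor."""
    if regulatory_basis.startswith("EU AI Act") and days < EU_AI_ACT_MIN_DAYS:
        raise ValueError(
            f"EU AI Act retention requires >= {EU_AI_ACT_MIN_DAYS} days, got {days}"
        )
    return {
        "sourceId": source_id,
        "retentionPeriodDays": days,
        "scope": "all",
        "regulatoryBasis": regulatory_basis,
        "lockedBy": locked_by,
    }
```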
```json
{
  "sourceId": "my-model",
  "retentionPeriodDays": 1825,
  "scope": "all",
  "regulatoryBasis": "EU AI Act Article 26(5)",
  "lockedBy": "compliance-officer"
}
```

POST /v1/risk-assessment
ISO/IEC 42001 Clause 6.1 — Document an AI risk assessment. Satisfies Annex A control A.6.1.3.
```json
{
  "sourceId": "my-model",
  "risksIdentified": ["bias", "drift"],
  "mitigations": ["monthly audits"],
  "residualRisk": "low",
  "assessedBy": "risk-officer-id"
}
```

Response

```json
{
  "success": true,
  "assessmentId": "ab7378b5-...",
  "isoClause": "6.1 — Risk Assessment",
  "standard": "ISO/IEC 42001:2023"
}
```

POST /v1/impact-assessment
ISO/IEC 42001 Clause 8.2 — Record an AI impact assessment before deployment. Satisfies Annex A control A.8.2.
```json
{
  "sourceId": "my-model",
  "intendedPurpose": "Credit decisioning",
  "affectedStakeholders": ["applicants"],
  "potentialHarms": ["discriminatory denial"],
  "overallRating": "medium-risk",
  "assessedBy": "ethics-officer-id"
}
```

POST /v1/corrective-action
ISO/IEC 42001 Clause 10.1 — Document corrective actions for a nonconformity. Links to an incident record. Satisfies Annex A control A.10.1.
```json
{
  "sourceId": "my-model",
  "incidentId": "INC-1711234567890",
  "nonconformityDescription": "Bias in Q1",
  "rootCause": "Training data imbalance",
  "correctiveActions": ["Retrained model"],
  "implementedBy": "ml-lead-id",
  "status": "implemented"
}
```

FINRA 2026 — AI Governance
FINRA's 2026 Annual Regulatory Oversight Report requires firms to maintain prompt and output logging, version tracking, audit trails of AI agent actions, and explicit human checkpoints before execution.
| FINRA Requirement | Rubric Coverage |
|---|---|
| Prompt and output logging | /v1/attest → inputHash + outputHash |
| Version tracking | modelId + modelVersion fields |
| Audit trails of AI agent actions | Every decision HCS-anchored, tamper-evident |
| Human checkpoints before execution | /v1/human-override with mandatory reviewerId |
| Books-and-records for AI communications | ML-DSA-65 signed, retention-locked records |
| AI governance documentation | /v1/risk-assessment + /v1/impact-assessment |
SEC Rule 17a-4 — Books and Records
Requires broker-dealers to preserve records in a non-rewriteable, non-erasable format. Rubric's HCS anchoring provides tamper-evident immutable records.
| Requirement | Rubric Coverage |
|---|---|
| Non-rewriteable records | HCS consensus — immutable once anchored |
| Tamper detection | ML-DSA-65 signature invalidated by any alteration |
| Retention period enforcement | /v1/retention-lock with configurable period |
| Record accessibility | Public /v1/verify — regulator-accessible |
MiFID II Article 17 — Algorithmic Trading
Requires records sufficient to reconstruct all orders including the specific algorithm responsible for each decision.
| Requirement | Rubric Coverage |
|---|---|
| Algorithm identification | sourceId + modelId + modelVersion |
| Decision reconstruction | Full attestation record with input/output hashes |
| Session-level records | /v1/session/start + /v1/session/end |
| Post-quantum integrity | ML-DSA-65 signatures, designed to resist forgery even by quantum computers |
SR 11-7 — Model Risk Management
Federal Reserve and OCC guidance requiring effective model challenge, independent validation, and ongoing monitoring for banks deploying AI.
| Requirement | Rubric Coverage |
|---|---|
| Model validation documentation | environment: "test" attestations create a paper trail |
| AI risk assessment | /v1/risk-assessment — ISO 42001 Clause 6.1 |
| Model failure documentation | /v1/incident + /v1/corrective-action |
| Version change management | modelVersion tracks changes across deployments |
GET /v1/health
Node health check. Use for uptime monitoring and load balancer health probes.
```json
{ "status": "ok", "ts": "2026-03-21T12:00:00.000Z" }
```

Federation Nodes
Five independent nodes across five regions. All interoperable — attestations created on any node are verifiable on all others.
| Region | Base URL | Health |
|---|---|---|
| US · Virginia | https://rubric-protocol.com/verify | /v1/health |
| EU · Frankfurt | https://eu.rubric-protocol.com/verify | /v1/health |
| CA · Toronto | https://ca.rubric-protocol.com/verify | /v1/health |
| SG · Singapore | https://sg.rubric-protocol.com/verify | /v1/health |
| JP · Tokyo | https://jp.rubric-protocol.com/verify | /v1/health |
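A monitoring sweep across all five nodes follows directly from the table. A minimal sketch, with node URLs taken from the table above; the probe logic is illustrative:

```python
# Node base URLs by region, from the federation table above.
NODES = {
    "us": "https://rubric-protocol.com/verify",
    "eu": "https://eu.rubric-protocol.com/verify",
    "ca": "https://ca.rubric-protocol.com/verify",
    "sg": "https://sg.rubric-protocol.com/verify",
    "jp": "https://jp.rubric-protocol.com/verify",
}

def health_url(base: str) -> str:
    """Build the /v1/health probe URL for a node base URL."""
    return base.rstrip("/") + "/v1/health"

def sweep() -> dict:
    """Probe every federation node; map region -> True/False health result."""
    import requests  # third-party; pip install requests
    results = {}
    for region, base in NODES.items():
        try:
            r = requests.get(health_url(base), timeout=5)
            results[region] = r.ok and r.json().get("status") == "ok"
        except requests.RequestException:
            results[region] = False
    return results
```

Probing each node independently (rather than only the nearest) catches regional outages before failover traffic does.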
Installation
```shell
npm install @rubric-protocol/sdk
```
Quickstart
```javascript
import { RubricClient } from '@rubric-protocol/sdk';

const rubric = new RubricClient({
  apiKey: process.env.RUBRIC_API_KEY,
  region: 'eu',
  localSigning: true
});

const session = await rubric.session.start({
  sourceId: 'my-model',
  purpose: 'credit-decisioning',
  environment: 'production'
});

const proof = await rubric.attest({
  sourceId: 'my-model',
  sessionId: session.sessionId,
  data: { decision: 'approved' },
  modelId: 'gpt-4o',
  modelVersion: '2024-11-01',
  riskLevel: 'medium',
  environment: 'production'
});

await rubric.session.end({
  sessionId: session.sessionId,
  sourceId: 'my-model',
  outcome: 'approved'
});
```
LangChain
```javascript
import { RubricCallbackHandler } from '@rubric-protocol/sdk/plugins/langchain';

const chain = new LLMChain({
  llm: model,
  callbacks: [
    new RubricCallbackHandler({
      apiKey: process.env.RUBRIC_API_KEY,
      sourceId: 'my-app'
    })
  ]
});
// Every LLM call is automatically attested
```
OpenAI
```javascript
import { createRubricOpenAI } from '@rubric-protocol/sdk/plugins/openai';

// Drop-in replacement — every call attested automatically
const openai = createRubricOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  rubricApiKey: process.env.RUBRIC_API_KEY,
  sourceId: 'my-app'
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: '...' }]
});
```