TypeScript SDK - Getting Started

Add invisible AI governance to your TypeScript/JavaScript applications in 3 lines of code. No UI changes. No user training. Just drop-in enterprise protection.

Current Version: 1.4.0 | npm | GitHub

Installation

npm install @axonflow/sdk

Or with yarn:

yarn add @axonflow/sdk

Or with pnpm:

pnpm add @axonflow/sdk

Quick Start

Gateway Mode provides the most reliable integration by explicitly separating policy checks, LLM calls, and audit logging:

import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';

const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_AGENT_URL, // Agent endpoint (port 8080)
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  tenant: process.env.AXONFLOW_TENANT || 'default',
});
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function queryWithGovernance(query: string) {
  // 1. Pre-check: Get policy approval
  const ctx = await axonflow.getPolicyApprovedContext({
    userToken: 'user-123',
    query,
  });

  if (!ctx.approved) {
    throw new Error(`Query blocked: ${ctx.blockReason}`);
  }

  // 2. Make direct LLM call
  const start = Date.now();
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: query }]
  });
  const latencyMs = Date.now() - start;

  // 3. Audit the call
  await axonflow.auditLLMCall({
    contextId: ctx.contextId,
    responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
    provider: 'openai',
    model: 'gpt-4',
    tokenUsage: {
      promptTokens: response.usage?.prompt_tokens || 0,
      completionTokens: response.usage?.completion_tokens || 0,
      totalTokens: response.usage?.total_tokens || 0
    },
    latencyMs
  });

  return response.choices[0].message.content;
}
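
Calling the helper is then a single await:

const answer = await queryWithGovernance('What are the benefits of AI governance?');
console.log(answer);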

Proxy Mode

For the simplest integration where AxonFlow handles everything (policy + LLM routing + audit):

import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_AGENT_URL,
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
});

// Single call handles: policy check → LLM routing → audit
const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'What are the benefits of AI governance?',
  requestType: 'llm_chat',
  context: {
    provider: 'openai',
    model: 'gpt-4',
  },
});

if (response.blocked) {
  console.log('Blocked:', response.blockReason);
} else {
  console.log('Response:', response.data);
}

See Proxy Mode for more details.

LLM Interceptors Deprecated

The wrapOpenAIClient() and similar interceptor functions are deprecated as of v1.4.0 due to compatibility issues with modern LLM SDKs. Use Gateway Mode or Proxy Mode instead.
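
As a rough migration sketch (the deprecated interceptor call is shown only for contrast; its exact signature varied by release), the equivalent Proxy Mode call is:

// Before (deprecated as of v1.4.0):
// const wrapped = axonflow.wrapOpenAIClient(openai);
// const completion = await wrapped.chat.completions.create({ ... });

// After: one executeQuery() call covers policy check, LLM routing, and audit
const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Summarize the quarterly report',
  requestType: 'llm_chat',
  context: { provider: 'openai', model: 'gpt-4' },
});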

Framework Integration

Next.js API Route

// pages/api/chat.ts
import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';
import type { NextApiRequest, NextApiResponse } from 'next';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_AGENT_URL,
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  tenant: process.env.AXONFLOW_TENANT || 'default',
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const { prompt, userToken } = req.body;

  try {
    // 1. Pre-check policy approval
    const ctx = await axonflow.getPolicyApprovedContext({
      userToken,
      query: prompt,
    });

    if (!ctx.approved) {
      return res.status(403).json({ error: ctx.blockReason });
    }

    // 2. Make LLM call
    const start = Date.now();
    const response = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }]
    });
    const latencyMs = Date.now() - start;

    // 3. Audit the call
    await axonflow.auditLLMCall({
      contextId: ctx.contextId,
      responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
      provider: 'openai',
      model: 'gpt-4',
      tokenUsage: {
        promptTokens: response.usage?.prompt_tokens || 0,
        completionTokens: response.usage?.completion_tokens || 0,
        totalTokens: response.usage?.total_tokens || 0
      },
      latencyMs
    });

    res.status(200).json({ success: true, response: response.choices[0].message.content });
  } catch (error) {
    // catch variables are `unknown` in strict TypeScript, so narrow before use
    res.status(500).json({ error: error instanceof Error ? error.message : String(error) });
  }
}

Express.js

import express from 'express';
import { AxonFlow } from '@axonflow/sdk';
import OpenAI from 'openai';

const app = express();
app.use(express.json());

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const axonflow = new AxonFlow({
  endpoint: process.env.AXONFLOW_AGENT_URL,
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  tenant: process.env.AXONFLOW_TENANT || 'default',
});

app.post('/api/chat', async (req, res) => {
  const { prompt, userToken } = req.body;

  try {
    // 1. Pre-check
    const ctx = await axonflow.getPolicyApprovedContext({
      userToken,
      query: prompt,
    });

    if (!ctx.approved) {
      return res.status(403).json({ error: ctx.blockReason });
    }

    // 2. LLM call
    const start = Date.now();
    const response = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }]
    });
    const latencyMs = Date.now() - start;

    // 3. Audit
    await axonflow.auditLLMCall({
      contextId: ctx.contextId,
      responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
      provider: 'openai',
      model: 'gpt-4',
      tokenUsage: {
        promptTokens: response.usage?.prompt_tokens || 0,
        completionTokens: response.usage?.completion_tokens || 0,
        totalTokens: response.usage?.total_tokens || 0
      },
      latencyMs
    });

    res.json({ success: true, response: response.choices[0].message.content });
  } catch (error) {
    // catch variables are `unknown` in strict TypeScript, so narrow before use
    res.status(500).json({ error: error instanceof Error ? error.message : String(error) });
  }
});

app.listen(3000);

Configuration

Basic Configuration

const axonflow = new AxonFlow({
  licenseKey: 'your-license-key', // Required (License Key from AxonFlow)
  mode: 'production', // or 'sandbox' for testing
  endpoint: 'https://staging-eu.getaxonflow.com', // Default public endpoint
  tenant: 'your-tenant-id', // For multi-tenant setups
  debug: false, // Enable debug logging
});

Advanced Configuration

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  mode: 'production',
  endpoint: 'https://staging-eu.getaxonflow.com',

  // Retry configuration
  retry: {
    enabled: true,
    maxAttempts: 3,
    delay: 1000 // milliseconds
  },

  // Cache configuration
  cache: {
    enabled: true,
    ttl: 60000 // 1 minute in milliseconds
  },

  // Debug mode
  debug: process.env.NODE_ENV === 'development',
});

VPC Private Endpoint (Low-Latency)

For customers running within AWS VPC, use the private endpoint for optimal latency:

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  endpoint: 'https://YOUR_VPC_IP:8443', // VPC private endpoint (replace YOUR_VPC_IP with your internal IP)
  tenant: process.env.AXONFLOW_TENANT,
  mode: 'production'
});

Performance Comparison:

  • Public endpoint: ~100ms (internet routing)
  • VPC private endpoint: Single-digit ms P99 (intra-VPC routing)

Note: VPC endpoints require AWS VPC peering setup with AxonFlow infrastructure. Contact sales for setup.

Sandbox Mode

For testing without affecting production:

import { AxonFlow, PolicyViolationError } from '@axonflow/sdk';

// Use sandbox mode for testing
const axonflow = AxonFlow.sandbox('demo-key');

// Test with PII detection (will be blocked)
try {
  const response = await axonflow.executeQuery({
    userToken: 'test-user',
    query: 'My SSN is 123-45-6789',
    requestType: 'chat'
  });
} catch (error) {
  if (error instanceof PolicyViolationError) {
    // Expected: PII detected and blocked
    console.log('Correctly blocked:', error.blockReason);
  }
}

What Gets Protected?

AxonFlow automatically:

  • Blocks prompts containing sensitive data (PII, credentials, financial data)
  • Redacts personal information from responses
  • Enforces rate limits and usage quotas per tenant
  • Prevents prompt injection and jailbreak attempts
  • Logs all requests for compliance audit trails
  • Monitors costs and usage patterns in real-time

Error Handling

import {
  AxonFlow,
  PolicyViolationError,
  AuthenticationError,
  RateLimitError,
  APIError
} from '@axonflow/sdk';

try {
  const response = await axonflow.executeQuery({
    userToken: 'user-123',
    query: prompt,
    requestType: 'chat'
  });

  console.log('Success:', response.data);
} catch (error) {
  if (error instanceof PolicyViolationError) {
    // Request violated a policy
    console.log('Policy violation:', error.blockReason);
    console.log('Policies:', error.policies);
  } else if (error instanceof RateLimitError) {
    // Rate limit exceeded
    console.log(`Rate limit: ${error.remaining}/${error.limit}, resets at ${error.resetAt}`);
  } else if (error instanceof AuthenticationError) {
    // Authentication failed
    console.error('Auth error:', error.message);
  } else if (error instanceof APIError) {
    // API error
    console.error(`API error ${error.statusCode} ${error.statusText}:`, error.body);
  } else {
    // Other errors
    console.error('Error:', error);
  }
}

TypeScript Support

The SDK is written in TypeScript and provides full type definitions:

import { AxonFlow, ExecuteQueryResponse, AxonFlowError } from '@axonflow/sdk';

// Full type safety: constructor options are checked at compile time.
// Config shape (for reference):
// {
//   licenseKey: string;              // required
//   mode?: 'production' | 'sandbox';
//   endpoint?: string;
//   tenant?: string;
//   debug?: boolean;
//   retry?: { enabled: boolean; maxAttempts: number; delay: number };
//   cache?: { enabled: boolean; ttl: number };
// }
const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  mode: 'production',
});

// Response types
const response: ExecuteQueryResponse = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Your query here',
  requestType: 'chat'
});

Gateway Mode (Lowest Latency)

Gateway Mode lets you make direct LLM calls while AxonFlow handles governance. Use this when you need the lowest possible latency or want full control over your LLM provider.

How It Works

  1. Pre-check: Get policy approval before making LLM call
  2. Direct LLM call: Call your LLM provider directly with approved data
  3. Audit: Log the call for compliance

Pre-Check Policy Approval

import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
});

// 1. Pre-check: Get policy approval
const ctx = await axonflow.getPolicyApprovedContext({
  userToken: 'user-jwt-token',
  query: 'Analyze this customer data',
  dataSources: ['postgres'],
  context: { department: 'analytics' }
});

if (!ctx.approved) {
  throw new Error(`Request blocked: ${ctx.blockReason}`);
}

// ctx.approvedData contains filtered data safe to send to LLM
// ctx.contextId is used to correlate with audit

Make Direct LLM Call

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// 2. Make direct LLM call with approved data
const start = Date.now();
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: JSON.stringify(ctx.approvedData) }]
});
const latencyMs = Date.now() - start;

Audit the Call

// 3. Audit: Log the call for compliance
await axonflow.auditLLMCall({
  contextId: ctx.contextId,
  responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
  provider: 'openai',
  model: 'gpt-4',
  tokenUsage: {
    promptTokens: response.usage?.prompt_tokens || 0,
    completionTokens: response.usage?.completion_tokens || 0,
    totalTokens: response.usage?.total_tokens || 0
  },
  latencyMs
});

Complete Gateway Mode Example

import { AxonFlow, TokenUsage } from '@axonflow/sdk';
import OpenAI from 'openai';

async function gatewayModeExample() {
  // Initialize clients
  const axonflow = new AxonFlow({
    licenseKey: process.env.AXONFLOW_LICENSE_KEY,
    debug: true
  });

  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  // 1. Pre-check policy approval
  const ctx = await axonflow.getPolicyApprovedContext({
    userToken: 'user-jwt-token',
    query: 'Analyze customer churn patterns',
    dataSources: ['postgres']
  });

  if (!ctx.approved) {
    throw new Error(`Blocked: ${ctx.blockReason}`);
  }

  console.log(`Policy approved, context ID: ${ctx.contextId}`);
  console.log(`Rate limit: ${ctx.rateLimitInfo?.remaining}/${ctx.rateLimitInfo?.limit}`);

  // 2. Make direct LLM call
  const start = Date.now();
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: JSON.stringify(ctx.approvedData) }]
  });
  const latencyMs = Date.now() - start;

  // 3. Audit the call
  const auditResult = await axonflow.auditLLMCall({
    contextId: ctx.contextId,
    responseSummary: response.choices[0].message.content?.substring(0, 100) || '',
    provider: 'openai',
    model: 'gpt-4',
    tokenUsage: {
      promptTokens: response.usage?.prompt_tokens || 0,
      completionTokens: response.usage?.completion_tokens || 0,
      totalTokens: response.usage?.total_tokens || 0
    },
    latencyMs
  });

  console.log(`Audit logged: ${auditResult.auditId}`);
  return response.choices[0].message.content;
}

gatewayModeExample().catch(console.error);

When to Use Gateway Mode

  • Lowest latency requirements → Gateway Mode
  • Full control over LLM provider → Gateway Mode
  • Framework integrations (LangChain, LlamaIndex) → Gateway Mode
  • Simple integration, single API call → Proxy Mode (executeQuery())
  • Response filtering (PII detection) → Proxy Mode (executeQuery())

See Choosing a Mode for a detailed comparison.

MCP Connector Integration

Connect to external data sources using Model Context Protocol (MCP) connectors:

List Available Connectors

const connectors = await axonflow.listConnectors();

connectors.forEach(conn => {
  console.log(`Connector: ${conn.name} (${conn.type})`);
  console.log(` Description: ${conn.description}`);
  console.log(` Installed: ${conn.installed}`);
  console.log(` Capabilities: ${conn.capabilities.join(', ')}`);
});

Install a Connector

await axonflow.installConnector({
  connector_id: 'amadeus-travel',
  name: 'amadeus-prod',
  tenant_id: 'your-tenant-id',
  options: {
    environment: 'production'
  },
  credentials: {
    api_key: process.env.AMADEUS_API_KEY,
    api_secret: process.env.AMADEUS_API_SECRET
  }
});

console.log('Amadeus connector installed successfully!');

Query a Connector

// Query the Amadeus connector for flight information
const resp = await axonflow.queryConnector(
  'amadeus-prod',
  'Find flights from Paris to Amsterdam on Dec 15',
  {
    origin: 'CDG',
    destination: 'AMS',
    date: '2025-12-15'
  }
);

if (resp.success) {
  console.log('Flight data:', resp.data);
  // resp.data contains real Amadeus GDS flight offers
} else {
  console.error('Query failed:', resp.error);
}

Multi-Agent Planning (MAP)

Generate and execute complex multi-step plans using AI agent orchestration:

Generate a Plan

// Generate a travel planning workflow
const plan = await axonflow.generatePlan(
  'Plan a 3-day trip to Paris with moderate budget',
  'travel' // Domain hint (optional)
);

console.log(`Generated plan ${plan.planId} with ${plan.steps.length} steps`);
console.log(`Complexity: ${plan.complexity}, Parallel: ${plan.parallel}`);

plan.steps.forEach((step, i) => {
  console.log(` Step ${i + 1}: ${step.name} (${step.type})`);
  console.log(` Description: ${step.description}`);
  console.log(` Agent: ${step.agent}`);
  if (step.dependsOn.length > 0) {
    console.log(` Depends on: ${step.dependsOn.join(', ')}`);
  }
});

Execute a Plan

// Execute the generated plan
const execResp = await axonflow.executePlan(plan.planId);

console.log(`Plan Status: ${execResp.status}`);
console.log(`Duration: ${execResp.duration}`);

if (execResp.status === 'completed') {
  console.log(`Result:\n${execResp.result}`);

  // Access individual step results
  Object.entries(execResp.stepResults || {}).forEach(([stepId, result]) => {
    console.log(` ${stepId}:`, result);
  });
} else if (execResp.status === 'failed') {
  console.error(`Error: ${execResp.error}`);
}

Complete Example: Trip Planning

import { AxonFlow } from '@axonflow/sdk';

async function planTrip() {
  const axonflow = new AxonFlow({
    licenseKey: process.env.AXONFLOW_LICENSE_KEY,
    debug: true
  });

  // 1. Generate multi-agent plan
  const plan = await axonflow.generatePlan(
    'Plan a 3-day trip to Paris for 2 people with moderate budget',
    'travel'
  );

  console.log(`✅ Generated plan with ${plan.steps.length} steps (parallel: ${plan.parallel})`);

  // 2. Execute the plan
  console.log('\n🚀 Executing plan...');
  const execResp = await axonflow.executePlan(plan.planId);

  // 3. Display results
  if (execResp.status === 'completed') {
    console.log(`\n✅ Plan completed in ${execResp.duration}`);
    console.log(`\n📋 Complete Itinerary:\n${execResp.result}`);
  } else {
    console.error(`\n❌ Plan failed: ${execResp.error}`);
  }
}

planTrip().catch(console.error);

Production Best Practices

1. Environment Variables

Never hardcode API keys:

// ✅ Good
const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY
});

// ❌ Bad
const axonflow = new AxonFlow({
  licenseKey: 'hardcoded-key-123' // Never do this!
});

2. Fail-Open Strategy

In production, AxonFlow fails open if unreachable. This ensures your app stays operational:

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  mode: 'production' // Fail-open in production
});

// If AxonFlow is down, the original call proceeds with a warning
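
If you orchestrate Gateway Mode yourself, a minimal fail-open sketch could look like the following (it assumes transport failures surface as thrown errors distinct from PolicyViolationError; adjust to the error types your deployment actually raises):

import { AxonFlow, PolicyViolationError } from '@axonflow/sdk';

async function policyCheckFailOpen(
  axonflow: AxonFlow,
  userToken: string,
  query: string
) {
  try {
    // Normal path: ask the agent for a policy-approved context
    return await axonflow.getPolicyApprovedContext({ userToken, query });
  } catch (error) {
    if (error instanceof PolicyViolationError) {
      throw error; // a genuine policy block: stay fail-closed
    }
    // Transport/availability failure: log a warning and proceed ungoverned
    console.warn('AxonFlow unreachable, failing open:', error);
    return null;
  }
}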

3. Tenant Isolation

For multi-tenant applications, use tenant IDs:

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  tenant: getCurrentTenantId() // Dynamic tenant isolation
});
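
When tenants are resolved per request, one hedged pattern is to cache a client per tenant rather than constructing one on every call (axonflowForTenant is illustrative, not part of the SDK; tenant IDs come from your own auth layer):

import { AxonFlow } from '@axonflow/sdk';

// Hypothetical per-tenant client cache
const clientsByTenant = new Map<string, AxonFlow>();

function axonflowForTenant(tenantId: string): AxonFlow {
  let client = clientsByTenant.get(tenantId);
  if (!client) {
    client = new AxonFlow({
      licenseKey: process.env.AXONFLOW_LICENSE_KEY,
      tenant: tenantId,
    });
    clientsByTenant.set(tenantId, client);
  }
  return client;
}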

4. Enable Caching

Reduce latency for repeated queries:

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  cache: {
    enabled: true,
    ttl: 60000 // 1 minute
  }
});

5. Enable Retry Logic

Handle transient failures automatically:

const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  retry: {
    enabled: true,
    maxAttempts: 3,
    delay: 1000
  }
});

Performance Optimization

Connection Pooling

The SDK automatically reuses HTTP connections. For high-throughput applications:

// Create once, reuse everywhere
const axonflow = new AxonFlow({
  licenseKey: process.env.AXONFLOW_LICENSE_KEY,
  endpoint: 'https://staging-eu.getaxonflow.com'
});

// Export and import in other modules
export { axonflow };
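
For example, assuming the shared instance above lives in axonflow-client.ts (the file path is illustrative):

// another-module.ts — reuse the shared instance instead of constructing a new one
import { axonflow } from './axonflow-client';

export async function precheck(userToken: string, query: string) {
  return axonflow.getPolicyApprovedContext({ userToken, query });
}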

Latency Benchmarks

Public Endpoint (Internet):

  • P50: ~80ms
  • P95: ~120ms
  • P99: ~150ms

VPC Private Endpoint (AWS):

  • P50: 3ms
  • P95: 6ms
  • P99: 9ms
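
To sanity-check these numbers in your own environment, you can time the pre-check yourself (a rough one-off measurement, not a formal benchmark):

const t0 = Date.now();
await axonflow.getPolicyApprovedContext({ userToken: 'user-123', query: 'latency probe' });
console.log(`Policy pre-check round trip: ${Date.now() - t0}ms`);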

Examples

Full working examples are available in the GitHub repository:

  • Basic Usage: Simple governance wrapper
  • Next.js Integration: Full Next.js app with API routes
  • React Hooks: Custom React hooks for AxonFlow
  • MCP Connectors: Working with external data sources
  • Multi-Agent Planning: Complex workflow orchestration

License

MIT - See LICENSE