Java SDK - Getting Started

Current Version: 1.1.0

Add AI governance to your Java applications with the official AxonFlow Java SDK. It is thread-safe, production-ready, and fully supports Gateway Mode, Proxy Mode, and Multi-Agent Planning.

Installation

Maven

<dependency>
  <groupId>com.getaxonflow</groupId>
  <artifactId>axonflow-sdk</artifactId>
  <version>1.1.0</version>
</dependency>

Gradle

implementation 'com.getaxonflow:axonflow-sdk:1.1.0'

Requirements

  • Java 11 or higher
  • Maven 3.6+ or Gradle 6.0+

Quick Start

import com.getaxonflow.sdk.AxonFlow;
import com.getaxonflow.sdk.AxonFlowConfig;
import com.getaxonflow.sdk.types.*;

public class QuickStart {
    public static void main(String[] args) {
        // Initialize client
        AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
                .agentUrl("https://your-agent.axonflow.com")
                .licenseKey("your-license-key")
                .build());

        // Execute a governed query
        ClientResponse response = client.executeQuery(
                ClientRequest.builder()
                        .userPrompt("What is AI governance?")
                        .userId("user-123")
                        .model("gpt-3.5-turbo")
                        .build()
        );

        if (response.isAllowed()) {
            System.out.println(response.getLlmResponse());
        } else {
            System.out.println("Blocked: " + response.getBlockedReason());
        }
    }
}

Gateway Mode (Lowest Latency)

Gateway Mode lets you make direct LLM calls while AxonFlow handles governance. Use this when you need the lowest possible latency or want full control over your LLM provider.

See Choosing a Mode for a detailed comparison with Proxy Mode.

import com.getaxonflow.sdk.AxonFlow;
import com.getaxonflow.sdk.AxonFlowConfig;
import com.getaxonflow.sdk.types.*;

// Initialize AxonFlow
AxonFlow axonflow = AxonFlow.create(AxonFlowConfig.builder()
        .agentUrl("https://your-agent.axonflow.com")
        .licenseKey("your-license-key")
        .build());

// 1. Pre-check: Get policy approval
PolicyApprovalResult preCheck = axonflow.getPolicyApprovedContext(
        ClientRequest.builder()
                .userPrompt("Find patient records")
                .userId("user-jwt")
                .metadata(Map.of("data_sources", Arrays.asList("postgres")))
                .build()
);

if (!preCheck.isAllowed()) {
    throw new RuntimeException("Blocked: " + preCheck.getBlockedReason());
}

// 2. Make LLM call directly (lowest latency)
long startTime = System.currentTimeMillis();
// ... your OpenAI/Anthropic call here ...
String llmResponse = callYourLLM(preCheck.getModifiedPrompt());
long latencyMs = System.currentTimeMillis() - startTime;

// 3. Audit the call
ClientResponse audit = axonflow.auditLLMCall(
        AuditRequest.builder()
                .requestId(preCheck.getRequestId())
                .llmResponse(llmResponse.substring(0, Math.min(100, llmResponse.length())))
                .model("gpt-4")
                .tokenUsage(TokenUsage.builder()
                        .promptTokens(150)
                        .completionTokens(200)
                        .totalTokens(350)
                        .build())
                .latencyMs(latencyMs)
                .build()
);

See Gateway Mode Deep Dive for the full API reference, error handling, and framework integrations.

Framework Integration

Spring Boot

The recommended way to integrate AxonFlow with Spring Boot applications.

Configuration:

@Configuration
public class AxonFlowConfiguration {

    @Value("${axonflow.agent-url}")
    private String agentUrl;

    @Value("${axonflow.license-key:}")
    private String licenseKey;

    @Bean
    public AxonFlow axonFlowClient() {
        AxonFlowConfig.Builder builder = AxonFlowConfig.builder()
                .agentUrl(agentUrl)
                .timeout(Duration.ofSeconds(60));

        if (licenseKey != null && !licenseKey.isEmpty()) {
            builder.licenseKey(licenseKey);
        }

        return AxonFlow.create(builder.build());
    }
}

Service Layer:

@Service
public class AIAssistantService {

    private final AxonFlow axonFlow;

    public AIAssistantService(AxonFlow axonFlow) {
        this.axonFlow = axonFlow;
    }

    public String processQuery(String userId, String query) {
        // Pre-check
        PolicyApprovalResult preCheck = axonFlow.getPolicyApprovedContext(
                ClientRequest.builder()
                        .userPrompt(query)
                        .userId(userId)
                        .build()
        );

        if (!preCheck.isAllowed()) {
            throw new PolicyViolationException(preCheck.getBlockedReason());
        }

        // Your LLM call
        String response = callLLM(query);

        // Audit
        axonFlow.auditLLMCall(AuditRequest.builder()
                .requestId(preCheck.getRequestId())
                .llmResponse(response)
                .model("gpt-4")
                .build());

        return response;
    }
}

application.yml:

axonflow:
  agent-url: ${AXONFLOW_AGENT_URL:http://localhost:8080}
  license-key: ${AXONFLOW_LICENSE_KEY:}
  timeout-seconds: 60
  debug: ${AXONFLOW_DEBUG:false}

See the full Spring Boot example for a complete implementation.

Plain Java

For applications without Spring:

public class AIService {

    private static final AxonFlow client;

    static {
        client = AxonFlow.create(AxonFlowConfig.builder()
                .agentUrl(System.getenv("AXONFLOW_AGENT_URL"))
                .licenseKey(System.getenv("AXONFLOW_LICENSE_KEY"))
                .build());
    }

    public String query(String userId, String prompt) {
        ClientResponse response = client.executeQuery(
                ClientRequest.builder()
                        .userPrompt(prompt)
                        .userId(userId)
                        .build()
        );

        if (response.isAllowed()) {
            return response.getLlmResponse();
        }
        throw new RuntimeException("Blocked: " + response.getBlockedReason());
    }
}

Configuration

AxonFlowConfig config = AxonFlowConfig.builder()
        .agentUrl("https://your-agent.axonflow.com")  // Required
        .licenseKey("your-license-key")               // Required for cloud
        .timeout(Duration.ofSeconds(60))              // Default: 60s
        .debug(true)                                  // Enable request logging
        .insecureSkipVerify(false)                    // SSL verification
        .retryConfig(RetryConfig.builder()            // Retry configuration
                .maxAttempts(3)
                .initialDelayMs(100)
                .maxDelayMs(5000)
                .multiplier(2.0)
                .build())
        .cacheEnabled(true)                           // Enable response caching
        .cacheTtl(Duration.ofMinutes(5))              // Cache TTL
        .cacheMaxSize(1000)                           // Max cached entries
        .build();

AxonFlow client = AxonFlow.create(config);

VPC Private Endpoint (Low-Latency)

For customers running within AWS VPC:

AxonFlowConfig config = AxonFlowConfig.builder()
        .agentUrl("https://YOUR_VPC_IP:8443")  // VPC private endpoint
        .licenseKey(System.getenv("AXONFLOW_LICENSE_KEY"))
        .insecureSkipVerify(false)             // Set true only for self-signed certs in dev
        .build();

Performance Comparison:

  • Public endpoint: ~100ms (internet routing)
  • VPC private endpoint: under 10ms P99 (intra-VPC routing)
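To check which path your deployment is actually on, you can measure round-trip latency from the application and compute a P99 yourself. A minimal, self-contained sketch: `percentile` is a simple nearest-rank implementation, and the no-op `probe` is a placeholder for a real client call such as `client.executeQuery(...)`.

```java
import java.util.Arrays;

public class LatencyCheck {
    // Nearest-rank percentile (pct in 0-100) over the collected samples.
    static long percentile(long[] samplesMs, double pct) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(idx, sorted.length - 1))];
    }

    public static void main(String[] args) {
        // Replace this placeholder with a real call, e.g. client.executeQuery(request).
        Runnable probe = () -> { };

        long[] samples = new long[50];
        for (int i = 0; i < samples.length; i++) {
            long start = System.nanoTime();
            probe.run();
            samples[i] = (System.nanoTime() - start) / 1_000_000; // ms
        }
        System.out.println("P99 latency: " + percentile(samples, 99.0) + " ms");
    }
}
```

Run a few dozen probes rather than one: a single sample tells you nothing about tail latency.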

MCP Connector Integration

List Available Connectors

List<ConnectorInfo> connectors = client.listConnectors();

for (ConnectorInfo conn : connectors) {
    System.out.printf("Connector: %s (%s)%n", conn.getName(), conn.getType());
    System.out.printf("  Installed: %s%n", conn.isInstalled());
    System.out.printf("  Capabilities: %s%n", String.join(", ", conn.getCapabilities()));
}

Query a Connector

ConnectorResponse result = client.queryConnector(
        ConnectorQuery.builder()
                .connectorName("postgres")
                .operation("query")
                .parameters(Map.of("sql", "SELECT * FROM users LIMIT 10"))
                .userId("user-jwt")
                .build()
);

if (result.isSuccess()) {
    System.out.println("Data: " + result.getData());
} else {
    System.out.println("Error: " + result.getError());
}

Multi-Agent Planning (MAP)

Generate a Plan

PlanResponse plan = client.generatePlan(
        PlanRequest.builder()
                .goal("Book a flight and hotel for my trip to Paris")
                .domain("travel")
                .userId("user-123")
                .maxSteps(5)
                .build()
);

System.out.printf("Plan %s has %d steps%n", plan.getPlanId(), plan.getSteps().size());
for (PlanStep step : plan.getSteps()) {
    System.out.printf("  - %s: %s%n", step.getName(), step.getDescription());
}

Get Plan Status

PlanStatusResponse status = client.getPlanStatus(plan.getPlanId());

if ("completed".equals(status.getStatus())) {
    System.out.println("Result: " + status.getResult());
} else if ("failed".equals(status.getStatus())) {
    System.out.println("Error: " + status.getError());
} else {
    System.out.println("In progress: " + status.getCurrentStep());
}
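`getPlanStatus` returns a point-in-time snapshot, so long-running plans are typically polled until a terminal status is reached. A hedged, generic sketch of such a loop: the `Supplier<String>` stands in for `client.getPlanStatus(plan.getPlanId()).getStatus()`, and the terminal status names assume the `"completed"`/`"failed"` values shown above.

```java
import java.util.Set;
import java.util.function.Supplier;

public class PlanPoller {
    private static final Set<String> TERMINAL = Set.of("completed", "failed");

    // Polls the supplier until it reports a terminal status, or maxAttempts is exhausted.
    static String pollUntilDone(Supplier<String> statusSupplier, int maxAttempts, long delayMs) {
        String status = statusSupplier.get();
        for (int i = 1; i < maxAttempts && !TERMINAL.contains(status); i++) {
            try {
                Thread.sleep(delayMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;  // stop polling if the thread is interrupted
            }
            status = statusSupplier.get();
        }
        return status;
    }
}
```

In production you would likely add an overall deadline and exponential backoff rather than a fixed delay.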

LLM Interceptors

For existing code that already calls LLM providers directly, interceptors add governance automatically:

import com.getaxonflow.sdk.AxonFlow;
import com.getaxonflow.sdk.interceptors.OpenAIInterceptor;
import com.getaxonflow.sdk.interceptors.ChatCompletionRequest;
import com.getaxonflow.sdk.exceptions.PolicyViolationException;

// Initialize AxonFlow
AxonFlow axonflow = AxonFlow.builder()
        .agentUrl(System.getenv("AXONFLOW_AGENT_URL"))
        .clientId(System.getenv("AXONFLOW_CLIENT_ID"))
        .clientSecret(System.getenv("AXONFLOW_CLIENT_SECRET"))
        .build();

// Create interceptor
OpenAIInterceptor interceptor = OpenAIInterceptor.builder()
        .axonflow(axonflow)
        .userToken("user-123")
        .asyncAudit(true)  // Non-blocking audit logging
        .build();

// Wrap your OpenAI call - governance is automatic
try {
    ChatCompletionResponse response = interceptor.wrap(req -> {
        // Your actual OpenAI SDK call here
        return yourOpenAIClient.createChatCompletion(req);
    }).apply(ChatCompletionRequest.builder()
            .model("gpt-4")
            .addUserMessage("Hello!")
            .build());

    System.out.println(response.getContent());
} catch (PolicyViolationException e) {
    System.out.println("Blocked: " + e.getMessage());
}

Supported Providers:

  • OpenAI: OpenAIInterceptor
  • Anthropic: AnthropicInterceptor
  • Gemini: GeminiInterceptor
  • Ollama: OllamaInterceptor
  • Bedrock: BedrockInterceptor

See LLM Interceptors for complete documentation.

Error Handling

import com.getaxonflow.sdk.exceptions.*;

try {
    ClientResponse response = client.executeQuery(request);
} catch (PolicyViolationException e) {
    // Request blocked by policy
    System.out.println("Blocked by policy: " + e.getPolicyName());
    System.out.println("Reason: " + e.getMessage());
} catch (AuthenticationException e) {
    // Invalid credentials
    System.out.println("Authentication failed: " + e.getMessage());
} catch (RateLimitException e) {
    // Rate limited
    System.out.println("Rate limited. Retry after: " + e.getRetryAfterSeconds() + "s");
} catch (TimeoutException e) {
    // Request timed out
    System.out.println("Request timed out");
} catch (ConnectionException e) {
    // Network/connectivity issues
    System.out.println("Connection failed: " + e.getMessage());
} catch (AxonFlowException e) {
    // Other SDK errors
    System.out.println("Error: " + e.getMessage());
}

Thread Safety

The AxonFlow client is thread-safe and designed for reuse. Create a single instance and share it across your application:

// Create once at application startup
private static final AxonFlow client = AxonFlow.create(config);

// Reuse across threads
executorService.submit(() -> client.executeQuery(request1));
executorService.submit(() -> client.executeQuery(request2));

Logging

The SDK uses SLF4J for logging. Add your preferred logging implementation:

<!-- Logback (recommended) -->
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.4.14</version>
</dependency>

Enable debug logging for request/response details:

AxonFlowConfig config = AxonFlowConfig.builder()
        .agentUrl("https://your-agent.axonflow.com")
        .debug(true)  // Enables detailed logging
        .build();

Production Best Practices

1. Environment Variables

Never hardcode credentials:

// Good
AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
        .agentUrl(System.getenv("AXONFLOW_AGENT_URL"))
        .licenseKey(System.getenv("AXONFLOW_LICENSE_KEY"))
        .build());

// Bad - Never do this!
AxonFlow client = AxonFlow.create(AxonFlowConfig.builder()
        .agentUrl("https://hardcoded-url.com")
        .licenseKey("hardcoded-key")
        .build());

2. Singleton Pattern

Create the client once and reuse:

@Bean
@Scope("singleton")
public AxonFlow axonFlowClient() {
    return AxonFlow.create(config);
}

3. Enable Caching

Reduce latency for repeated policy checks:

AxonFlowConfig config = AxonFlowConfig.builder()
        .agentUrl(agentUrl)
        .cacheEnabled(true)
        .cacheTtl(Duration.ofMinutes(5))
        .cacheMaxSize(1000)
        .build();
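Conceptually, the response cache behaves like a size-bounded map whose entries expire after the TTL: at most `cacheMaxSize` entries are retained, and anything older than `cacheTtl` is treated as absent. The SDK's implementation is internal; the following is only a minimal sketch of that idea, not the SDK's actual code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Size-bounded TTL cache: evicts the least-recently-used entry past maxSize,
// and treats entries older than ttlMs as missing.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long storedAtMs;
        Entry(V value, long storedAtMs) { this.value = value; this.storedAtMs = storedAtMs; }
    }

    private final long ttlMs;
    private final Map<K, Entry<V>> map;

    public TtlCache(int maxSize, long ttlMs) {
        this.ttlMs = ttlMs;
        // Access-order LinkedHashMap gives LRU eviction via removeEldestEntry.
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxSize;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    public synchronized V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.storedAtMs > ttlMs) {
            map.remove(key);  // expired
            return null;
        }
        return e.value;
    }
}
```

The practical takeaway: tune `cacheTtl` to how quickly your policies change, since a cached policy decision can be served until the TTL elapses.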

4. Enable Retry Logic

Handle transient failures automatically:

AxonFlowConfig config = AxonFlowConfig.builder()
        .agentUrl(agentUrl)
        .retryConfig(RetryConfig.builder()
                .maxAttempts(3)
                .initialDelayMs(100)
                .maxDelayMs(5000)
                .multiplier(2.0)
                .retryableStatusCodes(Set.of(429, 500, 502, 503, 504))
                .build())
        .build();
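With those parameters, each retry waits longer than the last: the delay grows by the multiplier and is capped at `maxDelayMs`. A quick way to see the schedule a given configuration produces, assuming standard exponential backoff (shown as an illustration, not the SDK's internals):

```java
public class BackoffSchedule {
    // Delay before retry attempt n (1-based): initial * multiplier^(n-1), capped at maxDelayMs.
    static long delayMs(int attempt, long initialDelayMs, long maxDelayMs, double multiplier) {
        double delay = initialDelayMs * Math.pow(multiplier, attempt - 1);
        return (long) Math.min(delay, maxDelayMs);
    }

    public static void main(String[] args) {
        // Matches maxAttempts(3), initialDelayMs(100), maxDelayMs(5000), multiplier(2.0)
        for (int attempt = 1; attempt <= 3; attempt++) {
            System.out.println("retry " + attempt + ": "
                    + delayMs(attempt, 100, 5000, 2.0) + " ms");
        }
        // prints delays of 100, 200, and 400 ms
    }
}
```

Real-world retry implementations usually also add jitter so that many clients do not retry in lockstep; check the SDK's `RetryConfig` options before relying on exact timings.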

5. Non-Fatal Audit Failures

Audit logging should not block the response:

try {
    client.auditLLMCall(auditRequest);
} catch (AxonFlowException e) {
    log.warn("Audit failed (non-fatal): {}", e.getMessage());
}

Support & Resources

Next Steps

License

Apache 2.0 - See LICENSE