Getting Started with Multi-Agent Planning

This guide walks you through setting up your first multi-agent workflow using AxonFlow's Multi-Agent Planning (MAP) system.

Prerequisites

  • AxonFlow running locally or deployed (see Local Development)
  • An LLM provider configured (OpenAI, Anthropic, or Bedrock)
  • Basic understanding of YAML configuration

Quick Start

1. Start AxonFlow

# Clone and start AxonFlow
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow
docker compose up -d

Verify services are running:

# Check Agent health
curl http://localhost:8080/health

# Check Orchestrator health
curl http://localhost:8081/health
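If you are scripting the setup, you can poll both health endpoints until they respond instead of checking by hand. This is a small sketch using only the Python standard library (the endpoint URLs are the ones shown above; the `fetch` parameter is a hypothetical hook added here for testability):

```python
import time
import urllib.request

def wait_for_health(urls, timeout=60, interval=2, fetch=None):
    """Poll each health endpoint until all return HTTP 200 or the timeout expires."""
    fetch = fetch or (lambda u: urllib.request.urlopen(u, timeout=5).status)
    deadline = time.time() + timeout
    pending = set(urls)
    while pending and time.time() < deadline:
        for url in list(pending):
            try:
                if fetch(url) == 200:
                    pending.discard(url)  # service is up
            except OSError:
                pass  # not reachable yet, retry
        if pending:
            time.sleep(interval)
    return not pending  # True when every service answered 200

if __name__ == "__main__":
    ok = wait_for_health([
        "http://localhost:8080/health",  # Agent
        "http://localhost:8081/health",  # Orchestrator
    ])
    print("services ready" if ok else "timed out waiting for services")
```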

2. Define Your First Agent

Create an agent configuration file. Agents use a Kubernetes-style YAML format:

# config/agents/research-agent.yaml
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
  name: research-agent
  domain: generic
spec:
  type: specialist
  description: Research and summarize information on any topic
  capabilities:
    - research
    - summarization
    - analysis
  llm:
    provider: openai
    model: gpt-4
    temperature: 0.7
    maxTokens: 2000
  promptTemplate: |
    You are a research assistant. Your task is to research and provide
    comprehensive information about the given topic.

    Topic: {{input.query}}

    Provide a well-structured response with key findings.
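The {{input.query}} placeholder is filled in from the request context when the agent runs. As an illustration only (this is not AxonFlow's actual template engine), a Mustache-style substitution over a nested context could look like this:

```python
import re

def render_template(template, context):
    """Replace {{path.to.value}} placeholders with values from a nested dict."""
    def lookup(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]  # walk the dotted path, e.g. input -> query
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

# Example: fill the research agent's prompt from a request context
prompt = render_template(
    "Topic: {{input.query}}",
    {"input": {"query": "benefits of remote work"}},
)
print(prompt)  # Topic: benefits of remote work
```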

3. Load Agents

Place your agent configuration in the AxonFlow config directory:

# Copy agent to config directory (adjust path for your setup)
cp config/agents/research-agent.yaml /path/to/axonflow/config/agents/

Or mount the directory in docker-compose:

# docker-compose.override.yaml
services:
  orchestrator:
    volumes:
      - ./config/agents:/etc/axonflow/agents:ro

Restart the orchestrator to load agents:

docker compose restart orchestrator

4. Generate and Execute a Plan

AxonFlow uses a single-call pattern that generates and executes plans atomically. Send your request through the Agent's /api/request endpoint:

curl -X POST http://localhost:8080/api/request \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Research the benefits of remote work for software teams",
    "request_type": "multi-agent-plan",
    "context": {
      "domain": "generic"
    }
  }'

Response:

{
  "success": true,
  "plan_id": "plan_1765851929_abc123",
  "result": "## Benefits of Remote Work for Software Teams\n\n### 1. Increased Productivity\n- Fewer office distractions...",
  "steps": [
    {
      "id": "step_1",
      "name": "research-benefits",
      "type": "llm-call",
      "agent": "research-agent"
    }
  ],
  "metadata": {
    "tasks_executed": 1,
    "execution_mode": "sequential",
    "execution_time_ms": 2340,
    "tasks": [
      {"name": "research-benefits", "status": "completed", "time_ms": 2340}
    ]
  }
}

Single-Call Architecture

The plan is generated AND executed in one request. The response includes both the plan structure (steps) and the execution result (result). This atomic approach ensures consistent execution and simplifies error handling.
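A minimal Python client for this endpoint, sketched with only the standard library (the payload fields are taken from the curl example above; error handling is omitted for brevity):

```python
import json
import urllib.request

AGENT_URL = "http://localhost:8080"  # adjust for your deployment

def build_plan_request(query, domain="generic"):
    """Assemble the /api/request payload for a multi-agent plan."""
    return {
        "query": query,
        "request_type": "multi-agent-plan",
        "context": {"domain": domain},
    }

def submit_plan(query, domain="generic"):
    """Send one request; the response carries both the plan and its result."""
    body = json.dumps(build_plan_request(query, domain)).encode()
    req = urllib.request.Request(
        f"{AGENT_URL}/api/request",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains "steps" (plan) and "result" (output)
```

Because the call is atomic, there is no separate "execute plan" request to make afterwards: one POST returns everything.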

Multi-Step Example

Here's a more complex example with multiple agents working together:

Define Multiple Agents

# config/agents/travel-agents.yaml
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
  name: flight-search
  domain: travel
spec:
  type: specialist
  description: Search for flight options
  capabilities:
    - flight_search
    - fare_comparison
  llm:
    provider: openai
    model: gpt-4
  promptTemplate: |
    Search for flights based on:
    - Origin: {{input.origin}}
    - Destination: {{input.destination}}
    - Date: {{input.date}}

    Return top 3 flight options with prices.
---
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
  name: hotel-search
  domain: travel
spec:
  type: specialist
  description: Search for hotel accommodations
  capabilities:
    - hotel_search
    - rate_comparison
  llm:
    provider: openai
    model: gpt-4
  promptTemplate: |
    Find hotels in {{input.destination}} for {{input.dates}}.
    Budget: {{input.budget}}

    Return top 3 hotel options.
---
apiVersion: axonflow.io/v1
kind: AgentConfig
metadata:
  name: trip-planner
  domain: travel
spec:
  type: coordinator
  description: Coordinate travel planning
  capabilities:
    - trip_planning
    - itinerary_creation
  delegatesTo:
    - flight-search
    - hotel-search
  llm:
    provider: openai
    model: gpt-4
  promptTemplate: |
    Create a complete travel itinerary combining:
    - Flights: {{steps.flight-search.output}}
    - Hotels: {{steps.hotel-search.output}}

    Provide a summary with total estimated cost.

Execute Multi-Agent Plan

With multiple agents defined, send your travel planning request through the Agent:

curl -X POST http://localhost:8080/api/request \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Plan a 3-day trip to Mumbai from Delhi, December 20-23",
    "request_type": "multi-agent-plan",
    "context": {
      "domain": "travel"
    }
  }'

Response:

The plan is generated and executed atomically. Steps with no dependencies on each other execute concurrently:

{
  "success": true,
  "plan_id": "plan_travel_xyz789",
  "result": "## Your Mumbai Trip Itinerary\n\n### Flights\n- IndiGo 6E-123: Delhi → Mumbai, Dec 20, 06:00 AM...\n\n### Hotels\n- The Taj Mahal Palace: ₹15,000/night...\n\n### Total Estimated Cost: ₹52,000",
  "steps": [
    {
      "id": "step_1",
      "name": "flight-search",
      "type": "llm-call",
      "agent": "flight-search"
    },
    {
      "id": "step_2",
      "name": "hotel-search",
      "type": "llm-call",
      "agent": "hotel-search"
    },
    {
      "id": "step_3",
      "name": "create-itinerary",
      "type": "llm-call",
      "agent": "trip-planner",
      "depends_on": ["step_1", "step_2"]
    }
  ],
  "metadata": {
    "tasks_executed": 3,
    "execution_mode": "auto",
    "execution_time_ms": 5240,
    "tasks": [
      {"name": "flight-search", "status": "completed", "time_ms": 1800},
      {"name": "hotel-search", "status": "completed", "time_ms": 1750},
      {"name": "create-itinerary", "status": "completed", "time_ms": 1690}
    ]
  }
}

Parallel Execution

The orchestrator automatically detects that flight-search and hotel-search have no dependencies on each other and can run them in parallel. The create-itinerary step waits for both to complete before executing.
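A simplified model of this scheduling: group steps into "waves" using the depends_on edges from the response, where every step in a wave is independent of the others and can run concurrently. This is a hypothetical helper for intuition, not the orchestrator's actual code:

```python
def execution_waves(steps):
    """Group plan steps into waves; each wave's steps share no unmet
    dependencies and can execute in parallel."""
    remaining = {s["id"]: set(s.get("depends_on", [])) for s in steps}
    done, waves = set(), []
    while remaining:
        # A step is ready once all of its dependencies have completed.
        ready = [sid for sid, deps in remaining.items() if deps <= done]
        if not ready:
            raise ValueError("cyclic dependencies in plan")
        waves.append(sorted(ready))
        done.update(ready)
        for sid in ready:
            del remaining[sid]
    return waves

# The travel plan above yields two waves:
plan_steps = [
    {"id": "step_1"},  # flight-search
    {"id": "step_2"},  # hotel-search
    {"id": "step_3", "depends_on": ["step_1", "step_2"]},  # create-itinerary
]
print(execution_waves(plan_steps))  # [['step_1', 'step_2'], ['step_3']]
```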

Using the SDK

For production applications, use the AxonFlow SDKs which handle authentication and routing automatically:

TypeScript SDK

import { AxonFlow } from '@axonflow/sdk';

const client = new AxonFlow({
  endpoint: 'http://localhost:8080', // Agent URL
  // For self-hosted: no license key needed if SELF_HOSTED_MODE=true
  // For cloud: licenseKey: 'your-license-key'
});

// Generate and execute plan in one call
const result = await client.generatePlan(
  'Research AI governance best practices',
  'generic' // domain hint
);

console.log(`Plan ID: ${result.planId}`);
console.log(`Steps: ${result.steps.length}`);

// Access the result
for (const step of result.steps) {
  console.log(`  - ${step.name} (${step.type})`);
}

// Execution metadata is available immediately
console.log(`Execution metadata:`, result.metadata);

Python SDK

from axonflow import AxonFlow

async with AxonFlow(
    agent_url="http://localhost:8080"
    # For self-hosted: no license_key needed if SELF_HOSTED_MODE=true
    # For cloud: license_key="your-license-key"
) as client:
    # Generate and execute plan in one call
    result = await client.generate_plan(
        query="Research AI governance best practices",
        domain="generic"
    )

    print(f"Plan ID: {result.plan_id}")
    print(f"Steps executed: {len(result.steps)}")

    # Access step information
    for step in result.steps:
        print(f"  - {step.name} ({step.type})")

    # The result is available immediately
    print(f"Result preview: {result.result[:200]}...")

Next Steps

Now that you have a basic multi-agent workflow running, try defining agents for your own domain, experiment with coordinator agents and parallel steps, and integrate one of the SDKs into your application.

Troubleshooting

Agent Not Found

If you get "agent not found" errors:

  1. Check agent file is in the correct directory
  2. Verify YAML syntax is valid
  3. Restart orchestrator to reload agents
  4. Check orchestrator logs: docker compose logs orchestrator
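For step 2, you can sanity-check an agent file before restarting the orchestrator. A sketch using PyYAML (`pip install pyyaml`); the required top-level fields are assumed from the examples in this guide:

```python
import yaml  # pip install pyyaml

REQUIRED = ["apiVersion", "kind", "metadata", "spec"]

def validate_agent_config(text):
    """Parse an agent YAML file (possibly multi-document) and report
    missing top-level fields; raises yaml.YAMLError on syntax errors."""
    errors = []
    for doc in yaml.safe_load_all(text):
        missing = [f for f in REQUIRED if f not in (doc or {})]
        if missing:
            errors.append(f"missing fields: {', '.join(missing)}")
    return errors

if __name__ == "__main__":
    with open("config/agents/research-agent.yaml") as f:
        for problem in validate_agent_config(f.read()):
            print(problem)
```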

Plan Generation Fails

If plan generation fails:

  1. Verify the domain matches your agent's domain
  2. Check LLM provider is configured and has valid credentials
  3. Review the query for clarity

Step Execution Timeout

If steps timeout:

  1. Increase timeout in agent config: spec.timeout: 120s
  2. Check LLM provider status
  3. Simplify the prompt template
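Applied to the research agent from the quick start, the timeout tip above looks like this (the placement under spec follows the `spec.timeout` path given in step 1; verify against your AxonFlow version):

```yaml
# config/agents/research-agent.yaml (excerpt)
spec:
  type: specialist
  timeout: 120s   # per-step execution timeout
```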