# Planning Patterns
This guide covers common orchestration patterns for Multi-Agent Planning, including execution modes, data flow, and error handling strategies.
## Execution Modes

### Sequential Execution

Steps run one after another, with each step receiving output from the previous:

```yaml
steps:
  - name: collect-data
    type: connector-call
    connector:
      name: postgresql
      operation: query
      parameters:
        query: "SELECT * FROM sales WHERE date >= $1"
        args: ["{{input.startDate}}"]
  - name: analyze-data
    type: llm-call
    agent: data-analyst
    dependsOn: [collect-data]
    input:
      data: "{{steps.collect-data.output.rows}}"
      question: "{{input.analysisQuestion}}"
  - name: generate-report
    type: llm-call
    agent: report-writer
    dependsOn: [analyze-data]
    input:
      analysis: "{{steps.analyze-data.output}}"
      format: "executive-summary"
```
Use when:
- Each step depends on the previous step's output
- Order matters for correctness
- Processing pipeline with transformations
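The sequential mode amounts to threading each step's output into the next step's input while keeping per-step outputs addressable. A minimal Python sketch of that semantics (the step functions here are hypothetical stand-ins, not platform APIs):

```python
def run_sequential(steps, initial_input):
    """Run steps in order; each step receives the previous step's output."""
    data = initial_input
    outputs = {}
    for name, fn in steps:
        data = fn(data)       # output of one step is the input of the next
        outputs[name] = data  # kept per step, like steps.<name>.output
    return outputs

# Hypothetical stand-ins for the collect -> analyze -> report pipeline
pipeline = [
    ("collect-data", lambda _inp: {"rows": [10, 20, 30]}),
    ("analyze-data", lambda d: {"total": sum(d["rows"])}),
    ("generate-report", lambda d: f"Total sales: {d['total']}"),
]
result = run_sequential(pipeline, {"startDate": "2024-01-01"})
```

Because later steps can also reference any earlier step's stored output (not just the immediately preceding one), the engine keeps the full `outputs` map rather than only the running value.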
### Parallel Execution

Independent steps run simultaneously for faster completion:

```yaml
steps:
  # These three steps run in parallel
  - name: search-flights
    type: connector-call
    parallelGroup: travel-search
    connector:
      name: amadeus
      operation: flightOffers
      parameters:
        origin: "{{input.origin}}"
        destination: "{{input.destination}}"
  - name: search-hotels
    type: connector-call
    parallelGroup: travel-search
    connector:
      name: hotels-api
      operation: search
      parameters:
        location: "{{input.destination}}"
  - name: search-cars
    type: connector-call
    parallelGroup: travel-search
    connector:
      name: cars-api
      operation: search
      parameters:
        location: "{{input.destination}}"
  # This step waits for all parallel steps
  - name: create-package
    type: llm-call
    agent: trip-planner
    dependsOn: [search-flights, search-hotels, search-cars]
    input:
      flights: "{{steps.search-flights.output}}"
      hotels: "{{steps.search-hotels.output}}"
      cars: "{{steps.search-cars.output}}"
```
Use when:
- Steps are independent of each other
- Reducing total execution time is important
- Aggregating data from multiple sources
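A parallel group behaves like awaiting a set of concurrent tasks before any dependent step runs. A rough Python sketch of that scheduling semantics using asyncio (the search coroutines are made-up placeholders for the connector calls):

```python
import asyncio

async def run_parallel_group(group):
    """Run every step in the group concurrently; return name -> output."""
    names = list(group)
    outputs = await asyncio.gather(*(group[name]() for name in names))
    return dict(zip(names, outputs))

async def main():
    # Made-up coroutines standing in for the three connector calls
    async def search_flights(): return {"offers": 3}
    async def search_hotels(): return {"rooms": 5}
    async def search_cars(): return {"cars": 2}

    searches = await run_parallel_group({
        "search-flights": search_flights,
        "search-hotels": search_hotels,
        "search-cars": search_cars,
    })
    # The dependent step (create-package) only ever sees resolved outputs
    return searches

result = asyncio.run(main())
```

Total latency for the group is roughly the slowest member rather than the sum of all members, which is the whole point of the pattern.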
### Conditional Execution

Branch execution based on runtime conditions:

```yaml
steps:
  - name: classify-request
    type: llm-call
    agent: classifier
    input:
      message: "{{input.userMessage}}"
  - name: route-request
    type: conditional
    dependsOn: [classify-request]
    condition: "{{steps.classify-request.output.category}}"
    branches:
      - when: "technical"
        goto: handle-technical
      - when: "billing"
        goto: handle-billing
      - when: "general"
        goto: handle-general
      - default:
          goto: escalate
  - name: handle-technical
    type: llm-call
    agent: technical-support
    input:
      issue: "{{input.userMessage}}"
  - name: handle-billing
    type: llm-call
    agent: billing-support
    input:
      query: "{{input.userMessage}}"
  - name: handle-general
    type: llm-call
    agent: general-support
    input:
      question: "{{input.userMessage}}"
  - name: escalate
    type: api-call
    request:
      method: POST
      url: "{{config.escalationWebhook}}"
      body:
        message: "{{input.userMessage}}"
        reason: "unclassified"
```
Use when:
- Different logic needed based on input
- Routing to specialized handlers
- Implementing decision trees
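Logically, the conditional step is a dispatch table keyed on the classifier's output, with a default branch for anything unmatched. A small Python sketch of that routing (the handlers are hypothetical stand-ins for the specialist agents):

```python
def route_request(category, handlers, default):
    """Pick a branch handler by matched category, falling back to the default."""
    return handlers.get(category, default)

# Hypothetical stand-ins for the specialist support agents
handlers = {
    "technical": lambda msg: f"tech: {msg}",
    "billing":   lambda msg: f"billing: {msg}",
    "general":   lambda msg: f"general: {msg}",
}

def escalate(msg):
    return f"escalated: {msg}"

answer = route_request("billing", handlers, escalate)("refund status?")
unmatched = route_request("spam", handlers, escalate)("hi")  # default branch
```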
## Common Patterns

### Fan-Out / Fan-In

Distribute work across multiple agents, then aggregate results:

```yaml
# Fan-out: Parallel research from multiple sources
steps:
  - name: research-academic
    type: llm-call
    parallelGroup: research
    agent: academic-researcher
    input:
      topic: "{{input.topic}}"
      sources: "academic papers, journals"
  - name: research-news
    type: llm-call
    parallelGroup: research
    agent: news-researcher
    input:
      topic: "{{input.topic}}"
      sources: "news articles, press releases"
  - name: research-industry
    type: llm-call
    parallelGroup: research
    agent: industry-researcher
    input:
      topic: "{{input.topic}}"
      sources: "industry reports, whitepapers"
  # Fan-in: Synthesize all findings
  - name: synthesize
    type: llm-call
    agent: synthesizer
    dependsOn: [research-academic, research-news, research-industry]
    input:
      academic: "{{steps.research-academic.output}}"
      news: "{{steps.research-news.output}}"
      industry: "{{steps.research-industry.output}}"
      instruction: "Create a comprehensive synthesis"
```
### Chain of Thought

Progressive reasoning with intermediate steps:

```yaml
steps:
  - name: understand-problem
    type: llm-call
    agent: problem-analyzer
    input:
      problem: "{{input.problem}}"
      instruction: "Break down this problem into components"
  - name: identify-approach
    type: llm-call
    agent: strategy-planner
    dependsOn: [understand-problem]
    input:
      components: "{{steps.understand-problem.output}}"
      instruction: "Identify the best approach for each component"
  - name: solve-components
    type: llm-call
    agent: solver
    dependsOn: [identify-approach]
    input:
      approaches: "{{steps.identify-approach.output}}"
      instruction: "Solve each component using the identified approach"
  - name: integrate-solution
    type: llm-call
    agent: integrator
    dependsOn: [solve-components]
    input:
      solutions: "{{steps.solve-components.output}}"
      original: "{{input.problem}}"
      instruction: "Integrate component solutions into final answer"
```
### Validation Pipeline

Multi-stage validation with gates:

```yaml
steps:
  - name: process-input
    type: llm-call
    agent: processor
    input:
      data: "{{input.data}}"
  - name: validate-format
    type: function-call
    dependsOn: [process-input]
    function:
      name: validateSchema
    input:
      data: "{{steps.process-input.output}}"
      schema: "{{config.outputSchema}}"
  - name: check-format-result
    type: conditional
    dependsOn: [validate-format]
    branches:
      - if: "{{steps.validate-format.output.valid}} == true"
        goto: validate-content
      - default:
          goto: format-error
  - name: validate-content
    type: llm-call
    agent: content-validator
    input:
      content: "{{steps.process-input.output}}"
      rules: "{{config.contentRules}}"
  - name: check-content-result
    type: conditional
    dependsOn: [validate-content]
    branches:
      - if: "{{steps.validate-content.output.passed}} == true"
        goto: finalize
      - default:
          goto: content-error
  - name: finalize
    type: function-call
    function:
      name: formatOutput
    input:
      data: "{{steps.process-input.output}}"
  - name: format-error
    type: api-call
    request:
      method: POST
      url: "{{config.errorWebhook}}"
      body:
        type: "format_validation_failed"
        errors: "{{steps.validate-format.output.errors}}"
  - name: content-error
    type: api-call
    request:
      method: POST
      url: "{{config.errorWebhook}}"
      body:
        type: "content_validation_failed"
        errors: "{{steps.validate-content.output.errors}}"
```
### Iterative Refinement

Repeat until a quality threshold is met:

```yaml
steps:
  - name: generate-draft
    type: llm-call
    agent: writer
    input:
      topic: "{{input.topic}}"
      requirements: "{{input.requirements}}"
  - name: evaluate-quality
    type: llm-call
    agent: evaluator
    dependsOn: [generate-draft]
    input:
      content: "{{steps.generate-draft.output}}"
      criteria: "{{config.qualityCriteria}}"
  - name: check-quality
    type: conditional
    dependsOn: [evaluate-quality]
    branches:
      - if: "{{steps.evaluate-quality.output.score}} >= 0.8"
        goto: finalize-output
      - if: "{{context.iterationCount}} >= 3"
        goto: finalize-output  # Max iterations reached
      - default:
          goto: refine-draft
  - name: refine-draft
    type: llm-call
    agent: editor
    input:
      draft: "{{steps.generate-draft.output}}"
      feedback: "{{steps.evaluate-quality.output.feedback}}"
      instruction: "Improve based on feedback"
    onComplete:
      setContext:
        iterationCount: "{{context.iterationCount + 1}}"
      goto: evaluate-quality  # Loop back
  - name: finalize-output
    type: function-call
    function:
      name: formatFinalOutput
    input:
      content: "{{steps.generate-draft.output}}"
      qualityScore: "{{steps.evaluate-quality.output.score}}"
```
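Stripped of the YAML, the loop is: draft once, then evaluate and refine until the score clears the threshold or the iteration cap is hit. A Python sketch of that control flow with deterministic stand-in functions (none of these names are platform APIs):

```python
def refine_until_good(generate, evaluate, refine, threshold=0.8, max_iterations=3):
    """Draft once, then evaluate/refine until the score passes or the cap is hit."""
    draft = generate()
    score, feedback = evaluate(draft)
    iterations = 0
    while score < threshold and iterations < max_iterations:
        draft = refine(draft, feedback)       # editor pass, guided by feedback
        score, feedback = evaluate(draft)     # re-score the new draft
        iterations += 1
    return draft, score, iterations

# Deterministic stand-ins: each refinement pass raises the score by 0.3
scores = {"v0": 0.3, "v1": 0.6, "v2": 0.9}
final, score, rounds = refine_until_good(
    generate=lambda: "v0",
    evaluate=lambda d: (scores[d], "tighten the argument"),
    refine=lambda d, feedback: f"v{int(d[1:]) + 1}",
)
```

The iteration cap matters as much as the threshold: without it, an evaluator that never scores above 0.8 would loop forever (and burn LLM calls doing it).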
### Fallback Chain

Try multiple approaches until one succeeds:

```yaml
steps:
  - name: try-primary-api
    type: api-call
    request:
      method: GET
      url: "{{config.primaryApiUrl}}/data"
    onError:
      action: continue
      output:
        primaryFailed: true
  - name: check-primary
    type: conditional
    dependsOn: [try-primary-api]
    branches:
      - if: "{{steps.try-primary-api.output.primaryFailed}} != true"
        goto: process-result
      - default:
          goto: try-secondary-api
  - name: try-secondary-api
    type: api-call
    request:
      method: GET
      url: "{{config.secondaryApiUrl}}/data"
    onError:
      action: continue
      output:
        secondaryFailed: true
  - name: check-secondary
    type: conditional
    dependsOn: [try-secondary-api]
    branches:
      - if: "{{steps.try-secondary-api.output.secondaryFailed}} != true"
        goto: process-result
      - default:
          goto: use-cached
  - name: use-cached
    type: connector-call
    connector:
      name: redis
      operation: get
      parameters:
        key: "cached:data:{{input.dataId}}"
  - name: process-result
    type: llm-call
    agent: processor
    input:
      data: "{{steps.try-primary-api.output || steps.try-secondary-api.output || steps.use-cached.output}}"
```
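The chain reduces to "try each source in order, use the first one that succeeds." A compact Python sketch of that logic (the three source functions are illustrative fakes, not real endpoints):

```python
def first_success(attempts):
    """Try each (name, fn) in order; return the first result that doesn't raise."""
    errors = []
    for name, fn in attempts:
        try:
            return name, fn()
        except Exception as exc:
            errors.append((name, str(exc)))  # record the failure and fall through
    raise RuntimeError(f"all sources failed: {errors}")

# Illustrative fakes: both APIs are down, the cache still answers
def primary():
    raise TimeoutError("primary API down")

def secondary():
    raise TimeoutError("secondary API down")

def cached():
    return {"data": "stale-but-usable"}

source, data = first_success([
    ("try-primary-api", primary),
    ("try-secondary-api", secondary),
    ("use-cached", cached),
])
```

Keeping the per-source errors around is worth the extra line: when every fallback fails, the final error should say why each one did.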
### Map-Reduce

Process items in parallel and combine results:

```yaml
steps:
  # Split input into items
  - name: split-items
    type: function-call
    function:
      name: splitArray
    input:
      array: "{{input.documents}}"
      chunkSize: 5
  # Process each chunk in parallel (dynamically created)
  - name: process-chunk
    type: llm-call
    agent: document-processor
    forEach: "{{steps.split-items.output.chunks}}"
    parallelGroup: processing
    input:
      documents: "{{item}}"
  # Reduce: Combine all results
  - name: combine-results
    type: llm-call
    agent: result-combiner
    dependsOn: [process-chunk]
    input:
      results: "{{steps.process-chunk.outputs}}"
      instruction: "Synthesize findings from all document batches"
```
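The splitArray/forEach/combine flow is classic map-reduce: chunk the input, process chunks concurrently, then fold the partial results. A Python sketch under those assumptions (`process_chunk` and `combine` are placeholders for the LLM steps, not platform functions):

```python
from concurrent.futures import ThreadPoolExecutor

def split_array(items, chunk_size):
    """Chunk a list, like the splitArray step (the last chunk may be short)."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def map_reduce(items, chunk_size, process_chunk, combine):
    chunks = split_array(items, chunk_size)
    with ThreadPoolExecutor() as pool:        # map: process chunks in parallel
        partials = list(pool.map(process_chunk, chunks))
    return combine(partials)                  # reduce: fold the partial results

documents = [f"doc-{i}" for i in range(12)]
total = map_reduce(
    documents, 5,
    process_chunk=lambda chunk: len(chunk),   # stand-in for the LLM step
    combine=lambda partials: sum(partials),
)
```

Chunk size trades off parallelism against per-call overhead: 12 documents at `chunkSize: 5` yields three chunks (5, 5, 2), so three concurrent LLM calls instead of twelve.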
## Data Flow Patterns

### Template Variable Usage

```yaml
steps:
  - name: fetch-user
    type: connector-call
    connector:
      name: postgresql
      operation: query
      parameters:
        query: "SELECT * FROM users WHERE id = $1"
        args: ["{{input.userId}}"]
  - name: personalize-response
    type: llm-call
    agent: personalizer
    dependsOn: [fetch-user]
    input:
      # Access nested output fields
      userName: "{{steps.fetch-user.output.rows[0].name}}"
      userPreferences: "{{steps.fetch-user.output.rows[0].preferences}}"
      # Access original input
      query: "{{input.query}}"
      # Access context
      timestamp: "{{context.timestamp}}"
```
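Template resolution walks a dotted path, with bracket syntax for list indices, through a scope containing `input`, `steps`, and `context`. The simplified Python resolver below illustrates the idea; it is a sketch of plausible semantics, not the engine's actual implementation (it ignores escaping, expressions, and type preservation):

```python
import re

def resolve(template, scope):
    """Replace {{dotted.path}} placeholders, supporting rows[0]-style indexing."""
    def lookup(path):
        value = scope
        # Path parts are either names (hyphens allowed, e.g. fetch-user)
        # or bracketed integer indices like [0]
        for key, index in re.findall(r"([\w-]+)|\[(\d+)\]", path):
            value = value[key] if key else value[int(index)]
        return value
    return re.sub(r"\{\{([^}]+)\}\}",
                  lambda m: str(lookup(m.group(1).strip())), template)

scope = {
    "input": {"query": "order status"},
    "steps": {"fetch-user": {"output": {"rows": [{"name": "Ada"}]}}},
}
greeting = resolve(
    "Hello {{steps.fetch-user.output.rows[0].name}}: {{input.query}}", scope)
```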
### Output Transformation

```yaml
steps:
  - name: get-raw-data
    type: connector-call
    connector: { ... }
    output:
      # Transform output before storing
      users: "{{result.rows}}"
      count: "{{result.rowCount}}"
      fetchedAt: "{{context.timestamp}}"
  - name: format-for-display
    type: function-call
    dependsOn: [get-raw-data]
    function:
      name: formatTable
    input:
      data: "{{steps.get-raw-data.output.users}}"
      columns: ["name", "email", "created_at"]
```
### Aggregation Strategies

```yaml
# Merge strategy: Combine all outputs into single object
aggregation:
  strategy: merge
  output:
    flights: "{{steps.search-flights.output}}"
    hotels: "{{steps.search-hotels.output}}"
    summary: "{{steps.create-summary.output}}"

# First success: Use first successful result
aggregation:
  strategy: first-success
  steps: [try-api-1, try-api-2, try-api-3]

# Custom: Use final step output
aggregation:
  strategy: custom
  outputStep: final-formatter
```
## Error Handling Patterns

### Graceful Degradation

```yaml
steps:
  - name: get-real-time-data
    type: api-call
    request:
      url: "{{config.realtimeApi}}"
      timeout: 5s
    onError:
      action: continue
      output:
        useFallback: true
  - name: select-data-source
    type: conditional
    dependsOn: [get-real-time-data]
    branches:
      - if: "{{steps.get-real-time-data.output.useFallback}} == true"
        goto: get-cached-data
      - default:
          goto: process-data
  - name: get-cached-data
    type: connector-call
    connector:
      name: redis
      operation: get
      parameters:
        key: "cached:{{input.dataKey}}"
  - name: process-data
    type: llm-call
    agent: processor
    input:
      data: "{{steps.get-real-time-data.output || steps.get-cached-data.output}}"
```
### Retry with Backoff

```yaml
steps:
  - name: call-external-api
    type: api-call
    request:
      method: POST
      url: "{{config.externalApi}}"
      body: "{{input}}"
    retryPolicy:
      maxRetries: 3
      initialDelay: 1s
      maxDelay: 30s
      backoffMultiplier: 2.0
      retryableErrors:
        - timeout
        - rate_limit
        - 503
        - 429
```
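With this policy, the delay before retry *n* is `initialDelay × backoffMultiplier^n`, capped at `maxDelay`, and only the listed error types are retried. A Python sketch of those semantics (`backoff_delays` and `with_retry` are illustrative helpers, not platform functions):

```python
def backoff_delays(max_retries, initial_delay, max_delay, multiplier):
    """Delay before retry n: initial_delay * multiplier**n, capped at max_delay."""
    return [min(initial_delay * multiplier ** n, max_delay)
            for n in range(max_retries)]

def with_retry(fn, retryable, max_retries=3, initial_delay=1.0,
               max_delay=30.0, multiplier=2.0, sleep=None):
    """Call fn, retrying only retryable exception types, with exponential backoff."""
    sleep = sleep or (lambda s: None)
    attempt = 0
    while True:
        try:
            return fn()
        except Exception as exc:
            if type(exc).__name__ not in retryable or attempt >= max_retries:
                raise  # non-retryable error, or retries exhausted
            sleep(min(initial_delay * multiplier ** attempt, max_delay))
            attempt += 1

# A call that fails twice with a retryable error, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

waits = []
result = with_retry(flaky, retryable={"TimeoutError"}, sleep=waits.append)
```

So the policy above waits 1s, 2s, then 4s between attempts, well under the 30s cap; the cap only bites with more retries or a larger multiplier.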
### Circuit Breaker (Enterprise)

```yaml
steps:
  - name: call-flaky-service
    type: api-call
    request:
      url: "{{config.flakyService}}"
    circuitBreaker:
      enabled: true
      failureThreshold: 5   # Open after 5 failures
      resetTimeout: 60s     # Try again after 60s
      halfOpenRequests: 3   # Allow 3 test requests
    onCircuitOpen:
      action: goto
      goto: use-alternative
```
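A circuit breaker counts consecutive failures, rejects calls outright once the threshold is crossed (open), and lets traffic probe the service again after the reset timeout (half-open). The Python sketch below models that state machine; it omits the `halfOpenRequests` budget and uses an injectable clock so the walkthrough is deterministic:

```python
import time

class CircuitBreaker:
    """Open after `failure_threshold` consecutive failures; probe after `reset_timeout`."""

    def __init__(self, failure_threshold=5, reset_timeout=60.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    @property
    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.reset_timeout:
            return "half-open"  # timeout elapsed: allow test requests through
        return "open"

    def call(self, fn):
        if self.state == "open":
            # Analogous to onCircuitOpen: the caller jumps to the alternative step
            raise RuntimeError("circuit open; request rejected")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0       # any success closes the circuit
        self.opened_at = None
        return result

# Deterministic walkthrough with a fake clock
now = {"t": 0.0}
breaker = CircuitBreaker(failure_threshold=2, reset_timeout=60.0,
                         clock=lambda: now["t"])

def failing():
    raise ConnectionError("service unavailable")

for _ in range(2):
    try:
        breaker.call(failing)
    except ConnectionError:
        pass

state_after_failures = breaker.state    # threshold reached: breaker is open
now["t"] = 61.0                         # wait out the reset timeout
state_after_timeout = breaker.state     # half-open: test requests allowed
ok = breaker.call(lambda: "recovered")  # a success closes the circuit
```

The payoff over plain retries is that an open breaker fails fast instead of queueing requests against a service that is already down.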
## Best Practices

### 1. Minimize Dependencies

```yaml
# Good: Independent steps run in parallel
steps:
  - name: step-a
    parallelGroup: init
  - name: step-b
    parallelGroup: init
  - name: step-c
    dependsOn: [step-a, step-b]

# Avoid: Unnecessary sequential dependencies
steps:
  - name: step-a
  - name: step-b
    dependsOn: [step-a]  # Does step-b really need step-a?
  - name: step-c
    dependsOn: [step-b]
```
### 2. Set Appropriate Timeouts

```yaml
steps:
  - name: quick-lookup
    type: connector-call
    timeout: 5s    # Fast operation
  - name: complex-analysis
    type: llm-call
    timeout: 120s  # Complex LLM task
  - name: external-api
    type: api-call
    timeout: 30s   # External dependency
```
### 3. Handle Edge Cases

```yaml
steps:
  - name: process-items
    type: llm-call
    agent: processor
    # Handle empty input
    condition: "{{input.items.length}} > 0"
    input:
      items: "{{input.items}}"
  - name: handle-empty
    type: function-call
    condition: "{{input.items.length}} == 0"
    function:
      name: returnEmpty
```
### 4. Use Descriptive Names

```yaml
# Good: Clear, descriptive names
steps:
  - name: fetch-customer-orders
  - name: calculate-order-total
  - name: generate-invoice-pdf

# Avoid: Vague names
steps:
  - name: step1
  - name: process
  - name: final
```
## Next Steps
- API Reference - Complete API documentation
- Step Types - All step type configurations
- Agent Configuration - Agent YAML schema