# Step Types

CodeGenesis supports several step types that can be composed together to build complex pipelines.
```mermaid
graph TD
    S[Step Types] --> A[Simple Step]
    S --> B[Foreach]
    S --> C[Parallel]
    S --> D[Parallel Foreach]
    S --> E[Approval]
    S --> UP[Use Pipeline]
    B -->|sequential| F["item 1 → item 2 → item N"]
    C -->|concurrent| G["branch A ∥ branch B ∥ branch C"]
    D -->|concurrent| H["item 1 ∥ item 2 ∥ item N"]
    E -->|interactive| I["user confirms or rejects"]
    UP -->|composition| K["📦 sub-pipeline execution"]
```
## Simple Step

The basic unit of a pipeline. Sends a prompt to Claude and collects the response.
```yaml
- name: "Analyze code"
  description: "Review the codebase"
  prompt: "Analyze the project structure"
  output_key: "analysis"
  optional: false
```

All simple step fields:
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Step identifier |
| `prompt` | Yes* | Prompt sent to Claude |
| `context` | Yes* | Context bundle path (alternative to `prompt`) |
| `description` | No | Shown in the pipeline progress UI |
| `agent` | No | Agent label for display |
| `system_prompt` | No | System prompt for Claude |
| `model` | No | Model override for this step |
| `max_turns` | No | Max agentic turns |
| `output_key` | No | Store output for later steps |
| `allowed_tools` | No | Restrict which tools Claude can use |
| `mcp_servers` | No | MCP stdio servers for this step |
| `optional` | No | If `true`, failure returns Skipped instead of Failed |
| `fail_if` | No | Fail if output contains this string |
| `fail_message` | No | Custom failure message for `fail_if` |
| `retry_max` | No | Max retry attempts |
| `retry_backoff_seconds` | No | Backoff between retries |
| `rate_limit_pause_seconds` | No | Pause duration on rate limit |
| `rate_limit_max_pauses` | No | Max rate limit pauses |

*Either `prompt` or `context` is required.
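As a sketch, the validation and retry fields from the table can be combined on a single step (the step name, prompt, and values here are illustrative):

```yaml
- name: "Run test suite"
  prompt: "Run the tests and report PASS or FAIL"
  output_key: "test_report"
  fail_if: "FAIL"                  # fail the step if the output contains this string
  fail_message: "Test suite reported failures"
  retry_max: 2                     # retry up to twice before giving up
  retry_backoff_seconds: 30        # wait between attempts
```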
## Foreach

Loop over a collection and run sub-steps for each item sequentially:
```yaml
- foreach:
    collection: "{{steps.modules}}"
    item_var: "module"
    output_key: "module_results"
    steps:
      - name: "Analyze {{module}}"
        prompt: "Analyze module: {{module}}"
        output_key: "analysis"
```

Supported collection formats:

| Format | Example |
|---|---|
| JSON array | `["auth", "api", "db"]` |
| Comma-separated | `auth, api, db` |
| Newline-separated | One item per line |

Loop variables:

| Variable | Description |
|---|---|
| `{{<item_var>}}` | Current item value (e.g. `{{module}}`) |
| `{{loop.item}}` | Alias for the current item |
| `{{loop.index}}` | Zero-based index |
> [!NOTE]
> Each iteration gets its own isolated context. Sub-step outputs from one iteration don't leak into the next. When `output_key` is set, all iteration results are aggregated into a JSON array.
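For instance, a later step can consume the aggregated array through the usual template syntax (assuming the `module_results` key from the example above):

```yaml
- name: "Summarize modules"
  prompt: |
    Summarize these per-module analyses:
    {{steps.module_results}}
  output_key: "summary"
```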
## Parallel

Run multiple independent branches concurrently:
```yaml
- parallel:
    max_concurrency: 5
    fail_fast: true
    branches:
      - name: "Security Review"
        output_key: "security"
        steps:
          - name: "Check vulnerabilities"
            prompt: "Review for security issues"
      - name: "Performance Review"
        output_key: "performance"
        steps:
          - name: "Check performance"
            prompt: "Review for performance"
```

| Option | Default | Description |
|---|---|---|
| `max_concurrency` | unlimited | Max branches running simultaneously |
| `fail_fast` | `false` | Cancel remaining branches on first failure |
> [!IMPORTANT]
> Each branch runs in an isolated context. After all branches complete, their outputs are merged back into the parent context.
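Because branch outputs merge back into the parent context, a later step can reference them by their `output_key` (here, the keys from the example above):

```yaml
- name: "Combine reviews"
  prompt: |
    Merge these findings into one report:
    Security: {{steps.security}}
    Performance: {{steps.performance}}
  output_key: "combined_report"
```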
## Parallel Foreach

Combines `foreach`'s collection parsing with `parallel`'s concurrency model:
```yaml
- parallel_foreach:
    collection: "{{steps.modules}}"
    item_var: "module"
    max_concurrency: 3
    fail_fast: false
    output_key: "results"
    steps:
      - name: "Analyze {{module}}"
        prompt: "Analyze module: {{module}}"
        output_key: "analysis"
```

| Option | Default | Description |
|---|---|---|
| `collection` | (required) | JSON array, comma-separated, or newline-separated |
| `item_var` | `"item"` | Variable name for the current item |
| `max_concurrency` | unlimited | Max items processing simultaneously |
| `fail_fast` | `false` | Cancel remaining items on first failure |
| `output_key` | `null` | Store all results as a JSON array |
## Approval

Pauses pipeline execution and prompts the user for confirmation:
```yaml
- approval:
    name: "Approve deployment plan"
    message: "Review the plan and confirm you want to proceed."
    display_key: deployment_plan
```

| Field | Required | Description |
|---|---|---|
| `name` | No | Label in pipeline progress (default: "Approval") |
| `description` | No | Sub-label under the step name |
| `message` | No | Message in the approval panel |
| `display_key` | No | Shows a previous step's output as a preview |

User input:

| Action | Accepted Input |
|---|---|
| ✅ Approve | `y`, `yes`, `ok` |
| ❌ Reject | `n`, `no`, or Enter |
> [!CAUTION]
> On rejection the pipeline stops immediately with a Pipeline Failed banner.
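A typical pairing generates a plan first, then gates on it, so the approval panel can preview the plan via `display_key` (key and prompt here are illustrative):

```yaml
- name: "Draft deployment plan"
  prompt: "Propose a step-by-step deployment plan."
  output_key: "deployment_plan"

- approval:
    name: "Approve deployment plan"
    message: "Review the plan and confirm you want to proceed."
    display_key: deployment_plan   # shows the drafted plan in the approval panel
```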
## Use Pipeline

Executes another YAML pipeline as a sub-pipeline. Enables modular, reusable pipelines that can be composed together.
```yaml
- name: "Run analysis"
  use_pipeline: ./analysis.yml
  inputs:
    source: "{{steps.file_list}}"
    focus: "architecture"
  output_key: analysis_result
  optional: false
```

| Field | Required | Description |
|---|---|---|
| `name` | Yes | Step identifier |
| `use_pipeline` | Yes | Relative or absolute path to the child YAML pipeline |
| `inputs` | No | Input mappings from parent context to child pipeline inputs |
| `output_key` | No | Store the child pipeline's output for later steps |
| `optional` | No | If `true`, failure returns Skipped instead of Failed |
Parent variables and step outputs can be passed to the child pipeline via the `inputs` field. Template variables (`{{steps.xxx}}`, `{{variable}}`) are resolved in the parent's context before being passed:
```yaml
- name: "Run sub-pipeline"
  use_pipeline: ./child.yml
  inputs:
    task: "{{task}}"                     # from parent input
    data: "{{steps.previous_output}}"    # from parent step output
```

When the child pipeline finishes:

- If the child declares explicit `outputs`, only those are exposed
- If the child has no `outputs` section, all child step outputs are available
- When `output_key` is set, child outputs are stored under that single key
- When `output_key` is not set, child outputs merge directly into the parent context (without overwriting)
CodeGenesis detects circular references (pipeline A calls B, B calls A) and fails immediately with a clear error. Detection is per async call chain, so parallel branches are isolated.
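A minimal cycle that the detector would reject might look like this (file names are hypothetical):

```yaml
# a.yml
steps:
  - name: "Call B"
    use_pipeline: ./b.yml

# b.yml — calling back into a.yml completes the cycle,
# so running a.yml fails immediately with a circular-reference error:
# steps:
#   - name: "Call A"
#     use_pipeline: ./a.yml
```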
**main.yml** (parent):

```yaml
steps:
  - name: Prepare data
    prompt: "List the key source files."
    output_key: file_list

  - name: Run analysis
    use_pipeline: ./analysis.yml
    inputs:
      source: "{{steps.file_list}}"
      focus: "architecture and code organization"
    output_key: analysis_result

  - name: Generate report
    prompt: |
      Generate a report based on:
      {{steps.analysis_result}}
```

**analysis.yml** (child):

```yaml
inputs:
  source:
    description: "Source to analyze"
  focus:
    description: "Analysis focus area"

steps:
  - name: Analyze code
    prompt: "Analyze {{source}} for {{focus}}"
    output_key: analysis

outputs:
  analysis:
    source: analysis
    description: "Analysis results"
```

> [!TIP]
> Use `use_pipeline` to break large pipelines into focused, reusable modules. Each child pipeline can be tested and run independently.
## Nesting

Foreach and parallel can be nested. For example, "for each module, run lint and test in parallel":
```yaml
- foreach:
    collection: "{{steps.modules}}"
    item_var: "module"
    steps:
      - parallel:
          branches:
            - name: "Lint"
              steps:
                - name: "Lint {{module}}"
                  prompt: "Lint module {{module}}"
            - name: "Test"
              steps:
                - name: "Test {{module}}"
                  prompt: "Test module {{module}}"
```

```mermaid
graph TD
    FE[foreach modules] --> M1[module-auth]
    FE --> M2[module-api]
    FE --> M3[module-db]
    M1 --> P1["Lint ∥ Test"]
    M2 --> P2["Lint ∥ Test"]
    M3 --> P3["Lint ∥ Test"]
```
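The inverse composition is also possible with the options documented above: `parallel_foreach` processes the modules concurrently while each module's sub-steps run in sequence, as in this sketch:

```yaml
- parallel_foreach:
    collection: "{{steps.modules}}"
    item_var: "module"
    max_concurrency: 3     # up to three modules in flight at once
    steps:
      - name: "Lint {{module}}"
        prompt: "Lint module {{module}}"
      - name: "Test {{module}}"
        prompt: "Test module {{module}}"
```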