
CaddisFly

CaddisFly is a pipeline orchestration engine for OpenCaddis. It lets agents define and execute multi-step command pipelines using either a compact inline DSL or structured YAML workflow files. Pipelines support parallel execution, approval gates, retry logic, variable substitution, and built-in safety controls.

Quick Start — Inline DSL
# Run a simple three-step pipeline
RunPipeline "echo hello >> dotnet build >> dotnet test"
Quick Start — YAML Workflow
RunWorkflow "build-and-test"

Tools

CaddisFly exposes 7 tools to the agent:

Tool           Parameters                Description
RunPipeline    pipeline                  Execute an inline DSL pipeline string
RunWorkflow    workflowName, variables?  Execute a named YAML workflow file with optional variable overrides
ResumeRun      runId                     Resume a paused run (e.g., after approval)
GetRunStatus   runId                     Get the current status and step details of a run
CancelRun      runId                     Cancel a running or paused pipeline
ListWorkflows  —                         List all available YAML workflow files
GetRunLogs     runId                     Retrieve the full log output for a completed or running pipeline

Run Statuses

Every pipeline run has a status that the agent can check with GetRunStatus:

Status         Meaning
Ok             All steps completed successfully
NeedsApproval  Pipeline is paused at an approval gate — call ResumeRun to continue
Cancelled      Pipeline was cancelled via CancelRun
Error          A step failed and retries (if configured) were exhausted
TimedOut       A step exceeded the configured timeout

Inline DSL

The inline DSL is a compact string syntax for defining pipelines directly in a RunPipeline call. Steps are separated by >> and executed sequentially. Each step runs a command and the pipeline stops on the first failure (unless retries are configured).
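The grammar is simple enough to sketch a parser for. The following Python sketch is illustrative only, not CaddisFly's actual implementation (the function name parse_pipeline is made up); it splits a pipeline string into sequential steps, parallel groups, and approval gates:

```python
import re

def parse_pipeline(dsl: str):
    """Split an inline DSL string into a list of steps.

    Sequential steps become strings; a bracketed group becomes a
    list of the commands that run in parallel; [APPROVE] becomes an
    approval-gate marker. Toy parser for illustration only -- it does
    not handle nested brackets or quoting.
    """
    steps = []
    for part in dsl.split(">>"):
        part = part.strip()
        group = re.fullmatch(r"\[(.*)\]", part)
        if group:
            inner = group.group(1)
            if inner.strip() == "APPROVE":
                steps.append({"approve": True})
            else:
                steps.append([cmd.strip() for cmd in inner.split(",")])
        else:
            steps.append(part)
    return steps
```

For example, parse_pipeline("build >> [a, b] >> deploy") yields ["build", ["a", "b"], "deploy"].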

Syntax
DSL Format
# Sequential steps
"step1 >> step2 >> step3"

# Parallel group (steps inside [ ] run concurrently)
"step1 >> [stepA, stepB, stepC] >> step4"

# Approval gate
"build >> [APPROVE] >> deploy"

# With flags
"build --retry=3 >> test --timeout=120 >> deploy"
Step Flags
Flag         Example                    Description
--retry=N    dotnet build --retry=3     Retry the step up to N times on failure
--timeout=N  dotnet test --timeout=120  Step timeout in seconds (overrides the global default)
Examples
DSL Examples
# Simple build-and-test
"dotnet build >> dotnet test"

# Parallel fetch, then merge
"echo starting >> [curl https://api.example.com/a, curl https://api.example.com/b] >> echo done"

# Build, approve, then deploy with retries
"dotnet build >> dotnet test >> [APPROVE] >> dotnet publish --retry=2"

# Using variables
"echo {{env}} >> dotnet publish -c {{config}}"

YAML Workflows

For complex or reusable pipelines, define workflows as YAML files in the configured workflow directory. Invoke them by name with RunWorkflow.

Schema
workflows/build-and-test.yaml
name: build-and-test
description: Build the project and run tests
variables:
  config: Release
  verbosity: minimal
steps:
  - name: Restore
    command: dotnet restore
  - name: Build
    command: dotnet build -c {{config}} --verbosity {{verbosity}}
  - name: Test
    command: dotnet test -c {{config}}
    retry: 2
    timeout: 120
Variable Substitution

Variables are defined in the workflow's variables section and referenced with {{name}} syntax. Variables passed to RunWorkflow override the defaults.

Overriding Variables
# Use defaults from YAML
RunWorkflow "build-and-test"

# Override the config variable
RunWorkflow "build-and-test" variables: { "config": "Debug" }
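The substitution and override rules can be sketched in a few lines of Python. This is an illustrative sketch, not CaddisFly's implementation; the substitute function name is made up:

```python
import re

def substitute(command: str, defaults: dict, overrides=None) -> str:
    """Replace {{name}} placeholders with variable values.

    Overrides passed at invocation time take precedence over the
    defaults declared in the workflow's `variables` section.
    Unknown placeholders are left untouched.
    """
    variables = {**defaults, **(overrides or {})}
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        command,
    )
```

So substitute("dotnet build -c {{config}}", {"config": "Release"}, {"config": "Debug"}) produces "dotnet build -c Debug".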
Workflow Resolution

CaddisFly looks for workflow files in the configured WorkflowPath directory. Files must have a .yaml or .yml extension. The workflow name used in RunWorkflow matches the filename without the extension.
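The resolution rule maps cleanly to a small lookup. A Python sketch of that rule (the resolve_workflow helper is made up for illustration):

```python
from pathlib import Path

def resolve_workflow(workflow_path: str, name: str) -> Path:
    """Map a workflow name to a YAML file on disk.

    Mirrors the rule described above: the workflow name is the
    filename without its .yaml/.yml extension.
    """
    for ext in (".yaml", ".yml"):
        candidate = Path(workflow_path) / f"{name}{ext}"
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(f"No workflow named '{name}' in {workflow_path}")
```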

Parallel Execution

Steps can run in parallel using groups. In the inline DSL, wrap steps in square brackets. In YAML, set parallel: true on a group of steps.

Behavior
  • All steps in a parallel group start at the same time
  • The pipeline waits for all parallel steps to complete before moving to the next step
  • If any parallel step fails, the entire group is marked as failed
  • Each parallel step's output is captured independently
YAML Parallel Group
steps:
  - name: Setup
    command: echo preparing
  - name: Parallel Fetch
    parallel: true
    steps:
      - name: Fetch API
        command: curl https://api.example.com/data
      - name: Fetch Config
        command: curl https://api.example.com/config
  - name: Process
    command: echo processing results
DSL Parallel Group
"echo preparing >> [curl https://api.example.com/data, curl https://api.example.com/config] >> echo processing results"
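Under these rules a parallel group behaves like a fan-out/fan-in. A minimal Python sketch, assuming a run_command callable that executes one step and returns (ok, output), and using the documented cap of 10 concurrent steps as the default:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_group(commands, run_command, max_parallel=10):
    """Run a parallel group and collect each step's result.

    All steps start together, the group waits for every step to
    finish, each step's output is captured independently, and the
    group is marked failed if any step fails. Illustrative sketch.
    """
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # pool.map preserves input order, so results line up with commands
        results = list(pool.map(run_command, commands))
    group_ok = all(ok for ok, _ in results)
    return group_ok, results
```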

Approval Gates

Approval gates pause a pipeline and require explicit approval before continuing. This is useful for deployment pipelines or any workflow where a human should review progress before the next step.

DSL Syntax
Inline Approval
"dotnet build >> dotnet test >> [APPROVE] >> dotnet publish"
YAML Syntax
YAML Approval Step
steps:
  - name: Build
    command: dotnet build
  - name: Approve Deploy
    approve: true
  - name: Deploy
    command: dotnet publish
How It Works
  1. Pipeline reaches the approval step and pauses with status NeedsApproval
  2. The agent reports the paused status and run ID to the user
  3. The user (or agent) calls ResumeRun with the run ID to continue
  4. If the run is cancelled instead, remaining steps are skipped
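The flow above can be sketched as a polling loop. The client object here is hypothetical, a stand-in for the agent invoking GetRunStatus, ResumeRun, and CancelRun as tools, and its method names are made up:

```python
import time

def wait_and_approve(client, run_id, poll_seconds=2.0, approve=lambda: True):
    """Poll a run until it finishes; resume it when it pauses.

    Terminal statuses come from the Run Statuses table (Ok,
    Cancelled, Error, TimedOut). The approve callback stands in
    for asking a human. Illustrative sketch only.
    """
    terminal = {"Ok", "Cancelled", "Error", "TimedOut"}
    while True:
        status = client.get_run_status(run_id)
        if status == "NeedsApproval":
            if approve():
                client.resume_run(run_id)   # continue past the gate
            else:
                client.cancel_run(run_id)   # skip remaining steps
        elif status in terminal:
            return status
        time.sleep(poll_seconds)
```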

Retry Logic

Individual steps can be configured to retry on failure. The step is re-executed up to the specified number of times before the pipeline is marked as failed.

DSL
"dotnet build --retry=3 >> dotnet test --retry=2"
YAML
steps:
  - name: Build
    command: dotnet build
    retry: 3
Behavior
  • Retries are immediate — no delay between attempts
  • Each attempt's output is captured in the run log
  • If all retries are exhausted, the step status is Error
  • Retry count does not apply to approval gates or built-in commands like echo
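A minimal sketch of this behavior in Python, assuming a run_command callable that returns (ok, output); the clamp to 5 reflects the maximum documented under Limits:

```python
def run_with_retry(run_command, command, retry=0, max_retries=5):
    """Execute a step with immediate retries on failure.

    Matching the behavior above: retries are immediate, every
    attempt's output is kept, and the step ends in Error once all
    retries are exhausted. Illustrative sketch only.
    """
    attempts = []
    for _ in range(min(retry, max_retries) + 1):
        ok, output = run_command(command)
        attempts.append(output)   # each attempt's output is captured
        if ok:
            return "Ok", attempts
    return "Error", attempts
```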

Built-in Commands

CaddisFly provides built-in commands that run inside the engine without spawning an external process:

Command  Usage               Description
echo     echo Hello world    Output a message — useful for logging, debugging, or producing pipeline output
set-var  set-var name=value  Set a pipeline variable that subsequent steps can reference with {{name}}
approve  approve             Explicit approval gate (equivalent to [APPROVE] in DSL syntax)

Supported Commands

CaddisFly can execute any command available on the host, subject to the safety blocklist. The following executables are commonly used:

Category   Commands
.NET       dotnet
Node.js    node, npm, npx
Git        git (read-only operations — see Safety)
HTTP       curl, wget
Docker     docker
Utilities  echo, cat, ls, dir, find, grep

You can also register custom commands via the CustomCommands configuration setting.

Safety

CaddisFly enforces a strict safety blocklist to prevent destructive or dangerous operations. Any command matching a blocked pattern is rejected before execution.

Blocked Patterns
Destructive Commands
rm -rf    rmdir /s    del /f    format    fdisk
diskpart  mkfs        dd        shred     wipe
Eval / Dynamic Execution
eval          invoke-expression    iex
exec          source               bash -c
cmd /c        powershell -command  pwsh -command
Language-Specific Execution
python -c    python3 -c    ruby -e    perl -e    php -r
Git (Destructive)
git push --force    git reset --hard    git clean -f
git checkout .      git branch -D
System Path Modifications
setx       export PATH=    set PATH=
reg add    reg delete      chmod 777
Custom Commands

Custom commands registered via configuration are still subject to the safety blocklist. You cannot bypass blocked patterns by registering them as custom commands.

Configuration

Configure CaddisFly via the Plugin Args section in opencaddis.json:

Setting                     Default                           Description
CaddisFly:WorkingDirectory  {TempPath}/OpenCaddis/            Base directory for command execution and run logs
CaddisFly:WorkflowPath      {TempPath}/OpenCaddis/workflows/  Directory containing YAML workflow files
CaddisFly:TimeoutSeconds    60                                Default timeout per step (overridable per step)
CaddisFly:MaxOutputLength   10000                             Maximum characters captured per step output
CaddisFly:CustomCommands    (empty)                           Comma-separated list of additional allowed executables
opencaddis.json — CaddisFly Config
{
  "Agents": [
    {
      "Handle": "devops",
      "Plugins": ["CaddisFly", "FileSystem"],
      "Args": {
        "CaddisFly:WorkingDirectory": "C:\\Projects\\MyApp",
        "CaddisFly:WorkflowPath": "C:\\Projects\\MyApp\\workflows",
        "CaddisFly:TimeoutSeconds": "120",
        "CaddisFly:CustomCommands": "terraform,kubectl"
      }
    }
  ]
}

Running in Docker

By default, CaddisFly uses a temp directory inside the container (/tmp/OpenCaddis/) for its working directory, workflow files, and run logs. This means workflows and run history are lost when the container restarts. To persist them, mount a host volume to /working and configure CaddisFly to use it.

Important

Without a volume mount, workflow YAML files and pipeline run logs exist only inside the container. If the container is removed or recreated, they are gone. Always mount a volume for production or development use.

Step 1: Run with a Volume Mount

Mount a host directory to /working in the container. The OpenCaddis Dockerfile already creates this directory with the correct permissions.

Linux / macOS
docker run -d \
  -p 5000:5000 \
  -v ~/opencaddis/working:/working \
  -v ~/opencaddis/data:/app/data \
  -v ~/opencaddis/keys:/app/.keys \
  --name opencaddis \
  vulcan365/opencaddis
Windows (PowerShell)
docker run -d `
  -p 5000:5000 `
  -v C:\opencaddis\working:/working `
  -v C:\opencaddis\data:/app/data `
  -v C:\opencaddis\keys:/app/.keys `
  --name opencaddis `
  vulcan365/opencaddis
Step 2: Configure CaddisFly Paths

Point the plugin at the mounted volume. In opencaddis.json, set the working directory and workflow path to subdirectories under /working:

opencaddis.json — Docker Paths
{
  "Agents": [
    {
      "Handle": "devops",
      "Plugins": ["CaddisFly", "FileSystem"],
      "Args": {
        "CaddisFly:WorkingDirectory": "/working/caddisfly/",
        "CaddisFly:WorkflowPath": "/working/caddisfly/workflows/",
        "CaddisFly:TimeoutSeconds": "120",
        "CaddisFly:CustomCommands": "terraform,kubectl"
      }
    }
  ]
}
Container Path Mapping
Setting                     Container Path                 Host Path (via mount)
CaddisFly:WorkingDirectory  /working/caddisfly/            C:\opencaddis\working\caddisfly\
CaddisFly:WorkflowPath      /working/caddisfly/workflows/  C:\opencaddis\working\caddisfly\workflows\
Run logs                    /working/caddisfly/runs/       C:\opencaddis\working\caddisfly\runs\
Data (SQLite)               /app/data/                     C:\opencaddis\data\
Keys                        /app/.keys/                    C:\opencaddis\keys\
Step 3: Add Workflow Files on the Host

Create YAML workflow files in the mounted directory on your host machine. They are immediately available to the agent inside the container.

Create a workflow from the host
# Create the workflows directory
mkdir C:\opencaddis\working\caddisfly\workflows

# Add a workflow file (or create it in Explorer / VS Code)
# C:\opencaddis\working\caddisfly\workflows\build.yaml
C:\opencaddis\working\caddisfly\workflows\build.yaml
name: build
description: Build and test the project
steps:
  - name: Restore
    command: dotnet restore
  - name: Build
    command: dotnet build -c Release
  - name: Test
    command: dotnet test -c Release
    retry: 2

The agent can now run: RunWorkflow "build"

Why bind mounts?

On Windows with Docker Desktop (WSL2), named volumes (e.g. opencaddis-data:/app/data) are stored inside the WSL2 virtual machine and are not easily browsable from Windows Explorer. Using bind mounts with explicit Windows paths (e.g. C:\opencaddis\data:/app/data) keeps all files visible and editable on your host.

Docker Compose
docker-compose.yaml
services:
  opencaddis:
    image: vulcan365/opencaddis
    ports:
      - "5000:5000"
    volumes:
      - C:\opencaddis\working:/working          # Windows
      - C:\opencaddis\data:/app/data            # Windows
      - C:\opencaddis\keys:/app/.keys           # Windows
      # ~/opencaddis/working:/working            # Linux/macOS
      # ~/opencaddis/data:/app/data              # Linux/macOS
      # ~/opencaddis/keys:/app/.keys             # Linux/macOS
    restart: unless-stopped
Other Plugins

The /working volume is shared across plugins. You can also configure FileSystem:RootPath, PowerShell:WorkingDirectory, and TaskManager:RootPath to use subdirectories under /working so all persistent data lives on the mount.

Run History

Every pipeline run is logged to disk for debugging and auditing. Logs are stored in a runs subdirectory under the working directory.

Log Directory Structure
{WorkingDirectory}/runs/
  abc123def456.json                  # Run state
  abc123def456/step-000-Restore.log  # Step logs
  abc123def456/step-001-Build.log
  abc123def456/step-002-Test.log
Log Format

The run state file ({runId}.json) is JSON and contains:

  • runId — unique run identifier
  • pipeline — the original DSL string or workflow name
  • status — final run status
  • startedAt / completedAt — timestamps
  • steps — array of step results (name, command, status, output, duration)

Use GetRunLogs to retrieve log contents programmatically, or read them directly from the log directory.
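Given that structure, the run state JSON is straightforward to consume. A Python sketch (the summarize_run helper is made up for illustration; field names follow the Log Format list above):

```python
import json

def summarize_run(log_json: str) -> str:
    """Produce a one-line summary from a run's state JSON.

    Reads runId, status, and the steps array (name, status) as
    described in Log Format. Illustrative sketch only.
    """
    run = json.loads(log_json)
    failed = [s["name"] for s in run["steps"] if s["status"] != "Ok"]
    line = f"{run['runId']}: {run['status']} ({len(run['steps'])} steps)"
    if failed:
        line += f", failed: {', '.join(failed)}"
    return line
```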

Limits

Limit                     Value         Description
Max steps per pipeline    50            Maximum number of steps (including parallel sub-steps) in a single run
Max parallel steps        10            Maximum concurrent steps in a single parallel group
Max output per step       10,000 chars  Configurable via MaxOutputLength
Default timeout per step  60 seconds    Configurable via TimeoutSeconds or per-step --timeout
Max retries per step      5             Maximum value for the retry setting
Max concurrent runs       5             Maximum number of pipelines running simultaneously per agent

Example Workflows

CI/CD Deploy
workflows/deploy.yaml
name: deploy
description: Build, test, approve, and deploy
variables:
  config: Release
  target: production
steps:
  - name: Restore
    command: dotnet restore
  - name: Build
    command: dotnet build -c {{config}}
  - name: Test
    command: dotnet test -c {{config}}
    retry: 2
  - name: Approve
    approve: true
  - name: Deploy
    command: dotnet publish -c {{config}} -o ./publish
Parallel Data Fetch
workflows/data-fetch.yaml
name: data-fetch
description: Fetch data from multiple APIs in parallel
variables:
  base_url: https://api.example.com
steps:
  - name: Fetch All
    parallel: true
    steps:
      - name: Users
        command: curl {{base_url}}/users
      - name: Orders
        command: curl {{base_url}}/orders
      - name: Products
        command: curl {{base_url}}/products
  - name: Report
    command: echo Fetch complete
Health Check
workflows/health-check.yaml
name: health-check
description: Verify services are running
steps:
  - name: Check Services
    parallel: true
    steps:
      - name: API
        command: curl -f http://localhost:5000/health
        retry: 3
      - name: Database
        command: curl -f http://localhost:5432/health
        retry: 3
  - name: Report
    command: echo All services healthy
Git Status Report
Inline DSL
"git status >> git log --oneline -5 >> echo Report complete"