
How execution works

A conceptual walkthrough of how Swirls executes workflows, from trigger to checkpointed output.

The syntax reference gets you writing .swirls files. This page explains what happens after you write them: how a trigger becomes a running graph, how the engine handles failure and resumption, and why local and cloud execution are identical.

The execution lifecycle

Every workflow run follows the same four stages.

Trigger fires
     |
     v
 Graph selected
     |
     v
 Nodes execute (one at a time, depth-first through the DAG)
     |
     v
 Checkpoint written after each node completes

When a trigger fires (a form is submitted, a webhook arrives, a schedule ticks), the engine selects the bound graph and starts a new execution. An execution is a persistent record that tracks every node's status, input, and output.

The DAG model

A graph is a directed acyclic graph. Nodes are vertices. Edges define dependencies. No cycles are allowed.

This structure has a direct operational consequence: the engine can determine exactly which nodes are ready to run at any moment. A node is ready when all its upstream dependencies have completed. If two branches have no shared ancestors, they run without waiting for each other.

         root
        /    \
    enrich  validate
        \    /
        combine

In this graph, enrich and validate are both unblocked as soon as root completes. combine waits for both. The engine never needs to coordinate between them; the topology expresses the dependency.
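The readiness rule can be sketched in a few lines. The graph shape mirrors the diagram above; the `deps` map and `ready_nodes` helper are illustrative, not engine internals.

```python
# upstream dependencies for each node, matching the diagram
deps = {
    "root": [],
    "enrich": ["root"],
    "validate": ["root"],
    "combine": ["enrich", "validate"],
}

def ready_nodes(deps, done):
    # a node is ready when it hasn't run and all upstreams are complete
    return [n for n, ups in deps.items()
            if n not in done and all(u in done for u in ups)]

print(ready_nodes(deps, set()))                           # ['root']
print(ready_nodes(deps, {"root"}))                        # ['enrich', 'validate']
print(ready_nodes(deps, {"root", "enrich", "validate"}))  # ['combine']
```

The topology alone determines the schedule: as soon as root is done, both branches become ready, and combine waits for both.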

Conditional branching uses switch nodes. A switch node runs its router function and selects exactly one labeled edge; only the selected branch executes.

         root
           |
        classify
       /         \
  handle_high  handle_low

No other flow control exists at the DSL level. Iteration uses map (one child-graph run per item) or while (repeated runs until a condition is false). Both are expressed as nodes, not control-flow keywords.
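Switch routing amounts to a single lookup. In this sketch the edge labels, the router function, and the score threshold are all illustrative assumptions, not part of the Swirls API.

```python
# labeled edges out of the classify node, matching the diagram
edges = {"high": "handle_high", "low": "handle_low"}

def route(score):
    # router function: returns exactly one edge label
    return "high" if score >= 0.8 else "low"

next_node = edges[route(0.93)]   # 'handle_high'; handle_low never executes
```

Usage: `edges[route(0.2)]` selects `handle_low` instead. In either case exactly one branch runs; the unselected branch is never scheduled.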

Durable execution and checkpointing

Every node writes its output to the execution record before the next node starts. If the worker crashes mid-run, or if the process is restarted, the execution resumes from the last completed checkpoint.

 root: DONE     (checkpointed)
 enrich: DONE   (checkpointed)
 validate: DONE (checkpointed)
 combine: ---   (next to run on resume)

This means:

  • Completed nodes never re-execute on resume.
  • Node side effects (emails sent, database writes, API calls) are not replayed.
  • An execution that times out or errors at one node does not lose the work already done.

Checkpointing is automatic. No configuration is required. The engine handles it for every node, on every execution.
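The resume behavior can be sketched as a skip-completed loop. Node names match the earlier diagram; the `checkpoints` map and `run_node` function are illustrative stand-ins for the execution record and real node bodies.

```python
order = ["root", "enrich", "validate", "combine"]

# outputs already persisted before the crash
checkpoints = {"root": {"ok": True}, "enrich": {"ok": True}}

executed = []

def run_node(name):
    executed.append(name)        # stands in for the node's side effects
    return {"ok": True}

for name in order:
    if name in checkpoints:      # already DONE: reuse stored output, never re-run
        continue
    checkpoints[name] = run_node(name)

print(executed)                  # ['validate', 'combine']
```

Only the nodes after the last checkpoint execute, which is why side effects in completed nodes are never replayed.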

Pause and resume

Some nodes pause execution and wait for an external signal before continuing.

A node marked review: true pauses execution until a human approves or rejects. When the reviewer responds, the execution resumes from that node's output.

wait nodes pause for a duration (amount and unit) before the next node runs.

graph nodes (subgraphs) and map / while nodes can trigger child executions. Each child execution is its own checkpointed record. The parent execution waits for all children to complete before proceeding.

A paused execution holds its state indefinitely. There is no timeout on a paused review. Executions resume the moment the signal arrives, regardless of how much time has passed.
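A paused execution is, in effect, just a stored record awaiting a signal; no timer runs while it waits. This sketch is a simplified assumption of that mechanism; the record shape, `resume` function, and execution id are hypothetical.

```python
# paused executions keyed by id; state persists indefinitely
paused = {
    "exec_42": {"at_node": "approve_contact",
                "state": {"email": "[email protected]"}},
}

def resume(exec_id, decision):
    # the signal arrives: remove the paused record and continue
    # from the paused node, however much time has passed
    record = paused.pop(exec_id)
    return {"node": record["at_node"], "decision": decision,
            **record["state"]}

result = resume("exec_42", "approved")
```

After `resume`, the engine continues downstream of the paused node with the reviewer's decision available in context.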

Storage

Where execution state lives depends on the environment.

Local development: the worker uses a SQLite database on disk. The default path is managed by the CLI. All node outputs, execution status, and stream data live in this file.

Cloud: the platform uses a managed PostgreSQL database. All execution data is stored with encryption at rest. You never configure or manage the database directly.

The engine binary is identical in both environments. Local runs behave the same as cloud runs. There is no "development mode" with different semantics.

An annotated execution walkthrough

This example traces a single execution of a three-node graph.

Workflow: process_contact

Trigger: form submission { email: "[email protected]" }

---

[1] root node starts
    Input: { email: "[email protected]" }
    Code: normalize email
    Output: { email: "[email protected]" }
    Status: DONE
    Checkpoint written.

[2] summarize node starts
    Input: context.nodes.root.output -> { email: "[email protected]" }
    AI call: generate summary
    Output: { text: "New contact from [email protected]." }
    Status: DONE
    Checkpoint written.

[3] notify node starts
    Input: context.nodes.summarize.output.text
    Resend call: send email to [email protected]
    Output: { id: "re_abc123" }
    Status: DONE
    Checkpoint written.

Execution complete.

If the worker restarted after step 1, the engine would skip the root node (already checkpointed) and pick up at step 2.

How graphs connect to other graphs

A graph node calls another graph as a subgraph. The parent execution creates a child execution, passes the specified input, and waits for the child to complete.

A map node creates one child execution per item in a list. A while node creates child executions repeatedly until the condition is false or maxIterations is reached.

All child executions are checkpointed independently. If a child fails, the parent sees the failure and applies its failurePolicy (if configured). See Failure policies for the available strategies.
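A map node's fan-out can be sketched like this. The item list and `run_child` function are illustrative; real child executions are full checkpointed records, and failure handling (the failurePolicy) is elided here.

```python
items = ["[email protected]", "[email protected]", "[email protected]"]

def run_child(item):
    # stands in for one full child execution of the child graph
    return {"input": item, "status": "DONE"}

# one child execution per item, each checkpointed independently
children = [run_child(i) for i in items]

# the parent proceeds only once every child has completed
all_done = all(c["status"] == "DONE" for c in children)
```

If any child failed instead, the parent would see that status and apply its configured failurePolicy before proceeding.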

Streams

A stream block captures a graph's output as a typed, persistent record each time the graph runs. You read from a stream using a type: stream node in another graph.

Streams are the primary way to share data between graphs. One graph produces records; another graph queries them. The two graphs run independently; the stream is the interface between them.
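The producer/reader relationship can be sketched as append and query operations over a shared record store. This is a conceptual model only; the record fields and the `produce`/`query` helpers are assumptions, not the stream API.

```python
stream = []  # stands in for the persisted, typed stream records

def produce(record):
    # the writer graph emits one record per run via its stream block
    stream.append(record)

def query(predicate):
    # the reader graph's stream node filters persisted records
    return [r for r in stream if predicate(r)]

produce({"email": "[email protected]", "score": 0.9})
produce({"email": "[email protected]", "score": 0.4})

high = query(lambda r: r["score"] > 0.5)
```

The two graphs never call each other; the stream's records are the only coupling between them.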
