Swirls Team · 7 min read

Introducing Swirls

A declarative language and durable runtime for agentic workflows. Define forms, webhooks, AI steps, and human review in a single .swirls file. Then deploy in one command.


It's still weirdly difficult to wire up an AI workflow that actually runs in production.

We learned this firsthand. A client came to us with a straightforward ask: let his team fill out a form, run the submission through internal policies with AI to generate social media drafts, review those drafts before they go live, and post them. Simple enough on a whiteboard.

In practice, it meant stitching together a form product, an n8n instance, a notification system, a review step with no obvious home, and a second n8n workflow to handle the posting. And we still had open questions:

  • Where do we store his social media API keys securely?
  • What happens when the workflow needs tweaking? Hand him an n8n login and hope for the best?
  • How do we show a realistic preview of what'll actually get posted to Reddit?
  • How do we audit what the AI generated and what a human approved?

We're engineers, so we did what engineers love to do and built a bespoke app. Clerk for auth, encrypted secrets, and a custom review UI. It worked well, but it didn't scale the way we hoped. Changing a workflow meant changing code. Onboarding another client meant spinning up another deployment. And managing secrets across projects was its own part-time job.

The app solved the immediate problem. It also showed us a gap in the landscape that nobody seemed to be filling.

# The gap

On one side, you have visual automation tools like n8n, Zapier, and Make. They're easy to start with, but they buckle under anything non-trivial: no real secrets management, no durable execution, no human-in-the-loop, no way to version or audit what changed and when.

On the other side, you have workflow libraries like LangChain, Mastra, and Temporal. These are genuinely powerful, but you're writing and maintaining imperative code, managing infrastructure, and rebuilding the same patterns from scratch for every project.

We wanted something in between. The ease of a configuration file with the rigor of real code. Something you could read, review, version in git, and hand to an LLM to write for you.

What SQL did to data queries. What Terraform did to infrastructure. We wanted to do that to agentic workflows.

So we built a language.

# The .swirls file

A .swirls file is a single, declarative definition of an entire workflow. It can contain forms, webhooks, schedules, AI steps, HTTP calls, conditional routing, human review gates, secrets, and email sends. Everything lives in one file.

Here's a support ticket triage system in a few dozen lines of DSL:

form support_ticket {
  label: "Support Ticket"
  enabled: true
  schema: @json {
    {
      "type": "object",
      "required": ["email", "subject", "body"],
      "properties": {
        "email": { "type": "string" },
        "subject": { "type": "string" },
        "body": { "type": "string" }
      }
    }
  }
}

graph triage_ticket {
  label: "Triage Ticket"

  root {
    type: code
    label: "Normalize"
    code: @ts {
      const { email, subject, body } = context.nodes.root.input
      return {
        email: email.trim().toLowerCase(),
        subject: subject.trim(),
        body: body.trim(),
      }
    }
  }

  node classify {
    type: switch
    label: "Classify urgency"
    cases: ["urgent", "normal", "low"]
    router: @ts {
      const body = context.nodes.root.output.body.toLowerCase()
      if (body.includes("urgent") || body.includes("asap")) return "urgent"
      if (body.length > 500) return "normal"
      return "low"
    }
  }

  node escalate {
    type: ai
    label: "Draft escalation"
    model: "google/gemini-2.5-flash"
    prompt: @ts {
      return `Draft an urgent escalation for: ${context.nodes.root.output.subject}`
    }
  }

  node respond {
    type: ai
    label: "Draft response"
    model: "google/gemini-2.5-flash"
    prompt: @ts {
      return `Draft a support response for: ${context.nodes.root.output.subject}`
    }
  }

  node acknowledge {
    type: code
    label: "Auto-acknowledge"
    code: @ts {
      return { message: `We received: ${context.nodes.root.output.subject}` }
    }
  }

  flow {
    root -> classify
    classify -["urgent"]-> escalate
    classify -["normal"]-> respond
    classify -["low"]-> acknowledge
  }
}

trigger on_ticket {
  form:support_ticket -> triage_ticket
  enabled: true
}

That's a form, an AI-powered triage graph with conditional routing, and a trigger binding them together in a single file with no boilerplate.

You can read it top to bottom and know exactly what it does. Put it in a PR and your team can review the workflow like they'd review any other code. And because the syntax is compact and well-structured, LLMs turn out to be remarkably good at writing these files too (more on that later).

# Declarative where it counts, imperative where it matters

Most workflow DSLs force a choice. Visual builders like n8n give you drag-and-drop but trap dynamic logic inside Handlebars templates or expression editors. Minimal autocomplete, no type checking, and good luck chaining complex logic inside of {{ }}. Code-first frameworks like LangChain give you full TypeScript but force you to wire up the graph structure imperatively, burying the workflow shape in boilerplate.

.swirls doesn't make you choose. The structure (forms, graphs, triggers, flow edges, node wiring) is declarative. You declare what connects to what. But every place a node needs real logic, you drop into TypeScript with @ts { }:

node classify {
  type: switch
  label: "Classify urgency"
  cases: ["urgent", "normal", "low"]
  router: @ts {
    const body = context.nodes.root.output.body.toLowerCase()
    if (body.includes("urgent") || body.includes("asap")) return "urgent"
    if (body.length > 500) return "normal"
    return "low"
  }
}

That @ts block is real TypeScript. context.nodes.root.output has typed fields, not a string you hope resolves at runtime. The LSP validates it, your editor gives you autocomplete on context, and if you reference a node that doesn't exist or a field that's not in the output schema, you get an error before execution.

Why go through this trouble? A few reasons.

No template jank. There's no expression language to learn, no Handlebars-style {{ }} interpolation, no square-peg runtime for round-hole logic. When you need a conditional, you write an if. When you need a transform, you write a function. Just TypeScript.

Type safety end to end. Node output schemas flow into downstream context types, and @ts blocks are checked against them. Errors surface in your editor, not after you deploy. This turns out to be especially important for LLMs writing .swirls files. The type system constrains the output space, so the model gets immediate signal on what's valid vs. what isn't.

The graph stays readable. Because the structural glue (flow, trigger, node declarations) is declarative, you can still read the workflow shape at a glance. The imperative logic lives where it belongs, inside individual nodes, not scattered across the graph wiring.
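To make the type-checking claim concrete, here's roughly what the classify router amounts to as plain TypeScript over an explicit context type. The `TriageContext` shape below is our reconstruction from the example graph, not Swirls' actual generated types:

```typescript
// Hypothetical shape of the typed `context` visible inside the classify
// node's @ts router. Field names mirror the example graph; the real
// generated types may differ.
interface TriageContext {
  nodes: {
    root: {
      input: { email: string; subject: string; body: string }
      output: { email: string; subject: string; body: string }
    }
  }
}

// The router body from the example, as an ordinary function over that type.
function classifyUrgency(context: TriageContext): 'urgent' | 'normal' | 'low' {
  const body = context.nodes.root.output.body.toLowerCase()
  if (body.includes('urgent') || body.includes('asap')) return 'urgent'
  if (body.length > 500) return 'normal'
  return 'low'
}

const ctx: TriageContext = {
  nodes: {
    root: {
      input: { email: 'A@B.com', subject: 'Help', body: 'Need this ASAP' },
      output: { email: 'a@b.com', subject: 'Help', body: 'Need this ASAP' },
    },
  },
}
console.log(classifyUrgency(ctx)) // "urgent" — body contains "asap"
```

Because the router is just a function over a concrete type, referencing `context.nodes.roott` or a field missing from the output schema fails at edit time, not at runtime.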

# Why this shape matters

The DSL isn't syntax for syntax's sake. The shape of a .swirls file solves real problems:

Workflows are DAGs, not scripts. Each graph is a directed acyclic graph of typed nodes. The runtime can execute independent branches in parallel, retry failed nodes without re-running the whole workflow, and give you a clear execution trace. You can't get that from imperative code without a lot of ceremony.
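That DAG property is what makes parallel branches cheap for a runtime. As a rough illustration (not Swirls' actual scheduler), a graph of nodes can be walked wave by wave, running everything whose dependencies are satisfied concurrently:

```typescript
// Illustrative parallel DAG execution, not Swirls' runtime internals.
type Graph = Record<string, string[]> // node -> list of dependencies

async function runDag(
  graph: Graph,
  exec: (node: string) => Promise<void>,
): Promise<string[]> {
  const done = new Set<string>()
  const order: string[] = []
  while (done.size < Object.keys(graph).length) {
    // Every node whose dependencies have all finished is ready now.
    const ready = Object.keys(graph).filter(
      (n) => !done.has(n) && graph[n].every((d) => done.has(d)),
    )
    if (ready.length === 0) throw new Error('cycle detected')
    // Independent branches in the same wave run concurrently.
    await Promise.all(ready.map(exec))
    for (const n of ready) { done.add(n); order.push(n) }
  }
  return order
}

// The triage example: root feeds classify, which fans out to three branches.
const graph: Graph = {
  root: [],
  classify: ['root'],
  escalate: ['classify'],
  respond: ['classify'],
  acknowledge: ['classify'],
}
runDag(graph, async () => {}).then((order) => console.log(order[0])) // "root"
```

The same structure is what lets a runtime retry a single failed node, or replay an execution trace, without re-running the branches that already succeeded.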

Secrets are scoped, not global. You declare secret blocks at the top of your file and reference them per-node. The runtime encrypts them at rest and injects only the specific keys each node declares. No flat process.env that every step can read.

secret social {
  vars: [TWITTER_API_KEY, REDDIT_CLIENT_SECRET]
}

node post_to_twitter {
  type: http
  secrets: {
    social: [TWITTER_API_KEY]
  }
  // only TWITTER_API_KEY is available here
}

Human-in-the-loop is a first-class primitive. Add a review block to any node and the execution pauses, notifies the right people, and resumes only after someone approves it. You don't need to bolt on an external ticketing system or roll your own polling loop.

node draft_email {
  type: ai
  label: "Draft Email"
  model: "anthropic/claude-sonnet-4-20250514"
  prompt: @ts {
    return `Draft a response to: ${context.nodes.root.output.message}`
  }
  review: {
    enabled: true
    label: "Review draft before sending"
  }
}

Triggers decouple entry points from logic. The same graph can be triggered by a form submission, a webhook, and a cron schedule. Change how a workflow starts without touching what it does.

trigger from_form {
  form:contact -> process_lead
  enabled: true
}

trigger from_webhook {
  webhook:inbound -> process_lead
  enabled: true
}

trigger nightly {
  schedule:daily_report -> generate_report
  enabled: true
}

# The runtime

A language without a runtime is just a spec. Swirls ships with both.

Locally, run swirls worker start and execute workflows on your machine. No cloud account needed, no internet required. Great for iterating on a workflow before it goes anywhere near production.

In production, run swirls cloud deploy and your workflows execute durably on Swirls Cloud. That means:

  • Memoized steps. If a node succeeds and the workflow fails downstream, the runtime won't re-run the successful node. It picks up where it left off.
  • Resumable execution. Human review nodes park the execution and resume it when approved, whether that's five minutes or five weeks later.
  • Tamper-evident audit logs. Every execution, node result, and review decision is recorded. When something goes wrong, you can trace exactly what happened.
  • Encrypted secrets. Project secrets are encrypted at rest and scoped per-node at execution time. No plaintext keys sitting in environment variables.
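The memoization idea is simple to sketch. This is not Swirls' implementation, just the pattern: step results are persisted under a stable key, so a re-run after a downstream failure skips anything that already succeeded:

```typescript
// Illustrative memoized-step runner (not Swirls' actual durable store).
type StepFn = () => Promise<unknown>

class MemoizedRun {
  // In a real runtime this store is durable; an in-memory Map stands in here.
  private results = new Map<string, unknown>()

  async step(id: string, fn: StepFn): Promise<unknown> {
    if (this.results.has(id)) return this.results.get(id) // skip completed work
    const out = await fn()
    this.results.set(id, out) // persist before moving on
    return out
  }
}

// Replaying a run re-executes only the steps that never completed.
async function demo() {
  const run = new MemoizedRun()
  let expensiveCalls = 0
  const expensive = async () => { expensiveCalls++; return 'normalized' }

  await run.step('root', expensive)
  await run.step('root', expensive) // memoized: no second call
  return expensiveCalls
}
demo().then((calls) => console.log(calls)) // logs 1
```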

You don't manage servers, job queues, databases, or retry logic. Write a .swirls file, deploy it, and the runtime handles the rest.

# 13 node types and growing

Every node in a Swirls graph has a specific type that determines what it does:

| Node | What it does |
| --- | --- |
| code | Sandboxed TypeScript transforms |
| ai | LLM calls — text, structured objects, images, embeddings |
| http | REST API calls to any endpoint |
| switch | Conditional routing with labeled edges |
| resend | Transactional email via Resend |
| firecrawl | Web scraping and crawling via Firecrawl |
| parallel | Web search and extraction with Parallel.ai |
| stream | Query persisted workflow outputs |
| graph | Compose subgraphs as nodes |
| wait | Pause execution for a duration or signal |
| bucket | Object storage operations |
| document | Document processing |

Each node type has a well-defined interface. Code nodes are sandboxed (no fetch, no fs, no process.env). If a node needs to hit an API, it uses an http node. If it needs an LLM, it uses an ai node. Side effects are always visible in the graph structure, which makes workflows much easier to reason about and debug.
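One way to picture "a well-defined interface per node type" is a discriminated union the runtime can dispatch on. The shapes below are our guess at the idea, not Swirls' actual type definitions:

```typescript
// Hypothetical node shapes (not Swirls' real types): each variant carries
// only the fields its type needs, and `type` discriminates between them.
type Node =
  | { type: 'code'; label: string; code: string }
  | { type: 'ai'; label: string; model: string; prompt: string }
  | { type: 'http'; label: string; secrets?: Record<string, string[]> }
  | { type: 'switch'; label: string; cases: string[]; router: string }

// A runtime (or a debugger) can dispatch on `type` exhaustively.
function describe(node: Node): string {
  switch (node.type) {
    case 'code': return `${node.label}: sandboxed transform`
    case 'ai': return `${node.label}: LLM call via ${node.model}`
    case 'http': return `${node.label}: outbound HTTP`
    case 'switch': return `${node.label}: routes to ${node.cases.join(', ')}`
  }
}

console.log(describe({ type: 'ai', label: 'Draft', model: 'google/gemini-2.5-flash', prompt: '' }))
// "Draft: LLM call via google/gemini-2.5-flash"
```

The payoff of this structure is the point made above: a code node physically has no field for an API call, so side effects can only enter the graph through node types built for them.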

# Type-safe integration with codegen

Run swirls form gen and your repo gets a swirls.gen.ts that registers each form's schema alongside your .swirls definitions. That keeps types aligned with the workflow, which is useful on its own, but the more interesting part is what your UI does with that output.

On the app side, useSwirlsFormAdapter from @swirls/sdk/form is the handoff: it ties your form UI to the workflow you already defined, so validation and submissions stay consistent with your .swirls file without bespoke API wiring. Use it with whatever form library you like.

The live demo on this site implements that with TanStack Form. Here is a shortened version of our real feedback form—the full component adds email, feedback body, rating, and error toasts, but the pattern is the same:

import { useSwirlsFormAdapter } from '@swirls/sdk/form'
import { useForm } from '@tanstack/react-form'

export function FeedbackForm() {
  const adapter = useSwirlsFormAdapter(
    // Fully type-safe: keyed to the `feedback` form in the associated `.swirls` file (ids, fields, and value shape come from codegen).
    'feedback',
    { name: '', email: '', feedback: '', rating: 5 },
  )

  const form = useForm({
    defaultValues: adapter.defaultValues,
    validators: {
      onChange: adapter.schema,
    },
    onSubmit: async ({ value }) => {
      await adapter.submit(value)
      form.reset()
    },
  })

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault()
        form.handleSubmit()
      }}
    >
      <form.Field name="name">
        {(field) => (
          <input
            value={field.state.value}
            onChange={(e) => field.handleChange(e.target.value)}
            onBlur={field.handleBlur}
          />
        )}
      </form.Field>
      {/* email, feedback, rating — same Field pattern */}
      <button type="submit">Submit</button>
    </form>
  )
}

Validation stays in sync with the JSON Schema in your .swirls file because both flow through the generated schema. Rename a field in the workflow, re-run swirls form gen, and TypeScript will complain if your JSX still references the old names.
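It's worth spelling out what that shared schema enforces, using the support_ticket schema from earlier. The sketch below hand-rolls the same rules purely for illustration; the real adapter uses proper schema validation, not this:

```typescript
// The JSON Schema from the support_ticket form, restated as data.
const ticketSchema = {
  type: 'object',
  required: ['email', 'subject', 'body'],
  properties: {
    email: { type: 'string' },
    subject: { type: 'string' },
    body: { type: 'string' },
  },
} as const

// Minimal check of this schema's rules: every required key must be
// present and be a string. (A real validator covers far more.)
function validateTicket(value: Record<string, unknown>): string[] {
  const errors: string[] = []
  for (const key of ticketSchema.required) {
    if (typeof value[key] !== 'string') errors.push(`${key} must be a string`)
  }
  return errors
}

console.log(validateTicket({ email: 'a@b.com', subject: 'Hi', body: 'Help' })) // []
console.log(validateTicket({ email: 'a@b.com' })) // two errors: subject, body
```

Because both the form node in the workflow and the generated client schema derive from the same JSON Schema, there is exactly one place these rules live.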

# Designed for a world where LLMs write the code

We think less and less code will be written by humans over the next few years. That conviction shaped the language from day one. If an LLM is going to author your workflows, the language it writes in needs to be compact, constrained, and verifiable. .swirls was designed with all three in mind.

The syntax is small enough to fit in a system prompt. The node types are specific enough that the model doesn't have to improvise. And because the graph structure is declarative, there's no sprawling imperative logic for the model to get tangled in. You describe a workflow in plain English ("when a support ticket comes in, classify the urgency, draft a response with AI for urgent and normal tickets, auto-acknowledge low-priority ones") and an LLM can produce a correct .swirls file. Commit it to git, deploy it, and you're running.

A few properties of the language make this work especially well:

Small surface area. The entire DSL has around a dozen top-level constructs and 13 node types. Compare that to generating a LangChain pipeline or a Temporal workflow in TypeScript, where the model has to navigate an entire framework API. With .swirls, there's just less to get wrong.

Immediate validation. The LSP and type system catch errors as soon as the file is written. An LLM can generate a .swirls file, get feedback on what's invalid, and correct it in a single turn. The tight feedback loop matters: it's the difference between a model that produces working output 60% of the time and one that converges on working output almost every time.

Auditable output. When an LLM generates imperative code, you often can't tell what it does without running it. A .swirls file reads like a blueprint. The flow block shows you the execution path. The node declarations show you what each step does. You can review a generated workflow the same way you'd review a Terraform plan or a SQL migration, and you can do it quickly.

Agents can build their own tools. This is where it gets interesting. An agent using Swirls as an MCP tool can recognize a repeated pattern (say, "fetch company data, enrich it, score the lead") and generate a .swirls graph that encapsulates that entire sequence. Every subsequent call runs the graph deterministically instead of re-orchestrating the steps through the LLM. It's workflow compilation: the agent turns ad-hoc behavior into a reusable, testable, versionable primitive.

# What we're shipping today

Swirls is in early access. The core platform is live:

  • .swirls language with LSP support for syntax highlighting, validation, and autocomplete in your editor
  • Durable execution with memoized steps, resumable workflows, and full audit trails
  • 13 node types covering AI, HTTP, email, scraping, routing, subgraphs, and more
  • Forms, webhooks, and cron schedules as first-class triggers
  • Encrypted secrets scoped per-node with least-privilege access
  • Human-in-the-loop review on any node
  • Type-safe codegen for integrating workflows into your TypeScript applications
  • 46+ cookbook recipes covering lead scoring, content generation, support triage, data enrichment, and more
  • Local runtime for development, cloud runtime for production

We're shipping updates weekly and building in public.

# Get started

You can have a workflow running locally in under five minutes:

curl -fsSL https://swirls.ai/install | bash
swirls worker start

When you're ready for production: swirls cloud deploy.

Browse the cookbook for ready-to-run recipes. Join the Discord to shape the roadmap. We read every message.

We built Swirls because we needed it. We think you might need it too.