AI

Call language and multimodal models with configurable kind, prompt, model, temperature, and token limit.

AI nodes send a prompt to a model (via OpenRouter) and return the model's response. Use kind: text for plain generation, kind: object with an output schema for structured JSON, or another kind (image, embed, video) as supported by your environment.
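As a sketch, a minimal text-generation node might be configured like this. The model identifier is illustrative; all field names follow the Configuration table below:

```typescript
// Minimal AI node config for plain text generation (kind: "text").
// The model id is an illustrative OpenRouter-style identifier.
const summarize = {
  kind: "text",
  model: "openai/gpt-4o-mini", // omit to fall back to the default model
  prompt: "Summarize this support ticket in one sentence.",
  temperature: 0.2,            // low: factual, deterministic output
  maxTokens: 150,
};
```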

Configuration

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| kind | text / object / image / video / embed | Yes | Task type: text generation, structured output, images, embeddings, or video (when supported). |
| model | string | No | Model identifier (provider and model name). When omitted, a sensible default is used. Models are routed through OpenRouter. |
| prompt | string | No* | Prompt or input code. For text / object / image / embed, typically a TypeScript block. Supports placeholders where applicable. |
| temperature | number | No | Sampling temperature, typically between 0 and 2. |
| maxTokens | number | No | Maximum tokens to generate (text / object). |
| options | object | No | Kind-specific options (e.g. image size). |

* Required for kinds that consume a prompt (text, object, image, embed).
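For structured JSON, kind: object pairs the prompt with an output schema on the node. The schema field name and shape below (outputSchema, JSON-Schema-style) are assumptions for illustration; check your environment for the exact format:

```typescript
// Sketch of an object-kind node. `outputSchema` is a hypothetical
// field name; the JSON-Schema-style shape is assumed, not confirmed.
const extractContact = {
  kind: "object",
  prompt: "Extract the sender's name and email from the message.",
  outputSchema: {
    type: "object",
    properties: {
      name: { type: "string" },
      email: { type: "string" },
    },
    required: ["name", "email"],
  },
  temperature: 0, // structured extraction: keep it deterministic
};
```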

Input

The node receives data from:

  • Trigger payload — the graph's initial input (for example, a form submission or webhook body).
  • Upstream node outputs — data produced by any node connected via an incoming edge.

Reference this data in prompt code or placeholders.
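The exact binding names depend on your environment; this sketch assumes the trigger payload is exposed as `input` and upstream outputs by node name via `nodes` — treat both as illustrative:

```typescript
// Hypothetical bindings: `input` (trigger payload) and `nodes`
// (upstream node outputs) are assumed names, not guaranteed ones.
const input = { customerEmail: "a@example.com" };          // e.g. webhook body
const nodes = { fetchOrders: { output: { orderCount: 3 } } }; // upstream result

const prompt = `Write a short status update for ${input.customerEmail}.
They currently have ${nodes.fetchOrders.output.orderCount} open orders.`;
```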

Output

The output depends on kind: plain text, structured JSON (kind: object with an output schema), embeddings, images, and so on. The result is stored in the node's execution record and passed to downstream nodes.

Tips

  • Use kind: object with an output schema on the node when you need validated JSON.
  • Use a low temperature (0–0.3) for factual tasks; higher (0.7–1.5) for creative output.
  • Set maxTokens to cap length and cost for text generation.

Related nodes

  • Code — Preprocess or reshape data before sending it to the prompt.
  • Stream — Inject stored context into the prompt.
  • Switch — Branch after the model response.
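The node combinations above can be sketched as a small graph. The node and edge shapes here are purely illustrative, not the product's actual graph format:

```typescript
// Illustrative wiring only: a Code node reshapes the trigger payload,
// the AI node generates text, and a Switch node branches on the result.
const graph = {
  nodes: [
    { id: "prep",   type: "code",   config: { code: "return { text: input.body.trim() }" } },
    { id: "answer", type: "ai",     config: { kind: "text", maxTokens: 200 } },
    { id: "route",  type: "switch", config: { on: "sentiment" } },
  ],
  edges: [
    { from: "prep", to: "answer" },
    { from: "answer", to: "route" },
  ],
};
```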
