AI
Call language and multimodal models with configurable kind, prompt, model, temperature, and token limit.
AI nodes send prompts to models (via OpenRouter) and return the model response. Use `kind: text` for plain generation, `kind: object` with an output schema for structured JSON, or other kinds (`image`, `embed`, `video`) as supported by your environment.
Configuration
| Field | Type | Required | Description |
|---|---|---|---|
| kind | `text` \| `object` \| `image` \| `video` \| `embed` | Yes | Task type: text generation, structured output, images, embeddings, or video (when supported). |
| model | string | No | Model identifier (provider and model name). When omitted, a sensible default is used. Models are routed through OpenRouter. |
| prompt | string | No* | Prompt or input code. For text / object / image / embed, typically a TypeScript block. Supports placeholders where applicable. |
| temperature | number | No | Sampling temperature, typically between 0 and 2. |
| maxTokens | number | No | Maximum tokens to generate (text/object). |
| options | object | No | Kind-specific options (e.g. image size). |
* Required for kinds that consume a prompt (text, object, image, embed).
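As an illustrative sketch, the fields above might combine as follows. The serialization format, comments, and the model identifier are assumptions (any OpenRouter model identifier works); the field names come from the table.

```yaml
# Illustrative only — the surrounding workflow format is an assumption;
# field names match the configuration table above.
kind: object
model: openai/gpt-4o-mini   # any OpenRouter model identifier
temperature: 0.2            # low: extraction should be deterministic
maxTokens: 512              # cap output length and cost
prompt: |
  Extract the sender's name and request type from the incoming message.
```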
Input
The node receives data from:
- Trigger payload — the graph's initial input (for example, a form submission or webhook body).
- Upstream node outputs — data produced by any node connected via an incoming edge.
Reference this data in prompt code or placeholders.
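For example, a prompt can pull from the trigger payload with a placeholder. The `{{trigger.body}}` syntax below is hypothetical — use whatever placeholder form your environment supports:

```yaml
# Hypothetical placeholder syntax — adapt to your environment.
prompt: |
  Summarize the following webhook payload in one sentence:
  {{trigger.body}}
```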
Output
Depends on kind: plain text, structured JSON (with object + output schema), embeddings, images, etc. Output is stored in the node execution record and passed to downstream nodes.
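As a sketch of what a downstream node might receive from an AI node with `kind: object`, the example below parses a structured JSON response into a typed value. The shape and field names are hypothetical, not a platform API; with an output schema configured, the platform validates the response before it reaches your code.

```typescript
// Hypothetical output shape for illustration — not a platform API.
type ExtractionResult = {
  sender: string;
  requestType: "support" | "sales" | "other";
};

// A structured JSON response as an AI node with kind "object" might emit it.
const raw = '{"sender": "Ada", "requestType": "support"}';

// Downstream code can rely on the schema-validated structure.
const result: ExtractionResult = JSON.parse(raw);
console.log(result.sender);      // "Ada"
console.log(result.requestType); // "support"
```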
Tips
- Use `kind: object` with an output schema on the node when you need validated JSON.
- Use a low temperature (0–0.3) for factual tasks; a higher one (0.7–1.5) for creative output.
- Set `maxTokens` to cap length and cost for text generation.
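Because `maxTokens` bounds output length, it also bounds worst-case cost. The back-of-envelope calculation below uses an assumed example rate, not a real quote — check your model's actual pricing on OpenRouter:

```typescript
// Worst-case output cost implied by maxTokens.
const maxTokens = 512;
const pricePerMillionOutputTokens = 0.6; // assumed example rate (USD), not a quote

const worstCaseCost = (maxTokens / 1_000_000) * pricePerMillionOutputTokens;
console.log(worstCaseCost.toFixed(6)); // "0.000307"
```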