
Stream

Read from project-scoped streams with filtering, sorting, and pagination for use as context.

Stream nodes read data from a stream and pass the results to downstream nodes. Use them to inject stored context -- such as recent records, documents, or historical data -- into AI prompts or other processing steps.
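As a sketch of that pattern, a graph might wire a stream node's output into an AI node's prompt. The exact graph-definition shape below (the nodes array, the type, next, and prompt fields, and the {{recentOrders.output}} placeholder) is an illustrative assumption, not the documented SWIRLS format:

```json
{
  "nodes": [
    {
      "id": "recentOrders",
      "type": "stream",
      "streamId": "stm_orders",
      "query": { "limit": 20 },
      "next": ["summarize"]
    },
    {
      "id": "summarize",
      "type": "ai",
      "prompt": "Summarize these recent orders: {{recentOrders.output}}"
    }
  ]
}
```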

Configuration

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| streamId | string | No | The ID of the stream to read from. The stream must belong to the same project as the graph. |
| query | object | No | Query options for filtering, sorting, and paginating the stream data. See the query options table below. |
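Putting the two fields together, a minimal stream node configuration might look like the following. Only streamId and query come from the table above; the surrounding id and type keys, the stream ID value, and the filter syntax are illustrative assumptions:

```json
{
  "id": "fetchContext",
  "type": "stream",
  "streamId": "stm_support_tickets",
  "query": {
    "filter": "status = 'open'",
    "limit": 10
  }
}
```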

Query options

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| filter | string | No | A filter expression to restrict which rows are returned (for example, filtering by status or date range). |
| sort | array | No | Sort order. Each element is an object with field (string) and direction ("asc" or "desc"). |
| limit | number | No | Maximum number of rows to return. Use this to cap context size when feeding results into an LLM node. |
| offset | number | No | Number of rows to skip, for pagination. |
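A query object exercising all four options might look like this sketch. The filter expression is assumed to be SQL-like (the Output section refers to SQL result columns), and the column names are hypothetical:

```json
{
  "filter": "created_at >= '2024-01-01'",
  "sort": [
    { "field": "created_at", "direction": "desc" }
  ],
  "limit": 25,
  "offset": 0
}
```

This combination returns at most 25 of the newest matching rows, a reasonable default when the results feed an LLM node.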

Input

The node receives the trigger payload and upstream node outputs. You can use placeholders in query values to make queries dynamic at runtime -- for example, filtering by {{input.userId}} or limiting results based on upstream data.
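For example, a query can interpolate trigger data at runtime. This is a sketch: the stm_user_events stream, the user_id column, and the input.maxRows field are illustrative names, and whether limit accepts a placeholder string is an assumption based on the note about limiting results from upstream data:

```json
{
  "streamId": "stm_user_events",
  "query": {
    "filter": "user_id = '{{input.userId}}'",
    "limit": "{{input.maxRows}}"
  }
}
```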

Output

The node produces the result of the stream query: an array of rows, one object per row, with keys matching the SQL result columns. This output is stored in the node execution record and passed to downstream nodes. Set outputSchema to describe that array (e.g. { "type": "array", "items": { "type": "object", "properties": { ... }, "required": [...] } }). The LSP validates that the SQL SELECT columns match outputSchema.items.
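Expanding the inline example, an outputSchema for a query that selects id, status, and created_at columns might look like the following (the column names and types are illustrative, not part of the documented schema):

```json
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "id": { "type": "string" },
      "status": { "type": "string" },
      "created_at": { "type": "string" }
    },
    "required": ["id", "status"]
  }
}
```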

Tips

  • Use stream nodes for "retrieve then reason" patterns: fetch a relevant slice of data, then pass it to an LLM node as context.
  • Always set limit to avoid sending excessive context to downstream LLM nodes. Combine with sort to ensure the most relevant rows appear first.
  • Stream schemas can be versioned. Ensure the stream's current schema matches what downstream nodes expect.
  • See Streams for creating, managing, and writing data to streams.
Related

  • Streams (Storage) -- Create and manage streams.
  • AI -- Use stream output as context in a prompt.
  • Code -- Transform or filter stream results before passing them downstream.
