---
name: streaming
description: Guide for the assistant-stream package and streaming protocols. Use when implementing streaming backends, custom protocols, or debugging stream issues.
version: 0.0.1
license: MIT
---
# assistant-ui Streaming
> Always consult assistant-ui.com/llms.txt for the latest API.

The `assistant-stream` package handles streaming from AI backends.
## References
- ./references/data-stream.md -- AI SDK data stream format
- ./references/assistant-transport.md -- Native assistant-ui format
- ./references/encoders.md -- Encoders and decoders
## When to Use

```
Using Vercel AI SDK?
├─ Yes → toUIMessageStreamResponse() (no assistant-stream needed)
└─ No  → assistant-stream for custom backends
```
## Installation

```sh
npm install assistant-stream
```
## Custom Streaming Response

```ts
import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(req: Request) {
  return createAssistantStreamResponse(async (stream) => {
    stream.appendText("Hello ");
    stream.appendText("world!");

    // Tool call example
    const tool = stream.addToolCallPart({
      toolCallId: "1",
      toolName: "get_weather",
    });
    tool.argsText.append('{"city":"NYC"}');
    tool.argsText.close();
    tool.setResponse({ result: { temperature: 22 } });

    stream.close();
  });
}
```
## With useLocalRuntime

`useLocalRuntime` expects `ChatModelRunResult` chunks. Each yield replaces the current message state, so yield the accumulated content as it streams in:

```ts
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  model: {
    async *run({ messages, abortSignal }) {
      const response = await fetch("/api/chat", {
        method: "POST",
        body: JSON.stringify({ messages }),
        signal: abortSignal,
      });

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();
      let buffer = "";
      let text = "";

      while (reader) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        // Split on newlines; keep the trailing partial line buffered
        const parts = buffer.split("\n");
        buffer = parts.pop() ?? "";

        for (const chunk of parts.filter(Boolean)) {
          // Yield the full accumulated text, not just the newest chunk
          text += chunk;
          yield { content: [{ type: "text", text }] };
        }
      }
    },
  },
});
```
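The split-and-buffer pattern above can be factored into a standalone helper. This is a sketch, not part of assistant-stream (the `makeLineSplitter` name is ours); it shows the invariant: complete newline-terminated lines are emitted, and the trailing partial line stays buffered until more data arrives.

```typescript
// Hypothetical helper, not part of assistant-stream: feed it decoded
// chunks, get back complete lines; the incomplete tail stays in `buffer`.
function makeLineSplitter() {
  let buffer = "";
  return (chunk: string): string[] => {
    buffer += chunk;
    const parts = buffer.split("\n");
    buffer = parts.pop() ?? ""; // keep the incomplete tail for next time
    return parts.filter(Boolean);
  };
}

const push = makeLineSplitter();
console.log(push("hel"));     // []        — no newline seen yet
console.log(push("lo\nwor")); // ["hello"] — "wor" stays buffered
console.log(push("ld\n"));    // ["world"]
```

Isolating the buffering logic like this also makes it easy to unit-test without a real network stream.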
## Debugging Streams

```ts
import { AssistantStream, DataStreamDecoder } from "assistant-stream";

const stream = AssistantStream.fromResponse(response, new DataStreamDecoder());

for await (const event of stream) {
  console.log("Event:", JSON.stringify(event, null, 2));
}
```
## Stream Event Types

- `part-start` with `part.type` = `"text" | "reasoning" | "tool-call" | "source" | "file"`
- `text-delta` with streamed text
- `result` with tool results
- `step-start`, `step-finish`, `message-finish`
- `error` strings
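A consumer typically switches on the event type and folds text deltas into the visible message. The sketch below uses the event names from the list above, but the exact payload shapes (`textDelta`, `error` as a string) are assumptions, not the package's real types.

```typescript
// Sketch only: event names match the list above; payload shapes are assumed.
type StreamEvent =
  | { type: "part-start"; part: { type: "text" | "reasoning" | "tool-call" | "source" | "file" } }
  | { type: "text-delta"; textDelta: string }
  | { type: "result"; result: unknown }
  | { type: "step-start" | "step-finish" | "message-finish" }
  | { type: "error"; error: string };

// Fold a stream of events into the message's visible text.
function accumulateText(events: StreamEvent[]): string {
  let text = "";
  for (const ev of events) {
    switch (ev.type) {
      case "text-delta":
        text += ev.textDelta; // deltas append, they don't replace
        break;
      case "error":
        throw new Error(ev.error);
      default:
        break; // part boundaries and tool results don't change the text
    }
  }
  return text;
}

console.log(
  accumulateText([
    { type: "part-start", part: { type: "text" } },
    { type: "text-delta", textDelta: "Hello " },
    { type: "text-delta", textDelta: "world!" },
    { type: "message-finish" },
  ]),
); // "Hello world!"
```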
## Common Gotchas

**Stream not updating UI**

- Check that `Content-Type` is `text/event-stream`
- Check for CORS errors

**Tool calls not rendering**

- `addToolCallPart` needs both `toolCallId` and `toolName`
- Register tool UI with `makeAssistantToolUI`

**Partial text not showing**

- Use `text-delta` events for streaming
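For the first gotcha, the response headers matter as much as the body. A minimal sketch (the `sseResponse` helper and its toy body are illustrative, not an assistant-stream API) of setting them on a web-standard `Response`:

```typescript
// Minimal sketch: a server-sent-events response with the headers the
// browser needs. The body is a toy fixed-chunk stream, not model output.
function sseResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(`data: ${chunk}\n\n`)); // SSE frame
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: {
      "Content-Type": "text/event-stream", // without this, the UI won't update
      "Cache-Control": "no-cache",
    },
  });
}
```

When debugging, inspect the live response in the browser's network tab or with `resp.headers.get("content-type")` to confirm the header survived any proxies.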
