---
name: streaming
description: Guide for assistant-stream package and streaming protocols. Use when implementing streaming backends, custom protocols, or debugging stream issues.
version: 0.0.1
license: MIT
---
# assistant-ui Streaming

Always consult assistant-ui.com/llms.txt for the latest API.

The `assistant-stream` package handles streaming from AI backends.
## References

- ./references/data-stream.md -- AI SDK data stream format
- ./references/assistant-transport.md -- Native assistant-ui format
- ./references/encoders.md -- Encoders and decoders
## When to Use

```
Using Vercel AI SDK?
├─ Yes → toUIMessageStreamResponse() (no assistant-stream needed)
└─ No → assistant-stream for custom backends
```
## Installation

```sh
npm install assistant-stream
```
## Custom Streaming Response

```typescript
import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(req: Request) {
  return createAssistantStreamResponse(async (stream) => {
    stream.appendText("Hello ");
    stream.appendText("world!");

    // Tool call example
    const tool = stream.addToolCallPart({
      toolCallId: "1",
      toolName: "get_weather",
    });
    tool.argsText.append('{"city":"NYC"}');
    tool.argsText.close();
    tool.setResponse({ result: { temperature: 22 } });

    stream.close();
  });
}
```
## With useLocalRuntime

`useLocalRuntime` expects `ChatModelRunResult` chunks. Yield content parts for streaming:

```typescript
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  model: {
    async *run({ messages, abortSignal }) {
      const response = await fetch("/api/chat", {
        method: "POST",
        body: JSON.stringify({ messages }),
        signal: abortSignal,
      });

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();
      let buffer = "";

      while (reader) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const parts = buffer.split("\n");
        buffer = parts.pop() ?? "";

        for (const chunk of parts.filter(Boolean)) {
          yield { content: [{ type: "text", text: chunk }] };
        }
      }
    },
  },
});
```
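The newline-buffering inside `run()` above can be isolated as a small sketch. `splitChunks` is a hypothetical helper name for illustration, not part of `assistant-stream`; the point is that network chunks may split a line anywhere, so the trailing partial line must be carried over to the next read:

```typescript
// Combine the carried-over buffer with an incoming chunk, emit every
// complete line, and keep the trailing partial line for the next call.
function splitChunks(
  buffer: string,
  incoming: string,
): { lines: string[]; rest: string } {
  const parts = (buffer + incoming).split("\n");
  const rest = parts.pop() ?? "";
  return { lines: parts.filter(Boolean), rest };
}

// Simulate a stream whose chunks split a message mid-line.
let buf = "";
const events: string[] = [];
for (const chunk of ["hel", "lo\nwor", "ld\n"]) {
  const { lines, rest } = splitChunks(buf, chunk);
  events.push(...lines);
  buf = rest;
}
console.log(events); // ["hello", "world"]
```

Without the carried-over `rest`, the `"hel"` fragment would be yielded as its own (broken) chunk.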
## Debugging Streams

```typescript
import { AssistantStream, DataStreamDecoder } from "assistant-stream";

const stream = AssistantStream.fromResponse(response, new DataStreamDecoder());

for await (const event of stream) {
  console.log("Event:", JSON.stringify(event, null, 2));
}
```
## Stream Event Types

- `part-start` with `part.type` = `"text" | "reasoning" | "tool-call" | "source" | "file"`
- `text-delta` with streamed text
- `result` with tool results
- `step-start`, `step-finish`, `message-finish`
- `error` strings
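A minimal sketch of consuming these events. The exact field names (e.g. `textDelta`) are assumptions for illustration, not the package's published type definitions:

```typescript
// Hypothetical event shapes mirroring the event types listed above.
type StreamEvent =
  | { type: "part-start"; part: { type: "text" | "reasoning" | "tool-call" | "source" | "file" } }
  | { type: "text-delta"; textDelta: string }
  | { type: "result"; result: unknown }
  | { type: "error"; error: string };

// Accumulate streamed text deltas into the final message text.
function collectText(events: StreamEvent[]): string {
  let text = "";
  for (const e of events) {
    if (e.type === "text-delta") text += e.textDelta;
  }
  return text;
}

const finalText = collectText([
  { type: "part-start", part: { type: "text" } },
  { type: "text-delta", textDelta: "Hello " },
  { type: "text-delta", textDelta: "world!" },
]);
console.log(finalText); // "Hello world!"
```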
## Common Gotchas

### Stream not updating UI

- Check Content-Type is `text/event-stream`
- Check for CORS errors

### Tool calls not rendering

- `addToolCallPart` needs both `toolCallId` and `toolName`
- Register tool UI with `makeAssistantToolUI`

### Partial text not showing

- Use `text-delta` events for streaming
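For the first gotcha, a sketch of the headers a streaming endpoint typically sets. The values are standard SSE conventions; the exact headers your backend needs depend on your setup:

```typescript
// Build response headers advertising a server-sent event stream.
function sseHeaders(): Headers {
  return new Headers({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
}

const headers = sseHeaders();
console.log(headers.get("Content-Type")); // "text/event-stream"
```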