Steps: 1) declare tools (with JSON Schema parameters) in the request; 2) the model returns tool_calls; 3) execute the corresponding local function and append its result to the messages to continue the conversation. Field names and structures change as vendors update their APIs; consult the latest documentation.
// Node.js (OpenAI-compatible example, placeholder values)
import OpenAI from "openai";
const client = new OpenAI({ apiKey: process.env.API_KEY, baseURL: process.env.API_BASE });
const tools = [{
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up the weather for a given city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"]
    }
  }
}];
const res = await client.chat.completions.create({
  model: "MODEL_NAME",
  messages: [{ role: "user", content: "What is the weather in Beijing?" }],
  tools
});
for (const call of res.choices[0].message.tool_calls || []) {
  if (call.function?.name === "get_weather") {
    const args = JSON.parse(call.function.arguments || "{}");
    const toolResult = await fetchWeather(args.city); // your implementation
    const follow = await client.chat.completions.create({
      model: "MODEL_NAME",
      messages: [
        { role: "user", content: "What is the weather in Beijing?" },
        res.choices[0].message, // the assistant message that contains the tool_calls
        { role: "tool", tool_call_id: call.id, content: JSON.stringify(toolResult) }
      ]
    });
    console.log(follow.choices[0].message);
  }
}
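To steer whether (and which) tool the model calls, many OpenAI-compatible APIs also accept a tool_choice field in the request; availability and accepted values are provider-specific, so treat the following as a hedged sketch that reuses the client and tools defined above.
// Node.js (hedged sketch: forcing a specific tool via tool_choice, if the provider supports it)
const forced = await client.chat.completions.create({
  model: "MODEL_NAME",
  messages: [{ role: "user", content: "What is the weather in Beijing?" }],
  tools,
  tool_choice: { type: "function", function: { name: "get_weather" } } // or "auto" / "none"
});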
Fields such as tool_choice and parallel_tool_calls vary by provider; consult the official documentation for what your endpoint supports. For structured output, a common practice is to declare JSON output in the request (for example via response_format or a prompt constraint) and parse the response into an object for automated processing.
// curl (placeholder fields; consult the official docs for specifics)
curl https://api.example.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" -H "Content-Type: application/json" \
  -d '{
    "model": "MODEL_NAME",
    "response_format": {"type": "json_object"},
    "messages": [
      {"role":"system","content":"Output only a JSON object"},
      {"role":"user","content":"Summarize and extract the events and dates from the following text"}
    ]
  }'
// Node.js (parsing the JSON; `completion` is the response of a prior chat.completions.create call)
const out = completion.choices[0].message?.content || "{}";
let data = {};
try { data = JSON.parse(out); } catch (e) { /* the model may emit invalid JSON: handle or retry */ }
console.log(data.events);
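Some OpenAI-compatible endpoints additionally accept a json_schema-style response_format that constrains the output to a supplied schema; support and exact field names differ by provider, so the following is a hedged sketch (reusing the client from earlier, with a hypothetical schema name).
// Node.js (hedged sketch: schema-constrained output, if the provider supports it)
const structured = await client.chat.completions.create({
  model: "MODEL_NAME",
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "event_list", // hypothetical schema name
      schema: {
        type: "object",
        properties: {
          events: {
            type: "array",
            items: {
              type: "object",
              properties: { title: { type: "string" }, date: { type: "string" } },
              required: ["title", "date"]
            }
          }
        },
        required: ["events"]
      }
    }
  },
  messages: [{ role: "user", content: "Summarize and extract the events and dates from the following text" }]
});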
Set stream (or connect to an SSE endpoint) and read tokens incrementally, so output can be displayed while it is still being generated.
// Node.js (reading the stream with fetch; generic example)
const resp = await fetch("https://api.example.com/v1/chat/completions", {
  method: "POST",
  headers: { "Authorization": `Bearer ${process.env.API_KEY}`, "Content-Type": "application/json" },
  body: JSON.stringify({ model: "MODEL_NAME", stream: true, messages: [{ role: "user", content: "Write a short poem" }] })
});
const reader = resp.body.getReader();
const decoder = new TextDecoder();
let buf = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  buf += decoder.decode(value, { stream: true });
  // Parse SSE: each event line starts with "data:", and the payload is a JSON string
  const lines = buf.split("\n");
  buf = lines.pop(); // keep the last, possibly incomplete, line in the buffer
  for (const line of lines) {
    if (line.startsWith("data:")) {
      const payload = line.slice(5).trim();
      if (payload === "[DONE]") continue; // end-of-stream marker
      try {
        const json = JSON.parse(payload);
        process.stdout.write(json.choices?.[0]?.delta?.content || "");
      } catch (e) { /* ignore non-JSON keep-alive lines */ }
    }
  }
}
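If you are using the official openai Node.js SDK (an assumption; the snippet above relies only on fetch), the same streaming response can be consumed as an async iterator instead of parsing SSE by hand:
// Node.js (openai SDK, async-iterator streaming; model name is a placeholder)
const stream = await client.chat.completions.create({
  model: "MODEL_NAME",
  stream: true,
  messages: [{ role: "user", content: "Write a short poem" }]
});
for await (const chunk of stream) {
  // each chunk carries an incremental delta; exact field shapes may vary by provider
  process.stdout.write(chunk.choices?.[0]?.delta?.content || "");
}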
The exact chunk shape (e.g., the delta field) differs between providers; consult the latest documentation. Basic RAG steps: chunk/clean → embed → store in a vector database → retrieve candidates → assemble the prompt (with citations) → generate and annotate sources.
// Node.js (simplified example, pseudocode)
import OpenAI from "openai";
import { search, upsert } from "./vectorStore.js"; // implement yourself or use a library
const client = new OpenAI({ apiKey: process.env.API_KEY, baseURL: process.env.API_BASE });

// 1. Build the index
const docs = loadDocs(); // load documents (your implementation)
for (const d of docs) {
  const chunks = chunkText(d.text); // split by semantics/length (your implementation)
  const vecs = await client.embeddings.create({ model: "EMBED_MODEL", input: chunks });
  await upsert(vecs.data.map((v, i) => ({ id: `${d.id}-${i}`, vector: v.embedding, text: chunks[i], meta: d.meta })));
}

// 2. Retrieve and assemble the prompt
const query = "Please answer: what are the key points of the compliance policy?";
const qvec = await client.embeddings.create({ model: "EMBED_MODEL", input: query });
const hits = await search(qvec.data[0].embedding, { k: 5 });
const context = hits.map(h => `[Chunk] ${h.text}\nSource: ${h.meta?.source}`).join("\n\n");

// 3. Generate (with citations)
const res = await client.chat.completions.create({
  model: "MODEL_NAME",
  messages: [
    { role: "system", content: "Answer the user's question based on the provided chunks, and list the cited sources at the end." },
    { role: "user", content: `Material:\n\n${context}\n\nQuestion: ${query}` }
  ]
});
console.log(res.choices[0].message?.content);
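The vectorStore.js helpers above are placeholders. As one hedged, minimal option, they could be an in-memory store with cosine similarity (fine for small experiments, not for production scale):
// vectorStore.js (hypothetical minimal in-memory implementation of the assumed helpers)
const items = [];

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

export async function upsert(records) {
  // records: [{ id, vector, text, meta }]
  for (const r of records) {
    const idx = items.findIndex(x => x.id === r.id);
    if (idx >= 0) items[idx] = r; else items.push(r);
  }
}

export async function search(vector, { k = 5 } = {}) {
  // score every stored chunk against the query vector and return the top k
  return items
    .map(r => ({ ...r, score: cosine(vector, r.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}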