25 - Framework Version Update Notes (How to Read This, and Migration Boundaries)¶
⚠️ Timeliness note: framework versions, migration paths, prebuilt APIs, and release cadence change quickly. This article no longer records a precise version-number table from a single day as long-term fact; on the day you upgrade, treat the official PyPI pages, GitHub Releases, changelogs, and official docs as authoritative: LangChain | LangGraph | LlamaIndex | Dify | CrewAI
✅ Verification note (2026-04-03): this article was included in the site-wide review batch of 2026-04-03. In this round, fixed version-number snapshots were consolidated into official verification sources, common migration directions, and teaching-skeleton code, to avoid presenting single-date version information as a long-term stable conclusion.
📊 Official Verification Sources (2026-04-03)¶
| Framework | Check first when upgrading | Why |
|---|---|---|
| LangChain / LangChain Core | PyPI + official migration docs + changelog | Package splits, import paths, and chain-composition idioms change frequently |
| LangGraph | PyPI + official overview + release notes | Prebuilt agents, StateGraph usage, and edge interfaces may change |
| LlamaIndex | PyPI + changelog + official docs | Settings, query engines, and integration packages change quickly |
| CrewAI | PyPI + official docs + release notes | Agent / Task parameters and execution modes keep evolving |
| Dify | GitHub Releases + official docs | Web features, API paths, MCP integration, and release cadence all move fast |
Upgrade advice: first confirm the versions your project currently pins, then decide whether to upgrade; do not jump across major versions just because "the tutorial was updated".
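The "check what you pin first" advice can be made mechanical. Below is a minimal stdlib sketch, not a definitive tool: it flags upgrades that likely warrant reading the migration guide first, i.e. any major-version jump, plus minor-version jumps on 0.x packages (which these frameworks often treat as breaking). The package names and version pins are illustrative assumptions, not recommendations.

```python
# Sketch: before upgrading, decide whether a candidate release is
# likely breaking relative to what your project pins. Pure stdlib;
# the pinned versions below are illustrative examples only.
def parse_version(v: str) -> tuple:
    """Parse 'X.Y.Z' into a tuple of ints, ignoring suffixes like 'rc1'."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def breaking_risk(pinned: str, candidate: str) -> bool:
    """True on a major-version jump, or a minor jump for 0.x packages."""
    p, c = parse_version(pinned), parse_version(candidate)
    if p[0] != c[0]:
        return True          # major bump
    if p[0] == 0 and p[1] != c[1]:
        return True          # 0.x packages often break on minor bumps
    return False

pins = {"langchain": "0.2.16", "langgraph": "0.2.35"}  # illustrative pins
risky = breaking_risk(pins["langchain"], "0.3.0")      # True: 0.x minor bump
```

Run this against your lock file before upgrading; anything flagged gets a migration-doc read, anything else can usually ride a routine dependency bump.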
🔄 Major API Changes¶
1. LangChain: Common Migration Directions¶
1.1 Legacy APIs vs. the Newer Style¶
Python
# ❌ Legacy-style imports: check the official migration docs when upgrading
from langchain.chains import LLMChain, SimpleSequentialChain, SequentialChain
from langchain.agents import initialize_agent, AgentExecutor
from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory

# ✅ Newer style: organized around LCEL / runnables
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# The new way to build a chain (prompt and llm are defined elsewhere)
chain = prompt | llm | StrOutputParser()

# The new way to build an agent: use LangGraph
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(llm, tools)
1.2 LCEL Teaching Skeleton¶
Python
"""
LCEL 教学骨架(以当前官方文档为准)
"""
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnablePassthrough, RunnableParallel
# 1. 基础链
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
llm = ChatOpenAI(model="YOUR_CHAT_MODEL")
chain = prompt | llm | StrOutputParser()
# 2. 带RAG的链
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
vectorstore = Chroma.from_documents(documents, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
rag_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
# 3. 并行执行
chain = RunnableParallel(
summary=prompt | llm | StrOutputParser(),
keywords=keyword_prompt | llm | StrOutputParser()
)
# 4. 带记忆的链
from langchain_core.messages import HumanMessage, AIMessage
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
# 使用新的记忆API
history = ChatMessageHistory()
chain_with_history = RunnableWithMessageHistory(
chain,
get_session_history=lambda session_id: history,
input_messages_key="input",
history_messages_key="chat_history"
)
2. LangGraph: Common Migration Directions¶
2.1 Core StateGraph Usage¶
Python
"""
LangGraph 教学骨架(以当前官方文档为准)
"""
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
# 1. 定义状态
class AgentState(TypedDict):
messages: list
next_action: str
# 2. 创建StateGraph
graph = StateGraph(AgentState)
# 3. 添加节点
def agent_node(state: AgentState):
llm = ChatOpenAI(model="YOUR_CHAT_MODEL")
response = llm.invoke(state["messages"])
return {"messages": [response]}
graph.add_node("agent", agent_node)
graph.add_node("tool", tool_node)
# 4. 添加边
graph.add_edge(START, "agent")
graph.add_conditional_edges(
"agent",
should_continue,
{"continue": "tool", "end": END}
)
graph.add_edge("tool", "agent")
# 5. 编译
app = graph.compile()
# 6. 执行
result = app.invoke({"messages": [("user", "Hello!")]})
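To see the agent → conditional edge → tool → agent loop above without installing LangGraph, here is a stdlib-only sketch of the same control flow. The node and edge names mirror the snippet; the "LLM" and "tool" bodies are stand-ins, so treat this as an illustration of the routing, not of LangGraph's internals.

```python
# Stdlib sketch of the compiled graph's control flow: run the current
# node, then follow either a conditional router or a plain edge until
# END. Node names mirror the LangGraph snippet; logic is a stand-in.
START, END = "__start__", "__end__"

def agent_node(state):
    # Pretend LLM: answer once a tool result is present
    if any(m.startswith("tool:") for m in state["messages"]):
        return {"messages": state["messages"] + ["agent: done"]}
    return {"messages": state["messages"] + ["agent: need tool"]}

def tool_node(state):
    return {"messages": state["messages"] + ["tool: result"]}

def should_continue(state):
    return "end" if state["messages"][-1] == "agent: done" else "continue"

nodes = {"agent": agent_node, "tool": tool_node}
edges = {START: "agent", "tool": "agent"}
conditional = {"agent": (should_continue, {"continue": "tool", "end": END})}

def invoke(state):
    current = edges[START]
    while current != END:
        state = nodes[current](state)
        if current in conditional:
            router, mapping = conditional[current]
            current = mapping[router(state)]
        else:
            current = edges[current]
    return state

result = invoke({"messages": ["user: Hello!"]})
# → agent asks for a tool, tool runs, agent finishes
```

The point of the sketch: a conditional edge is just a router function whose return value is looked up in a mapping, which is why migration notes flag changes to "the return format of conditional edges".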
2.2 Prebuilt Agents¶
Python
"""
使用预构建 Agent 的教学骨架
"""
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
# 定义工具
from langchain_core.tools import tool
import ast
import operator
@tool
def search(query: str) -> str:
"""搜索网络"""
return f"Search results for: {query}"
@tool
def calculator(expression: str) -> float:
"""计算数学表达式(使用AST安全解析)"""
operators = {
ast.Add: operator.add,
ast.Sub: operator.sub,
ast.Mult: operator.mul,
ast.Div: operator.truediv,
}
def eval_expr(node):
if isinstance(node, ast.Num):
return node.n
elif isinstance(node, ast.BinOp):
left = eval_expr(node.left)
right = eval_expr(node.right)
return operators[type(node.op)](left, right)
else:
raise ValueError(f"不支持的操作")
tree = ast.parse(expression, mode='eval')
return eval_expr(tree.body)
# 创建Agent
llm = ChatOpenAI(model="YOUR_CHAT_MODEL")
tools = [search, calculator]
agent = create_react_agent(llm, tools)
# 执行
result = agent.invoke({
"messages": [("user", "What is 2 + 2?")]
})
3. LlamaIndex: Common Migration Directions¶
3.1 Core API Updates¶
Python
"""
LlamaIndex 教学骨架(以当前官方文档为准)
"""
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
# 1. 全局配置
Settings.llm = OpenAI(model="YOUR_CHAT_MODEL")
Settings.embed_model = OpenAIEmbedding()
# 2. 加载文档
documents = SimpleDirectoryReader("./data").load_data()
# 3. 创建索引
index = VectorStoreIndex.from_documents(documents)
# 4. 创建查询引擎
query_engine = index.as_query_engine(
similarity_top_k=5,
response_mode="compact"
)
# 5. 查询
response = query_engine.query("What is the document about?")
3.2 Advanced RAG¶
Python
"""
LlamaIndex 高级 RAG 教学骨架
"""
from llama_index.core import VectorStoreIndex
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.postprocessor import SimilarityPostprocessor
# 1. 自定义检索器
retriever = VectorIndexRetriever(
index=index,
similarity_top_k=10
)
# 2. 后处理器
postprocessor = SimilarityPostprocessor(similarity_cutoff=0.7)
# 3. 组装查询引擎
query_engine = RetrieverQueryEngine(
retriever=retriever,
node_postprocessors=[postprocessor]
)
# 4. 流式查询
streaming_response = query_engine.query("Tell me more")
for text in streaming_response.response_gen:
print(text, end="")
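The SimilarityPostprocessor above does one simple thing: it discards retrieved nodes whose score falls below the cutoff before they reach the synthesizer. A stdlib sketch of that filtering step (the node texts and scores are made up for illustration):

```python
# Stdlib sketch of a similarity-cutoff postprocessor: keep only
# retrieved nodes whose score clears the threshold, preserving
# retrieval order. The (text, score) pairs are illustrative.
def similarity_cutoff(nodes, cutoff=0.7):
    """Filter (text, score) pairs by minimum score."""
    return [(text, score) for text, score in nodes if score >= cutoff]

retrieved = [("chunk A", 0.92), ("chunk B", 0.71), ("chunk C", 0.40)]
kept = similarity_cutoff(retrieved, cutoff=0.7)
# kept == [("chunk A", 0.92), ("chunk B", 0.71)]
```

This is why a cutoff pairs well with a generous `similarity_top_k`: retrieve broadly, then let the score threshold trim the tail.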
4. CrewAI: Common Migration Directions¶
Python
"""
CrewAI 教学骨架(以当前官方文档为准)
"""
from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI
# 1. 配置LLM
llm = ChatOpenAI(model="YOUR_CHAT_MODEL")
# 2. 创建Agent
researcher = Agent(
role="Researcher",
goal="Research AI topics",
backstory="Expert AI researcher",
llm=llm,
verbose=True
)
writer = Agent(
role="Writer",
goal="Write engaging content",
backstory="Professional writer",
llm=llm,
verbose=True
)
# 3. 创建Task
research_task = Task(
description="Research the latest AI trends",
expected_output="A summary of AI trends",
agent=researcher
)
write_task = Task(
description="Write a blog post about AI trends",
expected_output="A blog post",
agent=writer
)
# 4. 创建Crew
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential,
verbose=True
)
# 5. 执行
result = crew.kickoff()
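`Process.sequential` means the tasks run in listed order and each task can see earlier outputs as context. To make that hand-off concrete without CrewAI installed, here is a stdlib sketch where plain functions stand in for LLM-backed agents; everything here is a stand-in, not CrewAI's actual internals.

```python
# Stdlib sketch of a sequential crew: tasks run in order, each task
# receives the previous outputs as context, and the final task's
# output is the overall result. "Agents" are stand-in functions.
def researcher(description, context):
    return f"notes on: {description}"

def writer(description, context):
    return f"post based on [{'; '.join(context)}]"

tasks = [
    ("Research the latest AI trends", researcher),
    ("Write a blog post about AI trends", writer),
]

def kickoff(tasks):
    outputs = []
    for description, agent in tasks:
        outputs.append(agent(description, context=list(outputs)))
    return outputs[-1]  # final task output, like crew.kickoff()

result = kickoff(tasks)
```

The design point this illustrates: with a sequential process, task ordering is part of the program logic, so reordering the `tasks` list changes what context each agent sees.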
5. Dify: Common Update Points¶
⚠️ Important caveat: Dify's web features, version numbers, MCP integration, and release cadence all change quickly. This article no longer records specific stable or pre-release version numbers as long-term fact; before upgrading or integrating, defer to the langgenius/dify Releases page and the official docs. If your version supports MCP, first confirm whether you are "consuming MCP tools inside Dify" or "exposing a Dify app as an MCP server"; these are not the same thing.
5.1 API Call Updates¶
Python
"""
Dify API 调用教学骨架
"""
import requests
class DifyClient:
"""Dify客户端"""
def __init__(self, api_key: str, base_url: str = "https://api.dify.ai/v1"):
self.api_key = api_key
self.base_url = base_url
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json"
}
def chat(self, query: str, user: str = "default",
conversation_id: str = None, inputs: dict = None):
"""对话接口"""
payload = {
"query": query,
"user": user,
"response_mode": "blocking",
"inputs": inputs or {}
}
if conversation_id:
payload["conversation_id"] = conversation_id
response = requests.post(
f"{self.base_url}/chat-messages",
headers=self.headers,
json=payload
)
return response.json()
def workflow_run(self, inputs: dict, user: str = "default"):
"""工作流接口"""
response = requests.post(
f"{self.base_url}/workflows/run",
headers=self.headers,
json={
"inputs": inputs,
"user": user,
"response_mode": "blocking"
}
)
return response.json()
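The client above uses blocking mode; Dify also offers a streaming `response_mode`, which (as documented at the time of writing; verify the exact event shapes against your installed version) delivers Server-Sent-Events-style `data:` lines. A stdlib sketch of parsing such a stream into an assembled answer; the sample payload below is made up for illustration:

```python
# Stdlib sketch: parse SSE-style "data: {...}" lines, the general
# shape of Dify's streaming mode (verify field names against your
# version's docs). The sample events below are illustrative only.
import json

def iter_sse_events(lines):
    """Yield decoded JSON payloads from 'data: ...' lines."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield json.loads(payload)

sample = [
    'data: {"event": "message", "answer": "Hel"}',
    '',
    'data: {"event": "message", "answer": "lo"}',
    'data: {"event": "message_end"}',
]
answer = "".join(e.get("answer", "") for e in iter_sse_events(sample))
# answer == "Hello"
```

With `requests`, the same loop would consume `response.iter_lines(decode_unicode=True)` from a streaming POST instead of the hard-coded `sample` list.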
5.2 MCP Integration¶
Python
"""
Dify MCP 接入教学骨架
"""
# 说明:
# 1. “在 Dify 中使用 MCP 工具”与“将 Dify 应用发布为 MCP Server”是两项不同能力
# 2. 传输协议、鉴权流程与配置入口请以当前版本官方文档为准
# 3. 如服务器要求 OAuth,通常需要在 Dify Web 界面完成授权
mcp_server = {
"name": "notion",
"transport": "http",
"server_url": "https://api.notion.com/mcp",
"server_id": "notion_mcp"
}
# 在 Dify Web 界面中完成:
# 1. 工具 -> 添加 MCP 服务器(HTTP)
# 2. 填写 server_url / 名称 / server_id
# 3. 完成 OAuth(如需要)
# 4. 在 Agent / Workflow 中选择已同步的 MCP 工具
📋 Migration Checklist¶
Migrating from legacy LangChain¶
- Replace `LLMChain` with an LCEL chain (`prompt | llm | parser`)
- Replace `initialize_agent` with `create_react_agent`
- Replace `ConversationBufferMemory` with `RunnableWithMessageHistory`
- Replace `RetrievalQA` with an LCEL RAG chain
- Check the package split and import paths against the official migration docs (e.g. `langchain_core`, `langchain_community`, standalone provider packages)
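The last checklist item, auditing import paths, can be mechanized with a small stdlib scan that flags legacy `langchain.*` imports the migration docs say moved into split packages. The pattern list below is an illustrative starting point, not an exhaustive map of the package split.

```python
# Stdlib sketch: flag legacy import paths that moved into
# langchain_core / langchain_community / provider packages.
# The pattern list is illustrative, not exhaustive.
import re

LEGACY_PATTERNS = [
    r"from\s+langchain\.chains\s+import",
    r"from\s+langchain\.agents\s+import",
    r"from\s+langchain\.memory\s+import",
]

def find_legacy_imports(source: str):
    """Return (line_number, line) pairs matching a legacy pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line) for p in LEGACY_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

sample = "from langchain.chains import LLMChain\nx = 1\n"
hits = find_legacy_imports(sample)
# hits == [(1, "from langchain.chains import LLMChain")]
```

Point it at each `.py` file in your project before upgrading; an empty result means this particular class of breakage does not apply to you.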
Migrating from legacy LangGraph¶
- Ensure you use `StateGraph` rather than the legacy `Graph`
- Use the `START` and `END` constants rather than string names
- Update the return format of conditional edges
Migrating from legacy LlamaIndex¶
- Use `Settings` for global configuration
- Replace `ServiceContext` with `Settings`
- Use the new query-engine API
🔗 Related Resources¶
Last updated: 2026-04-03 · Next planned update: June 2026
⚠️ Verification note (2026-04-03): wherever this page mentions specific versions, release status, prebuilt capabilities, or UI configuration, defer to the official Releases pages, changelogs, and your actual installed environment.