It may help to read the previous section first; it will make this section easier to follow.
Install the dependencies
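The original post does not list the exact install commands here. A minimal sketch of what is likely needed, with package names inferred from the imports used in the code below (assuming pip):

```bash
# langchain-openai provides ChatOpenAI, langchainhub is needed for hub.pull()
pip install langchain langchain-openai langchainhub
```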
Write the code
Core code
```python
from langchain.agents import tool
import subprocess


@tool
def get_docker_info(docker_name: str) -> str:
    """Get information about a docker pod container info."""
    # Run `docker inspect <name>` and return its stdout for the LLM to read
    result = subprocess.run(['docker', 'inspect', str(docker_name)], capture_output=True, text=True)
    return result.stdout
```
Here the container information is obtained by executing a shell command.
Running the Docker command returns a large block of text. That text is handed to the LLM, which extracts the relevant parts, reasons over them, and finally returns an answer to us.
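For reference, this is roughly what the tool runs under the hood. The sketch below is illustrative only: 'lobe-chat-wzk' is the container name used later in this post, and the output shape is heavily trimmed.

```bash
docker inspect lobe-chat-wzk
# Produces a large JSON document, roughly shaped like (trimmed, illustrative):
# [
#   {
#     "Id": "...",
#     "Config": {
#       "Image": "lobehub/lobe-chat",
#       ...
#     },
#     ...
#   }
# ]
```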
- Note the @tool decorator: without it, the function cannot be used as a tool.
- Note the """xxx""" docstring: it must describe what the tool does, because the LLM decides whether to call the tool based on that description (a small sketch follows this list).
- If GPT-3.5 does not work well, try GPT-4.
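As noted above, the @tool decorator turns the function into a Tool object whose description is derived from the docstring. A minimal sketch of what that exposes, assuming the same `tool` decorator from `langchain.agents` used in the full listing below:

```python
from langchain.agents import tool


@tool
def get_docker_info(docker_name: str) -> str:
    """Get information about a docker pod container info."""
    return ""  # body omitted in this sketch


# The decorator exposes a name and a description (taken from the docstring);
# these are what convert_tools() below puts into the prompt, and what the LLM
# uses to decide whether to call the tool.
print(get_docker_info.name)
print(get_docker_info.description)
```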
```python
from langchain import hub
from langchain.agents import AgentExecutor, tool
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_openai import ChatOpenAI
import subprocess

model = ChatOpenAI(
    model="gpt-3.5-turbo",
)


@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"


@tool
def get_docker_info(docker_name: str) -> str:
    """Get information about a docker pod container info."""
    result = subprocess.run(['docker', 'inspect', str(docker_name)], capture_output=True, text=True)
    return result.stdout


tool_list = [search, get_docker_info]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/xml-agent-convo")


# Logic for going from intermediate steps to a string to pass into model
# This is pretty tied to the prompt
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log


# Logic for converting tools to string to go in prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])


agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_intermediate_steps(
            x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tool_list)

message1 = agent_executor.invoke({"input": "whats the weather in New york?"})
print(f"message1: {message1}")

message2 = agent_executor.invoke({"input": "what is docker pod which name 'lobe-chat-wzk' info? I want to know it 'Image' url"})
print(f"message2: {message2}")
```
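One practical note before running the script: `ChatOpenAI` reads the API key from the `OPENAI_API_KEY` environment variable when it is not passed explicitly, so set it first. The key value below is a placeholder:

```bash
export OPENAI_API_KEY="sk-..."  # placeholder; use your own key
python3 test10.py
```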
Run the code
```
➜ python3 test10.py
message1: {'input': 'whats the weather in New york?', 'output': 'The weather in New York is 32 degrees'}
message2: {'input': "what is docker pod which name 'lobe-chat-wzk' info? I want to know it 'Image' url", 'output': 'The Image URL for the docker pod named \'lobe-chat-wzk\' is "lobehub/lobe-chat"'}
```