0. Introduction
Chat Model is more than an abstraction over a conversational model: more importantly, it provides multi-role prompting (System, AI, Human, Function).
Chat Prompt Template, in turn, gives developers a convenient interface for maintaining per-role prompt templates and message histories.
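The idea behind the four roles is that a chat model consumes a list of role-tagged messages rather than one flat string. A toy sketch (plain Python stand-ins, not LangChain's actual `SystemMessage`/`AIMessage`/`HumanMessage`/`FunctionMessage` classes, which live in `langchain_core.messages` and carry extra metadata):

```python
from dataclasses import dataclass

# Hypothetical stand-in for LangChain's role-tagged message classes
@dataclass
class Message:
    role: str      # "system", "ai", "human", or "function"
    content: str

conversation = [
    Message("system", "You are a translation expert."),
    Message("human", "I love python."),
    Message("ai", "我爱 Python。"),
]

# A chat model receives the whole role-tagged list, not a single flat string
for m in conversation:
    print(f"{m.role}: {m.content}")
```

Keeping the role separate from the content is what lets a template layer (the Chat Prompt Template below) maintain each role's template independently.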
1. Constructing a ChatPromptTemplate
```python
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
import os
from dotenv import load_dotenv, find_dotenv

# Remove the all_proxy environment variable
if 'all_proxy' in os.environ:
    del os.environ['all_proxy']
# Remove the ALL_PROXY environment variable
if 'ALL_PROXY' in os.environ:
    del os.environ['ALL_PROXY']

_ = load_dotenv(find_dotenv())

template = (
    """You are a translation expert, proficient in various languages. \n
Translates English to Chinese."""
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
print(type(system_message_prompt))
print(system_message_prompt)

human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
print(type(human_message_prompt))
print(human_message_prompt)
print("*" * 40)

# Build the ChatPromptTemplate from the System and Human role prompt templates
chat_prompt_template = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)
print(type(chat_prompt_template))
print(chat_prompt_template)
print("*" * 50)

chat_prompt_value = chat_prompt_template.format_prompt(text="I love python.")
print(type(chat_prompt_value))
print(chat_prompt_value)
print("*" * 60)

chat_prompt_list = chat_prompt_template.format_prompt(text="I love python.").to_messages()
print(type(chat_prompt_list))
print(chat_prompt_list)
```
Output:

```
<class 'langchain_core.prompts.chat.SystemMessagePromptTemplate'>
prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template='You are a translation expert, proficient in various languages. \n\n Translates English to Chinese.') additional_kwargs={}
<class 'langchain_core.prompts.chat.HumanMessagePromptTemplate'>
prompt=PromptTemplate(input_variables=['text'], input_types={}, partial_variables={}, template='{text}') additional_kwargs={}
****************************************
<class 'langchain_core.prompts.chat.ChatPromptTemplate'>
input_variables=['text'] input_types={} partial_variables={} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template='You are a translation expert, proficient in various languages. \n\n Translates English to Chinese.'), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], input_types={}, partial_variables={}, template='{text}'), additional_kwargs={})]
**************************************************
<class 'langchain_core.prompt_values.ChatPromptValue'>
messages=[SystemMessage(content='You are a translation expert, proficient in various languages. \n\n Translates English to Chinese.', additional_kwargs={}, response_metadata={}), HumanMessage(content='I love python.', additional_kwargs={}, response_metadata={})]
************************************************************
<class 'list'>
[SystemMessage(content='You are a translation expert, proficient in various languages. \n\n Translates English to Chinese.', additional_kwargs={}, response_metadata={}), HumanMessage(content='I love python.', additional_kwargs={}, response_metadata={})]
```
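Notice the `input_variables` fields in the output: each prompt template records which `{placeholder}` names it expects, and `format_prompt` fills them in. A rough stdlib sketch of that inference (not LangChain's actual implementation, which supports several template formats):

```python
from string import Formatter

def input_variables(template: str) -> list[str]:
    # Collect the {placeholder} names a format-string template expects,
    # roughly what PromptTemplate.from_template infers
    return [field for _, field, _, _ in Formatter().parse(template) if field]

system_template = "You are a translation expert. Translates English to Chinese."
human_template = "{text}"

print(input_variables(system_template))  # []
print(input_variables(human_template))   # ['text']

# Formatting fills each role's template, yielding role-tagged strings
messages = [
    ("system", system_template.format()),
    ("human", human_template.format(text="I love python.")),
]
print(messages)
```

This is why the system template above shows `input_variables=[]` while the human template shows `input_variables=['text']`.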
2. Running the Chain with LCEL
```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
import os
from dotenv import load_dotenv, find_dotenv

# Remove the all_proxy environment variable
if 'all_proxy' in os.environ:
    del os.environ['all_proxy']
# Remove the ALL_PROXY environment variable
if 'ALL_PROXY' in os.environ:
    del os.environ['ALL_PROXY']

_ = load_dotenv(find_dotenv())

# Set temperature to 0 for stable, reproducible results
translation_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

template = (
    """You are a translation expert, proficient in various languages. \n
Translates {source_language} to {target_language} in the style of {name}."""
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

# Build the ChatPromptTemplate from the System and Human role prompt templates
m_chat_prompt_template = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

output_parser = StrOutputParser()
# Compose the LCEL chain: prompt -> model -> output parser
m_translation_chain = m_chat_prompt_template | translation_model | output_parser

# Prepare input data
input_data = {
    "source_language": "English",
    "target_language": "Chinese",
    "name": "严复",
    "text": "Life is full of regrets. All we can do is to minimize them.",
}
input_data1 = {
    "source_language": "English",
    "target_language": "Chinese",
    "name": "李白",
    "text": "Life is full of regrets. All we can do is to minimize them.",
}

# Format the prompt
prompt_value = m_chat_prompt_template.format_prompt(**input_data)
print(type(prompt_value))
print(prompt_value)
print(type(prompt_value.to_messages()))
print(prompt_value.to_messages())

result = translation_model.invoke(prompt_value)
print(result)

result = m_translation_chain.invoke(input_data)
print(result)

result = m_translation_chain.invoke(input_data1)
print(result)
```
Output:

```
<class 'langchain_core.prompt_values.ChatPromptValue'>
messages=[SystemMessage(content='You are a translation expert, proficient in various languages. \n\n Translates English to Chinese in the style of 严复.', additional_kwargs={}, response_metadata={}), HumanMessage(content='Life is full of regrets. All we can do is to minimize them.', additional_kwargs={}, response_metadata={})]
<class 'list'>
[SystemMessage(content='You are a translation expert, proficient in various languages. \n\n Translates English to Chinese in the style of 严复.', additional_kwargs={}, response_metadata={}), HumanMessage(content='Life is full of regrets. All we can do is to minimize them.', additional_kwargs={}, response_metadata={})]
content='人生充满了遗憾。我们所能做的就是尽量减少它们。' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 53, 'total_tokens': 84, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-676a9818-bbbc-44f7-a30e-0ec065aa502f-0' usage_metadata={'input_tokens': 53, 'output_tokens': 31, 'total_tokens': 84, 'input_token_details': {}, 'output_token_details': {}}
人生充满了遗憾。我们所能做的就是尽量减少它们。
人生充滿遺憾,唯有盡量減少。
```
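The `|` composition works because every LCEL component implements the Runnable protocol: each stage's `invoke` output feeds the next stage's input. A toy sketch of that piping (hypothetical classes, not LangChain's actual `Runnable` implementation):

```python
class Step:
    """Toy stand-in for a Runnable: wraps a function and supports `|`."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Chaining produces a new Step that feeds this output into the next
        return Step(lambda x: other.invoke(self.invoke(x)))

# prompt -> model -> parser, each a plain function here
prompt = Step(lambda d: f"Translate to {d['target_language']}: {d['text']}")
model = Step(lambda p: {"content": p.upper()})   # fake "model" response
parser = Step(lambda r: r["content"])            # StrOutputParser analogue

chain = prompt | model | parser
print(chain.invoke({"target_language": "Chinese", "text": "hello"}))
# -> TRANSLATE TO CHINESE: HELLO
```

This also previews the errors below: the first stage of the real chain is the prompt template, so the chain as a whole only accepts what the template accepts (a dict of variables).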
When `translation_model.invoke(input_data)` is given a dict as input, it raises an error:

```python
result = translation_model.invoke(input_data)
print(result)
```

Error:

```
ValueError: Invalid input type <class 'dict'>. Must be a PromptValue, str, or list of BaseMessages.
```
Conversely, when `m_translation_chain.invoke(prompt_value)` is given an already-formatted `prompt_value` as input, it also raises an error:

```python
result = m_translation_chain.invoke(prompt_value)
print(result)
```

Error:

```
TypeError: Expected mapping type as input to ChatPromptTemplate. Received <class 'langchain_core.prompt_values.ChatPromptValue'>.
```
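Both failures come from each stage validating its own input type: the template wants a mapping of variables, while the chat model wants a PromptValue, string, or message list. A rough sketch mimicking those two checks (hypothetical functions, modeled on the error messages above):

```python
from collections.abc import Mapping

def format_template(inp):
    # ChatPromptTemplate-style check: only a dict of variables is accepted
    if not isinstance(inp, Mapping):
        raise TypeError(f"Expected mapping type as input. Received {type(inp)}.")
    return f"messages built from {sorted(inp)}"

def call_model(inp):
    # Chat-model-style check: a bare dict is rejected
    if isinstance(inp, Mapping):
        raise ValueError(f"Invalid input type {type(inp)}. "
                         "Must be a PromptValue, str, or list of BaseMessages.")
    return f"model output for: {inp}"

try:
    call_model({"text": "hi"})            # dict straight into the model fails
except ValueError as e:
    print(e)

try:
    format_template("already formatted")  # non-dict into the template fails
except TypeError as e:
    print(e)
```

The rule of thumb: pass a dict to the chain (its first stage is the template) and a PromptValue or messages to the model directly.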
Inspecting the chain's input and output schemas:

```python
input_schema = m_translation_chain.input_schema.model_json_schema()
print(input_schema)
output_schema = m_translation_chain.output_schema.model_json_schema()
print(output_schema)
```

Output:

```
{'properties': {'name': {'title': 'Name', 'type': 'string'}, 'source_language': {'title': 'Source Language', 'type': 'string'}, 'target_language': {'title': 'Target Language', 'type': 'string'}, 'text': {'title': 'Text', 'type': 'string'}}, 'required': ['name', 'source_language', 'target_language', 'text'], 'title': 'PromptInput', 'type': 'object'}
{'title': 'StrOutputParserOutput', 'type': 'string'}
```
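The schema's `required` list is simply the union of the placeholder names across all role templates in the ChatPromptTemplate. A stdlib sketch of that aggregation (illustrative, not LangChain's actual schema machinery):

```python
from string import Formatter

def template_variables(template: str) -> set[str]:
    # Placeholder names a format-string template expects
    return {f for _, f, _, _ in Formatter().parse(template) if f}

system_template = ("You are a translation expert. Translates {source_language} "
                   "to {target_language} in the style of {name}.")
human_template = "{text}"

# The chain's required input fields are the union across all role templates,
# matching the 'required' list in the schema printed above
required = sorted(template_variables(system_template) | template_variables(human_template))
print(required)  # ['name', 'source_language', 'target_language', 'text']
```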