Ollama + Qwen2: Easily Build a Chat System with Function Calling
liuian 2024-12-07 14:59
This article shows how to use Ollama with Qwen2 to stand up an OpenAI-format chat API and combine it with external functions to extend the model's capabilities.
tools is an optional parameter of OpenAI's Chat Completions API that supplies function specifications. Its purpose is to let the model generate function arguments that conform to the provided specifications. Note that the API itself never executes any function call; the developer must use the model's output to invoke the function.
Ollama supports the tools parameter of the OpenAI-format API. If function specifications are provided in tools, Qwen decides when to call a function and which one. Ollama does not yet support tool_choice, the parameter that forces the use of a specific function.
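For reference, a single entry in the tools list follows OpenAI's function-specification shape. Below is a minimal sketch; the function name and parameters are made up for illustration and are not part of any real API:

```python
import json

# Minimal sketch of one entry in the OpenAI-format "tools" list.
# The function name and parameters are illustrative only.
tool_spec = {
    "type": "function",
    "function": {
        "name": "get_time",  # hypothetical function
        "description": "Get the current time in a given timezone",
        "parameters": {  # JSON Schema describing the arguments
            "type": "object",
            "properties": {
                "timezone": {"type": "string", "description": "e.g. Asia/Shanghai"},
            },
            "required": ["timezone"],
        },
    },
}

# The spec must be JSON-serializable, since it is sent in the request body.
print(json.dumps(tool_spec)[:40])
```

The model uses the description fields and the JSON Schema to decide when to call the function and how to fill its arguments.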
Note: the test cases in this article follow the OpenAI cookbook: https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
This article covers three parts:
- Model deployment: use Ollama and Qwen, setting a template so the deployed chat API supports function calling.
- Generating function arguments: specify a set of functions and use the API to generate function arguments.
- Calling functions with model-generated arguments: close the loop by actually executing functions with the arguments the model generated.
01. Model Deployment
Downloading the model file
Use the ModelScope command-line tool to download a single model file; this article uses the GGUF build of Qwen2-7B:
modelscope download --model=qwen/Qwen2-7B-Instruct-GGUF --local_dir . qwen2-7b-instruct-q5_k_m.gguf
Installing on Linux
Linux users can install Ollama from the ModelScope mirror (recommended):
modelscope download --model=modelscope/ollama-linux --local_dir ./ollama-linux
cd ollama-linux
sudo chmod 777 ./ollama-modelscope-install.sh
./ollama-modelscope-install.sh
Starting the Ollama service
ollama serve
Creating the ModelFile
Copy the model path and create a meta file named "ModelFile", setting a template that enables function calling:
FROM /mnt/workspace/qwen2-7b-instruct-q5_k_m.gguf
# set the temperature to 0.7 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER repeat_penalty 1.05
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{ .System }}
{{- if .Tools }}
# Tools
You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>{{- range .Tools }}{{ .Function }}{{- end }}</tools>
For each function call, return a JSON object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>{{- end }}<|im_end|>{{- end }}
{{- range .Messages }}
{{- if eq .Role "user" }}
<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}
<|im_start|>{{ .Role }}
{{- if .Content }}
{{ .Content }}
{{- end }}
{{- if .ToolCalls }}
<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}<|im_end|>
{{- else if eq .Role "tool" }}
<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{- end }}
{{- end }}
<|im_start|>assistant
{{ else }}{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}
"""创建自定义模型
Create the custom model with the ollama create command:
ollama create myqwen2 --file ./ModelFile
Run the model:
ollama run myqwen2
02. Generating Function Arguments
Installing dependencies
!pip install scipy --quiet
!pip install tenacity --quiet
!pip install tiktoken --quiet
!pip install termcolor --quiet
!pip install openai --quiet
Calling the locally deployed qwen2 model through the OpenAI API format
import json
import openai
from tenacity import retry, wait_random_exponential, stop_after_attempt
from termcolor import colored
MODEL = "myqwen2"
client = openai.OpenAI(
    base_url="http://127.0.0.1:11434/v1",
    api_key="None",
)
Utilities
First, let's define a few utilities for calling the Chat Completions API and for maintaining and tracking conversation state.
@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, tools=None, tool_choice=None, model=MODEL):
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice=tool_choice,
        )
        return response
    except Exception as e:
        print("Unable to generate ChatCompletion response")
        print(f"Exception: {e}")
        return e

def pretty_print_conversation(messages):
    role_to_color = {
        "system": "red",
        "user": "green",
        "assistant": "blue",
        "function": "magenta",
    }
    for message in messages:
        if message["role"] == "system":
            print(colored(f"system: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "user":
            print(colored(f"user: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "assistant" and message.get("function_call"):
            print(colored(f"assistant: {message['function_call']}\n", role_to_color[message["role"]]))
        elif message["role"] == "assistant" and not message.get("function_call"):
            print(colored(f"assistant: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "function":
            print(colored(f"function ({message['name']}): {message['content']}\n", role_to_color[message["role"]]))
Basic concepts (https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts)
Here we assume a weather API and define some function specifications for interacting with it. These specifications are passed to the Chat API so the model can generate function arguments that conform to them.
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
}
},
{
"type": "function",
"function": {
"name": "get_n_day_weather_forecast",
"description": "Get an N-day weather forecast",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
"num_days": {
"type": "integer",
"description": "The number of days to forecast",
}
},
"required": ["location", "format", "num_days"]
},
}
},
]
If we ask the model about the current weather, it will ask a follow-up question to obtain the missing parameter information.
messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "hi ,can you tell me what's the weather like today"})
chat_response = chat_completion_request(
messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='Of course, I can help with that. To provide accurate information, could you please specify the city and state you are interested in?', role='assistant', function_call=None, tool_calls=None)
Once we supply the missing parameter information through the conversation, the model generates the appropriate function arguments for us.
messages.append({"role": "user", "content": "I'm in Glasgow, Scotland."})
chat_response = chat_completion_request(
messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_qq8e5z9w', function=Function(arguments='{"location":"Glasgow, Scotland"}', name='get_current_weather'), type='function')])
With different prompts, we can have it ask different clarifying questions to obtain the function argument information.
messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "can you tell me, what is the weather going to be like in Glasgow, Scotland in next x days"})
chat_response = chat_completion_request(
messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='Sure, I can help with that. Could you please specify how many days ahead you want to know the weather forecast for Glasgow, Scotland?', role='assistant', function_call=None, tool_calls=None)
messages.append({"role": "user", "content": "5 days"})
chat_response = chat_completion_request(
messages, tools=tools
)
chat_response.choices[0]
Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_b7f3j7im', function=Function(arguments='{"location":"Glasgow, Scotland","num_days":5}', name='get_n_day_weather_forecast'), type='function')]))
Parallel function calling
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#parallel-function-calling)
A single query can trigger multiple function calls in parallel.
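When one turn yields several tool calls, each call carries its own id and arguments, and the caller is expected to execute every one and append one "tool" message per call. A sketch of that loop over mocked tool-call objects (the ids, dispatch table, and results below are made up for illustration):

```python
import json
from types import SimpleNamespace

# Mocked tool calls shaped like ChatCompletionMessageToolCall objects.
# Ids and arguments here are illustrative, not real API output.
tool_calls = [
    SimpleNamespace(id="call_1", function=SimpleNamespace(
        name="get_n_day_weather_forecast",
        arguments='{"location": "San Francisco, CA", "num_days": 4}')),
    SimpleNamespace(id="call_2", function=SimpleNamespace(
        name="get_n_day_weather_forecast",
        arguments='{"location": "Glasgow, UK", "num_days": 4}')),
]

def fake_forecast(location, num_days, format="celsius"):
    return f"{num_days}-day forecast for {location}: mild"  # stand-in result

dispatch = {"get_n_day_weather_forecast": fake_forecast}

tool_messages = []
for call in tool_calls:
    args = json.loads(call.function.arguments)
    result = dispatch[call.function.name](**args)
    # One "tool" message per call, linked back via tool_call_id.
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "name": call.function.name,
        "content": result,
    })

print(len(tool_messages))
```

Linking each result back through tool_call_id is what lets the model match responses to calls on the next turn.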
messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "what is the weather going to be like in San Francisco and Glasgow over the next 4 days"})
chat_response = chat_completion_request(
messages, tools=tools, model=MODEL
)
assistant_message = chat_response.choices[0].message.tool_calls
assistant_message
[ChatCompletionMessageToolCall(id='call_vei89rz3', function=Function(arguments='{"location":"San Francisco, CA","num_days":4}', name='get_n_day_weather_forecast'), type='function'),
 ChatCompletionMessageToolCall(id='call_4lgoubee', function=Function(arguments='{"location":"Glasgow, UK","num_days":4}', name='get_n_day_weather_forecast'), type='function')]
How to call functions with model-generated arguments
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#how-to-call-functions-with-model-generated-arguments)
This example shows how to execute functions whose inputs are generated by the model, and uses that to build an agent that can answer questions about a database.
This article uses the Chinook sample database (https://www.sqlitetutorial.net/sqlite-sample-database/).
Specifying a function to execute SQL queries
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#specifying-a-function-to-execute-sql-queries)
First, let's define some useful functions to extract data from the SQLite database.
import sqlite3
conn = sqlite3.connect("data/Chinook.db")
print("Opened database successfully")def get_table_names(conn):
"""Return a list of table names."""
table_names = []
tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
for table in tables.fetchall():
table_names.append(table[0])
return table_names
def get_column_names(conn, table_name):
"""Return a list of column names."""
column_names = []
columns = conn.execute(f"PRAGMA table_info('{table_name}');").fetchall()
for col in columns:
column_names.append(col[1])
return column_names
def get_database_info(conn):
"""Return a list of dicts containing the table name and columns for each table in the database."""
table_dicts = []
for table_name in get_table_names(conn):
columns_names = get_column_names(conn, table_name)
table_dicts.append({"table_name": table_name, "column_names": columns_names})
return table_dicts现在可以使用这些实用函数来提取数据库模式的表示。
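The schema helpers can be sanity-checked against an in-memory SQLite database; the table below is a made-up stand-in, not the real Chinook schema:

```python
import sqlite3

# Self-contained check of the schema helpers against an in-memory
# database with a made-up table (not the real Chinook schema).
demo_conn = sqlite3.connect(":memory:")
demo_conn.execute("CREATE TABLE Album (AlbumId INTEGER PRIMARY KEY, Title TEXT)")

def get_table_names(conn):
    """Return a list of table names (same logic as above)."""
    return [row[0] for row in
            conn.execute("SELECT name FROM sqlite_master WHERE type='table';")]

def get_column_names(conn, table_name):
    """Return a list of column names; PRAGMA table_info rows hold the name at index 1."""
    return [col[1] for col in
            conn.execute(f"PRAGMA table_info('{table_name}');").fetchall()]

print(get_table_names(demo_conn))            # ['Album']
print(get_column_names(demo_conn, "Album"))  # ['AlbumId', 'Title']
```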
database_schema_dict = get_database_info(conn)
database_schema_string = "\n".join(
[
f"Table: {table['table_name']}\nColumns: {', '.join(table['column_names'])}"
for table in database_schema_dict
]
)
As before, we define a function specification for the function whose arguments we want the API to generate. Note that we insert the database schema into the specification; the model needs it to write correct queries.
tools = [
{
"type": "function",
"function": {
"name": "ask_database",
"description": "Use this function to answer user questions about music. Input should be a fully formed SQL query.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": f"""
SQL query extracting info to answer the user's question.
SQL should be written using this database schema:
{database_schema_string}
The query should be returned in plain text, not in JSON.
""",
}
},
"required": ["query"],
},
}
}
]
Executing SQL queries
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#executing-sql-queries)
Now let's implement the function that actually runs the database query.
def ask_database(conn, query):
    """Function to query SQLite database with a provided SQL query."""
    try:
        results = str(conn.execute(query).fetchall())
    except Exception as e:
        results = f"query failed with error: {e}"
    return results

Steps to invoke a function call using the Chat Completions API:
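A quick sanity check of ask_database against an in-memory database; the table and rows are made up for illustration:

```python
import sqlite3

def ask_database(conn, query):
    """Query the SQLite database, returning results (or the error) as a string."""
    try:
        results = str(conn.execute(query).fetchall())
    except Exception as e:
        results = f"query failed with error: {e}"
    return results

# Made-up demo data, not from the Chinook database.
demo = sqlite3.connect(":memory:")
demo.execute("CREATE TABLE Track (TrackId INTEGER, Name TEXT)")
demo.execute("INSERT INTO Track VALUES (1, 'Demo Song')")

print(ask_database(demo, "SELECT Name FROM Track"))  # [('Demo Song',)]
print(ask_database(demo, "SELECT * FROM NoSuchTable"))
```

Returning the error text instead of raising means a malformed model-generated query comes back as a string the model can read and correct on the next turn.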
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#steps-to-invoke-a-function-call-using-chat-completions-api)
Step 1: Prompt the model with content that may lead it to choose a tool. The tool descriptions (function names and signatures) are defined in the tools list and passed to the model in the API call. If a tool is chosen, the function name and arguments are included in the response.
Step 2: Programmatically check whether the model wants to call a function. If so, proceed to step 3.
Step 3: Extract the function name and arguments from the response, call the function with those arguments, and append the result to the messages list.
Step 4: Call the Chat Completions API again with the updated messages list to get the final response.
messages = [{
"role":"user",
"content": "What is the name of the album with the most tracks?"
}]
response = client.chat.completions.create(
model='myqwen2',
messages=messages,
tools= tools,
tool_choice="auto"
)
# Append the message to messages list
response_message = response.choices[0].message
messages.append(response_message)
print(response_message)
ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_23nnhlv6', function=Function(arguments='{"query":"SELECT Album.Title FROM Album JOIN Track ON Album.AlbumId = Track.AlbumId GROUP BY Album.Title ORDER BY COUNT(*) DESC LIMIT 1"}', name='ask_database'), type='function')])
# Step 2: determine if the response from the model includes a tool call.
tool_calls = response_message.tool_calls
if tool_calls:
    # If true, the model has returned the name of the tool/function to call and its argument(s)
    tool_call_id = tool_calls[0].id
    tool_function_name = tool_calls[0].function.name
    tool_query_string = json.loads(tool_calls[0].function.arguments)['query']
    # Step 3: Call the function and retrieve results. Append the results to the messages list.
    if tool_function_name == 'ask_database':
        results = ask_database(conn, tool_query_string)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call_id,
            "name": tool_function_name,
            "content": results
        })
        # Step 4: Invoke the chat completions API with the function response appended to the messages list
        # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'
        model_response_with_function_call = client.chat.completions.create(
            model="myqwen2",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        print(model_response_with_function_call.choices[0].message.content)
    else:
        print(f"Error: function {tool_function_name} does not exist")
else:
    # Model did not identify a function to call, result can be returned to the user
    print(response_message.content)
The album "Greatest Hits" contains the most tracks