Fine-tuning Qwen2.5-7B-Instruct with LLaMA-Factory in practice — this one post is all you need!!! (covers Windows and Linux)
1. Installing LLaMA-Factory
LLaMA-Factory repository: https://github.com/hiyouga/LLaMA-Factory
Installation is simple: open the GitHub page, scroll down to the "Install LLaMA Factory" section, and follow the steps.
Install LLaMA Factory:
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e "."
After installation, run the following from the repository root:
llamafactory-cli webui
This launches the visual web UI, as shown below:
https://i-blog.csdnimg.cn/direct/7ddd19cf421a40e28e6dbad49e8a673f.png
With that, installation is complete; next comes fine-tuning.
2. Fine-tuning
1. Data preparation:
Your data needs to be converted to the Alpaca format.
[*] Alpaca format: the Alpaca format originated from a Stanford research project that aimed to fine-tune large language models with a small amount of high-quality data. It is named after the Alpaca model, an instruction-following model built on top of Meta AI's LLaMA.
The format is as follows:
{
  "instruction": "Summarize the following text.",
  "input": "Artificial intelligence (AI) is a rapidly growing field...",
  "output": "AI is an evolving technology that is growing quickly in various fields...",
  "system": "system prompt (optional)",
  "history": [
    ["user instruction in the first round (optional)", "model response in the first round (optional)"],
    ["user instruction in the second round (optional)", "model response in the second round (optional)"]
  ]
}
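As a minimal sketch of this conversion (the `question`/`answer` source schema is a hypothetical example — adapt it to your own raw data), question/answer pairs can be reshaped into Alpaca records like this:

```python
import json

def to_alpaca(records, system_prompt=""):
    """Convert raw {"question", "answer"} pairs (hypothetical source schema)
    into Alpaca-format entries accepted by LLaMA-Factory."""
    out = []
    for r in records:
        entry = {
            "instruction": r["question"],  # what the model is asked to do
            "input": "",                   # optional extra context
            "output": r["answer"],         # the desired response
        }
        if system_prompt:                  # "system" is optional in Alpaca format
            entry["system"] = system_prompt
        out.append(entry)
    return out

raw = [{"question": "What is LoRA?", "answer": "A parameter-efficient fine-tuning method."}]
with open("lora_data.json", "w", encoding="utf-8") as f:
    json.dump(to_alpaca(raw), f, ensure_ascii=False, indent=2)
```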
Example: after processing, the data looks like the screenshot below:
https://i-blog.csdnimg.cn/direct/d7c41323a1db4077988f8d54f36d2089.png
Note:
[*] After the data is ready, add the following entry to data/dataset_info.json in the LLaMA-Factory directory:
"dataset_name": {
  "file_name": "dataset_name.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "system": "system",
    "history": "history"
  }
}
[*] For the data format shown above, the entry is:
"lora_data": {
  "file_name": "lora_data.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output"
  }
}
[*] This entry is required — only then will your dataset appear in the GUI. https://i-blog.csdnimg.cn/direct/9585eefa60844b3ba44d5f98737cda93.png
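The registration step can also be scripted. A small sketch (the path is an assumption, relative to your LLaMA-Factory checkout) that inserts the entry into dataset_info.json without clobbering the datasets already registered there:

```python
import json

def register_dataset(info_path, name, file_name):
    """Add a dataset entry to LLaMA-Factory's dataset_info.json (idempotent)."""
    with open(info_path, "r", encoding="utf-8") as f:
        info = json.load(f)
    info[name] = {
        "file_name": file_name,
        "columns": {"prompt": "instruction", "query": "input", "response": "output"},
    }
    with open(info_path, "w", encoding="utf-8") as f:
        json.dump(info, f, ensure_ascii=False, indent=2)
    return info

# Usage (path is an assumption):
# register_dataset("data/dataset_info.json", "lora_data", "lora_data.json")
```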
2. Downloading the model:
Option 1: automatic download (not recommended)
https://i-blog.csdnimg.cn/direct/688e950b787346b9b23cad5fd49870dc.png
After choosing a model, clicking "Load model" makes the UI connect to Hugging Face and download it automatically. This requires a proxy (in mainland China) and downloads are slow, so it is not recommended.
Option 2: local download
Open the mainland-China mirror site: https://hf-mirror.com/
Search for Qwen2.5-7B-Instruct: https://hf-mirror.com/Qwen/Qwen2.5-7B-Instruct
https://i-blog.csdnimg.cn/direct/95b6ec13dd384724bacaa087586cd71c.png
Clone the repository directly to your local machine with git (install git-lfs first so the large weight files are actually fetched).
https://i-blog.csdnimg.cn/direct/db7f9d08481947468faa51f4c4ec8dec.png
Later, just fill this local path into the model path field in the web UI.
https://i-blog.csdnimg.cn/direct/94cdf545f3a54ab1b83db6a40b5e2e29.png
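Before pointing the web UI at the path, a quick sanity check helps confirm the clone pulled real weight files rather than git-lfs pointer stubs. This is only a sketch — the exact file list varies by model:

```python
import os

def check_model_dir(path):
    """Return a list of missing essentials in a locally cloned HF model directory."""
    required = ["config.json", "tokenizer_config.json"]
    missing = [f for f in required if not os.path.exists(os.path.join(path, f))]
    has_weights = any(
        name.endswith((".safetensors", ".bin")) for name in os.listdir(path)
    )
    if not has_weights:
        missing.append("model weights (*.safetensors / *.bin)")
    return missing

# print(check_model_dir("/path/to/Qwen2.5-7B-Instruct"))  # empty list means it looks complete
```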
3. Training
Fine-tuning inside Docker on Linux:
a. Start a container that has LLaMA-Factory and CUDA installed (docker-cuda-llamafactory is the image name used here): sudo docker run --runtime=nvidia --gpus all --net host --shm-size=2g -d -it -v $(pwd)/:/workspace docker-cuda-llamafactory
b. Copy the data and the model into the container:
First run sudo docker ps to get the container id: https://i-blog.csdnimg.cn/direct/ad6192a22d58440a9d567ae1a3eabbdd.png
Then copy the files into the container (the container does not share the host's filesystem): sudo docker cp file_name docker_id:docker_path
docker_id: the container id found in the previous step
docker_path: where the files should live inside the container
c. Enter the container and start LLaMA-Factory: sudo docker exec -it docker_id /bin/bash
Then run llamafactory-cli webui to launch the web UI.
d. Training:
[*] If you see the following error:
https://i-blog.csdnimg.cn/direct/579336bb18f5465fa46f7e09111b95b3.png
Error message: RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information: python -m bitsandbytes. Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
The usual fix is to make sure the directory containing the CUDA runtime libraries is on LD_LIBRARY_PATH.
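Since the message points at LD_LIBRARY_PATH, a small helper sketch (the search paths below are common defaults, not an exhaustive list) can locate libcudart and print the export line to add to your shell:

```python
import glob
import os

def find_cuda_lib_dirs(roots=("/usr/local/cuda/lib64", "/usr/lib/x86_64-linux-gnu")):
    """Return directories that contain libcudart — candidates for LD_LIBRARY_PATH."""
    return [r for r in roots if glob.glob(os.path.join(r, "libcudart*"))]

for d in find_cuda_lib_dirs():
    print(f'export LD_LIBRARY_PATH="{d}:$LD_LIBRARY_PATH"')
```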
After setting the model path, dataset, and other parameters, just click Start to launch training.
https://i-blog.csdnimg.cn/direct/7f60c27afb444219bfa2c6e6d636d797.png
e. Testing
In the checkpoint path field, select the result you just trained, switch to the Chat tab, load the model, and you can then test the fine-tuned model interactively.
https://i-blog.csdnimg.cn/direct/6916674701c146c79d348f0aad17f169.png
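The same test can be run outside the web UI with llamafactory-cli chat and a YAML config. This is only a sketch — the file name, model path, and adapter directory below are placeholders you must replace with your own:

```yaml
# chat_qwen_lora.yaml (hypothetical file name)
model_name_or_path: /path/to/Qwen2.5-7B-Instruct              # the locally cloned base model
adapter_name_or_path: saves/Qwen2.5-7B-Instruct/lora/train_xxx  # your trained checkpoint dir
template: qwen
finetuning_type: lora
```

Run it with: llamafactory-cli chat chat_qwen_lora.yaml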
f. Export
Fill in the checkpoint path and an export directory, then click Export to merge and save the model as your own.
https://i-blog.csdnimg.cn/direct/a82f457c184449eeb4a6c2b1fbf03f4f.png
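Exporting also has a CLI equivalent via llamafactory-cli export. A sketch of the merge config (paths and file name are placeholders):

```yaml
# merge_qwen_lora.yaml (hypothetical file name)
model_name_or_path: /path/to/Qwen2.5-7B-Instruct
adapter_name_or_path: saves/Qwen2.5-7B-Instruct/lora/train_xxx
template: qwen
finetuning_type: lora
export_dir: models/qwen2.5-7b-instruct-lora-merged
```

Run it with: llamafactory-cli export merge_qwen_lora.yaml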
On Windows, follow the same steps as on Linux; setting paths and parameters is very convenient there, so a separate illustrated tutorial seems unnecessary. If you hit a bitsandbytes error on Windows, try: pip install bitsandbytes-windows