LLaMA-Factory Installation Tutorial (fixing the "cannot allocate memory in static TLS block" error)


Step 1: Pull the base image

```shell
# Configure the Docker registry mirror and trusted registry
vi /etc/docker/daemon.json
```

Contents of daemon.json:

```json
{
  "insecure-registries": ["https://swr.cn-east-317.qdrgznjszx.com"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
```

```shell
systemctl restart docker.service
docker pull swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04
mkdir /root/llama_factory_model
```
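A malformed daemon.json is the usual reason dockerd fails to come back after the restart, so it is worth validating the file first. A minimal sketch (written to /tmp purely for demonstration; the real file lives at /etc/docker/daemon.json):

```shell
# Sketch: validate daemon.json before `systemctl restart docker.service`;
# a JSON syntax error in this file prevents dockerd from starting at all.
# /tmp path is for demonstration only -- the real file is /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": ["https://swr.cn-east-317.qdrgznjszx.com"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```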
Step 2: Create the base container

```shell
docker create -it -u root --ipc=host --net=host --name=llama-factory -e LANG="C.UTF-8" \
        --device=/dev/davinci0 \
        --device=/dev/davinci1 \
        --device=/dev/davinci2 \
        --device=/dev/davinci3 \
        --device=/dev/davinci4 \
        --device=/dev/davinci5 \
        --device=/dev/davinci6 \
        --device=/dev/davinci7 \
        --device=/dev/davinci_manager \
        --device=/dev/devmm_svm \
        --device=/dev/hisi_hdc \
        -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
        -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \
        -v /usr/local/sbin/npu-smi:/usr/local/sbin/npu-smi \
        -v /mnt/:/mnt/ \
        -v /root/llama_factory_model:/root/llama_factory_model \
        -v /var/log/npu:/usr/slog \
        swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04 \
        /bin/bash
```
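Before creating the container it can help to confirm that the Ascend device nodes being passed through actually exist on the host; a missing node usually means the NPU driver is not loaded. A sketch, with the device list taken from the command above:

```shell
# Sketch: count which of the Ascend device nodes mapped above exist on the host.
# On a machine without the Ascend driver installed this prints 0.
present=0
for dev in /dev/davinci0 /dev/davinci1 /dev/davinci2 /dev/davinci3 \
           /dev/davinci4 /dev/davinci5 /dev/davinci6 /dev/davinci7 \
           /dev/davinci_manager /dev/devmm_svm /dev/hisi_hdc; do
    [ -e "$dev" ] && present=$((present + 1))
done
echo "found $present of 11 expected Ascend device nodes"
```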
Step 3: Install LLaMA-Factory

```shell
docker start llama-factory
docker exec -it llama-factory bash

# Install LLaMA-Factory
wget https://codeload.github.com/hiyouga/LLaMA-Factory/zip/refs/heads/main -O LLaMA-Factory.zip
unzip LLaMA-Factory.zip
mv LLaMA-Factory-main LLaMA-Factory
cd LLaMA-Factory
pip install -e ".[torch-npu,metrics]"
apt install libsndfile1

# Activate the Ascend environment variables (recommended: add this to ~/.bashrc)
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Verify the LLaMA-Factory x Ascend installation
llamafactory-cli env
```
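The comment above recommends putting the `set_env.sh` sourcing into ~/.bashrc so every new shell picks up the Ascend toolchain. A small idempotent sketch for doing that (appends the line only if it is not already present):

```shell
# Sketch: append the Ascend env setup to ~/.bashrc exactly once, as the
# comment above recommends, so new shells activate the toolkit automatically.
LINE='source /usr/local/Ascend/ascend-toolkit/set_env.sh'
RC="$HOME/.bashrc"
touch "$RC"
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"
echo "ascend env line ensured in $RC"
```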

```shell
# Run the LLaMA-Factory webui (served on port 7860 of this machine)
nohup llamafactory-cli webui > llama_factory_output.log 2>&1 &

# Follow the webui log
tail -f /home/HwHiAiUser/LLaMA-Factory/llama_factory_output.log
```
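The `nohup … > log 2>&1 &` pattern above detaches the webui from the terminal and captures both stdout and stderr in the log file. A self-contained sketch of the same pattern, using a placeholder command so it runs anywhere (no LLaMA-Factory required):

```shell
# Sketch of the detach-and-log pattern used for the webui above, with a
# placeholder command in place of `llamafactory-cli webui`.
nohup sh -c 'echo "webui placeholder started"' > /tmp/llama_factory_demo.log 2>&1 &
wait $!                            # wait for the background job to finish
cat /tmp/llama_factory_demo.log    # inspect the captured output
```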

Fixing the error

Problem description

RuntimeError: Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
Solution

Preload the offending libgomp so it is loaded at process startup, while static TLS slots are still available:

```shell
vim ~/.bashrc
# Append at the end of the file:
export LD_PRELOAD=/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0
source ~/.bashrc
```
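The hash in the library filename (`d22c30c5`) is build-specific, so the exact path differs between scikit-learn builds and Python installs. A sketch for locating the vendored libgomp under your own interpreter's site-packages (the directory layout searched here is an assumption based on how scikit-learn wheels vendor the library):

```shell
# Sketch: find the libgomp that the installed scikit-learn wheel vendors,
# since the hashed filename varies between builds; use the printed path
# in the LD_PRELOAD export above.
PYROOT=$(python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
matches=$(find "$PYROOT" -maxdepth 2 -name 'libgomp*' 2>/dev/null)
if [ -n "$matches" ]; then
    echo "$matches"
else
    echo "no vendored libgomp under $PYROOT"
fi
```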

