罪恶克星, posted 2025-3-5 10:58:08

LLaMA-Factory Installation Tutorial (fixing the error: cannot allocate memory in static TLS block)

Step 1: Pull the base image

# Configure Docker registry settings (DNS/mirrors)
vi /etc/docker/daemon.json   # add to daemon.json:
{ "insecure-registries": ["https://swr.cn-east-317.qdrgznjszx.com"], "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"] }
systemctl restart docker.service
docker pull swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04
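If you provision several machines, the daemon.json edit above can be scripted instead of done by hand in vi. A minimal sketch (the helper name `write_daemon_json` is mine, not a standard tool; the registry URLs are the ones from this guide):

```python
import json
import os

def write_daemon_json(path="/etc/docker/daemon.json"):
    """Merge the registry settings from this guide into daemon.json,
    preserving any other keys already present."""
    config = {}
    if os.path.exists(path):
        with open(path) as f:
            config = json.load(f)
    config["insecure-registries"] = ["https://swr.cn-east-317.qdrgznjszx.com"]
    config["registry-mirrors"] = ["https://docker.mirrors.ustc.edu.cn"]
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Merging into the existing file (rather than overwriting it) matters if daemon.json already carries other settings such as `data-root`.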
mkdir /root/llama_factory_model

Step 2: Create the base container

docker create -it -u root --ipc=host --net=host --name=llama-factory -e LANG="C.UTF-8" \
      --device=/dev/davinci0 \
      --device=/dev/davinci1 \
      --device=/dev/davinci2 \
      --device=/dev/davinci3 \
      --device=/dev/davinci4 \
      --device=/dev/davinci5 \
      --device=/dev/davinci6 \
      --device=/dev/davinci7 \
      --device=/dev/davinci_manager \
      --device=/dev/devmm_svm \
      --device=/dev/hisi_hdc \
      -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
      -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \
      -v /usr/local/sbin/npu-smi:/usr/local/sbin/npu-smi \
      -v /mnt/:/mnt/ \
      -v /root/llama_factory_model:/root/llama_factory_model \
      -v /var/log/npu:/usr/slog \
      swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04 \
      /bin/bash

Step 3: Install LLaMA-Factory

docker start llama-factory
docker exec -it llama-factory bash
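The eight `--device=/dev/davinciN` flags in the `docker create` command above follow a fixed pattern. If you generate container-creation commands from a script, they can be built rather than typed; a sketch (the helper is illustrative, assuming the standard 8-NPU Ascend device layout):

```python
def npu_device_flags(count=8):
    """Build the --device flags for an Ascend node, matching the
    docker create command in Step 2."""
    flags = [f"--device=/dev/davinci{i}" for i in range(count)]
    flags += [
        "--device=/dev/davinci_manager",  # NPU management device
        "--device=/dev/devmm_svm",        # shared virtual memory
        "--device=/dev/hisi_hdc",         # host-device communication
    ]
    return flags
```

Joining the list with `" \\\n      "` reproduces the multi-line block above; adjust `count` if the node exposes fewer NPUs.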

# Install llama-factory
wget https://codeload.github.com/hiyouga/LLaMA-Factory/zip/refs/heads/main -O LLaMA-Factory.zip
unzip LLaMA-Factory.zip
mv LLaMA-Factory-main LLaMA-Factory

cd LLaMA-Factory
pip install -e "."
apt install libsndfile1

# Activate the Ascend environment variables (recommended: also add this to ~/.bashrc)
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Verify the LLaMA-Factory × Ascend installation with:
llamafactory-cli env
# Run the LLaMA-Factory web UI (served on port 7860 of this machine)
nohup llamafactory-cli webui > llama_factory_output.log 2>&1 &
# Follow the LLaMA-Factory log
tail -f /home/HwHiAiUser/LLaMA-Factory/llama_factory_output.log
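`tail -f` follows the log interactively; if a monitoring script instead needs the last few lines of the log (to check whether the web UI came up, say), a small sketch (the function is illustrative, not part of LLaMA-Factory):

```python
def tail_lines(path, n=10):
    """Return the last n lines of a text file, tolerating mixed encodings
    that can appear in process logs."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return f.readlines()[-n:]
```

Reading the whole file is fine for a session log of this size; for very large logs you would seek from the end instead.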
Fixing the error

Problem description

RuntimeError: Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
Solution

vim ~/.bashrc

# add at the end of the file:
export LD_PRELOAD=/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0

source ~/.bashrc
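Editing ~/.bashrc by hand works; if you apply this fix across containers, a sketch that appends the export only when it is not already there, so repeated runs do not duplicate the line (the helper is illustrative; the library path is the one from the error message above):

```python
LIBGOMP = ("/usr/local/python3.10.13/lib/python3.10/site-packages/"
           "sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0")

def add_ld_preload(bashrc):
    """Append the LD_PRELOAD export to bashrc unless it is already present.
    Returns True if the file was modified."""
    line = f"export LD_PRELOAD={LIBGOMP}"
    try:
        with open(bashrc) as f:
            existing = f.read()
    except FileNotFoundError:
        existing = ""
    if line in existing:
        return False
    with open(bashrc, "a") as f:
        f.write("\n" + line + "\n")
    return True
```

Preloading libgomp this way forces it to be mapped before Python starts, so its thread-local storage lands in the initial static TLS block instead of failing to allocate at `dlopen` time.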
