- #### Custom Dockerfile
- A custom installation, mainly so that a conda virtual environment can be used inside Docker.
```dockerfile
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-23-11.html#rel-23-11
FROM nvcr.io/nvidia/pytorch:23.11-py3
LABEL maintainer="transformers"

ARG DEBIAN_FRONTEND=noninteractive
ARG PYTORCH='2.1.0'
# Example: cu102, cu113, etc.
ARG CUDA='cu121'

RUN apt-get update && \
    apt-get install -y libaio-dev wget bzip2 ca-certificates curl git git-lfs unzip mlocate usbutils \
        vim tmux g++ gcc build-essential cmake checkinstall lsb-release && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get clean

# Remove the PyTorch stack shipped with the base image; it will be reinstalled inside the conda env.
RUN python3 -m pip uninstall -y torch torchvision torchaudio torch-tensorrt transformer-engine apex

SHELL ["/bin/bash", "--login", "-c"]

# Install Miniconda and create the "ai" virtual environment.
RUN cd / && wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /miniconda.sh && \
    /bin/bash /miniconda.sh -b -p /opt/conda && \
    ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
    echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
    /bin/bash -c "source ~/.bashrc" && \
    /opt/conda/bin/conda update -n base -c defaults conda -y && \
    /opt/conda/bin/conda config --set ssl_verify no && \
    /opt/conda/bin/conda config --add channels conda-forge && \
    /opt/conda/bin/conda create -n ai python=3.10

ENV PATH $PATH:/opt/conda/envs/ai/bin

RUN conda init bash && \
    echo "conda activate ai" >> ~/.bashrc && \
    conda activate ai && \
    pip install --upgrade pip -i https://mirror.baidu.com/pypi/simple && \
    pip config set global.index-url https://mirror.baidu.com/pypi/simple && \
# Install latest release PyTorch
# (PyTorch must be installed before pre-compiling any DeepSpeed c++/cuda ops.)
# (https://www.deepspeed.ai/tutorials/advanced-install/#pre-install-deepspeed-ops)
    pip install --no-cache-dir -U torch==$PYTORCH torchvision torchaudio \
        --extra-index-url https://download.pytorch.org/whl/$CUDA && \
    pip install -U numpy opencv-python onnx onnxoptimizer onnxruntime -i https://mirror.baidu.com/pypi/simple

ARG REF=main
RUN conda activate ai && \
    cd && \
    git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF && \
    cd .. && \
    pip install --no-cache-dir ./transformers[deepspeed-testing] && \
    pip install --no-cache-dir git+https://github.com/huggingface/accelerate@main#egg=accelerate

# Recompile apex.
# pip uninstall -y apex
RUN git clone https://github.com/NVIDIA/apex
# MAX_JOBS=1 disables parallel building to avoid CPU memory OOM when building the image on GitHub Actions (standard) runners.
# TODO: check if there is an alternative way to install the latest apex.
RUN cd apex && MAX_JOBS=1 python3 -m pip install --global-option="--cpp_ext" --global-option="--cuda_ext" --no-cache -v --disable-pip-version-check .

# Pre-build the latest DeepSpeed so it is ready for testing (otherwise, the first deepspeed test will time out).
# pip uninstall -y deepspeed
# This has to be run (again) inside the GPU VMs running the tests.
# The installation works here, but some tests fail if we don't pre-build deepspeed again in the VMs running the tests.
# TODO: find out why the tests fail.
RUN conda activate ai && \
    DS_BUILD_CPU_ADAM=1 DS_BUILD_FUSED_ADAM=1 \
    pip install deepspeed --global-option="build_ext" \
        --global-option="-j8" --no-cache -v --disable-pip-version-check 2>&1

# When installing in editable mode, transformers is not recognized as a package.
# This line must be added in order for python to be aware of transformers.
RUN conda activate ai && \
    cd && \
    cd transformers && python3 setup.py develop

# The base image ships with pydantic==1.8.2, which does not work - i.e. the next command fails.
RUN conda activate ai && \
    pip install -U --no-cache-dir "pydantic<2"

# Sanity check: DeepSpeed must be importable.
RUN conda activate ai && \
    python3 -c "from deepspeed.launcher.runner import main"

RUN apt-get update && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get clean
```
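Assuming the Dockerfile above is saved as `Dockerfile` in the build context, the image can be built and smoke-tested roughly as follows (the tag `transformers-deepspeed-conda` is an arbitrary name chosen for illustration; `--gpus all` requires the NVIDIA Container Toolkit on the host):

```shell
# Build the image, pinning transformers to a specific ref (REF defaults to main).
docker build --build-arg REF=main -t transformers-deepspeed-conda .

# Smoke test: the login shell activates the "ai" conda env via ~/.bashrc,
# so deepspeed and transformers should import from that environment.
docker run --rm --gpus all transformers-deepspeed-conda \
    bash -lc "python3 -c 'import deepspeed, transformers'"
```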
### Cache settings
Pretrained models are downloaded and cached locally under `~/.cache/huggingface/hub`, the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`. You can change the environment variables below, in order of priority, to specify a different cache directory.
1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Shell environment variable: `HF_HOME`.
3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`.
Unless you explicitly specify the environment variable `TRANSFORMERS_CACHE`,
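The priority order above can be sketched as a small shell function (illustrative only; the actual resolution happens inside `huggingface_hub`, and the `hub` / `huggingface/hub` suffixes appended here reflect the default layout described above):

```shell
# Resolve the model cache directory following the documented priority order:
# HUGGINGFACE_HUB_CACHE / TRANSFORMERS_CACHE, then HF_HOME, then
# XDG_CACHE_HOME, falling back to ~/.cache/huggingface/hub.
resolve_hf_cache() {
    if [ -n "${HUGGINGFACE_HUB_CACHE:-}" ]; then
        printf '%s\n' "$HUGGINGFACE_HUB_CACHE"
    elif [ -n "${TRANSFORMERS_CACHE:-}" ]; then
        printf '%s\n' "$TRANSFORMERS_CACHE"
    elif [ -n "${HF_HOME:-}" ]; then
        printf '%s\n' "$HF_HOME/hub"
    elif [ -n "${XDG_CACHE_HOME:-}" ]; then
        printf '%s\n' "$XDG_CACHE_HOME/huggingface/hub"
    else
        printf '%s\n' "$HOME/.cache/huggingface/hub"
    fi
}
```

For example, with none of the higher-priority variables set, `HF_HOME=/data/hf resolve_hf_cache` prints `/data/hf/hub`.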