
Author: 卖不甜枣    Time: 2025-1-1 13:26
Title: Deployment and Use of vLLM
Environment setup: create a conda environment with CUDA 12.1 and install the pinned dependencies.

conda create -n cosyvoice python=3.10.9 cudnn=9.1.1.17 nvidia/label/cuda-12.1.1::cuda-toolkit ffmpeg x264
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install vllm==0.6.6
pip install transformers==4.46 modelscope==1.20.1
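
After installation, a quick sanity check can confirm that the CUDA build of PyTorch and the pinned vLLM release are importable inside the new environment. This is a minimal sketch; the printed versions should match the ones installed above.

import torch
import vllm

# Confirm the CUDA-enabled torch wheel from the cu121 index was picked up.
print("torch:", torch.__version__, "CUDA available:", torch.cuda.is_available())
# Confirm the pinned vLLM release (0.6.6) is the one on the path.
print("vllm:", vllm.__version__)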
Qwen2.5 model download (via ModelScope):
from modelscope import snapshot_download

# Downloading model checkpoint to a local dir model_dir
# model_dir = snapshot_download('Qwen/Qwen2.5-0.5B-Instruct')
# model_dir = snapshot_download('Qwen/Qwen2.5-7B-Instruct')
# model_dir = snapshot_download('Qwen/Qwen2.5-32B-Instruct')
# model_dir = snapshot_download('Qwen/Qwen2.5-72B-Instruct')
model_dir = snapshot_download('Qwen/Qwen2.5-1.5B-Instruct')
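
With the checkpoint downloaded, the model can be loaded directly into vLLM for offline inference. The following is a minimal sketch: the prompt and sampling parameters are illustrative assumptions, not part of the original post, and should be tuned for the chosen model size.

from vllm import LLM, SamplingParams

# Load the downloaded Qwen2.5 checkpoint into vLLM.
llm = LLM(model=model_dir)

# Illustrative sampling settings; adjust temperature/top_p/max_tokens as needed.
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

prompts = ["Introduce vLLM in one sentence."]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)

Alternatively, vLLM 0.6.x ships an OpenAI-compatible HTTP server, which can be started with "vllm serve <model_dir>" (or "python -m vllm.entrypoints.openai.api_server --model <model_dir>") and queried with a standard OpenAI client.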
 
