pip install deepspeed
Works with Hugging Face transformers via the --deepspeed flag plus a config file;
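For example, with the Hugging Face Trainer the config file is passed through TrainingArguments. A minimal sketch, assuming an existing model and a train_ds dataset (both hypothetical names here):

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config.json",  # hand the DeepSpeed config to the Trainer
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_ds)
trainer.train()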
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                     model=model,
                                                     model_parameters=params)
Distributed setup, mixed precision, etc. are all wrapped inside deepspeed.initialize and the returned model_engine;
Delete: torch.distributed.init_process_group(...) — DeepSpeed does the process-group initialization itself, as in the sketch below;
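Putting the setup together, a minimal sketch (model is assumed to be an existing torch.nn.Module; deepspeed.add_config_arguments is the helper that registers the --deepspeed/--deepspeed_config flags on an argparse parser):

import argparse
import deepspeed

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)  # filled in by the launcher
parser = deepspeed.add_config_arguments(parser)
cmd_args = parser.parse_args()

# no torch.distributed.init_process_group(...) needed here:
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                     model=model,
                                                     model_parameters=model.parameters())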
for step, batch in enumerate(data_loader):
    # forward() method
    loss = model_engine(batch)
    # runs backpropagation
    model_engine.backward(loss)
    # weight update
    model_engine.step()
Gradient averaging: handled automatically inside model_engine.backward;
Loss scaling: handled automatically;
Learning rate scheduler: stepped automatically inside model_engine.step (a scheduler can also be passed in explicitly; see the sketch below);
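A client-side scheduler can also be built by hand and handed to deepspeed.initialize, which then steps it on every model_engine.step(). A sketch, assuming the optimizer/scheduler sections are then left out of ds_config.json:

import torch
import deepspeed

# build optimizer and scheduler yourself, then hand both to DeepSpeed
optimizer = torch.optim.Adam(model.parameters(), lr=0.00015)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.1)
model_engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    args=cmd_args,
    model=model,
    optimizer=optimizer,
    lr_scheduler=scheduler)  # stepped for you inside model_engine.step()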
Save & load: model, optimizer, and LR scheduler states are all saved together (client_sd is arbitrary user-defined data):
_, client_sd = model_engine.load_checkpoint(args.load_dir, args.ckpt_id)
step = client_sd['step']
...
if step % args.save_interval == 0:
    client_sd['step'] = step
    ckpt_id = loss.item()
    model_engine.save_checkpoint(args.save_dir, ckpt_id, client_sd = client_sd)
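Important: save_checkpoint (and load_checkpoint) must be called by all processes, not just rank 0, because every process needs to persist its own master weights and scheduler/optimizer state;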
Config file (e.g. named ds_config.json):
{
  "train_batch_size": 8,
  "gradient_accumulation_steps": 1,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015
    }
  },
  "fp16": {
    "enabled": true
  },
  "zero_optimization": true
}
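Note that train_batch_size is the effective global batch size, i.e. train_micro_batch_size_per_gpu × gradient_accumulation_steps × number of GPUs; with the config above on 4 GPUs, each GPU sees a micro-batch of 2 per step;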
hostfile (compatible with OpenMPI and Horovod; each line is a hostname followed by its GPU slot count):
worker-1 slots=4
worker-2 slots=4
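If no hostfile is supplied (and the default /job/hostfile does not exist), deepspeed falls back to the GPUs on the local machine;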
Launch command:
deepspeed --hostfile=myhostfile <client_entry.py> <client args> \
    --deepspeed --deepspeed_config ds_config.json
--num_nodes: how many machines to run on;
--num_gpus: how many GPUs to run on;
--include: whitelist of nodes and GPU ids; e.g. --include="worker-2:0,1"
--exclude: blacklist of nodes and GPU ids; e.g. --exclude="worker-2:0@worker-3:0,1" (see the examples after this list)
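For example (train.py is a hypothetical entry script):

deepspeed --num_gpus=4 train.py --deepspeed --deepspeed_config ds_config.json
deepspeed --include="worker-2:0,1" --hostfile=myhostfile train.py \
    --deepspeed --deepspeed_config ds_config.json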
Environment variables:
set on every node when the job launches;
put them in a ".deepspeed_env" file, in the working directory or in ~/; e.g.:
NCCL_IB_DISABLE=1
NCCL_SOCKET_IFNAME=eth0
Running the "deepspeed" command on one machine launches the processes on all nodes;
launching via mpirun is also supported, but the communication backend is still NCCL, not MPI;
Note:
CUDA_VISIBLE_DEVICES is not supported; GPUs can only be selected like this:
deepspeed --include localhost:1 ...