0. Preliminaries
The prerequisites for running the NVIDIA Container Toolkit are listed below (a quick way to check them on the host is sketched after this list):
- GNU/Linux x86_64 with kernel version > 3.10
- Docker >= 19.03 (recommended; some distributions ship older versions of Docker, and the minimum supported version is 1.12)
- NVIDIA GPU with architecture >= Kepler (or compute capability 3.0)
- NVIDIA Linux drivers >= 418.81.07 (older driver releases or branches are unsupported)
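The commands below are a minimal sketch for checking these prerequisites on the host; they assume the NVIDIA driver (and therefore nvidia-smi) and Docker are already installed and on the PATH.
# Kernel version (should be > 3.10)
uname -r
# Driver version and GPU model (driver should be >= 418.81.07)
nvidia-smi --query-gpu=driver_version,name --format=csv,noheader
# Docker version (19.03 or later recommended)
docker --version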
Install Docker 19.03 or later
Docker 19.03 and later has GPU support built in (the --gpus flag), so there is no need to deploy nvidia-docker separately. The installation is as follows:
Install Docker:
I installed a 20.x release of Docker; the detailed steps are not repeated here, but a typical repository setup is sketched below.
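For reference, a typical way to add the Docker CE repository on CentOS 7 before running the yum install below (a sketch; it assumes installing yum-utils is acceptable on the host):
# Add the official Docker CE yum repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo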
[root@localhost home]# yum install docker-ce-20.10.17
Running transaction
Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 1/4
Installing : containerd.io-1.6.8-3.1.el7.x86_64 2/4
Installing : 3:docker-ce-20.10.17-3.el7.x86_64 3/4
Installing : docker-ce-rootless-extras-20.10.18-3.el7.x86_64 4/4
Verifying : docker-ce-rootless-extras-20.10.18-3.el7.x86_64 1/4
Verifying : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 2/4
Verifying : containerd.io-1.6.8-3.1.el7.x86_64 3/4
Verifying : 3:docker-ce-20.10.17-3.el7.x86_64 4/4
Installed:
docker-ce.x86_64 3:20.10.17-3.el7
Dependency Installed:
container-selinux.noarch 2:2.119.2-1.911c772.el7_8 containerd.io.x86_64 0:1.6.8-3.1.el7
docker-ce-rootless-extras.x86_64 0:20.10.18-3.el7
Complete!
At this point only Docker is installed; nvidia-docker2 has not been installed yet.
[root@localhost home]# docker --version
Docker version 20.10.18, build b40c2f6
[root@localhost home]# systemctl start docker
[root@localhost home]# systemctl enable docker
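As a quick sanity check that the daemon is running (a sketch; the version string will vary):
# Ask the running daemon for its version
sudo docker info --format '{{.ServerVersion}}'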
1. Installing nvidia-docker on Ubuntu
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/experimental/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
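Installing nvidia-docker2 registers an nvidia runtime with Docker through /etc/docker/daemon.json; the check below is a sketch of what that file typically contains (the exact contents may differ on your system):
# Inspect the runtime configuration written by nvidia-docker2
cat /etc/docker/daemon.json
# Typically something like:
# {
#     "runtimes": {
#         "nvidia": {
#             "path": "nvidia-container-runtime",
#             "runtimeArgs": []
#         }
#     }
# }
# Adding "default-runtime": "nvidia" here makes it the default runtime for all containers.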
# After setting the default runtime, restart the Docker daemon to complete the installation:
sudo systemctl restart docker
# At this point, a working setup can be tested by running a base CUDA container:
sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
This should produce console output like the following:
Thu Sep 29 12:30:53 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100 80G... Off | 00000000:00:0C.0 Off | 0 |
| N/A 41C P0 47W / 300W | 0MiB / 80994MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
If the output matches what nvidia-smi prints when run directly on the host, the installation was successful.
[root@localhost]# nvidia-docker version
NVIDIA Docker: 2.11.0
/usr/bin/nvidia-docker: line 34: /usr/bin/docker: Permission denied
/usr/bin/nvidia-docker: line 34: /usr/bin/docker: Success
The permission error is caused by SELinux; switching it to permissive mode clears it:
[root@localhost]# setenforce 0
[root@localhost]# nvidia-docker version
NVIDIA Docker: 2.11.0
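Note that setenforce 0 only lasts until the next reboot; to keep SELinux permissive across reboots, one common approach is the following (a sketch; adjust it to your own SELinux policy requirements):
# Switch SELinux to permissive mode permanently
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config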
2. Installing nvidia-docker on CentOS 7
Docker (a 20.x release) is already installed.
Install nvidia-container-toolkit:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
sudo yum install -y nvidia-container-toolkit
sudo yum install -y nvidia-docker2
sudo systemctl restart docker
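After the restart, a quick way to confirm that the nvidia runtime was registered (a sketch; the exact runtime list depends on your Docker version):
# "nvidia" should appear among the available runtimes
docker info | grep -i runtime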
Start a container:
[root@localhost ]# docker run --gpus all nvidia/cuda:10.0-base /bin/sh -c "while true; do echo hello world; sleep 1; done"
hello world
hello world
hello world
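This example runs in the foreground and loops forever; stop it with Ctrl+C, or find and stop the container from another shell (a sketch; <container-id> is a placeholder for whatever docker ps prints):
# List containers started from the CUDA image
docker ps --filter ancestor=nvidia/cuda:10.0-base
# Stop the container (replace <container-id> with the ID from docker ps)
docker stop <container-id>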
Verification:
- Check that the --gpus option is available:
[root@localhost]# docker run --help | grep -i gpus
--gpus gpu-request GPU devices to add to the container ('all' to pass all GPUs)
Since Docker 19.03, running a container that needs GPUs only requires adding the --gpus flag: --gpus all uses all of the GPUs, --gpus 2 requests any two GPUs, and specific cards can be selected with --gpus '"device=1,2"' (detailed below).
--gpus '"device=1,2"' maps the host's second and third GPU cards into the container, since device indices start at 0 (the nvidia-smi -L sketch after the examples below shows how to look up these indices).
The following three options all mean that the container can use every GPU on the host:
--gpus all
NVIDIA_VISIBLE_DEVICES=all
--runtime=nvidia
NVIDIA_VISIBLE_DEVICES takes a comma-separated list of GPU indices (or UUIDs); for example, NVIDIA_VISIBLE_DEVICES=0,1 exposes only the first two GPUs, so the container can use only those two.
Examples of selecting GPUs
- Use all GPUs
$ docker run --rm --gpus all nvidia/cuda nvidia-smi
$ docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda nvidia-smi
- Specify which cards to use
$ docker run --gpus '"device=1,2"' nvidia/cuda nvidia-smi
$ docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1,2 nvidia/cuda nvidia-smi
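When picking indices for --gpus '"device=..."' or NVIDIA_VISIBLE_DEVICES, listing the GPUs on the host shows which index belongs to which physical card (the example output below is illustrative):
# List all GPUs with their indices and UUIDs
nvidia-smi -L
# Example output (illustrative):
# GPU 0: NVIDIA A100 80GB PCIe (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)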
This completes the basic environment setup for GPU-accelerated computing with NVIDIA GPUs under Docker.
- Run the image provided by NVIDIA and execute nvidia-smi to check that the NVIDIA status output appears:
[root@localhost]# docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi
Thu Sep 29 12:52:00 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100 80G... Off | 00000000:00:0C.0 Off | 0 |
| N/A 41C P0 47W / 300W | 0MiB / 80994MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
3. Entering the container
Taking CentOS 7 as an example, when you run:
[root@localhost ~]# docker run -it --rm --runtime=nvidia --gpus all nvidia/cuda:9.0-base /bin/bash
docker: Error response from daemon: Unknown runtime specified nvidia.
# This error is reported because nvidia-docker2 is not installed; after installing it, rerun the command.
Enter the container with docker exec and run nvidia-smi again; the result is the same as on the host.
Inside the container, the operating system turns out to be Ubuntu:
root@c2c7d583633f:/home/Python-3.8.13# cat /etc/issue
Ubuntu 16.04.7 LTS \n \l
4. Verification
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
>>> import torch
>>> torch.cuda.is_available()
True
If the output is True, the environment is working and the GPU can be used inside the container.
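A slightly fuller one-line check can be run inside the container (a sketch; it assumes python is on the PATH and at least one GPU is visible to the container):
# Print CUDA availability, the number of visible GPUs, and the name of the first GPU
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count(), torch.cuda.get_device_name(0))"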
5. Docker image sources
Official site: link
# Devel images (full CUDA toolkit + cuDNN)
# CentOS
docker pull nvidia/cuda:11.1.1-cudnn8-devel-centos7
docker pull nvidia/cuda:11.1.1-cudnn8-devel-centos8
# Ubuntu
docker pull nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04
docker pull nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04
Reference: official website.