Setting up a Docker GPU Environment
Set up a GPU environment if you are building a model with image or video data.
Prerequisites
Ensure your machine has one or more CUDA-capable GPUs.
The iai client uses PyTorch built against CUDA 11.3. Install a driver that supports CUDA 11.3 or later; installing the latest available driver is recommended. For more information, see: CUDA major component versions.
Linux/macOS Setup
Install the CUDA driver or the CUDA toolkit:
Install the CUDA toolkit (which includes the driver, but also contains other components you may not need):
If you use an rpm/deb package manager, you can install the driver only:
sudo apt-get -y install cuda-drivers   (Debian/Ubuntu)
sudo yum -y install cuda-drivers       (RHEL/CentOS)
Install the CUDA driver only:
For data centre GPUs, see: https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html
For other graphics cards, go to the NVIDIA driver page, select your graphics card and operating system, then download and install the driver.
A system reboot may be needed for the driver to load.
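After rebooting, you can sanity-check the installation. This is a minimal check, assuming the driver installed successfully; nvidia-smi ships with the driver and reports the driver version and the highest CUDA version it supports:

```shell
# Verify the driver is loaded.
# The reported "CUDA Version" should be 11.3 or higher for the iai client.
nvidia-smi
```

If the command is not found or reports no devices, the driver did not install correctly.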
Install the NVIDIA Container Toolkit:
Ensure that nvidia-docker2 can modify the Docker configuration file /etc/docker/daemon.json.
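The step above can be sketched as follows for Debian/Ubuntu, based on NVIDIA's nvidia-docker2 installation instructions; repository URLs and package names may differ on your distribution, so treat this as an illustration rather than a definitive procedure:

```shell
# Add NVIDIA's package repository (Debian/Ubuntu sketch).
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install nvidia-docker2; this updates /etc/docker/daemon.json
# to register the NVIDIA container runtime.
sudo apt-get update && sudo apt-get install -y nvidia-docker2

# Restart Docker so the new daemon.json takes effect.
sudo systemctl restart docker
```

After restarting, /etc/docker/daemon.json should contain an "nvidia" entry under "runtimes".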
Windows Setup
Ensure that Intel VT-x or AMD SVM is enabled in the BIOS; check your motherboard manufacturer's documentation for the exact steps.
Install the CUDA driver or the CUDA toolkit:
Install the CUDA toolkit (which includes the driver, but also contains other components you may not need).
In the component selection screen, you can choose to install only the CUDA driver.
Install the CUDA driver only:
Go to the NVIDIA driver page, select your graphics card and operating system, then download and install the driver.
Running a Docker Container with a GPU Device
Add the --gpus all option to the docker run command.
Example:
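A minimal sketch, assuming a CUDA base image is available locally or on Docker Hub (the image tag here is illustrative; substitute your own image):

```shell
# --gpus all exposes every GPU on the host to the container.
# Running nvidia-smi inside the container confirms the GPU is visible.
docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi
```

To expose only specific devices, you can pass e.g. --gpus '"device=0,1"' instead of --gpus all.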