AMD ROCm, PyTorch, Stable Diffusion & YOLO Installation Guide


Updated on March 13, 2024


Introduction

This guide covers how to install ROCm, AMD's answer to Nvidia's CUDA, which gives AMD GPUs the ability to run AI and machine learning models. I'll be doing this on an RX 6700 XT GPU, but these steps should work for all RDNA, RDNA 2, and RDNA 3 GPUs. GPUs from other generations will likely need different steps; see this Reddit thread for more info on older cards: Reddit: HOW-TO: Stable Diffusion on an AMD GPU.

This guide also shows how to install PyTorch, a Python framework for machine learning, and then how to install and run Stable Diffusion and YOLO object detection. Most of the guide will be done on a desktop running Kubuntu (Ubuntu 22.04), but I'll also include the steps for Arch-based distros. Everything will be installed locally on the system instead of using containers such as Docker. Some people may prefer the container route, especially developers who frequently work with different versions of packages, but for most people installing locally will be perfectly fine.

AMD Drivers & ROCm on Ubuntu

Here are the official instructions for installing the AMD drivers using the package manager; the necessary steps have been copied here for your convenience. Make sure your system is up to date before installing the drivers. There was an issue with kernel 6.5.0-14, so if you're running this kernel, update to a newer one before installing ROCm. See this link for more info on the issue.

Copy and paste the following commands to install ROCm:

sudo apt update

sudo apt install "linux-headers-$(uname -r)" "linux-modules-extra-$(uname -r)"

sudo usermod -a -G render,video $LOGNAME

Enter one of the following lines depending on your version of Ubuntu:

Ubuntu 24.04: wget https://repo.radeon.com/amdgpu-install/6.2/ubuntu/noble/amdgpu-install_6.2.60200-1_all.deb

Ubuntu 22.04: wget https://repo.radeon.com/amdgpu-install/6.2/ubuntu/jammy/amdgpu-install_6.2.60200-1_all.deb

If you're running 24.04 you'll also need to add the following repo:

sudo add-apt-repository -y -s "deb http://security.ubuntu.com/ubuntu jammy main universe"

Now finish the ROCm installation with these commands, regardless of which version you're running:

sudo apt install ./amdgpu-install_6.2.60200-1_all.deb

sudo apt update

sudo apt install amdgpu-dkms rocm

Installing on Arch Distros

Installing ROCm on Arch distros is extremely easy: all you need to do is install the following package (make sure you have the AUR enabled if your specific distro doesn't have it enabled by default):

sudo pacman -S opencl-amd

Next enable user permissions:

sudo usermod -a -G render,video $LOGNAME

Other Requirements

The following step is only required with certain consumer-grade GPUs, or if your CPU contains an integrated GPU. If you're running a professional card, an RDNA 2 GPU with 16GB of VRAM (e.g. RX 6800 XT, 6900 XT), or a 7900 XTX/XT, then this step is not necessary. Lower-tier cards will require it. If your system has a CPU with an integrated GPU (e.g. Ryzen 7000 series), then it may also require this step.

Edit ~/.profile with the following command (sudo isn't needed, since the file belongs to your user):

nano ~/.profile

Paste the appropriate line below at the bottom of the file, then press Ctrl+X, followed by Y and Enter, to save the file.

For RDNA and RDNA 2 cards:

export HSA_OVERRIDE_GFX_VERSION=10.3.0

For RDNA 3 cards:

export HSA_OVERRIDE_GFX_VERSION=11.0.0

If your CPU contains an integrated GPU then this command might be necessary to ignore the integrated GPU and force the dedicated GPU:

export HIP_VISIBLE_DEVICES=0
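Putting it together, here's what the end of ~/.profile might look like on a hypothetical system with an RDNA 2 card and a Ryzen integrated GPU (a sketch; pick only the lines that apply to your hardware):

```shell
# Appended to the bottom of ~/.profile

# Report a supported gfx version to ROCm (RDNA/RDNA 2 cards):
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Skip the integrated GPU and use only the first (dedicated) GPU:
export HIP_VISIBLE_DEVICES=0
```

After rebooting, `echo $HSA_OVERRIDE_GFX_VERSION` should print the value you set.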

Now restart your computer before continuing. You can then check whether ROCm was installed successfully by running rocminfo. If an error is returned, something went wrong with the installation. Secure boot can also cause issues on some systems, so if you received an error here, disabling secure boot may help.
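One quick way to confirm the runtime actually sees your card is to filter rocminfo's output for the gfx target name (a sketch; the exact name depends on your GPU, e.g. an RX 6700 XT may show up as gfx1031, or gfx1030 when the override from the previous section is set):

```shell
# List the gfx target names of the GPU agents ROCm detected:
rocminfo | grep -i "gfx"
```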

PyTorch

Next we'll download PyTorch with PIP in a Python virtual environment. First install the required software. For Ubuntu:

sudo apt install git python3-pip python3-venv libstdc++-12-dev

For Arch distros:

sudo pacman -S git python-pip python-virtualenv

Now we can install PyTorch, but it's best to create a Python virtual environment before installing packages with PIP. To create a default environment, enter:

python3 -m venv venv

The previous command should have created a folder named venv in your current directory; make sure you don't delete this folder. Next you'll need to activate that environment. Each time you open a new terminal, it will need to be re-activated if you want to use PyTorch or any other PIP packages.

. venv/bin/activate

If your venv works at first but then stops working later when using PIP, you might see an error message such as "error: externally-managed-environment". This usually happens when your system updates to a newer Python version than the one used to create your venv. You can solve this by deleting the old venv folder and creating a new one as shown above. You'll need to reinstall the PIP packages as well.
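The recovery steps can be sketched as follows (assuming the venv lives at ./venv in the current directory):

```shell
# Remove the venv created with the old Python version:
rm -rf venv

# Recreate and re-activate it with the current system Python:
python3 -m venv venv
. venv/bin/activate

# Reinstall the PIP packages the old venv contained, e.g. PyTorch:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
```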

Now PyTorch can be installed:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1

Let's verify that PyTorch was installed correctly with GPU support. First, enter the Python console:

python3

Now enter the following two lines of code. If the second line returns True, then everything was installed correctly.

import torch
torch.cuda.is_available()

Then enter exit() to exit the Python console.
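As a slightly fuller check, the short script below (a sketch) runs a small matrix multiply on the GPU when it's available and falls back to the CPU otherwise. Note that PyTorch's ROCm build reuses the "cuda" device name, which is why the check above works on AMD hardware:

```python
import torch

# ROCm builds of PyTorch expose the GPU through the "cuda" device name.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

# A small matmul to exercise the selected device:
a = torch.rand(256, 256, device=device)
b = torch.rand(256, 256, device=device)
c = a @ b

print(c.shape)  # torch.Size([256, 256])
```

If this prints "Running on: cuda" and completes without errors, the GPU path is working.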

Stable Diffusion

Update: I now recommend using ComfyUI instead of WebUI, because WebUI is outdated and still requires Python 3.10, and its developers don't seem to have any plans to support newer versions. Here's the link for Stable Diffusion ComfyUI.

I might do a video showing how to use ComfyUI, but the instructions are provided in the link above. Even though I no longer recommend WebUI, I've left the instructions here:

Now we'll test it out with real applications, starting with Stable Diffusion Web UI. First install this package for improved performance with Stable Diffusion. For Ubuntu distros:

sudo apt install --no-install-recommends google-perftools

For Arch distros:

sudo pacman -S gperftools

Next download the repository and enter the directory (webui.sh will set up its own virtual environment on first run):

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

Now we can run it with this command:

./webui.sh

The program will automatically download a model, but there are much better ones available at huggingface.co. Simply search for Stable Diffusion to see what's available. As an example, you could download Stable Diffusion v2-1, which is newer than the stock model. You'll need to place this file in the stable-diffusion-webui/models/Stable-diffusion/ folder.

Next open your web browser and go to localhost:7860 to access the UI. If you downloaded the v2-1 .ckpt model linked above, go into the Settings tab, then the Stable Diffusion section, and check the box that says "Upcast cross attention layer to float32". This step isn't necessary for the stock v1-5 model. Now you can navigate to the txt2img section, select your desired model at the top, enter your prompt, and press Generate. The first run might take several minutes, but following runs will be much quicker. To exit, go to the terminal and press Ctrl+C.

YOLO Object Detection

Now let's try object detection with YOLOv5. You'll need a webcam for this one. First enter the following to download the repository and install the necessary packages with PIP:

git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

Now let's run it. Make sure you have a camera connected to the system. Again, the first run might take several minutes to start, but subsequent runs will start up much faster.

python3 detect.py --weights yolov5s.pt --source 0

To exit, go to the terminal and press Ctrl+C.
