AMD’s Radeon RX 7900 XTX shines at running DeepSeek’s R1 AI model, outperforming NVIDIA’s GeForce RTX 4090 in AMD’s own inference benchmarks.
### AMD Swiftly Introduces Support for DeepSeek’s R1 LLM Models, Delivering Superior Performance
The latest AI model from DeepSeek has everyone buzzing, and while much of the curiosity centers on the hefty computing power needed to train such models, the good news is that a regular consumer can achieve impressive results with AMD’s “RDNA 3” Radeon RX 7900 XTX GPU. AMD has released comparison benchmarks for DeepSeek R1 inference, pitting its top-of-the-line RX 7000 series card against NVIDIA’s, and the results are glowing for Team Red.
DeepSeek performing very well on @AMDRadeon 7900 XTX. Learn how to run on Radeon GPUs and Ryzen AI APUs here: pic.twitter.com/5OKEkyJjh3
— David McAfee (@McAfeeDavid_AMD) January 29, 2025
Using consumer GPUs for AI tasks has turned out to be quite beneficial for many enthusiasts, mainly due to their good performance-to-cost ratio compared to standard AI accelerators. Plus, running models on your own device keeps your data private—a major concern with DeepSeek’s AI solutions. Thankfully, AMD has rolled out a comprehensive guide for getting DeepSeek R1 up and running on their GPUs, and here’s how you can do it:
1. Ensure you’re using the 25.1.1 Optional Adrenalin driver or a newer version.
2. Head over to lmstudio.ai/ryzenai and download LM Studio 0.3.8 or later.
3. Install LM Studio and bypass the onboarding screen.
4. Navigate to the ‘Discover’ tab.
5. Select your desired DeepSeek R1 Distill. Starting with the smaller Qwen 1.5B is recommended as it offers lightning-fast performance, though larger distills enhance reasoning capabilities.
6. On the right, select the “Q4_K_M” quantization and hit “Download.”
7. After downloading, switch to the chat tab, choose the DeepSeek R1 distill from the dropdown, and make sure “manually select parameters” is enabled.
8. Max out the GPU offload layers slider.
9. Click “Model Load.”
10. Interact directly with the reasoning model running entirely on your local AMD hardware!
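Beyond the chat tab, LM Studio can also expose the loaded model through its built-in, OpenAI-compatible local server (by default at `http://localhost:1234`), which makes it easy to script against the distill from your own code. The sketch below is a minimal illustration of that setup, assuming the server is running and using an illustrative model identifier; the exact model id shown in your LM Studio instance may differ.

```python
# Minimal sketch: querying a DeepSeek R1 distill loaded in LM Studio
# via its OpenAI-compatible local server. The URL is LM Studio's
# default; the model id below is illustrative and may differ locally.
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(prompt: str, temperature: float = 0.6) -> dict:
    """Assemble an OpenAI-style chat payload for the locally loaded model."""
    return {
        "model": "deepseek-r1-distill-qwen-1.5b",  # illustrative model id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask_local_model(prompt: str) -> str:
    """Send the prompt to the local LM Studio server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response: first choice's message content is the reply.
    return body["choices"][0]["message"]["content"]
```

Because the request never leaves your machine, this keeps the same privacy benefit as chatting in the LM Studio UI.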
If you run into any snags, AMD has also shared a step-by-step tutorial on YouTube, which is worth checking out. Because everything runs locally, this setup keeps your data on your own device while using DeepSeek’s LLMs on AMD systems. With new GPU models from both NVIDIA and AMD on the horizon, we’re eager to see a significant boost in inference performance, thanks to dedicated AI engines designed to handle these tasks.