Hey everyone! Getting local LLMs like Mistral to run smoothly on an AMD GPU under Windows can be a headache, since most guides focus on NVIDIA/Linux setups. So I wrote a simple, step-by-step article explaining how I got it working. It covers the necessary tools (like Jan and llama.cpp), the setup process, and a few tips to get you started.
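As a teaser for what the article covers: once you have llama.cpp's `llama-server` (or Jan's local API server) running with a Mistral GGUF loaded, you can talk to it through its OpenAI-compatible endpoint from any language. A minimal Python sketch, assuming the default `llama-server` port 8080 (the port and model name will differ in your setup):

```python
import json
import urllib.request

def build_chat_request(prompt, model="mistral", temperature=0.7):
    # Build an OpenAI-compatible chat-completion payload; both
    # llama.cpp's llama-server and Jan accept this request shape.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_llm(prompt, base_url="http://localhost:8080"):
    # Assumes llama-server is running locally on its default port;
    # Jan exposes the same API on its own port.
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint speaks the OpenAI wire format, the same snippet works unchanged whether the backend is running on Vulkan, ROCm, or CPU.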

