Docker stack: Ollama v0.21.0 built from source against ROCm 7.2.2 with native gfx1151 (Strix Halo) — serves Gemma 4 up to 256K context on AMD Ryzen AI MAX+ 395 / Radeon 8060S. Includes a 9-layer make validate ladder for the host firmware, ROCm runtime, container, and long-context inference.
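The "validate ladder" idea — layered checks from host firmware up to inference — can be sketched as a small shell rung. This is a minimal illustration only; the helper name below is an assumption, not the project's actual code, and on real hardware the check would pipe live `rocminfo` output rather than a captured line:

```shell
#!/bin/sh
# Hedged sketch: one rung of a validation ladder that confirms the
# expected gfx target (here gfx1151, Strix Halo) is visible to ROCm.
# The function name is hypothetical; the repo's real targets live in
# its Makefile and are invoked via `make validate`.

has_gfx_target() {
  # $1 = expected gfx target; rocminfo-style text arrives on stdin
  grep -q "Name:[[:space:]]*$1"
}

# Demo against captured output instead of live hardware
# (on a real host: rocminfo | has_gfx_target gfx1151):
printf 'Name: gfx1151\n' | has_gfx_target gfx1151 && echo "gfx1151 present"
```

A real rung would exit nonzero on failure so `make` halts the ladder at the first broken layer, which is the usual design for staged hardware validation.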
What is the MaxusAI/ryzen-ai-max-rocm-ollama-testbench GitHub project (written in Shell)? Explain what it does, its main use cases, key features, and who would benefit from using it.
Report bugs or request features on the ryzen-ai-max-rocm-ollama-testbench issue tracker.