Stable Diffusion image generation on AMD Ryzen AI NPUs for Linux
Wrapper stack for reproducible local llama.cpp inference on AMD/Vulkan, with Open WebUI and Tailscale entrypoints.
Yocto Scarthgap meta layer for VD100 (XCVE2302) — XRT 2025.2, zocl, AIE-ML pipeline. Fixes undocumented BOOT.BIN CDO gap that leaves all AIE tiles clo...
A lightweight TUI monitor for AMD Ryzen AI NPUs
Vivado 2025.2 block design for VD100 (XCVE2302) — CIPS, NoC, AIE-ML, AXI interrupt controller, MyLEDIP. Reusable hardware platform for Vitis AIE kerne...
Fix for AMD XDNA NPU driver on Ryzen AI 300 (Strix Point/Halo) — SMU init bypass patch, systemd auto-loader, full root cause analysis
Configure NVIDIA PRIME offloading and VFIO GPU passthrough on Linux with Looking Glass for Windows VMs without rebooting
Local LLM inference on AMD GPU — llama.cpp Vulkan on Windows, no ROCm required
Open-source local image-to-mesh helper for Windows. Generated meshes may need light polish and can also be used for 3D printing; the project is still a work in...
Sticky-block topology lottery scheduler for transformer fine-tuning. Less VRAM, shorter wall-clock time, bigger models.
Technical documentation and guides on generative AI, LLMs, memory agents, and tools like AMD Amuse. Resources for developers and cre...
DF41 team solution for the MICCAI 2024 MARIO Challenge. OCT-based AMD progression analysis using fusion CNNs and masked autoencoders. Ranked 2nd in Ta...
Interactive bare-metal AIE-ML v1 moving-average (MA) crossover on VD100 — UART command interface triggers the AIE graph and prints BUY/SELL/HOLD signals
Faster-Whisper on AMD GPUs via DirectML on Windows — drop-in GPU transcription, no ROCm required
Debug Xilinx embedded and native targets in VS Code with XSDB, GDB, LLDB, and Mago-MI support for Zynq, Versal, and MicroBlaze systems
Run ComfyUI on AMD GPUs (RDNA1–RDNA4) on Windows — comfyui-rocm, AMD Portable, and DirectML. Tested on RX 5700 XT.
Automated scheduler for PBO2 Ryzen Tuner
SoftEng-Islam/nixxin NixOS Configs 2026 - Declarative System Theme Pack 🚀
Native ROCm C++ kernels for Strix Halo (gfx1151): ternary BitNet GEMV, RMSNorm, RoPE, split-KV Flash-Decoding attention. Zero hipBLAS, zero Python.
AMD Kintex™ UltraScale™ FPGA KCU105 Evaluation Kit
ONNX Runtime + DirectML on AMD GPUs — GPU-accelerated inference on Windows, no CUDA, no ROCm
Stable Diffusion / SDXL on AMD GPUs via DirectML on Windows — no ROCm required
Small AXI slave for controlling the MMCM to phase-shift output clocks.
Docker stack: Ollama v0.21.0 built from source against ROCm 7.2.2 with native gfx1151 (Strix Halo) — serves Gemma 4 up to 256K context on AMD Ryzen AI...
Local autonomous coding agent stack for AMD RX 6700 XT (RDNA2): OpenCode + LM Studio Vulkan + Gemma 4 E4B Q6_K + oh-my-opencode. Bypasses Ollama Vulka...
🚀 Run 120B AI Models on Spark 2026 - vLLM API & Coding Assistant
Everything you need to run AI/ML on AMD GPUs on Windows — master toolkit hub
Vitis bare-metal BSP platform for Versal AI Edge XCVE2302 (VD100) — shared foundation for bare-metal AIE applications
AMD (Advanced Micro Devices)-native inference stack. The goal is a ground-up LLM inference + serving stack that bypasses CUDA lock-in, targets ROCm nati...
Add multi-agent AI voice chat to any website with embeddable real-time conversations and flexible agent voices
Bare-metal MA crossover pipeline on Versal AI Edge VD100 — PS drives AIE-ML v1 graph via HLS DMA without Linux or XRT
Run GPT-OSS 120B on NVIDIA DGX Spark with vLLM, build an API server, and create a local AI coding assistant
Self-hosted AI homelab — LLMs, voice/vision assistant, RAG knowledge base, n8n automation
Secure infrastructure access and operations platform for remote access, automation, and audited control across Linux and mixed-protocol systems