Cross-platform local LLM inference benchmark, written in Python: 4 hardware platforms × 5 engines × 5,100 measurements. Qwen3.5 (9B→122B) on Apple Silicon, DGX Spark, Ryzen AI MAX 395, RTX 3090×2.
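The repository can be obtained over HTTPS, SSH, or as a ZIP archive. The sketch below prints the standard commands for each option, assuming GitHub's usual URL scheme for the `BAEM1N/llm-bench` repository name (the URLs are derived from the repo name, not verified against the live project):

```shell
# Assumed: standard GitHub URL scheme for the BAEM1N/llm-bench repo.
repo="BAEM1N/llm-bench"

# HTTPS and SSH clone URLs
https_url="https://github.com/${repo}.git"
ssh_url="git@github.com:${repo}.git"
# ZIP archive of the master branch (the "Download ZIP" option)
zip_url="https://github.com/${repo}/archive/refs/heads/master.zip"

# Print the commands rather than running them, so no network access is needed
echo "git clone ${https_url}"
echo "git clone ${ssh_url}"
echo "curl -L -o master.zip ${zip_url}"
```

HTTPS works anywhere; SSH requires a key registered with GitHub; the ZIP download needs no `git` at all.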
Report bugs or request features on the llm-bench GitHub issue tracker.