llm-bench

BAEM1N

Cross-platform local LLM inference benchmark: 4 hardware platforms × 5 engines × 5,100 measurements. Qwen3.5 (9B→122B) on Apple Silicon, DGX Spark, Ryzen AI MAX 395, RTX 3090×2.

Stars: 0 · Forks: 0 · Watchers: 0
Language: Python
SrcLog Score: 30
Estimated cost to build: $35.9K · Market value: $3.9K

Growth over time: 3 data points, 2026-04-10 → 2026-04-25 (stars, forks, watchers).

How to clone llm-bench

Clone via HTTPS

git clone https://github.com/BAEM1N/llm-bench.git

Clone via SSH

git clone git@github.com:BAEM1N/llm-bench.git

Download ZIP

Download master.zip
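
The same download can be scripted from a shell. A minimal sketch, assuming GitHub's conventional archive-URL pattern for the master branch (the exact download endpoint is not shown on this page):

```shell
# Build the ZIP archive URL using GitHub's standard pattern
REPO="BAEM1N/llm-bench"
BRANCH="master"
URL="https://github.com/${REPO}/archive/refs/heads/${BRANCH}.zip"
echo "${URL}"

# To actually fetch and unpack it:
#   curl -L -o llm-bench.zip "${URL}" && unzip llm-bench.zip
```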

Found an issue?

Report bugs or request features on the llm-bench issue tracker:

Open GitHub Issues: https://github.com/BAEM1N/llm-bench/issues