llm-bug-bench

gabrieldiem

Web-based benchmark suite that evaluates how well LLMs catch real-world concurrency bugs, deadlocks, and distributed systems edge cases. Includes automated 1–20 scoring via LLM judge, side-by-side run comparisons, and support for local (Ollama) and cloud providers.
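The automated 1–20 scoring via an LLM judge could work roughly as sketched below. This is an illustrative helper only, assuming the judge replies in free text containing something like "Score: 17" or "17/20"; the function name, response format, and clamping behavior are assumptions for illustration, not llm-bug-bench's actual API.

```python
import re


def parse_judge_score(response_text: str) -> int:
    """Extract a 1-20 score from an LLM judge's free-form reply.

    Hypothetical helper: the project's real parsing logic may differ.
    Matches patterns like "Score: 17" or "17/20" and clamps the result
    to the benchmark's 1-20 range.
    """
    match = re.search(r"(\d+)\s*/\s*20|[Ss]core\s*[:=]?\s*(\d+)", response_text)
    if not match:
        raise ValueError("no score found in judge response")
    raw = int(match.group(1) or match.group(2))
    return max(1, min(20, raw))
```

Clamping protects against a judge that wanders outside the rubric (e.g. "25/20"), so downstream run comparisons always see values in the declared 1–20 range.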

Stars: 0
Forks: 0
Watchers: 0
Language: Python
SrcLog Score: 30
Cost to Build: $22.1K
Market Value: $2.4K

Growth over time (stars, forks, watchers): 2 data points, 2026-04-13 → 2026-04-20

How to clone llm-bug-bench

Clone via HTTPS

git clone https://github.com/gabrieldiem/llm-bug-bench.git

Clone via SSH

git clone git@github.com:gabrieldiem/llm-bug-bench.git

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the llm-bug-bench issue tracker:

https://github.com/gabrieldiem/llm-bug-bench/issues