research-outcome/LLM-Game-Benchmark

Evaluating Large Language Models with Grid-Based Game Competitions: An Extensible LLM Benchmark and Leaderboard

Stars: 25
Forks: 3
Watchers: 25
Language: JavaScript
License: Other
SrcLog Score: 100
Cost to Build: $47.2K
Market Value: $65.7K

Growth over time

2 data points (2025-04-10 → 2026-04-09), tracking stars, forks, and watchers.

How to clone LLM-Game-Benchmark

Clone via HTTPS

git clone https://github.com/research-outcome/LLM-Game-Benchmark.git

Clone via SSH

git clone git@github.com:research-outcome/LLM-Game-Benchmark.git

Download ZIP

Download master.zip
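If you prefer not to use git at all, the ZIP snapshot above can also be fetched from the command line. This is a minimal sketch using GitHub's standard branch-archive URL pattern; it assumes the default branch is named `master` (as the `master.zip` link suggests) and that `curl` and `unzip` are available.

```shell
# Build the archive URL from the repository path and branch name
REPO_URL="https://github.com/research-outcome/LLM-Game-Benchmark"
BRANCH="master"   # assumption: default branch per the master.zip link
ARCHIVE_URL="${REPO_URL}/archive/refs/heads/${BRANCH}.zip"

# Download the snapshot (-L follows GitHub's redirect) and unpack it
curl -L -o LLM-Game-Benchmark.zip "$ARCHIVE_URL"
unzip LLM-Game-Benchmark.zip
```

The unpacked directory will be named `LLM-Game-Benchmark-master`; unlike a clone, it carries no git history.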

Found an issue?

Report bugs or request features on the LLM-Game-Benchmark issue tracker:

Open GitHub Issues