Awesome-LLM-Eval
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for the evaluation of LLMs, aiming to explore the technical boundaries of generative AI.
How to download and set up Awesome-LLM-Eval
Open a terminal and run the following command
git clone https://github.com/onejune2018/Awesome-LLM-Eval.git
git clone creates a copy, or clone, of the Awesome-LLM-Eval repository on your machine.
You pass git clone a repository URL; it supports several network protocols and corresponding URL formats.
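If you only need the latest snapshot rather than the full history, a shallow clone is faster (a minimal sketch using git's standard --depth option):
git clone --depth 1 https://github.com/onejune2018/Awesome-LLM-Eval.git   # fetch only the most recent commit
cd Awesome-LLM-Eval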
You can also download Awesome-LLM-Eval as a zip archive: https://github.com/onejune2018/Awesome-LLM-Eval/archive/master.zip
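To fetch and unpack the zip archive from the command line, something like this should work (a minimal sketch, assuming curl and unzip are installed; the archive typically unpacks into a folder named Awesome-LLM-Eval-master):
curl -L -o Awesome-LLM-Eval.zip https://github.com/onejune2018/Awesome-LLM-Eval/archive/master.zip   # -L follows GitHub's redirect
unzip Awesome-LLM-Eval.zip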
Or simply clone Awesome-LLM-Eval over SSH
git clone git@github.com:onejune2018/Awesome-LLM-Eval.git
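Cloning over SSH assumes you already have an SSH key added to your GitHub account; you can verify the connection first with GitHub's standard check:
ssh -T git@github.com   # should greet you by username if your key is set up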
If you run into problems with Awesome-LLM-Eval
You can open an issue on the Awesome-LLM-Eval issue tracker here: https://github.com/onejune2018/Awesome-LLM-Eval/issues
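If you use the GitHub CLI, you can also file an issue from the terminal (an optional sketch, assuming gh is installed and authenticated; the title and body below are placeholders):
gh issue create --repo onejune2018/Awesome-LLM-Eval --title "Short description of the problem" --body "Details go here"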
Similar to Awesome-LLM-Eval repositories
Here you may see Awesome-LLM-Eval alternatives and analogs
netdata primesieve fashion-mnist FrameworkBenchmarks BenchmarkDotNet jmeter awesome-semantic-segmentation sysbench hyperfine tsung benchmark_results across web-frameworks php-framework-benchmark jsperf.com go-web-framework-benchmark huststore phoronix-test-suite Attabench ann-benchmarks sbt-jmh caffenet-benchmark chillout IocPerformance prophiler TBCF NBench sympact awesome-http-benchmark BlurTestAndroid