MMTrustEval
A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track)
How to download and set up MMTrustEval
Open a terminal and run:
git clone https://github.com/thu-ml/MMTrustEval.git
git clone creates a local copy of the MMTrustEval repository.
You pass git clone a repository URL; it supports several network protocols and corresponding URL formats.
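For example, a minimal end-to-end sketch (the checkout directory name MMTrustEval is git clone's default, derived from the repository name):
# Clone over HTTPS and change into the new checkout
git clone https://github.com/thu-ml/MMTrustEval.git
cd MMTrustEval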
You can also download MMTrustEval as a ZIP archive: https://github.com/thu-ml/MMTrustEval/archive/master.zip
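If you prefer the ZIP archive, a minimal sketch using curl and unzip (both assumed to be installed; GitHub extracts archives into a <repository>-<branch> directory):
# Download the archive and unpack it
curl -L -o MMTrustEval.zip https://github.com/thu-ml/MMTrustEval/archive/master.zip
unzip MMTrustEval.zip
cd MMTrustEval-master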
Or clone MMTrustEval over SSH:
git@github.com:thu-ml/MMTrustEval.git
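Assuming an SSH key is already registered with your GitHub account, the full command would be:
# Clone over SSH instead of HTTPS
git clone git@github.com:thu-ml/MMTrustEval.git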
If you run into problems with MMTrustEval
You can open an issue on the MMTrustEval issue tracker: https://github.com/thu-ml/MMTrustEval/issues
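If you prefer the command line, the GitHub CLI (gh, assumed to be installed and authenticated) can open an issue directly; the title and body below are placeholders:
# Hypothetical example: file an issue against the MMTrustEval repository
gh issue create --repo thu-ml/MMTrustEval --title "Short summary of the problem" --body "Steps to reproduce, expected vs. actual behaviour"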
Similar to MMTrustEval repositories
Here are some MMTrustEval alternatives and analogs:
netdata, primesieve, fashion-mnist, FrameworkBenchmarks, BenchmarkDotNet, jmeter, awesome-semantic-segmentation, sysbench, hyperfine, tsung, benchmark_results, across, web-frameworks, php-framework-benchmark, jsperf.com, go-web-framework-benchmark, huststore, phoronix-test-suite, Attabench, ann-benchmarks, sbt-jmh, caffenet-benchmark, chillout, IocPerformance, prophiler, TBCF, NBench, sympact, awesome-http-benchmark, BlurTestAndroid