MMTrustEval

thu-ml

A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks)

173 Stars
12 Forks
173 Watchers
Python Language
cc-by-sa-4.0 License
100 SrcLog Score
Cost to Build
$670.5K
Market Value
$1.78M

Growth over time

4 data points (Stars, Forks, Watchers) · 2025-06-06 → 2026-04-26

How to clone MMTrustEval

Clone via HTTPS

git clone https://github.com/thu-ml/MMTrustEval.git

Clone via SSH

git clone git@github.com:thu-ml/MMTrustEval.git

Download ZIP

Download master.zip
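If you only need the latest snapshot of the code, a shallow clone is faster than a full clone. The sketch below also installs Python dependencies, assuming the repository ships a `requirements.txt` (unverified; check the repo's README for its actual setup instructions):

```shell
# Shallow clone: fetch only the most recent commit to save bandwidth
git clone --depth 1 https://github.com/thu-ml/MMTrustEval.git
cd MMTrustEval

# Install dependencies -- assumes a requirements.txt exists (unverified)
pip install -r requirements.txt
```

A shallow clone cannot browse full project history; drop `--depth 1` if you need it.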

Found an issue?

Report bugs or request features on the MMTrustEval issue tracker:

Open GitHub Issues