Multi-Modality-Arena

OpenGVLab

Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!

Stars: 79
Forks: 3
Watchers: 79
Language: Python
Estimated cost to build: $237.6K
Estimated market value: $337.7K

Growth over time: 1 data point (as of 2023-06-12)

How to clone Multi-Modality-Arena

Clone via HTTPS

git clone https://github.com/OpenGVLab/Multi-Modality-Arena.git

Clone via SSH

git clone git@github.com:OpenGVLab/Multi-Modality-Arena.git

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the Multi-Modality-Arena issue tracker:

Open GitHub Issues