Transformer-MM-Explainability

hila-chefer

[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
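To give a flavor of what the repository implements: the paper's core idea is a relevancy map that starts at the identity and is updated layer by layer using the attention maps and their gradients. The sketch below is a minimal, hypothetical NumPy illustration of that single-layer update rule (positive, head-averaged gradient-weighted attention propagating relevancy); the actual repository uses PyTorch hooks on real models, and all names here are illustrative assumptions, not the repo's API.

```python
import numpy as np

def update_relevancy(R, attn, attn_grad):
    """Hypothetical sketch of one self-attention relevancy update:
    R <- R + mean_over_heads((grad * A) clamped to >= 0) @ R.

    attn, attn_grad: arrays of shape (heads, tokens, tokens)
    R: current relevancy map of shape (tokens, tokens)
    """
    # Keep only positively contributing, gradient-weighted attention,
    # averaged across heads (as in the paper's update rule).
    cam = np.maximum(attn_grad * attn, 0).mean(axis=0)
    return R + cam @ R

# Toy usage: 4 tokens, 2 heads, relevancy initialized to the identity.
tokens, heads = 4, 2
rng = np.random.default_rng(0)
R = np.eye(tokens)
attn = rng.random((heads, tokens, tokens))       # stand-in attention maps
attn_grad = rng.standard_normal((heads, tokens, tokens))  # stand-in gradients
R = update_relevancy(R, attn, attn_grad)
```

Because the added term is non-negative and applied to an identity-initialized map, each token's self-relevancy can only grow as updates accumulate through the layers.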

Stars: 856
Forks: 111
Watchers: 856
Language: Jupyter Notebook
License: MIT
Cost to Build: $131.3K
Market Value: $484.7K

Growth over time: 7 data points, 2021-08-01 → 2025-07-01 (stars, forks, watchers).


How to clone Transformer-MM-Explainability

Clone via HTTPS

git clone https://github.com/hila-chefer/Transformer-MM-Explainability.git

Clone via SSH

git clone git@github.com:hila-chefer/Transformer-MM-Explainability.git

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the Transformer-MM-Explainability issue tracker on GitHub:

Open GitHub Issues