
Transformer-MM-Explainability

[ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method for visualizing any Transformer-based network. Includes examples for DETR and VQA.

How to download and set up Transformer-MM-Explainability

Open a terminal and run:
git clone https://github.com/hila-chefer/Transformer-MM-Explainability.git
git clone creates a local copy of the Transformer-MM-Explainability repository. You pass git clone a repository URL; it supports several network protocols and their corresponding URL formats.
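
A minimal sketch of cloning the repository and preparing a local environment; the requirements.txt filename and the use of a virtual environment are assumptions about a typical PyTorch project layout, so check the repository's README for the exact setup steps.

# clone the repository and enter it
git clone https://github.com/hila-chefer/Transformer-MM-Explainability.git
cd Transformer-MM-Explainability

# optional: isolate dependencies in a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install dependencies (assumes the repo provides a requirements.txt)
pip install -r requirements.txt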

You can also download Transformer-MM-Explainability as a zip archive: https://github.com/hila-chefer/Transformer-MM-Explainability/archive/master.zip
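
If you prefer not to use git, here is a sketch of fetching and extracting that archive from the command line; the extracted directory name assumes GitHub's usual repository-branch naming.

# download the archive of the default branch
curl -L -o Transformer-MM-Explainability.zip https://github.com/hila-chefer/Transformer-MM-Explainability/archive/master.zip

# extract it; GitHub names the folder <repo>-<branch>
unzip Transformer-MM-Explainability.zip
cd Transformer-MM-Explainability-master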

Or clone Transformer-MM-Explainability over SSH:
git clone git@github.com:hila-chefer/Transformer-MM-Explainability.git
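
Cloning over SSH assumes you have an SSH key registered with your GitHub account; a quick way to check the connection before cloning (the exact greeting message may vary):

# verify that GitHub accepts your SSH key
ssh -T git@github.com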

If you have problems with Transformer-MM-Explainability

You can open an issue on the Transformer-MM-Explainability issue tracker here: https://github.com/hila-chefer/Transformer-MM-Explainability/issues

Repositories similar to Transformer-MM-Explainability

Here you can find alternatives and analogs to Transformer-MM-Explainability

echarts, viz, plotly.js, vega, visx, plotly.py, vprof, flask_jsondash, planetary.js, markvis, resonance, osmnx, svgo, DiagrammeR, metabase, arcan, heatmap.js, mapview, dockviz, GRASSMARLIN, algorithm-visualizer, glumpy, redash, rawgraphs-app, datawrapper, scope, d3, bokeh, vis, timesheet.js