memory-efficient-attention-pytorch

lucidrains

Implementation of memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory".
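The idea behind the paper can be sketched in plain PyTorch: keys and values are processed in chunks, and the softmax is renormalized on the fly with a running maximum and running sum, so the full n × n attention matrix is never materialized. The sketch below is an independent, single-head illustration of that idea and not the repository's code; the actual project (per its description) implements multi-head attention and should be consulted for real use.

```python
import torch

def chunked_attention(q, k, v, k_chunk_size=1024):
    # q, k, v: (batch, seq_len, dim) -- single head, for clarity.
    # Keys/values are processed in chunks so the (seq_len x seq_len) score
    # matrix is never materialized; the softmax is renormalized incrementally
    # with a running max and running sum, as in the cited paper.
    scale = q.shape[-1] ** -0.5
    q = q * scale

    out = torch.zeros_like(q)                                                   # running weighted sum of values
    row_sum = torch.zeros(*q.shape[:-1], 1, dtype=q.dtype, device=q.device)     # running softmax denominator
    row_max = torch.full((*q.shape[:-1], 1), float('-inf'),
                         dtype=q.dtype, device=q.device)                        # running max logit per query

    for k_chunk, v_chunk in zip(k.split(k_chunk_size, dim=1), v.split(k_chunk_size, dim=1)):
        scores = q @ k_chunk.transpose(-1, -2)                                  # (batch, seq_len, chunk)
        new_max = torch.maximum(row_max, scores.amax(dim=-1, keepdim=True))

        exp_scores = (scores - new_max).exp()
        correction = (row_max - new_max).exp()                                  # rescale previously accumulated terms

        out = out * correction + exp_scores @ v_chunk
        row_sum = row_sum * correction + exp_scores.sum(dim=-1, keepdim=True)
        row_max = new_max

    return out / row_sum

# Quick check against dense attention on a 4k-token sequence.
q, k, v = (torch.randn(1, 4096, 64) for _ in range(3))
dense = torch.softmax((q * 64 ** -0.5) @ k.transpose(-1, -2), dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), dense, atol=1e-4)
```

The same renormalization trick extends to chunking the query dimension as well, which is what keeps peak memory sublinear in sequence length in the paper's formulation.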

370 Stars · 35 Forks · 370 Watchers
Language: Python
License: MIT
Cost to Build: $2.09M
Market Value: $5.49M

Growth over time (stars, forks, watchers): 4 data points, 2022-06-01 → 2025-03-01.

How to clone memory-efficient-attention-pytorch

Clone via HTTPS

git clone https://github.com/lucidrains/memory-efficient-attention-pytorch.git

Clone via SSH

git clone git@github.com:lucidrains/memory-efficient-attention-pytorch.git

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the memory-efficient-attention-pytorch issue tracker:

Open GitHub Issues