lucidrains

👤 Developer

159 repositories on SrcLog

159 Repos
156.6k Stars
16k Forks
156.6k Watchers

Repositories (159)

sinkhorn-transformer lucidrains/sinkhorn-transformer Python

Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention

258
pixel-level-contrastive-learning lucidrains/pixel-level-contrastive-learning Python

Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch

257
rectified-flow-pytorch lucidrains/rectified-flow-pytorch Python

Implementation of rectified flow and some of its followup research / improvements in Pytorch

254
Adan-pytorch lucidrains/Adan-pytorch Python

Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch

251
speculative-decoding lucidrains/speculative-decoding Python

Explorations into some recent techniques surrounding speculative decoding

245
perfusion-pytorch lucidrains/perfusion-pytorch Python

Implementation of Key-Locked Rank One Editing, from Nvidia AI

233
CoLT5-attention lucidrains/CoLT5-attention Python

Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch

226
magic3d-pytorch lucidrains/magic3d-pytorch Python

Implementation of Magic3D, Text to 3D content synthesis, in Pytorch

226
med-seg-diff-pytorch lucidrains/med-seg-diff-pytorch Python

Implementation of MedSegDiff in Pytorch - SOTA medical segmentation using DDPM and filtering of features in Fourier space

225
electra-pytorch lucidrains/electra-pytorch Python

A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch

223
block-recurrent-transformer-pytorch lucidrains/block-recurrent-transformer-pytorch Python

Implementation of Block Recurrent Transformer - Pytorch

218
En-transformer lucidrains/En-transformer Python

Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network

217
graph-transformer-pytorch lucidrains/graph-transformer-pytorch Python

Implementation of Graph Transformer in Pytorch, for potential use in replicating Alphafold2

214
metnet3-pytorch lucidrains/metnet3-pytorch Python

Implementation of MetNet-3, SOTA neural weather model out of Google Deepmind, in Pytorch

211
flash-cosine-sim-attention lucidrains/flash-cosine-sim-attention Cuda

Implementation of fused cosine similarity attention in the same style as Flash Attention

210
simple-hierarchical-transformer lucidrains/simple-hierarchical-transformer Python

Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT

208
unet-stylegan2 lucidrains/unet-stylegan2 Python

A Pytorch implementation of Stylegan2 with UNet Discriminator

206
flash-attention-jax lucidrains/flash-attention-jax Python

Implementation of Flash Attention in Jax

205
Mega-pytorch lucidrains/Mega-pytorch Python

Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena

204
recurrent-interface-network-pytorch lucidrains/recurrent-interface-network-pytorch Python

Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in Pytorch

200
natural-speech-pytorch lucidrains/natural-speech-pytorch Python

Implementation of the neural network proposed in Natural Speech, from Microsoft Research, the first text-to-speech generator reported to be indistinguishable from human recordings

200
halonet-pytorch lucidrains/halonet-pytorch Python

Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones

198
res-mlp-pytorch lucidrains/res-mlp-pytorch Python

Implementation of ResMLP, an all MLP solution to image classification, in Pytorch

197
glom-pytorch lucidrains/glom-pytorch Python

An attempt at the implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields, top-down-bottom-up processing, and attention (consensus between columns), for emergent part-whole hierarchies from data

193
fast-transformer-pytorch lucidrains/fast-transformer-pytorch Python

Implementation of Fast Transformer in Pytorch

167
attention lucidrains/attention HTML

This repository will house a visualization that attempts to convey how Attention works to someone outside artificial intelligence, with 3Blue1Brown as inspiration

156
dreamerv3-pytorch lucidrains/dreamerv3-pytorch

Implementation of Dreamer v3, Deepmind's first neural network that was able to learn to collect diamonds in Minecraft, in Pytorch

149
transganformer lucidrains/transganformer Python

Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers

147
h-transformer-1d lucidrains/h-transformer-1d Python

Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning

129
compressive-transformer-pytorch lucidrains/compressive-transformer-pytorch Python

Pytorch implementation of Compressive Transformers, from Deepmind

126