Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
What is the tomaarsen/attention_sinks GitHub project? Description: "Extend existing LLMs way beyond the original training length with constant memory usage, without retraining". Written in Python. Explain what it does, its main use cases, key features, and who would benefit from using it.
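The constant-memory claim comes from the attention-sink KV-cache policy the project implements: keep the first few "sink" tokens plus a sliding window of the most recent tokens, and evict everything in between. As a rough sketch (the function name and parameter defaults here are illustrative, not the library's actual API):

```python
def sink_cache_keep(positions, sink_size=4, window_size=1020):
    """Illustrative attention-sink eviction policy: keep the first
    `sink_size` cache entries (the attention sinks) plus the most
    recent `window_size` entries, dropping the middle.

    Cache size is thus bounded by sink_size + window_size, no matter
    how long generation runs -- i.e. constant memory usage.
    """
    if len(positions) <= sink_size + window_size:
        return list(positions)  # nothing to evict yet
    return list(positions[:sink_size]) + list(positions[-window_size:])

# Example: after generating 3000 tokens, only 1024 cache slots remain.
kept = sink_cache_keep(list(range(3000)))
print(len(kept))   # 1024
print(kept[:4])    # [0, 1, 2, 3] -- the sink tokens survive
```

The library itself exposes this as drop-in replacements for Hugging Face `transformers` model classes, so existing inference code can adopt the policy with minimal changes.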
Report bugs or request features on the attention_sinks GitHub issue tracker.