intel/neural-compressor

Intel® Neural Compressor (formerly known as the Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, in pursuit of optimal inference performance.

Stars: 995
Forks: 165
Watchers: 995
Language: Python
License: Apache-2.0

Growth over time: 1 data point (snapshot on 2023-03-23)

How to clone neural-compressor

Clone via HTTPS

git clone https://github.com/intel/neural-compressor.git

Clone via SSH

git clone git@github.com:intel/neural-compressor.git

Download ZIP

Download master.zip: https://github.com/intel/neural-compressor/archive/refs/heads/master.zip

Found an issue?

Report bugs or request features on the neural-compressor issue tracker:

Open GitHub Issues: https://github.com/intel/neural-compressor/issues