benchmark-FP32-FP16-INT8-with-TensorRT

by kentaroy47

Benchmark the inference speed of CNNs under FP32, FP16, and INT8 quantization with PyTorch + TensorRT on Jetson Nano/Xavier.
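The core idea is to compare a stock PyTorch FP32 model against TensorRT engines built at reduced precision. As a rough illustration of what such a benchmark looks like, here is a minimal FP32-vs-FP16 timing sketch assuming NVIDIA's torch2trt converter; the model, input shape, and iteration counts are illustrative and not taken from the repo's notebooks:

import time
import torch
import torchvision
from torch2trt import torch2trt  # NVIDIA's PyTorch -> TensorRT converter

# Baseline FP32 model on the GPU
model = torchvision.models.resnet50(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# Build a TensorRT engine with FP16 precision enabled
# (INT8 additionally requires int8_mode=True plus a calibration dataset)
model_trt_fp16 = torch2trt(model, [x], fp16_mode=True)

def benchmark(m, x, iters=100, warmup=10):
    """Average latency in milliseconds over `iters` synchronized runs."""
    with torch.no_grad():
        for _ in range(warmup):
            m(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            m(x)
        torch.cuda.synchronize()
    return (time.time() - start) / iters * 1e3

print(f"FP32 PyTorch  : {benchmark(model, x):.2f} ms/image")
print(f"FP16 TensorRT : {benchmark(model_trt_fp16, x):.2f} ms/image")

The torch.cuda.synchronize() calls matter on Jetson boards: without them the timer measures only kernel launch overhead, not actual GPU execution.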

Stars: 56
Forks: 3
Watchers: 56
Language: Jupyter Notebook
License: MIT
SrcLog Score: 100
Cost to Build: $29.7K
Market Value: $39.5K

Growth over time: stars, forks, and watchers tracked across 8 data points, 2021-07-01 → 2026-04-01.

How to clone benchmark-FP32-FP16-INT8-with-TensorRT

Clone via HTTPS

git clone https://github.com/kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT.git

Clone via SSH

git clone git@github.com:kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT.git

Download ZIP

https://github.com/kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT/archive/refs/heads/master.zip

Found an issue?

Report bugs or request features on the benchmark-FP32-FP16-INT8-with-TensorRT issue tracker:

https://github.com/kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT/issues