Benchmark inference speed of CNNs with various quantization methods in Pytorch+TensorRT with Jetson Nano/Xavier
What is the kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT GitHub project? Its description reads: "Benchmark inference speed of CNNs with various quantization methods in Pytorch+TensorRT with Jetson Nano/Xavier." The project is written as Jupyter Notebooks. Explain what it does, its main use cases, key features, and who would benefit from using it.
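Benchmarks like this compare per-variant inference latency (FP32 vs. FP16 vs. INT8 engines) by timing repeated forward passes after a warmup phase. The helper below is a minimal, illustrative sketch of that timing pattern in plain Python; the `benchmark` function name and its statistics are assumptions for illustration, not code from the repository:

```python
import time
import statistics

def benchmark(infer, warmup=5, iters=50):
    """Time a zero-argument inference callable.

    Runs `warmup` untimed calls first (lets caches and lazy
    initialization settle), then times `iters` calls and reports
    latency statistics in milliseconds plus throughput in FPS.
    """
    for _ in range(warmup):
        infer()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1e3)
    mean_ms = statistics.mean(samples)
    return {
        "mean_ms": mean_ms,
        "median_ms": statistics.median(samples),
        "fps": 1e3 / mean_ms,
    }

# Usage sketch: wrap each variant (e.g. a PyTorch FP32 model, or a
# TensorRT FP16/INT8 engine) in a zero-argument callable and compare:
#   stats = benchmark(lambda: model(x))
```

On real hardware (e.g. a Jetson), GPU work is asynchronous, so each timed callable would also need to synchronize the device (e.g. `torch.cuda.synchronize()`) before the timer stops.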
Report bugs or request features on the benchmark-FP32-FP16-INT8-with-TensorRT issue tracker (GitHub Issues).