benchmark-FP32-FP16-INT8-with-TensorRT

Benchmark the inference speed of CNNs under various quantization methods (FP32, FP16, INT8) in PyTorch + TensorRT on the Jetson Nano/Xavier

How to download and set up benchmark-FP32-FP16-INT8-with-TensorRT

Open a terminal and run:
git clone https://github.com/kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT.git
git clone creates a local copy of the benchmark-FP32-FP16-INT8-with-TensorRT repository. You pass git clone a repository URL;
it supports several network protocols and their corresponding URL formats.

Alternatively, you can download benchmark-FP32-FP16-INT8-with-TensorRT as a zip file: https://github.com/kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT/archive/master.zip

Or simply clone benchmark-FP32-FP16-INT8-with-TensorRT over SSH:
git clone [email protected]:kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT.git

If you run into problems with benchmark-FP32-FP16-INT8-with-TensorRT

You can open an issue on the project's GitHub issue tracker: https://github.com/kentaroy47/benchmark-FP32-FP16-INT8-with-TensorRT/issues