3 repositories on SrcLog
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
A Triton backend for running GPU-accelerated data pre-processing pipelines implemented with DALI's Python API.
The Triton backend for the ONNX Runtime.
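As a rough sketch of how these backends fit together: each model served by Triton declares which backend executes it in its `config.pbtxt`. The model name and tensor shapes below are hypothetical, shown only to illustrate the `backend` field that would select the ONNX Runtime backend:

```
name: "resnet50"          # hypothetical model name
backend: "onnxruntime"    # selects the ONNX Runtime backend for this model
max_batch_size: 8
input [
  {
    name: "input"         # hypothetical tensor name and shape
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

A model using the DALI backend would instead set `backend: "dali"` and point at a serialized DALI pipeline as its model artifact.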