Intel Deep Learning Deployment Toolkit

Take your slowest production model, run it through the Model Optimizer, and benchmark the result. You will be shocked.

The easiest way to get the runtime is via pip; for the full Model Optimizer, download the complete OpenVINO toolkit.
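As a quick sketch (package names from the pre-2022 releases, when the toolkit still carried the DLDT branding; check the OpenVINO docs for your version), the two pip routes look like this:

```sh
# Runtime only: the Inference Engine Python API, enough to run converted IR models.
pip install openvino

# Developer package: also bundles the Model Optimizer (the `mo` command used below).
pip install openvino-dev
```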

To convert a model, point the Model Optimizer at it:

```sh
mo --input_model my_model.onnx --output_dir ./optimized_model
```

Here is a Python snippet to run your newly minted IR model:
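This is a minimal sketch using the classic IECore API from the DLDT-era releases; the .xml/.bin file names are assumptions based on the `mo` command above (the Model Optimizer names them after the input model by default), and newer releases expose a different `openvino.runtime` API:

```python
import time

import numpy as np
from openvino.inference_engine import IECore

# Load the IR produced by the Model Optimizer. The file names below are
# assumptions; adjust them to whatever mo wrote into ./optimized_model.
ie = IECore()
net = ie.read_network(model="optimized_model/my_model.xml",
                      weights="optimized_model/my_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Grab the (single) input and output names and build a dummy input tensor.
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
input_shape = net.input_info[input_name].input_data.shape
dummy = np.random.rand(*input_shape).astype(np.float32)

# One warm-up inference, then time a batch of runs to estimate mean latency.
exec_net.infer({input_name: dummy})
runs = 100
start = time.perf_counter()
for _ in range(runs):
    result = exec_net.infer({input_name: dummy})
elapsed = time.perf_counter() - start

print(f"Output shape: {result[output_name].shape}")
print(f"Mean latency over {runs} runs: {1000 * elapsed / runs:.2f} ms")
```

The timing loop is only a sanity check; the toolkit also ships a benchmark_app tool that measures latency and throughput properly with async requests, so use that for the real numbers.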
Have you used OpenVINO or the Intel DLDT in production? Let me know your latency improvements in the comments below!