• For Ubuntu, CentOS & Yocto

  1. Follow this guide to set up OpenVINO
Known Compatibility
  • Windows 10
  • Ubuntu 18.04.x long-term support (LTS), 64-bit
  • Ubuntu 20.04.0 long-term support (LTS), 64-bit
  • CentOS 7.6, 64-bit (for target only)
  • Yocto Project v3.0, 64-bit (for target only and requires modifications)

For deployment scenarios on Red Hat* Enterprise Linux* 8.2 (64-bit), you can use the Intel® Distribution of OpenVINO toolkit run-time package, which includes the Inference Engine core libraries, nGraph, OpenCV, Python bindings, and the CPU and GPU plugins. The package is available as:
  • Downloadable archive
  • PyPi package
  • Docker image
Install Location
  • Windows: C:\Program Files (x86)\Intel\openvino_<version>, referred to as <INSTALL_DIR>
  • Linux (root or administrator): /opt/intel/openvino_<version>/
  • Linux (regular users): /home/<USER>/intel/openvino_<version>/
Model Tools Location
  • Windows: <INSTALL_DIR>\deployment_tools
  • Linux: /opt/intel/openvino_<version>/deployment_tools
  • Contains demos to verify that OpenVINO works after installation
  • To check: run demo_security_barrier_camera.bat in <INSTALL_DIR>\deployment_tools\demo
Model files (#1 and #3 contain the exact same files, stored in different locations; all three model folders contain *.yml files rather than *.xml and *.bin files)
  • #1 – <INSTALL_DIR>\deployment_tools\open_model_zoo\models\intel
  • #2 – <INSTALL_DIR>\deployment_tools\intel_models
  • #3 – <INSTALL_DIR>\deployment_tools\open_model_zoo\models\public
  • #4 – <INSTALL_DIR>\deployment_tools\inference_engine
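The *.yml files in these model folders are configuration files read by the Open Model Zoo downloader, which is what actually fetches the model files. A minimal sketch, assuming a Linux root install of a 2021.x toolkit (the version directory, model name, and output path are all illustrative assumptions):

```shell
# Fetch a pretrained Open Model Zoo model using the bundled downloader
# (version directory and model name below are example assumptions)
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader
python3 downloader.py --name face-detection-adas-0001 --output_dir ~/models
```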
Demos/Samples (both Demos and Samples are the same kind of content; they contain *.c, *.cpp, and *.py files)
  • <INSTALL_DIR>\deployment_tools\open_model_zoo\demos
  • <INSTALL_DIR>\deployment_tools\inference_engine\demos <– shortcut to open_model_zoo’s demos folder
  • <INSTALL_DIR>\deployment_tools\inference_engine\samples
Initialize environment: cd <INSTALL_DIR>\bin, then run setupvars.bat (Windows) or source setupvars.sh (Linux)
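For example, on a default Linux root install the initialization looks like this (the version directory is an assumption; on Windows the equivalent script is setupvars.bat):

```shell
# Initialize the OpenVINO environment for the current shell session
# (version directory below is an example assumption)
cd /opt/intel/openvino_2021/bin
source setupvars.sh    # Windows: setupvars.bat
```

Note that setupvars only affects the current shell session, so it must be re-run (or added to your shell profile) for every new terminal.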

  1. Inference Engine: The software libraries that run inference against the Intermediate Representation (optimized model) to produce inference results.
  2. Model Optimizer: Optimizes models for Intel® architecture, converting models into a format compatible with the Inference Engine. This format is called an Intermediate Representation (IR).
  3. Intermediate Representation (IR): The Model Optimizer output. A model converted to a format that has been optimized for Intel® architecture and is usable by the Inference Engine.
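As a sketch of the Model Optimizer step, converting a TensorFlow frozen graph into IR looks roughly like this (run after initializing the environment; the install path, input file, shape, and output directory are placeholder assumptions):

```shell
# Convert a TensorFlow frozen graph into IR files (*.xml + *.bin)
# using the Model Optimizer script bundled with the toolkit
# (all paths and the input shape are example assumptions)
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 mo.py \
    --input_model /path/to/frozen_inference_graph.pb \
    --input_shape "[1,224,224,3]" \
    --output_dir /path/to/ir_output
```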
  • Demo Scripts – Batch/Shell scripts that automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios.
  • Code Samples – Small console applications that show you how to:
    • Utilize specific OpenVINO capabilities in an application
    • Perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more.
  • Demo Applications – Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person’s physical attributes, such as age, gender, and emotional state.
  1. First, a model trained in a framework such as TensorFlow is prepared. 
  2. Next, OpenVINO’s Model Optimizer converts the model (e.g., Caffe’s <INPUT_MODEL>.caffemodel or TensorFlow’s <INFERENCE_GRAPH>.pb) into Intermediate Representation (IR) files (*.xml and *.bin) for inference operations. 
  3. Once converted, the IR files can be run by the Inference Engine.
    • Prior to IR inference, the environment has to be initialized by running setupvars.bat (Windows) or sourcing setupvars.sh (Linux).
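The inference step can be sketched with the Inference Engine Python API (the IECore class from OpenVINO 2021.x releases; the IR paths in the example call are placeholders, and the import is guarded so the sketch reads even without the toolkit installed):

```python
try:
    # Available once the OpenVINO environment has been initialized
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # toolkit not installed; the function below shows the intended calls

def run_ir_inference(xml_path, bin_path, input_data, device="CPU"):
    """Load an IR (*.xml topology + *.bin weights) and run one inference.

    input_data: a numpy array matching the model's input shape.
    """
    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    exec_net = ie.load_network(network=net, device_name=device)
    input_blob = next(iter(net.input_info))  # name of the first input layer
    return exec_net.infer(inputs={input_blob: input_data})

# Example call (placeholder paths, assuming a prepared numpy input):
# results = run_ir_inference("model.xml", "model.bin", input_array)
```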

