- For Ubuntu, CentOS & Yocto
- Follow this to set up OpenVINO
| Item | Details |
| --- | --- |
| Known Compatibility | Windows 10 <br> Ubuntu 18.04.x long-term support (LTS), 64-bit <br> Ubuntu 20.04.0 long-term support (LTS), 64-bit <br> CentOS 7.6, 64-bit (for target only) <br> Yocto Project v3.0, 64-bit (for target only and requires modifications) <br> For deployment scenarios on Red Hat* Enterprise Linux* 8.2 (64-bit), you can use the Intel® Distribution of OpenVINO™ toolkit. |
| Install Location | For root or administrator: <br> For regular users: |
| Model Tools Location | Windows: <INSTALL_DIR>\deployment_tools <br> Linux: /opt/intel/openvino_ |
| Demo Scripts | Contains demos to verify that OpenVINO works after installation. To check, run demo_security_barrier_camera.bat. |
| Model Files (#1 & #3 have the exact same files stored in different locations; all 3 have no *.xml & *.bin files but have *.yml files) | #1 – <INSTALL_DIR>\deployment_tools\open_model_zoo\models\intel <br> #2 – <INSTALL_DIR>\deployment_tools\intel_models <br> #3 – <INSTALL_DIR>\deployment_tools\open_model_zoo\models\public <br> #4 – <INSTALL_DIR>\deployment_tools\inference_engine |
| Demos/Samples (both are the same kind; they have *.c, *.cpp & *.py files) | <INSTALL_DIR>\deployment_tools\open_model_zoo\demos <br> <INSTALL_DIR>\deployment_tools\inference_engine\demos (shortcut to open_model_zoo’s demos folder) <br> <INSTALL_DIR>\deployment_tools\inference_engine\samples |
| Initialize Environment | cd <INSTALL_DIR>\bin |
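The initialize-and-verify steps above can be sketched on Linux as follows. This is a minimal sketch, assuming a 2021-era install layout under /opt/intel and the openvino_2021 version suffix; adjust the path to match your release. On Windows the equivalents are setupvars.bat and demo_security_barrier_camera.bat.

```shell
# Assumed install root (version suffix is an example, not authoritative)
INSTALL_DIR=/opt/intel/openvino_2021

# setupvars.sh exports PATH, PYTHONPATH, LD_LIBRARY_PATH, etc. for OpenVINO.
# Guarded so the sketch is a no-op on machines without the toolkit installed.
if [ -f "$INSTALL_DIR/bin/setupvars.sh" ]; then
    . "$INSTALL_DIR/bin/setupvars.sh"
    # Run the verification demo against the CPU device
    "$INSTALL_DIR/deployment_tools/demo/demo_security_barrier_camera.sh" -d CPU
fi
```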
- Inference Engine: The software libraries that run inference against the Intermediate Representation (optimized model) to produce inference results.
- Model Optimizer: Optimizes models for Intel® architecture, converting models into a format compatible with the Inference Engine. This format is called an Intermediate Representation (IR).
- Intermediate Representation (IR): The Model Optimizer output. A model converted to a format that has been optimized for Intel® architecture and is usable by the Inference Engine.
- Demo Scripts – Batch/Shell scripts that automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios.
- Code Samples – Small console applications that show you how to:
- Utilize specific OpenVINO capabilities in an application
- Perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more.
- Demo Applications – Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person’s physical attributes, such as age, gender, and emotional state.
- First, a model trained in a framework such as TensorFlow is prepared.
- Next, the model is converted with OpenVINO’s Model Optimizer: the mo.py script converts models such as Caffe’s <INPUT_MODEL>.caffemodel and TensorFlow’s <INFERENCE_GRAPH>.pb into Intermediate Representation (IR) files (*.xml & *.bin) for inference operations.
- Once converted, the IR files can be run by the Inference Engine.
- Prior to IR inference, the environment has to be initialized by running setupvars.bat (Windows) or setupvars.sh (Linux).
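The conversion step can be sketched as a single mo.py invocation. This is a hedged example: the install path and the frozen_inference_graph.pb filename are placeholders, and only the common --input_model and --output_dir parameters are shown; real models often need extra framework-specific flags.

```shell
# Assumed install root (example version suffix; match your release)
INSTALL_DIR=/opt/intel/openvino_2021
MO="$INSTALL_DIR/deployment_tools/model_optimizer/mo.py"

# Guarded so the sketch is a no-op on machines without the toolkit installed.
if [ -f "$MO" ]; then
    # Convert a TensorFlow frozen graph into IR files (*.xml & *.bin)
    python3 "$MO" \
        --input_model frozen_inference_graph.pb \
        --output_dir ir_output
fi
```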
- “Get Started with OpenVINO Toolkit on Windows* – OpenVINO Toolkit.” OpenVINO, docs.openvinotoolkit.org/latest/openvino_docs_get_started_get_started_windows.html.
- “Get Started with OpenVINO Toolkit on Linux* – OpenVINO Toolkit.” OpenVINO, docs.openvinotoolkit.org/latest/openvino_docs_get_started_get_started_linux.html.