Using Inference Engines to Power AI Apps
With demand for intelligent solutions like autonomous driving, digital assistants, and recommender systems growing, enterprises of every type are demanding AI-powered applications for surveillance, retail, manufacturing, smart cities and homes, office automation, and more, with new use cases emerging every day. Increasingly, these applications are powered by deep learning inference.
Until now, most of these smart applications have required deep machine learning, deep learning, and data science expertise just to enable simple object recognition, much less facial recognition or collision avoidance. That has changed with the introduction of the Intel Distribution of OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit.
With deep learning revenue expected to grow to $35 billion by 2025, the need to accelerate deployment is clear. Here are some of the reasons to download and use this new Intel toolkit.
OpenVINO includes Intel's Deep Learning Deployment Toolkit, whose Model Optimizer imports trained models from a number of frameworks (Caffe, TensorFlow, MXNet, ONNX, Kaldi), optimizes topologies, and boosts performance by converting models to data types matched to the target hardware, whether code is running on CPUs, GPUs, VPUs, or FPGAs, or any combination of them. This fast, heterogeneous execution can deliver significantly higher performance than running public deep learning models unoptimized.
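As a sketch of how this plays out in practice, the snippet below loads a model that the Model Optimizer has already converted to OpenVINO's Intermediate Representation (an .xml/.bin pair) and runs it on a chosen device. It assumes OpenVINO's Python Inference Engine API (IECore); the file names are placeholders, and retargeting the same model is just a matter of swapping the device string.

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO's Python Inference Engine API

# Placeholder IR files produced earlier by the Model Optimizer
MODEL_XML = "model.xml"   # network topology
MODEL_BIN = "model.bin"   # weights (FP16 or FP32, chosen at conversion time)

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)

# The same network can be loaded onto any supported device:
# "CPU", "GPU", "MYRIAD" (VPU), or a heterogeneous combination
# such as "HETERO:FPGA,CPU".
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed a dummy input matching the network's expected shape
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})
print({name: out.shape for name, out in result.items()})
```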
OpenVINO also includes a host of samples for image and audio classification, segmentation, object detection, neural style transfer, face detection, and people counting, among others, plus dozens of optimized pre-trained models covering everything from age and gender estimation to crossroad object detection to vehicle metadata, as the sketch below illustrates.
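For instance, a pre-trained face detector such as face-detection-adas-0001 from the toolkit's model zoo drops straight into the same workflow. The sketch below assumes that model's SSD-style output layout ([image_id, label, confidence, x_min, y_min, x_max, y_max] per detection) and a local test image; treat the file names as placeholders.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face-detection-adas-0001.xml",
                      weights="face-detection-adas-0001.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

# OpenCV reads BGR HWC images; the model expects NCHW, so resize and transpose
frame = cv2.imread("test.jpg")  # placeholder image path
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]

out = exec_net.infer(inputs={input_name: blob})
detections = next(iter(out.values())).reshape(-1, 7)

# Each row: [image_id, label, confidence, x_min, y_min, x_max, y_max]
for _, _, conf, x0, y0, x1, y1 in detections:
    if conf > 0.5:
        print(f"face {conf:.2f} at ({x0:.2f}, {y0:.2f})-({x1:.2f}, {y1:.2f}) "
              "(normalized coordinates)")
```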
Optimized libraries in the package include OpenCV, a popular open-source computer vision library with a broad range of algorithms and functions, and OpenVX, an optimized, graph-based approach to computer vision functions targeted at real-time, low-power applications.
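Because the bundled OpenCV build can route its dnn module through the Inference Engine, the same IR files can also be run without touching the Inference Engine API directly. A minimal sketch, assuming an OpenVINO-enabled OpenCV build and the same placeholder model files:

```python
import cv2

# Read an OpenVINO IR pair directly with OpenCV's dnn module
net = cv2.dnn.readNet("model.xml", "model.bin")

# Route execution through the Inference Engine backend on the CPU;
# this requires the OpenCV build shipped with (or built against) OpenVINO
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

frame = cv2.imread("test.jpg")                   # placeholder image
blob = cv2.dnn.blobFromImage(frame, size=(672, 384))  # size must match the model
net.setInput(blob)
out = net.forward()
print(out.shape)
```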
Also included in the distribution is the Intel Media SDK to speed media encode and decode, and users can work with the Intel OpenCL drivers and runtime to assist in creating custom kernels.
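To give a flavor of that custom-kernel path, the sketch below uses PyOpenCL (an assumption; any OpenCL host API would do) on top of the Intel OpenCL runtime to build and launch a trivial element-wise kernel. It is illustrative only, not the OpenVINO custom-layer mechanism itself.

```python
import numpy as np
import pyopencl as cl  # assumes PyOpenCL installed over the Intel OpenCL runtime

# Trivial element-wise kernel; a real custom kernel would implement an
# operation that the device plugin lacks out of the box
KERNEL_SRC = """
__kernel void scale_add(__global const float *x, float a, float b,
                        __global float *y) {
    int i = get_global_id(0);
    y[i] = a * x[i] + b;
}
"""

ctx = cl.create_some_context()   # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL_SRC).build()

x = np.arange(16, dtype=np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

# Launch one work-item per element: y = 2x + 1
program.scale_add(queue, x.shape, None,
                  x_buf, np.float32(2.0), np.float32(1.0), y_buf)

y = np.empty_like(x)
cl.enqueue_copy(queue, y, y_buf)
print(y)
```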