Intel, Marvell, Qualcomm, and others will support Facebook’s Glow compiler
At Facebook’s 2018 @Scale conference in San Jose, California today, the company announced broad industry backing for Glow, its machine learning compiler designed to accelerate the performance of deep learning frameworks. Cadence, Esperanto, Intel, Marvell, and Qualcomm committed to supporting Glow in future silicon products.
“We created Glow, an open source framework, to be community driven,” Facebook wrote of the announcement. “This approach allows partners to more rapidly design and optimize new silicon products for AI and ML by leveraging community-driven compiler software.”
As the Menlo Park company explained in a blog post, Glow was architected with ease of use in mind. It accepts computation graphs from a variety of machine learning frameworks and works with a range of accelerators. And it packs utilities that can be tweaked and adjusted to support multiple hardware targets.
One example: a memory allocator that can generate code for multiple memory configurations. Among Glow’s other tools are a linear algebra optimizer, a CPU-based reference implementation for testing hardware accuracy, and an instruction scheduler.
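To make the allocator idea concrete, here is a toy sketch of static buffer allocation: given tensor sizes known at compile time, assign each a fixed, aligned offset in one memory region. The alignment parameter stands in for differing memory configurations. This is an illustrative sketch only, not Glow's actual allocator or API.

```python
def allocate(tensor_sizes, alignment=64):
    """Assign an aligned offset to each named tensor; return offsets and total size.

    `tensor_sizes` maps tensor names to byte sizes; `alignment` mimics a
    hardware-specific memory configuration. Toy example, not Glow code.
    """
    offsets = {}
    cursor = 0
    for name, size in tensor_sizes.items():
        # Round the cursor up to the next multiple of `alignment`.
        cursor = (cursor + alignment - 1) // alignment * alignment
        offsets[name] = cursor
        cursor += size
    return offsets, cursor

offsets, total = allocate({"input": 100, "weights": 256, "output": 40}, alignment=64)
print(offsets)  # {'input': 0, 'weights': 128, 'output': 384}
print(total)    # 424
```

Because every offset is fixed at compile time, a backend can emit code with hard-coded addresses and no runtime allocator at all; changing the alignment (or splitting tensors across memory banks) only changes this planning step, not the rest of the pipeline.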
“The hardware-independent parts of the compiler focus on math-related optimizations that are not tied to a specific hardware model,” Facebook wrote. “Relying on the existing optimizations and capabilities reduces development time, and the extensive test suite improves a hardware provider’s confidence in the accuracy of the compiler and its conformance to the PyTorch specification.”
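One family of such hardware-independent, math-level optimizations is constant folding and algebraic simplification, which a compiler front end can run before any backend-specific lowering. The sketch below applies both to a tiny tuple-encoded expression graph; it is a toy illustration under assumed conventions, not Glow's actual IR or passes.

```python
def fold(node):
    """Recursively simplify ('add'/'mul', lhs, rhs) tuples.

    Leaves are either numeric constants or strings naming runtime inputs.
    Toy example of a hardware-independent optimization pass.
    """
    if not isinstance(node, tuple):
        return node  # a constant or a named input
    op, lhs, rhs = node
    lhs, rhs = fold(lhs), fold(rhs)
    # Constant folding: both operands known at compile time.
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return lhs + rhs if op == "add" else lhs * rhs
    # Algebraic identities: x * 1 == x, x + 0 == x.
    if op == "mul" and rhs == 1:
        return lhs
    if op == "add" and rhs == 0:
        return lhs
    return (op, lhs, rhs)

# (x * (2 * 3)) + 0  simplifies to  x * 6
print(fold(("add", ("mul", "x", ("mul", 2, 3)), 0)))  # ('mul', 'x', 6)
```

Since passes like this only rewrite the math, every backend that plugs into the compiler inherits them for free, which is the development-time saving the quote describes.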
Facebook open-sourced Glow at its 2018 F8 developer conference in May, where it also announced version 1.0 of its deep learning framework, PyTorch; a PyTorch language library for translation; an object detection platform called Detectron; ELF, which teaches machines to reason through gameplay; and Tensor Comprehensions, a C++ library that automatically synthesizes machine learning kernels.
In another move toward platform agnosticism, PyTorch 1.0 — which both Amazon Web Services and Microsoft’s Azure platform support — taps ONNX, an open source project spearheaded by Facebook, Amazon, and Microsoft. It acts as the model export format in PyTorch 1.0, allowing for the integration of accelerated runtimes and hardware-specific libraries.