Nvidia outlines inference platform, lands Japan’s industrial giants as AI, robotics customers
This article was originally published on ZDNet.
Nvidia launched a hyperscale data center platform that combines the Tesla T4 GPU, TensorRT software and the Turing architecture to provide inference acceleration for voice, video, image and recommendation services.
Separately, Nvidia announced a bevy of key customer wins with Japanese industrial giants such as Yamaha Motor.
The news was outlined at GTC Japan in Tokyo by Nvidia CEO Jensen Huang.
Nvidia’s inference platform, dubbed the Nvidia TensorRT Hyperscale Inference Platform, is designed to deliver more performance at lower latency in hyperscale data centers, which typically power services such as natural language interaction, answers to search queries and other artificial intelligence tools.
As for the parts of the platform, the Tesla T4 GPU packs 320 Turing Tensor Cores and 2,560 CUDA cores into a package that fits into most servers. TensorRT 5 is Nvidia’s inference optimizer and runtime engine, and it is paired with the TensorRT Inference Server, which is used to serve AI models in production.
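To give a sense of how the pieces fit together, here is a minimal sketch of building an optimized inference engine with the TensorRT 5-era Python API; the ONNX model filename is a hypothetical placeholder, and exact attribute names can vary between releases.

```python
# Minimal sketch: compile an ONNX model into a TensorRT engine.
# Assumes the TensorRT 5-era Python API; "model.onnx" is hypothetical.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="model.onnx"):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the trained model into a TensorRT network definition.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse ONNX model")

    builder.max_batch_size = 8
    builder.max_workspace_size = 1 << 30  # 1 GB scratch space for optimization
    builder.fp16_mode = True  # Turing Tensor Cores accelerate FP16 inference

    # Apply layer fusions, precision calibration and kernel selection.
    return builder.build_cuda_engine(network)

engine = build_engine()
with open("model.plan", "wb") as f:
    f.write(engine.serialize())  # serialized engine for deployment
```

In the workflow Nvidia describes, a serialized engine like this would then be loaded by the TensorRT Inference Server to handle production traffic.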
Nvidia said the major server vendors are on board with the platform and will deliver systems based on it.
Meanwhile, the company outlined how Japan’s industrial giants were planning to use Nvidia platforms for autonomous and intelligent machines.
Yamaha Motor will use Nvidia’s Jetson AGX Xavier as its development platform for unmanned agricultural vehicles, marine products and last-mile vehicles. Jetson AGX Xavier is a system designed for AI, robotics and edge computing.
FANUC, Komatsu, Musashi Seimitsu and Kawada Technologies also said they will adopt Nvidia’s Jetson AGX Xavier system.
The customer wins in Japan give Nvidia momentum in robotics and AI applications and development. Japan is among the leaders in robotics, given its aging population and the need to offset labor shortages.