On 4 December 2017, NVIDIA announced that AI researchers using its desktop GPUs can now tap into the NVIDIA GPU Cloud (NGC). By extending NGC support to the NVIDIA TITAN, the company has opened NGC to hundreds of thousands of new users.
Now Everyone Can Use NVIDIA GPU Cloud!
The expanded NGC capabilities add new software and other key updates to the NGC container registry, providing AI researchers with a broader and more powerful set of tools.
Anyone using an NVIDIA TITAN graphics card can sign up immediately for a free NGC account and gain full access to a comprehensive catalog of GPU-optimized deep learning and HPC software and tools. Other supported computing platforms include the NVIDIA DGX-1, DGX Station and NVIDIA Volta-enabled instances on Amazon EC2.
Software available through NGC’s rapidly expanding container registry includes NVIDIA-optimized deep learning frameworks such as TensorFlow and PyTorch, third-party managed HPC applications, NVIDIA HPC visualization tools, and NVIDIA’s programmable inference accelerator, TensorRT 3.0.
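For TITAN users, access works the same way as on DGX systems: after signing up for an NGC account and generating an API key, containers are pulled from the nvcr.io registry with Docker. The commands below are a minimal sketch; the 17.12 tag and the nvidia-docker wrapper reflect typical usage at the time of this announcement and may differ on your system.

```shell
# Log in to the NGC container registry
# (the username is literally $oauthtoken; the password is your NGC API key)
docker login nvcr.io -u '$oauthtoken'

# Pull an NVIDIA-optimized framework image, e.g. TensorFlow
# (the 17.12 tag is an assumption; browse available tags on ngc.nvidia.com)
docker pull nvcr.io/nvidia/tensorflow:17.12

# Run the container with GPU access via the nvidia-docker wrapper
nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:17.12
```

The same pattern applies to the other images in the catalog, such as nvcr.io/nvidia/pytorch or nvcr.io/nvidia/tensorrt.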
New NGC Container, Updates & Features
In addition to making NVIDIA TensorRT available on NGC’s container registry, NVIDIA announced the following NGC updates:
- Open Neural Network Exchange (ONNX) support for TensorRT
- Immediate support and availability for the first release of MXNet 1.0
- Availability of Baidu’s PaddlePaddle AI framework
ONNX is an open format originally created by Facebook and Microsoft that lets developers exchange models across different frameworks. In the TensorRT development container, NVIDIA provides a converter that imports ONNX models into the TensorRT inference engine, making it easier for application developers to deploy low-latency, high-throughput models with TensorRT.
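As a concrete sketch of that workflow: a model exported to ONNX from another framework can be handed to TensorRT's converter inside the container. The trtexec flags below come from later TensorRT releases and the file names are hypothetical; the exact converter invocation in the TensorRT 3.0-era container may differ.

```shell
# Inside the TensorRT container: build a serialized inference engine
# from an ONNX model (model.onnx is a hypothetical file name)
trtexec --onnx=model.onnx --saveEngine=model.engine

# Benchmark low-latency inference with the built engine
trtexec --loadEngine=model.engine
```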
Together, these additions give developers a one-stop shop for software that supports a full spectrum of AI computing needs — from research and application development to training and deployment.