Tag Archives: NVIDIA Jetson Xavier NX

NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

On 7 November 2019, NVIDIA introduced the Jetson Xavier NX – the world’s smallest AI supercomputer designed for robotics and embedded computing applications at the edge!

Here is EVERYTHING you need to know about the new NVIDIA Jetson Xavier NX!

 

NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

At just 70 x 45 mm, the new NVIDIA Jetson Xavier NX is smaller than a credit card. Yet it delivers server-class AI performance at up to 21 TOPS, while consuming as little as 10 watts of power.

Short for Nano Xavier, the NX is a low-power version of the Xavier SoC that topped the MLPerf Inference benchmarks.

Recommended : NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

With its small size and low power consumption, it opens up the possibility of adding edge AI computing capabilities to small commercial robots, drones, industrial IoT systems, network video recorders and portable medical devices.

The Jetson Xavier NX can be configured to deliver up to 14 TOPS at 10 W, or 21 TOPS at 15 W. It is powerful enough to run multiple neural networks in parallel, and process data from multiple high-resolution sensors simultaneously.
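
The power budget on Jetson modules is normally selected through JetPack's nvpmodel utility, which switches between predefined profiles such as the 10 W and 15 W modes above. Below is a minimal Python sketch of querying and changing the power mode; the exact mode IDs are an assumption here, as they are device-specific and defined in /etc/nvpmodel.conf.

import subprocess

def query_power_mode():
    # Ask nvpmodel which predefined power mode is currently active
    return subprocess.run(["nvpmodel", "-q"], check=True,
                          capture_output=True, text=True).stdout

def set_power_mode(mode_id):
    # Switch to another predefined power mode (requires root privileges)
    subprocess.run(["sudo", "nvpmodel", "-m", str(mode_id)], check=True)

if __name__ == "__main__":
    print(query_power_mode())
    # set_power_mode(0)  # uncomment to switch; mode IDs vary by module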

The NVIDIA Jetson Xavier NX runs on the same CUDA-X AI software architecture as all other Jetson processors, and is supported by the NVIDIA JetPack software development kit.

It is pin-compatible with the Jetson Nano, offering up to 15X higher performance than the Jetson TX2 in a smaller form factor.

It will not be available for a few more months, but developers can begin development today using the Jetson AGX Xavier Developer Kit, with a software patch that emulates the Jetson Xavier NX.

 

NVIDIA Jetson Xavier NX Specifications

CPU : NVIDIA Carmel – 6 x Arm 64-bit cores, 6 MB L2 + 4 MB L3 cache
GPU : NVIDIA Volta – 384 CUDA cores, 48 Tensor cores, 2 NVDLA cores
AI Performance : 21 TOPS at 15 watts / 14 TOPS at 10 watts
Memory Support : 128-bit LPDDR4x-3200, up to 8 GB, 51.2 GB/s
Video Support : Encoding up to 2 x 4K30 streams; Decoding up to 2 x 4K60 streams
Camera Support : Up to six CSI cameras (32 via virtual channels), up to 12 MIPI CSI-2 lanes (3 x 4 or 6 x 2)
Connectivity : Gigabit Ethernet
OS Support : Ubuntu-based Linux
Module Size : 70 x 45 mm (same form factor as the Jetson Nano)

 

NVIDIA Jetson Xavier NX Price + Availability

The NVIDIA Jetson Xavier NX will be available in March 2020 from NVIDIA’s distribution channels, priced at US$399.

 



NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

The MLPerf Inference 0.5 benchmarks were officially released today, with NVIDIA declaring that it aced them for both datacenter and edge computing workloads.

Find out how well NVIDIA did, and why it matters!

 

The MLPerf Inference Benchmarks

MLPerf Inference 0.5 is the industry’s first independent suite of five AI inference benchmarks.

Applied across a range of form factors and four inference scenarios, the new MLPerf Inference Benchmarks test the performance of established AI applications like image classification, object detection and translation.

 

NVIDIA Wins MLPerf Inference Benchmarks For Datacenter + Edge

Thanks to the programmability of its computing platforms, which cater to diverse AI workloads, NVIDIA was the only company to submit results for all five MLPerf Inference Benchmarks.

According to NVIDIA, its Turing GPUs topped all five benchmarks among commercially available processors in both datacenter scenarios (server and offline).

Meanwhile, its Jetson Xavier scored highest among commercially available edge and mobile SoCs in both edge-focused scenarios – single-stream and multi-stream.
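
For context on those scenarios: single-stream measures the latency of handling one query at a time (reported as a tail-latency percentile), while offline measures raw throughput over a batch of queries that are all available up front. The sketch below only illustrates that distinction; run_inference() is a hypothetical stand-in for a real model and is not part of the official MLPerf LoadGen harness.

import time

def run_inference(sample):
    time.sleep(0.002)  # placeholder for a real model's forward pass

def single_stream_latency(samples):
    # Single-stream: issue one query at a time, report a tail-latency percentile (90th here)
    latencies = []
    for sample in samples:
        start = time.perf_counter()
        run_inference(sample)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return latencies[int(0.9 * len(latencies))]

def offline_throughput(samples):
    # Offline: all queries are available up front, report samples per second
    start = time.perf_counter()
    for sample in samples:
        run_inference(sample)
    return len(samples) / (time.perf_counter() - start)

if __name__ == "__main__":
    data = list(range(100))
    print("Tail latency (s):", single_stream_latency(data))
    print("Offline throughput (samples/s):", offline_throughput(data))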

The new NVIDIA Jetson Xavier NX, announced today, is a low-power version of the Xavier SoC that won these MLPerf Inference 0.5 benchmarks.

All of NVIDIA’s MLPerf Inference Benchmark results were achieved using NVIDIA TensorRT 6 deep learning inference software.
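
As a rough illustration of what that looks like in practice, here is a minimal sketch of building a TensorRT engine from an ONNX model with the TensorRT 6-era Python API (later releases moved these builder flags into a separate BuilderConfig object). The model file name is just a placeholder.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # The ONNX parser requires an explicit-batch network definition
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(flags) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28  # 256 MB of scratch space
        builder.fp16_mode = True              # use FP16 / Tensor Cores where supported
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

engine = build_engine("model.onnx")  # "model.onnx" is a placeholder path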

 


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!