With its small size and low power consumption, it opens up the possibility of adding AI edge-computing capabilities to small commercial robots, drones, industrial IoT systems, network video recorders and portable medical devices.
The Jetson Xavier NX can be configured to deliver up to 14 TOPS at 10 W, or 21 TOPS at 15 W. It is powerful enough to run multiple neural networks in parallel, and process data from multiple high-resolution sensors simultaneously.
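Both power modes work out to the same performance-per-watt; here is a quick sanity check of the quoted figures (a back-of-envelope sketch based only on the numbers above, not an official NVIDIA efficiency rating):

```python
# Quoted Jetson Xavier NX performance modes (from the announcement)
modes = {"10 W": 14, "15 W": 21}  # power mode -> TOPS

for label, tops in modes.items():
    watts = float(label.split()[0])
    print(f"{label}: {tops} TOPS -> {tops / watts:.1f} TOPS/W")
# Both modes deliver 1.4 TOPS per watt
```

In other words, the 15 W mode is not a less efficient "turbo" setting; it simply scales the same efficiency up to a higher power budget.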
The NVIDIA Jetson Xavier NX runs on the same CUDA-X AI software architecture as all other Jetson processors, and is supported by the NVIDIA JetPack software development kit.
It is pin-compatible with the Jetson Nano, offering up to 15X higher performance than the Jetson TX2 in a smaller form factor.
It will not be available for a few more months, but developers can begin development today using the Jetson AGX Xavier Developer Kit with a software patch that emulates the Jetson Xavier NX.
NVIDIA Jetson Xavier NX Specifications
NVIDIA Jetson Xavier NX
CPU : 6 x Arm 64-bit cores, with 6 MB L2 + 4 MB L3 caches
GPU : 384 CUDA cores, 48 Tensor cores, 2 NVDLA cores
AI Performance : 21 TOPS at 15 watts, or 14 TOPS at 10 watts
Memory : Up to 8 GB, 51.2 GB/s
Video : Encoding up to 2 x 4K30 streams; Decoding up to 2 x 4K60 streams
Camera : Up to six CSI cameras (32 via virtual channels), up to 12 lanes (3×4 or 6×2) MIPI CSI-2
Size : 70 x 45 mm (same footprint as the Jetson Nano)
NVIDIA Jetson Xavier NX Price + Availability
The NVIDIA Jetson Xavier NX will be available in March 2020 from NVIDIA’s distribution channels, priced at US$399.
NVIDIA CEO Jensen Huang (recently anointed Fortune’s 2017 Businessperson of the Year) made a surprise reveal at the NIPS conference – the NVIDIA TITAN V. This is the first desktop graphics card built on the latest NVIDIA Volta microarchitecture, and NVIDIA’s first desktop card to use HBM2 memory.
In this article, we will share with you everything we know about the NVIDIA TITAN V, and how it compares against its TITANic predecessors. We will also share with you what we think could be a future NVIDIA TITAN Vp graphics card!
Updated @ 2017-12-10 : Added a section on gaming with the NVIDIA TITAN V.
Originally posted @ 2017-12-09
NVIDIA Volta isn’t exactly new. Back in GTC 2017, NVIDIA revealed NVIDIA Volta, the NVIDIA GV100 GPU and the first NVIDIA Volta-powered product – the NVIDIA Tesla V100. Jensen even highlighted the Tesla V100 in his Computex 2017 keynote, more than 6 months ago!
Yet there has been no desktop GPU built around NVIDIA Volta. NVIDIA continued to churn out new graphics cards built around the Pascal architecture – GeForce GTX 1080 Ti and GeForce GTX 1070 Ti. That changed with the NVIDIA TITAN V.
The NVIDIA GV100 is the first NVIDIA Volta-based GPU, and the largest they have ever built. Even using the latest 12 nm FFN (FinFET NVIDIA) process, it is still a massive chip at 815 mm²! Compare that to the GP100 (610 mm² @ 16 nm FinFET) and GK110 (552 mm² @ 28 nm).
That’s because the GV100 is built using a whopping 21.1 billion transistors. In addition to 5376 CUDA cores and 336 Texture Units, it boasts 672 Tensor cores and 6 MB of L2 cache. All those transistors require a whole lot more power – to the tune of 300 W.
The NVIDIA TITAN V
That’s V for Volta… not the Roman numeral V or V for Vendetta. Powered by the NVIDIA GV100 GPU, the TITAN V has 5120 CUDA cores, 320 Texture Units, 640 Tensor cores, and a 4.5 MB L2 cache. It is paired with 12 GB of HBM2 memory (3 x 4GB stacks) running at 850 MHz.
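As a rough check of what those core counts translate to, here is a back-of-envelope peak-throughput estimate. The boost clock used here (~1455 MHz, as widely reported) is our assumption, not a figure from the announcement:

```python
# Back-of-envelope peak throughput estimate for the TITAN V.
# ASSUMPTION: ~1455 MHz boost clock (widely reported, not stated above).
boost_clock_hz = 1455e6

cuda_cores = 5120
tensor_cores = 640

# Each CUDA core retires one FMA (2 FLOPs) per clock in FP32.
fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12

# Each Volta Tensor core performs a 4x4x4 matrix FMA per clock:
# 64 FMAs = 128 FLOPs per clock, in mixed FP16/FP32 precision.
tensor_tflops = tensor_cores * 128 * boost_clock_hz / 1e12

print(f"Peak FP32   : {fp32_tflops:.1f} TFLOPS")    # ~14.9 TFLOPS
print(f"Peak Tensor : {tensor_tflops:.1f} TFLOPS")  # ~119 TFLOPS at full boost
```

At a slightly lower sustained clock, the Tensor figure lands right around the 110 "teraflops" NVIDIA quotes – roughly 9x the ~12 FP32 TFLOPS of the TITAN Xp.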
The blowout picture of the NVIDIA TITAN V reveals even more details :
It has 3 DisplayPorts and one HDMI port.
It has 6-pin + 8-pin PCIe power inputs.
It has 16 power phases, and what appears to be the Founders Edition copper heatsink and vapour chamber cooler, with a gold-coloured shroud.
There is no SLI connector, only what appears to be an NVLink connector.
Here are more pictures of the NVIDIA TITAN V, courtesy of NVIDIA.
Can You Game On The NVIDIA TITAN V? New!
Right after Jensen announced the TITAN V, the inevitable question was raised on the Internet – can it run Crysis / PUBG?
The NVIDIA TITAN V is the most powerful GPU for the desktop PC, but that does not mean you can actually use it to play games. NVIDIA notably did not mention anything about gaming, only that the TITAN V is “ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.”
In fact, the TITAN V is not listed in their GeForce Gaming section. The most powerful graphics card in the GeForce Gaming section remains the TITAN Xp.
Then again, the TITAN V uses the same NVIDIA Game Ready Driver as GeForce gaming cards, starting with version 388.59. Even so, it is possible that some or many games may not run well or properly on the TITAN V.
Of course, all this is speculative in nature. All that remains to crack this mystery is for someone to buy the TITAN V and use it to play some games!
The NVIDIA TITAN V Specification Comparison
Let’s take a look at the known specifications of the NVIDIA TITAN V, compared to the TITAN Xp (launched earlier this year) and the TITAN X (launched late last year). We also inserted the specifications of a hypothetical NVIDIA TITAN Vp, based on a full GV100.
Specifications | Future TITAN Vp? | NVIDIA TITAN V | NVIDIA TITAN Xp | NVIDIA TITAN X
Fab Process | 12 nm FinFET+ | 12 nm FinFET+ | 16 nm FinFET | 16 nm FinFET
L2 Cache Size | 6 MB | 4.5 MB | – | –
GPU Core Clock | – | – | – | –
GPU Boost Clock | – | – | – | –
Multi GPU Capability | – | NVLink (no SLI connector) | – | –
The NVIDIA TITAN Vp?
In case you are wondering, the TITAN Vp does not exist. It is merely a hypothetical future model that we think NVIDIA may introduce mid-cycle, like the NVIDIA TITAN Xp.
Our TITAN Vp is based on the full capabilities of the NVIDIA GV100 GPU. That means it will have 5376 CUDA cores with 336 Texture Units, 672 Tensor cores and 6 MB of L2 cache. It will also have a higher TDP of 300 watts.
The Official NVIDIA TITAN V Press Release
December 9, 2017—NVIDIA today introduced TITAN V, the world’s most powerful GPU for the PC, driven by the world’s most advanced GPU architecture, NVIDIA Volta.
Announced by NVIDIA founder and CEO Jensen Huang at the annual NIPS conference, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency.
“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”
NVIDIA Supercomputing GPU Architecture, Now for the PC
TITAN V’s Volta architecture features a major redesign of the streaming multiprocessor that is at the center of the GPU. It doubles the energy efficiency of the previous generation Pascal design, enabling dramatic boosts in performance in the same power envelope.
New Tensor Cores designed specifically for deep learning deliver up to 9x higher peak teraflops. With independent parallel integer and floating-point data paths, Volta is also much more efficient on workloads with a mix of computation and addressing calculations. Its new combined L1 data cache and shared memory unit significantly improve performance while also simplifying programming.
Fabricated on a new TSMC 12-nanometer FFN high-performance manufacturing process customised for NVIDIA, TITAN V also incorporates Volta’s highly tuned 12GB HBM2 memory subsystem for advanced memory bandwidth utilisation.
Free AI Software on NVIDIA GPU Cloud
TITAN V’s incredible power is ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.
Users of TITAN V can gain immediate access to the latest GPU-optimised AI, deep learning and HPC software by signing up at no charge for an NVIDIA GPU Cloud account. This container registry includes NVIDIA-optimised deep learning frameworks, third-party managed HPC applications, NVIDIA HPC visualisation tools and the NVIDIA TensorRT inferencing optimiser.
June 16, 2017 — NVIDIA is among six technology companies to receive funding from the U.S. Department of Energy’s Exascale Computing Project (ECP) to accelerate the development of next-generation supercomputers.
The Exascale Computing Project
The ECP mission is to facilitate the delivery of at least two exascale computing systems, with the aim of delivering at least one by 2021. Such systems would be approximately 50x more powerful than Titan, the nation’s fastest supercomputer in use today, located at Oak Ridge National Laboratory.
The goal of the ECP PathForward programme is to find solutions that maximise the energy efficiency and overall performance of future large-scale supercomputers critical to areas such as national security, manufacturing, industrial competitiveness, and energy research.
In addition to performance, the DOE has ambitious goals for improving power efficiency, to achieve exascale performance using only 20-30 megawatts. By comparison, an exascale system built with CPUs alone could consume hundreds of megawatts.
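To put those power targets in perspective, here is the energy efficiency they imply (simple arithmetic on the figures above, not DOE data):

```python
# Implied efficiency of an exascale system (1 exaFLOPS = 1e18 FLOPS)
# hitting the DOE's 20-30 megawatt power target.
exaflops = 1e18

for megawatts in (20, 30):
    gflops_per_watt = exaflops / (megawatts * 1e6) / 1e9
    print(f"{megawatts} MW -> {gflops_per_watt:.1f} GFLOPS/W")
# 20 MW -> 50.0 GFLOPS/W ; 30 MW -> 33.3 GFLOPS/W
```

For comparison, a CPU-only exascale system drawing hundreds of megawatts would deliver only single-digit GFLOPS per watt, which is why the DOE is funding throughput-oriented architectures like GPUs.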
NVIDIA In The Exascale Computing Project
NVIDIA has been researching and developing faster, more efficient GPUs for high performance computing for more than a decade. This is its sixth DOE research and development subcontract, which will help accelerate its efforts to develop highly efficient throughput computing technologies to ensure U.S. leadership in HPC.
NVIDIA’s R&D will focus on critical areas including energy-efficient GPU architectures and resilience. Its findings may be incorporated into future generation GPU architectures after Volta (which will be used in the DOE’s upcoming flagship Summit and Sierra supercomputers, scheduled to go online in 2018).
The DOE has placed a high priority on supercomputer research. Its PathForward technical requirements state, “The U.S. faces serious and urgent economic, environmental, and national security challenges based on energy, climate, and growing security threats. High performance computing is a requirement for addressing such challenges, and the need for the development of capable exascale computers has become critical for solving these problems.”
To facilitate and test its technology, NVIDIA research teams will collaborate closely with six national DOE laboratories: Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, and Sandia National Laboratories.
The first major event at Computex Taipei 2017 was the Powering The AI Revolution keynote by NVIDIA CEO Jensen Huang. Although the keynote was heavy on artificial intelligence technologies like NVIDIA Isaac and NVIDIA Volta, Jensen also announced other technologies like GeForce GTX with Max-Q Design.
NVIDIA Isaac, Max-Q, Volta, HGX & More!
In this 90-minute keynote, Jensen reveals his vision of the future of artificial intelligence and robotics – one in which the GPU takes over from the CPU in delivering the petaflops of computing power that artificial intelligence requires.
He also reveals the new NVIDIA technologies that will power the next generation of AI applications – the NVIDIA Tesla V100, built around the largest GPU ever made, and the NVIDIA HGX server that hosts eight of these GPUs to deliver almost 1 petaflop of compute performance in a single chassis!
Jensen also shows how the NVIDIA Isaac Initiative allows robots to teach themselves in a virtual environment, using nothing more than an NVIDIA Jetson TX2 module. Watch how AI at the edge delivers smarter intelligence with minimal effort.
Whether you are a scientist in these fields, or just geeks like us, you will enjoy listening to his views and NVIDIA’s endeavours in those technologies.
Our Blow-by-Blow Account Of The Keynote
First stop of the day – the NVIDIA AI Forum keynote. Jensen Huang is upstairs polishing his keynote while the crowd grows downstairs.
With the Japanese contingent waiting to storm up the escalator to the 3rd floor hall.