NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

The MLPerf Inference 0.5 benchmark results were officially released today, with NVIDIA declaring that it aced them for both datacenter and edge computing workloads.

Find out how well NVIDIA did, and why it matters!


The MLPerf Inference Benchmarks

MLPerf Inference 0.5 is the industry’s first independent suite of five AI inference benchmarks.

Applied across a range of form factors and four inference scenarios, the new MLPerf Inference Benchmarks test the performance of established AI applications like image classification, object detection and translation.
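As a rough illustration of how those four scenarios differ, here is a minimal sketch in plain Python. This is our own toy harness, not the official MLPerf LoadGen; the function names, the dummy `infer()` model, and the 100 ms latency bound are all hypothetical, chosen only to show which metric each scenario measures.

```python
# Toy sketch of the four MLPerf Inference scenarios (not the official
# LoadGen harness). Each scenario changes how queries arrive and which
# metric is reported.
import time

def infer(batch):
    """Stand-in for a real model; returns one 'label' per sample."""
    return [0 for _ in batch]

def single_stream(n_queries=100):
    # One query at a time; the metric is per-query latency
    # (MLPerf reports the 90th percentile).
    latencies = []
    for _ in range(n_queries):
        t0 = time.perf_counter()
        infer([1])
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    return latencies[int(0.9 * len(latencies))]  # 90th-percentile latency

def multi_stream(n_streams=8, n_intervals=50):
    # A fixed set of concurrent streams is served each interval; the
    # metric is how many streams fit within a latency bound.
    for _ in range(n_intervals):
        infer(list(range(n_streams)))
    return n_streams

def server(n_queries=50):
    # In the real server scenario, queries arrive at random (Poisson)
    # times and the metric is the highest queries-per-second sustained
    # within a latency bound. Here we simply check that every query
    # meets a hypothetical 100 ms bound.
    bound = 0.1
    violations = 0
    for _ in range(n_queries):
        t0 = time.perf_counter()
        infer([1])
        if time.perf_counter() - t0 > bound:
            violations += 1
    return violations == 0

def offline(n_samples=1000):
    # All samples are available at once; the metric is raw throughput.
    t0 = time.perf_counter()
    infer(list(range(n_samples)))
    return n_samples / (time.perf_counter() - t0)  # samples/second
```

Single-stream and multi-stream are the edge-focused scenarios, while server and offline are the datacenter-focused ones, which is why the results below are grouped that way.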

[Image: NVIDIA MLPerf Inference Benchmark Win slide]

NVIDIA Wins MLPerf Inference Benchmarks For Datacenter + Edge

Thanks to the programmability of its computing platforms, which lets them cater to diverse AI workloads, NVIDIA was the only company to submit results for all five MLPerf Inference Benchmarks.

According to NVIDIA, their Turing GPUs topped all five benchmarks under both datacenter scenarios (server and offline) among commercially available processors.

[Image: NVIDIA MLPerf Inference Benchmark Win slide]

Meanwhile, their Jetson Xavier scored highest among commercially available edge and mobile SoCs under both edge-focused scenarios – single-stream and multi-stream.

The new NVIDIA Jetson Xavier NX, announced today, is a low-power version of the Xavier SoC that topped the MLPerf Inference 0.5 edge benchmarks.

[Image: NVIDIA MLPerf Inference Benchmark Win slide]

All of NVIDIA’s MLPerf Inference Benchmark results were achieved using NVIDIA TensorRT 6 deep learning inference software.
