TAIPEI, May 31, 2016 — NVIDIA won big at the Computex 2016 Best Choice Awards, with the NVIDIA Tesla M40 GPU and NVIDIA Jetson TX1 module hauling in Gold Awards and the NVIDIA SHIELD Android TV clinching a Category Award.
Garnering these three prestigious awards extends the company’s winning streak — the longest of any international Computex exhibitor — to eight consecutive years. Taiwan’s President Tsai Ing-wen will hand out the awards.
Nearly 375 technology products from more than 140 vendors vied for the Best Choice Awards at Computex 2016, the largest technology tradeshow in Asia and second largest in the world. The Best Choice Awards, established in 2002, honour innovation, functionality and market potential.
The Gold Award-winning NVIDIA Tesla M40 GPU is the world’s fastest deep learning training accelerator. Purpose-built to dramatically reduce training time, the Tesla M40 can train deep learning models in hours, versus days on CPU-based compute systems.
The NVIDIA Jetson TX1, the other Gold Award winner, is the world’s most advanced system for embedded visual computing. A supercomputer on a module that’s the size of a credit card, it offers embedded computing developers the highest performance, latest technology and the best development platform.
Winner of the Digital Entertainment & AR/VR Application Category Award, NVIDIA SHIELD transforms the living room entertainment experience with 4K streaming, advanced gaming and Android TV. It also comes with GeForce NOW, the only game-streaming service that delivers GeForce GTX gaming to the TV instantly.
Dec. 11, 2015—NVIDIA today announced that Facebook will power its next-generation computing system with the NVIDIA® Tesla® Accelerated Computing Platform, enabling it to drive a broad range of machine learning applications.
While training complex deep neural networks to conduct machine learning can take days or weeks on even the fastest computers, the Tesla platform can slash this by 10-20x. As a result, developers can innovate more quickly and train networks that are more sophisticated, delivering improved capabilities to consumers.
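The workload being accelerated here is, at its core, millions of repeated multiply-and-update steps. As an illustrative sketch only (a toy single-parameter model, not anything from the Tesla platform itself), the loop below fits a linear unit by gradient descent; production deep learning frameworks run the same kind of iteration over millions of parameters, which is the highly parallel arithmetic that GPUs speed up by the 10-20x cited above.

```python
# Toy sketch of the iterative workload behind neural-network training:
# fit y = w*x + b by stochastic gradient descent on synthetic data.
# Real frameworks repeat steps like these over millions of parameters,
# which is the parallel arithmetic GPU accelerators excel at.

data = [(x, 2 * x + 1) for x in range(10)]  # synthetic data: y = 2x + 1

w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate

for epoch in range(1000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        # gradients of the squared error with respect to w and b
        w -= lr * err * x
        b -= lr * err

print(w, b)  # w approaches 2, b approaches 1
```

On a CPU these updates run largely one after another; a GPU applies the equivalent operations to huge batches of parameters at once, which is why training times drop from weeks to days or hours.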
Facebook is the first company to adopt NVIDIA Tesla M40 GPU accelerators, introduced last month, to train deep neural networks. They will play a key role in the new “Big Sur” computing platform, Facebook AI Research’s (FAIR) purpose-built system designed specifically for neural network training.
“Deep learning has started a new era in computing,” said Ian Buck, vice president of accelerated computing at NVIDIA. “Enabled by big data and powerful GPUs, deep learning algorithms can solve problems never possible before. Huge industries from web services and retail to healthcare and cars will be revolutionised. We are thrilled that NVIDIA GPUs have been adopted as the engine of deep learning. Our goal is to provide researchers and companies with the most productive platform to advance this exciting work.”
In addition to reducing neural network training time, GPUs offer a number of other advantages. Their architectural compatibility from generation to generation provides seamless speed-ups for future GPU upgrades. And the Tesla platform’s growing global adoption facilitates open collaboration with researchers around the world, fuelling new waves of discovery and innovation in the machine learning field.
Big Sur Optimised for Machine Learning
NVIDIA worked with Facebook engineers on the design of Big Sur, optimising it to deliver maximum performance for machine learning workloads, including the training of large neural networks across multiple Tesla GPUs.
Two times faster than Facebook’s existing system, Big Sur will enable the company to train twice as many neural networks – and to create neural networks that are twice as large – which will help develop more accurate models and new classes of advanced applications.
“The key to unlocking the knowledge necessary to develop more intelligent machines lies in the capability of our computing systems,” said Serkan Piantino, engineering director for FAIR. “Most of the major advances in machine learning and AI in the past few years have been contingent on tapping into powerful GPUs and huge data sets to build and train advanced models.”
The addition of Tesla M40 GPUs will help Facebook make new advancements in machine learning research and enable teams across its organisation to use deep neural networks in a variety of products and services.
First Open Sourced AI Computing Architecture
Big Sur represents the first time a computing system specifically designed for machine learning and artificial intelligence (AI) research will be released as an open source solution.
Committed to doing its AI work in the open and sharing its findings with the community, Facebook intends to work with its partners to open source Big Sur specifications via the Open Compute Project. This unique approach will make it easier for AI researchers worldwide to share and improve techniques, enabling future innovation in machine learning by harnessing the power of GPU accelerated computing.