Page 1 : The AMD Radeon Instinct Platform Launch, Key Points Summary
The AMD Tech Summit held in Sonoma, California from December 7-9, 2016 was not only very exclusive, it was highly secretive. The first major announcement we have been allowed to reveal is the new AMD Radeon Instinct heterogeneous computing platform.
In this article, you will hear from AMD what the Radeon Instinct platform is all about. As usual, we have a ton of videos from the event, so it will be as if you were there with us. Enjoy! 🙂
The AMD Radeon Instinct Platform Summarised
For those who want the quick low-down on AMD Radeon Instinct, here are the key takeaway points :
- The AMD Radeon Instinct platform is made up of two components – hardware and software.
- The hardware components are the AMD Radeon Instinct accelerators built around the current Polaris and the upcoming Vega GPUs.
- The software component is the AMD Radeon Open Compute (ROCm) platform, which includes the new MIOpen open-source deep learning library.
- The first three Radeon Instinct accelerator cards are the MI6, MI8 and MI25 Vega with NCU.
- The AMD Radeon Instinct MI6 is a passively-cooled inference accelerator with 5.7 TFLOPS of FP16 processing power, 224 GB/s of memory bandwidth, and a TDP of <150 W. It will come with 16 GB of GDDR5 memory.
- The AMD Radeon Instinct MI8 is a small form-factor (SFF) accelerator with 8.2 TFLOPS of FP16 processing power, 512 GB/s of memory bandwidth, and a TDP of <175 W. It will come with 4 GB of HBM memory.
- The AMD Radeon Instinct MI25 Vega with NCU is a passively-cooled training accelerator with 25 TFLOPS of FP16 processing power, support for 2X packed math, a High Bandwidth Cache and Controller, and a TDP of <300 W.
- The Radeon Instinct accelerators will all be built exclusively by AMD.
- The Radeon Instinct accelerators will all support MxGPU SRIOV hardware virtualisation.
- The Radeon Instinct accelerators are all passively cooled.
- The Radeon Instinct accelerators will all have large BAR (Base Address Register) support for multiple GPUs.
- The upcoming AMD Zen “Naples” server platform is designed to support multiple Radeon Instinct accelerators through a high-speed network fabric.
- The ROCm platform is not only open source, it will support a multitude of standards in addition to MIOpen.
- The MIOpen deep learning library is open source, and will be available in Q1 2017.
- The MIOpen deep learning library is optimised for Radeon Instinct, delivering up to 3X better performance in machine learning workloads.
- AMD Radeon Instinct accelerators will be significantly faster than NVIDIA Titan X GPUs based on the Maxwell and Pascal architectures.
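To see where the headline TFLOPS figures come from, peak throughput is simply stream processors × 2 ops per clock (one fused multiply-add) × clock speed, doubled on Vega by its 2X packed FP16 math. Here is a minimal sketch of that arithmetic; the shader counts are those of the GPUs these cards are reported to be built on (Polaris 10, Fiji, Vega), and the clock speeds are our own illustrative assumptions, since AMD did not disclose final clocks at the briefing :

```python
def peak_tflops(shaders, clock_ghz, ops_per_clock=2):
    """Peak throughput in TFLOPS: shaders x ops/clock (FMA = 2) x clock (GHz)."""
    return shaders * ops_per_clock * clock_ghz / 1000.0

# Assumed shader counts / clocks -- not official AMD specifications.
mi6  = peak_tflops(2304, 1.24)        # Polaris 10: FP16 runs at the FP32 rate
mi8  = peak_tflops(4096, 1.00)        # Fiji: FP16 runs at the FP32 rate
mi25 = peak_tflops(4096, 1.53) * 2    # Vega NCU: 2X packed FP16 doubles the rate

print(f"MI6  ~ {mi6:.1f} TFLOPS FP16")   # ~5.7
print(f"MI8  ~ {mi8:.1f} TFLOPS FP16")   # ~8.2
print(f"MI25 ~ {mi25:.1f} TFLOPS FP16")  # ~25
```

Note that on Polaris and Fiji, FP16 merely matches the FP32 rate; only the Vega-based MI25's packed math actually doubles half-precision throughput, which is why it is positioned as the training part.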
In the subsequent pages, we will give you the full low-down on the Radeon Instinct platform, with the following presentations by AMD :
- Dr. Lisa Su : Why Is Heterogeneous Computing Important?
- Raja Koduri : The New AMD Radeon Instinct Accelerators
- Raja Koduri : The MIOpen Deep Learning Library For Radeon Instinct
- Raja Koduri : The Performance Advantage Of Radeon Instinct & MIOpen
- Ben Sander : The Radeon Instinct MI25 Training Demonstration
- Raja Koduri : The Radeon Instinct MI8 Visual Inference Demonstration
- Raja Koduri : The Radeon Instinct On The Zen “Naples” Platform
- Raja Koduri : The First Radeon Instinct Servers
- Greg Stoner : The Radeon Open Compute (ROCm) Platform Discussion
- Raja Koduri : Closing Remarks On Radeon Instinct
We also prepared the complete video and slides of the Radeon Instinct tech briefing for your perusal :
- The Complete AMD Radeon Instinct Tech Briefing Video
- The Complete AMD Radeon Instinct Tech Briefing Slides