Intel Nervana NNP-T1000 PCIe + Mezzanine Cards Revealed!


The new Intel Nervana NNP-T1000 neural network processor comes in PCIe and Mezzanine card options designed for AI training acceleration.

Here is EVERYTHING you need to know about the Intel Nervana NNP-T1000 PCIe and Mezzanine card options!



Intel Nervana Neural Network Processors

Intel Nervana neural network processors (NNPs for short) are designed to accelerate the two key deep learning workloads – training and inference.

To target these two different tasks, Intel created two AI accelerator families – the Nervana NNP-T, optimised for training, and the Nervana NNP-I, optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

Recommended : Intel Nervana AI Accelerators : Everything You Need To Know!


Intel Nervana NNP-T1000

The Intel Nervana NNP-T1000 is not only capable of training even the most complex deep learning models, but is also highly scalable – offering near-linear scaling and efficiency.

By combining compute, memory and networking capabilities in a single ASIC, it allows for maximum efficiency with flexible and simple scaling.

Each Nervana NNP-T1000 is powered by up to 24 Tensor Processing Clusters (TPCs), and comes with 16 bi-directional Inter-Chip Links (ICL).

Each TPC supports the 32-bit floating point (FP32) and brain floating point (bfloat16) formats, allowing it to execute a wide range of deep learning primitives with high processing efficiency.
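The reason bfloat16 pairs so naturally with FP32 is that it keeps FP32's full 8-bit exponent and simply drops the low 16 mantissa bits, so the two formats cover the same numeric range. A minimal Python sketch of that relationship (using truncation for clarity; real hardware typically rounds to nearest-even):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate an FP32 value to bfloat16 by keeping its top 16 bits:
    1 sign bit + 8 exponent bits + 7 mantissa bits."""
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16

def bfloat16_bits_to_float(bits: int) -> float:
    """Expand 16 bfloat16 bits back to FP32 (low mantissa bits become zero)."""
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

# 3.140625 needs only 7 mantissa bits, so it survives the round trip exactly;
# a value like 3.14159 comes back slightly off, within one bfloat16 step.
exact = bfloat16_bits_to_float(float_to_bfloat16_bits(3.140625))
lossy = bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159))
```

Because the exponent field is unchanged, converting FP32 data to bfloat16 halves memory and bandwidth cost without the range limitations of FP16.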

Its high-speed ICL communication fabric allows for near-linear scaling, directly connecting multiple NNP-T cards within servers, between servers and even inside and across racks.
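To make "near-linear scaling" concrete: perfectly linear scaling means doubling the card count doubles training throughput, and scaling efficiency measures how close a real cluster gets to that ideal. A small illustrative sketch – the card count, per-card throughput and efficiency figures here are hypothetical examples, not published NNP-T results:

```python
def cluster_throughput(per_card: float, n_cards: int, efficiency: float) -> float:
    """Aggregate throughput of a cluster, given single-card throughput
    and a scaling efficiency (1.0 = perfectly linear scaling)."""
    return per_card * n_cards * efficiency

# Hypothetical numbers for illustration only:
single = 100.0  # samples/s on one card
near_linear = cluster_throughput(single, 32, 0.95)  # 32 cards at 95% efficiency
poor_fabric = cluster_throughput(single, 32, 0.60)  # same cards, weak interconnect
```

The gap between the two results is the cost of communication overhead, which is exactly what a fast, directly-connected fabric like the ICL is meant to minimise.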

  • High compute utilisation using Tensor Processing Clusters (TPC) with bfloat16 numeric format
  • Both on-die SRAM and on-package High-Bandwidth Memory (HBM) keep data local, reducing movement
  • Its Inter-Chip Links (ICL) glueless fabric architecture and fully-programmable router achieves near-linear scaling across multiple cards, systems and PODs
  • Available in PCIe and OCP Open Accelerator Module (OAM) form factors
  • Offers a programmable Tensor-based instruction set architecture (ISA)
  • Supports common open-source deep learning frameworks like TensorFlow, PaddlePaddle and PyTorch


Intel Nervana NNP-T1000 Models

The Intel Nervana NNP-T1000 is currently available in two form factors – a dual-slot PCI Express card, and an OAM Mezzanine card, with these specifications :

| Specifications | Intel Nervana NNP-T1300 | Intel Nervana NNP-T1400 |
|---|---|---|
| Form Factor | Dual-slot PCIe Card | OAM Mezzanine Card |
| Compliance | PCIe CEM | OAM 1.0 |
| Compute Cores | 22 TPCs | 24 TPCs |
| Frequency | 950 MHz | 1100 MHz |
| SRAM | 55 MB on-chip, with ECC | 60 MB on-chip, with ECC |
| Memory | 32 GB HBM2, with ECC | 32 GB HBM2, with ECC |
| Memory Bandwidth | 2.4 Gbps (300 MB/s) per pin | 2.4 Gbps (300 MB/s) per pin |
| Inter-Chip Link (ICL) | 16 x 112 Gbps (448 GB/s) | 16 x 112 Gbps (448 GB/s) |
| ICL Topology | Ring | Ring, Hybrid Cube Mesh, Fully Connected |
| Multi-Chassis Scaling | Yes | Yes |
| Multi-Rack Scaling | Yes | Yes |
| I/O to Host CPU | PCIe Gen3 / Gen4 x16 | PCIe Gen3 / Gen4 x16 |
| Thermal Solution | Passive, Integrated | Passive Cooling |
| TDP | 300 W | 375 W |
| Dimensions | 265.32 mm x 111.15 mm | 165 mm x 102 mm |
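The 448 GB/s ICL figure in the table follows directly from the link count and line rate, assuming 112 Gbps is the per-direction rate of each link and the total counts both directions. The arithmetic, as a quick sanity check:

```python
links = 16            # bi-directional Inter-Chip Links per NNP-T1000
per_link_gbps = 112   # line rate per link, per direction (from the table)

aggregate_gbps = links * per_link_gbps       # 1792 Gb/s per direction
per_direction_gBps = aggregate_gbps / 8      # 224 GB/s per direction
bidirectional_gBps = per_direction_gBps * 2  # 448 GB/s, matching the table
```

Note that the "2.4 Gbps (300 MB/s)" memory figure is the HBM2 per-pin data rate, not the card's total memory bandwidth, which is far higher once all HBM2 pins are counted.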


Intel Nervana NNP-T1000 PCIe Card

This is what the Intel Nervana NNP-T1000 (also known as the NNP-T1300) PCIe card looks like :

Intel Nervana NNP-T1000 PCIe card


Intel Nervana NNP-T1000 OAM Mezzanine Card

This is what the Intel Nervana NNP-T1000 (also known as NNP-T1400) Mezzanine card looks like :

Intel Nervana NNP-T1000 Mezzanine card



