
The Alibaba Hanguang 800 (含光 800) AI NPU Explained!

At the Apsara Computing Conference 2019, Alibaba Group unveiled details of their first AI inference NPU – the Hanguang 800 (含光 800).

Here is EVERYTHING you need to know about the Alibaba Hanguang 800 AI inference NPU!

Updated @ 2019-09-27 : Added more details, including a performance comparison against its main competitors.

Originally posted @ 2019-09-25

 

What Is The Alibaba Hanguang 800?

The Alibaba Hanguang 800 is a neural processing unit (NPU) designed specifically to accelerate machine learning and AI inference tasks.

 

What Does Hanguang Mean?

The name 含光 (Hanguang) literally means “contains light”.

While the name may suggest that it uses photonics, such light-based technology is still at least a decade away from commercialisation.

 

What Are The Hanguang 800 Specifications?

Not much is known about the Hanguang 800, other than that it has 17 billion transistors and is fabricated on a 12 nm process.

Also, it is designed for inference only, unlike the HUAWEI Ascend 910 AI chip, which can handle both training and inference.

Recommended : 3rd Gen X-Dragon Architecture by Alibaba Cloud Explained!

 

Who Designed The Hanguang 800?

The Hanguang 800 was developed in just 7 months by Alibaba’s research unit, T-Head, followed by a 3-month tape-out.

T-Head, whose Chinese name Pingtouge (平头哥) means honey badger in English, is responsible for designing chips for cloud and edge computing under Alibaba Cloud / Aliyun.

Earlier this year, T-Head revealed a high-performance IoT processor called XuanTie 910.

Based on the open-source RISC-V instruction set, the 16-core XuanTie 910 is targeted at heavy-duty IoT applications like edge servers, networking gateways, and self-driving cars.

 

How Fast Is Hanguang 800?

Alibaba claims that the Hanguang 800 “largely” outpaces the industry average performance, with image processing efficiency about 12X better than GPUs :

  • Single chip performance : 78,563 images per second (IPS)
  • Computational efficiency : 500 IPS per watt (ResNet-50 Inference Test)

Here is how the Hanguang 800 compares against its main competitors :

| | Hanguang 800 | Habana Goya | Cambricon MLU270 | NVIDIA T4 | NVIDIA P4 |
|---|---|---|---|---|---|
| Fab Process | 12 nm | 16 nm | 16 nm | 12 nm | 16 nm |
| Transistors | 17 billion | NA | NA | 13.6 billion | 7.2 billion |
| Performance (ResNet-50) | 78,563 IPS | 15,433 IPS | 10,000 IPS | 5,402 IPS | 1,721 IPS |
| Peak Efficiency (ResNet-50) | 500 IPS/W | 150 IPS/W | 143 IPS/W | 78 IPS/W | 52 IPS/W |
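
For context, figures like IPS and IPS/W are typically derived from a simple timed inference loop. Below is a minimal sketch of such a measurement in PyTorch – this is not Alibaba’s actual test harness, and the batch size, iteration count and power figure are purely illustrative assumptions.

```python
# Minimal sketch of a ResNet-50 inference throughput (IPS) measurement.
# NOT Alibaba's test harness - batch size, iteration count and the
# power figure below are illustrative assumptions.
import time
import torch
from torchvision.models import resnet50

model = resnet50().eval()             # untrained weights are fine for a timing run
batch = torch.randn(32, 3, 224, 224)  # assumed batch size of 32

with torch.no_grad():
    for _ in range(5):                # warm-up iterations
        model(batch)

    iterations = 50
    start = time.perf_counter()
    for _ in range(iterations):
        model(batch)
    elapsed = time.perf_counter() - start

ips = (iterations * batch.shape[0]) / elapsed
power_watts = 75.0                    # placeholder: read from a power meter in practice
print(f"Throughput : {ips:,.0f} IPS")
print(f"Efficiency : {ips / power_watts:.1f} IPS/W")
```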

Recommended : 2nd Gen EPYC – Everything You Need To Know Summarised!

 

Where Will Hanguang 800 Be Used?

The Hanguang 800 chip will be used exclusively by Alibaba to power their own business operations, especially in product search, automatic translation, personalised recommendations, and advertising.

According to Alibaba, merchants upload a billion product images to Taobao every day. It used to take their previous platform an hour to categorise those pictures, and then tailor search results and personalise recommendations for millions of Taobao customers.

With the Hanguang 800, they claim that the Taobao platform now takes just 5 minutes to complete the same task – a 12X reduction in time!
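
As a quick sanity check, the speed-up implied by those figures can be worked out with some back-of-envelope arithmetic, using only the numbers quoted above :

```python
# Back-of-envelope check of the Taobao claim, using only the figures quoted above.
images_per_day = 1_000_000_000                     # product images uploaded to Taobao daily
old_minutes, new_minutes = 60, 5                   # previous platform vs Hanguang 800

speedup = old_minutes / new_minutes                # 12.0 - matches the claimed 12X reduction
implied_ips = images_per_day / (new_minutes * 60)  # roughly 3.3 million images per second, platform-wide
print(f"Speed-up : {speedup:.0f}X | Implied throughput : {implied_ips:,.0f} IPS")
```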

Alibaba Cloud will also be using it in their smart city projects. In Hangzhou, where it is already deployed, they previously used 40 GPUs to process video feeds with a latency of 300 ms.

After migrating to four Hanguang 800 NPUs, they were able to process the same video feeds with half the latency – just 150 ms.
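
Likewise, the Hangzhou figures can be sanity-checked with the same kind of back-of-envelope arithmetic, again using only the numbers quoted above :

```python
# Back-of-envelope check of the Hangzhou figures quoted above.
gpus_before, latency_before_ms = 40, 300
npus_after, latency_after_ms = 4, 150

print(f"Latency reduction : {latency_before_ms / latency_after_ms:.0f}X")  # 2X - half the latency
print(f"Chip count reduction : {gpus_before / npus_after:.0f}X")           # 10X fewer chips
```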

 

Can We Buy Or Rent The Hanguang 800?

No, Alibaba will not be selling the Hanguang 800 NPU. Instead, they are offering it as a new AI cloud computing service.

Developers can now request a Hanguang 800 cloud compute quota, which Alibaba Cloud claims is 100% more cost-effective than traditional GPUs.

 

Are There No Other Alternatives For Alibaba?

In our opinion, this is Alibaba’s way of preparing for an escalation of the US-China trade war that has already savaged HUAWEI.

While Alibaba certainly has a few AI inference accelerator alternatives – from AMD and NVIDIA, for example – it makes sense for them to spend the money and time to develop their own AI inference chip.

In the long term, the Chinese government wants to build a domestic capability to design and fabricate their own computer chips for national security reasons.

Recommended : The HUAWEI Trump Ban – Everything You Need To Know!

 
