
China Still Has Access To High-Speed NVIDIA AI Chips!

Military institutions, AI research institutes and universities in China are still able to source and buy NVIDIA AI chips, albeit in small quantities!

 

AMD + NVIDIA Banned From Selling AI Chips To China!

Both AMD and NVIDIA were ordered by the US government to stop selling high-performance AI chips to both China and Russia on 26 August 2022. This ban was introduced to prevent both countries from using those high-performance AI chips for military purposes.

With immediate effect, the US government banned the export of all AI chips that are equal to, or faster than, the NVIDIA A100 (and H100), or the AMD Instinct MI250 chips. NVIDIA then created slower A800 and H800 AI chips for the Chinese market, but even those were banned in October 2023.

Recommended : AMD, NVIDIA Banned From Selling AI Chips To China!

 

China Still Has Access To High-Speed NVIDIA AI Chips!

Despite the ongoing ban on the sale of high-performance AI chips to China and Russia, it appears that Chinese military-linked research institutes are still able to source and buy NVIDIA AI chips, albeit in small quantities!

According to a Reuters report on 14 January 2024, public tender documents show that dozens of military institutions, AI research institutes and universities in China with links to the military have purchased and received high-performance NVIDIA AI chips like the A100 and the H100, as well as the slower A800 and H800 AI chips.

  • Harbin Institute of Technology purchased six NVIDIA A100 chips in May 2023, to train a deep-learning model.
  • University of Electronic Science and Technology of China purchased one NVIDIA A100 chip in December 2022, for an unspecified purpose.

Both universities are subject to the US export restrictions, although the sale of those AI chips is not illegal in China.

More than 100 tenders were identified, in which Chinese state entities successfully purchased NVIDIA A100 and H100 chips, and dozens of tenders show successful purchases of the slower A800 chips.

  • Tsinghua University purchased two H100 chips in December 2023, as well as about eighty A100 chips since September 2022.
  • A Ministry of Industry and Information Technology laboratory purchased an H100 chip in December 2023.
  • An unnamed People’s Liberation Army (PLA) entity based in Wuxi sought to purchase three A100 chips in October 2023, and one H100 chip in January 2024.
  • Shandong Artificial Intelligence Institute purchased five A100 chips from Shandong Chengxiang Electronic Technology in December 2023.
  • Chongqing University purchased an NVIDIA A100 chip in January 2024.

Recommended : Can StopNCII Remove All Nude / Deep Fake Photos?!

To be clear – neither NVIDIA nor its approved retailers were found to have supplied those chips. NVIDIA said that it complies with all applicable export control laws, and requires its customers to do the same:

If we learn that a customer has made an unlawful resale to third parties, we’ll take immediate and appropriate action.

– NVIDIA spokesperson

Even though Chinese state entities still appear to be able to purchase high-performance AI chips in small numbers, the Reuters report also shows the effectiveness of the American AI chip ban.

The training of large artificial intelligence models requires thousands of high-performance AI chips, and China does not appear to be able to procure more than a handful of these critical chips.
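
To put that claim into perspective, here is a minimal back-of-envelope sketch in Python. The model size, token count, utilization and training window are my own illustrative assumptions (roughly a Llama 2 70B-class training run), not figures from the Reuters report; only the 312 TFLOPS FP16/BF16 figure comes from the A100 specifications listed in the tables further down this page.

```python
# Back-of-envelope estimate of how many A100-class GPUs one large training
# run needs. Every value marked "assumed" is illustrative only.

PARAMS = 70e9                   # model parameters (assumed, Llama 2 70B-class)
TOKENS = 2e12                   # training tokens (assumed)
FLOPS_PER_TOKEN = 6 * PARAMS    # common ~6N FLOPs-per-token rule of thumb

A100_PEAK_FLOPS = 312e12        # A100 FP16/BF16 Tensor Core peak (NVIDIA specs)
UTILIZATION = 0.40              # sustained utilization (assumed)
TRAIN_DAYS = 30                 # target wall-clock training time (assumed)

total_flops = FLOPS_PER_TOKEN * TOKENS
flops_per_gpu = A100_PEAK_FLOPS * UTILIZATION * TRAIN_DAYS * 24 * 3600
gpus_needed = total_flops / flops_per_gpu

print(f"Total training compute : {total_flops:.2e} FLOPs")
print(f"A100s needed in ~{TRAIN_DAYS} days : {gpus_needed:,.0f}")  # roughly 2,600
```

Under these assumptions, a single training run needs on the order of 2,600 GPUs, which is why a handful of gray-market A100 and H100 chips does not move the needle.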

That does not mean China is slowing down its AI initiatives. Instead of relying on “gray imports” of AMD or NVIDIA AI chips, Chinese entities are doing their best to switch to local alternatives. In 2023, HUAWEI received orders for some 5,000 of its Ascend 910B chips.

Chinese mega-companies like Baidu, Alibaba, and Tencent also have their own in-house AI chips like the Kunlunxin Gen 2, Hanguang 800, and Zixiao.

 

Please Support My Work!

Support my work through a bank transfer /  PayPal / credit card!

Name : Adrian Wong
Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp

Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.

He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.


 

Recommended Reading

Go Back To > Business | Computer | Tech ARP

 

Support Tech ARP!

Please support us by visiting our sponsors, participating in the Tech ARP Forums, or donating to our fund. Thank you!

How NVIDIA A800 Bypasses US Chip Ban On China!

Find out how NVIDIA created the new A800 GPU to bypass the US ban on sale of advanced chips to China!

 

NVIDIA Offers A800 GPU To Bypass US Ban On China!

Two months after it was banned by the US government from selling high-performance AI chips to China, NVIDIA introduced a new A800 GPU designed to bypass those restrictions.

The new NVIDIA A800 is based on the same Ampere microarchitecture as the A100, which was used as the performance baseline by the US government.

Despite its numerically larger model number (the lucky number 8 was probably picked to appeal to the Chinese market), this is a detuned part, with slightly reduced performance to meet export control limitations.

NVIDIA said in a statement:

The NVIDIA A800 GPU, which went into production in Q3 2022, is another alternative product to the NVIDIA A100 GPU for customers in China. The A800 meets the U.S. government’s clear test for reduced export control and cannot be programmed to exceed it.

NVIDIA is probably hoping that the slightly slower NVIDIA A800 GPU will allow it to continue supplying China with A100-level chips that are used to power supercomputers and high-performance datacenters for artificial intelligence applications.

As I will show you in the next section, except in very high-end applications, there won’t be a truly significant performance difference between the A800 and the A100. So NVIDIA customers who want or need the A100 will have no issue opting for the A800 instead.

However, this can only be a stopgap fix, as NVIDIA is stuck selling chips at or below the A100 performance level to China, until and unless the US government changes its mind.

Read more : AMD, NVIDIA Banned From Selling AI Chips To China!

 

How Fast Is The NVIDIA A800 GPU?

The US government considers the NVIDIA A100 as the performance baseline for its export control restrictions on China.

Any chip equal to, or faster than, that Ampere-based chip, which was launched on May 14, 2020, cannot be sold or exported to China. But as they say, the devil is in the details.

The US government didn’t specify just how much slower chips must be to qualify for export to China. So NVIDIA could technically get away with slightly detuning the A100, while offering almost the same performance level.

And that is what NVIDIA did with the A800 – it is basically the A100 with its NVLink interconnect bandwidth cut by 33%, from 600 GB/s to 400 GB/s. NVIDIA also limited the maximum number of GPUs supported in a single server to 8.

That only slightly reduces the performance of A800 servers compared to A100 servers, while offering the same GPU compute performance. Most users will not notice the difference.

The only significant impediment is on the very high-end – Chinese companies are now restricted to a maximum of eight GPUs per server, instead of up to sixteen.
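
As a rough illustration of what that NVLink cut actually costs, here is a short Python sketch. The 600 GB/s and 400 GB/s figures come from the specification tables below; the 10-billion-parameter model, the FP16 gradients, and the simplified ring all-reduce traffic model are my own assumptions.

```python
# Rough sketch of the A800's NVLink cut. Compute throughput is identical on
# both chips; only the GPU-to-GPU interconnect bandwidth changes.

A100_NVLINK_GBPS = 600   # GB/s, from the spec tables below
A800_NVLINK_GBPS = 400   # GB/s, from the spec tables below

reduction = 1 - A800_NVLINK_GBPS / A100_NVLINK_GBPS
print(f"NVLink bandwidth reduction: {reduction:.0%}")  # -> 33%

# Hypothetical gradient sync: 10 billion parameters in FP16 (2 bytes each),
# with a ring all-reduce moving roughly 2x the gradient volume per GPU.
grad_bytes = 10e9 * 2
traffic_per_gpu = 2 * grad_bytes

for name, gbps in [("A100", A100_NVLINK_GBPS), ("A800", A800_NVLINK_GBPS)]:
    seconds = traffic_per_gpu / (gbps * 1e9)
    print(f"{name}: ~{seconds * 1000:.0f} ms per gradient sync")  # ~67 ms vs ~100 ms
```

In this simplified model, each gradient sync takes about 50% longer on the A800, but because the compute specifications are untouched, the slowdown only matters when training is actually bottlenecked by the interconnect, which mostly happens at very large scale.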

To show you what I mean, I dug into the A800 specifications, and compared them to the A100 below:

NVIDIA A100 vs A800 : 80GB PCIe Version

Specifications         | A100 80GB PCIe        | A800 80GB PCIe
FP64                   | 9.7 TFLOPS            | 9.7 TFLOPS
FP64 Tensor Core       | 19.5 TFLOPS           | 19.5 TFLOPS
FP32                   | 19.5 TFLOPS           | 19.5 TFLOPS
Tensor Float 32        | 156 TFLOPS            | 156 TFLOPS
BFLOAT16 Tensor Core   | 312 TFLOPS            | 312 TFLOPS
FP16 Tensor Core       | 312 TFLOPS            | 312 TFLOPS
INT8 Tensor Core       | 624 TOPS              | 624 TOPS
GPU Memory             | 80 GB HBM2            | 80 GB HBM2
GPU Memory Bandwidth   | 1,935 GB/s            | 1,935 GB/s
TDP                    | 300 W                 | 300 W
Multi-Instance GPU     | Up to 7 MIGs @ 10 GB  | Up to 7 MIGs @ 10 GB
Interconnect           | NVLink : 600 GB/s     | NVLink : 400 GB/s
                       | PCIe Gen4 : 64 GB/s   | PCIe Gen4 : 64 GB/s
Server Options         | 1-8 GPUs              | 1-8 GPUs

NVIDIA A100 vs A800 : 80GB SXM Version

Specifications         | A100 80GB SXM         | A800 80GB SXM
FP64                   | 9.7 TFLOPS            | 9.7 TFLOPS
FP64 Tensor Core       | 19.5 TFLOPS           | 19.5 TFLOPS
FP32                   | 19.5 TFLOPS           | 19.5 TFLOPS
Tensor Float 32        | 156 TFLOPS            | 156 TFLOPS
BFLOAT16 Tensor Core   | 312 TFLOPS            | 312 TFLOPS
FP16 Tensor Core       | 312 TFLOPS            | 312 TFLOPS
INT8 Tensor Core       | 624 TOPS              | 624 TOPS
GPU Memory             | 80 GB HBM2            | 80 GB HBM2
GPU Memory Bandwidth   | 2,039 GB/s            | 2,039 GB/s
TDP                    | 400 W                 | 400 W
Multi-Instance GPU     | Up to 7 MIGs @ 10 GB  | Up to 7 MIGs @ 10 GB
Interconnect           | NVLink : 600 GB/s     | NVLink : 400 GB/s
                       | PCIe Gen4 : 64 GB/s   | PCIe Gen4 : 64 GB/s
Server Options         | 4 / 8 / 16 GPUs       | 4 / 8 GPUs

NVIDIA A100 vs A800 : 40GB PCIe Version

Specifications         | A100 40GB PCIe        | A800 40GB PCIe
FP64                   | 9.7 TFLOPS            | 9.7 TFLOPS
FP64 Tensor Core       | 19.5 TFLOPS           | 19.5 TFLOPS
FP32                   | 19.5 TFLOPS           | 19.5 TFLOPS
Tensor Float 32        | 156 TFLOPS            | 156 TFLOPS
BFLOAT16 Tensor Core   | 312 TFLOPS            | 312 TFLOPS
FP16 Tensor Core       | 312 TFLOPS            | 312 TFLOPS
INT8 Tensor Core       | 624 TOPS              | 624 TOPS
GPU Memory             | 40 GB HBM2            | 40 GB HBM2
GPU Memory Bandwidth   | 1,555 GB/s            | 1,555 GB/s
TDP                    | 250 W                 | 250 W
Multi-Instance GPU     | Up to 7 MIGs @ 5 GB   | Up to 7 MIGs @ 5 GB
Interconnect           | NVLink : 600 GB/s     | NVLink : 400 GB/s
                       | PCIe Gen4 : 64 GB/s   | PCIe Gen4 : 64 GB/s
Server Options         | 1-8 GPUs              | 1-8 GPUs
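
For readers who prefer the short version of these tables, here is a minimal Python sketch that encodes the 80GB SXM columns from the table above and prints only the rows where the two chips differ. The dictionary keys are simply my shorthand for the table's row labels.

```python
# Encode the 80GB SXM columns from the table above and print only the rows
# where the A800 differs from the A100.

a100_sxm = {
    "FP64": "9.7 TFLOPS",
    "FP64 Tensor Core": "19.5 TFLOPS",
    "FP32": "19.5 TFLOPS",
    "Tensor Float 32": "156 TFLOPS",
    "BFLOAT16 / FP16 Tensor Core": "312 TFLOPS",
    "INT8 Tensor Core": "624 TOPS",
    "GPU Memory": "80 GB HBM2",
    "GPU Memory Bandwidth": "2,039 GB/s",
    "TDP": "400 W",
    "NVLink": "600 GB/s",
    "Server Options": "4 / 8 / 16 GPUs",
}
a800_sxm = {**a100_sxm, "NVLink": "400 GB/s", "Server Options": "4 / 8 GPUs"}

for spec, a100_value in a100_sxm.items():
    if a800_sxm[spec] != a100_value:
        print(f"{spec:14} : A100 = {a100_value:15} | A800 = {a800_sxm[spec]}")

# Output shows that only the NVLink bandwidth and the supported server
# configurations differ; every compute, memory and power figure is identical.
```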

 
