Tag Archives: AI

NVIDIA Offers A800 GPU To Bypass US Ban On China!


Find out how NVIDIA created the new A800 GPU to bypass the US ban on the sale of advanced chips to China!

 

NVIDIA Offers A800 GPU To Bypass US Ban On China!

Two months after it was banned by the US government from selling high-performance AI chips to China, NVIDIA introduced a new A800 GPU designed to bypass those restrictions.

The new NVIDIA A800 is based on the same Ampere microarchitecture as the A100, which was used as the performance baseline by the US government.

Despite its numerically larger model number (the lucky number 8 was probably picked to appeal to the Chinese), this is a detuned part, with slightly reduced performance to meet export control limitations.

The NVIDIA A800 GPU, which went into production in Q3, is another alternative product to the NVIDIA A100 GPU for customers in China.

The A800 meets the U.S. government’s clear test for reduced export control and cannot be programmed to exceed it.

NVIDIA is probably hoping that the slightly slower NVIDIA A800 GPU will allow it to continue supplying China with A100-level chips that are used to power supercomputers and high-performance datacenters for artificial intelligence applications.

As I will show you in the next section, except in very high-end applications, there won't be a truly significant performance difference between the A800 and the A100. So NVIDIA customers who want or need the A100 will have no issue opting for the A800 instead.

However, this can only be a stopgap fix, as NVIDIA is stuck selling A100-level chips to China until and unless the US government changes its mind.

Read more : AMD, NVIDIA Banned From Selling AI Chips To China!

 

How Fast Is The NVIDIA A800 GPU?

The US government considers the NVIDIA A100 as the performance baseline for its export control restrictions on China.

Any chip equal to or faster than that Ampere-based chip, which was launched on May 14, 2020, cannot be sold or exported to China. But as they say, the devil is in the details.

The US government didn't specify just how much slower chips must be to qualify for export to China. So NVIDIA could technically get away with slightly detuning the A100, while offering almost the same performance level.

And that is exactly what NVIDIA did with the A800 – it is basically the A100 with a 33% slower NVLink interconnect (400 GB/s, down from 600 GB/s). NVIDIA also limited the maximum number of GPUs supported in a single server to eight.

That only slightly reduces the performance of A800 servers compared to A100 servers, while offering the same GPU compute performance. Most users will not notice the difference.

The only significant impediment is on the very high-end – Chinese companies are now restricted to a maximum of eight GPUs per server, instead of up to sixteen.
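Just how small is the impact of that 33% interconnect cut? Here is a back-of-envelope sketch that models only the communication step of a data-parallel training job, using the ideal ring all-reduce formula. The gradient size and per-step compute time are hypothetical figures picked for illustration, not measurements:

```python
# Back-of-envelope look at how a 33% slower NVLink affects one data-parallel
# training step. The workload numbers below are hypothetical, for illustration.

NVLINK_A100_GBS = 600   # GB/s, A100 NVLink bandwidth (per spec sheet)
NVLINK_A800_GBS = 400   # GB/s, A800 NVLink bandwidth (per spec sheet)

GRADIENT_GB = 2.0       # hypothetical: ~1 billion FP16 parameters
COMPUTE_MS  = 150.0     # hypothetical: per-step compute time, in ms
N_GPUS      = 8

def ring_allreduce_ms(size_gb: float, bw_gbs: float, n: int) -> float:
    """Ideal ring all-reduce: each GPU sends/receives 2*(n-1)/n of the data."""
    data_moved_gb = 2 * (n - 1) / n * size_gb
    return data_moved_gb / bw_gbs * 1000  # milliseconds

for name, bw in [("A100", NVLINK_A100_GBS), ("A800", NVLINK_A800_GBS)]:
    comm = ring_allreduce_ms(GRADIENT_GB, bw, N_GPUS)
    print(f"{name}: {comm:.1f} ms comm, {COMPUTE_MS + comm:.1f} ms per step")
```

With these hypothetical numbers, the A800 step works out only about 2% slower – which is why, as noted above, most users will not notice the difference.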

To show you what I mean, I dug into the A800 specifications, and compared them to the A100 below:

NVIDIA A100 vs A800 : 80GB PCIe Version

Specifications       | A100 80GB PCIe       | A800 80GB PCIe
FP64                 | 9.7 TFLOPS           | 9.7 TFLOPS
FP64 Tensor Core     | 19.5 TFLOPS          | 19.5 TFLOPS
FP32                 | 19.5 TFLOPS          | 19.5 TFLOPS
Tensor Float 32      | 156 TFLOPS           | 156 TFLOPS
BFLOAT16 Tensor Core | 312 TFLOPS           | 312 TFLOPS
FP16 Tensor Core     | 312 TFLOPS           | 312 TFLOPS
INT8 Tensor Core     | 624 TOPS             | 624 TOPS
GPU Memory           | 80 GB HBM2           | 80 GB HBM2
GPU Memory Bandwidth | 1,935 GB/s           | 1,935 GB/s
TDP                  | 300 W                | 300 W
Multi-Instance GPU   | Up to 7 MIGs @ 10 GB | Up to 7 MIGs @ 10 GB
Interconnect         | NVLink : 600 GB/s    | NVLink : 400 GB/s
                     | PCIe Gen4 : 64 GB/s  | PCIe Gen4 : 64 GB/s
Server Options       | 1-8 GPUs             | 1-8 GPUs

NVIDIA A100 vs A800 : 80GB SXM Version

Specifications       | A100 80GB SXM        | A800 80GB SXM
FP64                 | 9.7 TFLOPS           | 9.7 TFLOPS
FP64 Tensor Core     | 19.5 TFLOPS          | 19.5 TFLOPS
FP32                 | 19.5 TFLOPS          | 19.5 TFLOPS
Tensor Float 32      | 156 TFLOPS           | 156 TFLOPS
BFLOAT16 Tensor Core | 312 TFLOPS           | 312 TFLOPS
FP16 Tensor Core     | 312 TFLOPS           | 312 TFLOPS
INT8 Tensor Core     | 624 TOPS             | 624 TOPS
GPU Memory           | 80 GB HBM2           | 80 GB HBM2
GPU Memory Bandwidth | 2,039 GB/s           | 2,039 GB/s
TDP                  | 400 W                | 400 W
Multi-Instance GPU   | Up to 7 MIGs @ 10 GB | Up to 7 MIGs @ 10 GB
Interconnect         | NVLink : 600 GB/s    | NVLink : 400 GB/s
                     | PCIe Gen4 : 64 GB/s  | PCIe Gen4 : 64 GB/s
Server Options       | 4 / 8 / 16 GPUs      | 4 / 8 GPUs

NVIDIA A100 vs A800 : 40GB PCIe Version

Specifications       | A100 40GB PCIe       | A800 40GB PCIe
FP64                 | 9.7 TFLOPS           | 9.7 TFLOPS
FP64 Tensor Core     | 19.5 TFLOPS          | 19.5 TFLOPS
FP32                 | 19.5 TFLOPS          | 19.5 TFLOPS
Tensor Float 32      | 156 TFLOPS           | 156 TFLOPS
BFLOAT16 Tensor Core | 312 TFLOPS           | 312 TFLOPS
FP16 Tensor Core     | 312 TFLOPS           | 312 TFLOPS
INT8 Tensor Core     | 624 TOPS             | 624 TOPS
GPU Memory           | 40 GB HBM2           | 40 GB HBM2
GPU Memory Bandwidth | 1,555 GB/s           | 1,555 GB/s
TDP                  | 250 W                | 250 W
Multi-Instance GPU   | Up to 7 MIGs @ 10 GB | Up to 7 MIGs @ 10 GB
Interconnect         | NVLink : 600 GB/s    | NVLink : 400 GB/s
                     | PCIe Gen4 : 64 GB/s  | PCIe Gen4 : 64 GB/s
Server Options       | 1-8 GPUs             | 1-8 GPUs

 

Please Support My Work!

Support my work through a bank transfer /  PayPal / credit card!

Name : Adrian Wong
Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp

Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.

He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.

 


 

Support Tech ARP!

Please support us by visiting our sponsors, participating in the Tech ARP Forums, or donating to our fund. Thank you!

NetApp BlueXP : A New Unified Hybrid MultiCloud Control Plane!

NetApp just introduced BlueXP – a new unified control plane for hybrid multicloud environments. Here is what you need to know.

 

NetApp BlueXP : A New Unified Hybrid MultiCloud Control Plane!

On November 3, 2022, NetApp (NASDAQ: NTAP) announced the availability of NetApp BlueXP – a unified control plane designed for easy hybrid multicloud management for storage and data services across on-premises and cloud environments.

NetApp BlueXP allows users to manage their broader hybrid multicloud data estate, including on-premises unified storage and first-party native storage with the leading public cloud providers like Microsoft Azure, Google Cloud and AWS.

Offering a simple, yet powerful experience driven by AIOps, BlueXP delivers integrated, broad data service capabilities to deploy, automate, discover, manage, protect, govern and optimize data, infrastructure, and the business processes that support them – with the flexible consumption options required in today’s cloud-led environment.

“We are excited to deliver a whole new, unified cloud experience that is simple, secure, sustainable and cost effective,” said Sanjay Rohatgi, Senior Vice President and General Manager, NetApp Asia Pacific & Japan. “With BlueXP, our customers and partners in Asia Pacific can better compete in the hyper-dynamic digital economy of tomorrow and realize the full potential of a true hybrid multicloud world.”

NetApp BlueXP is the preferred method to manage NetApp ONTAP, NetApp’s industry-leading data management software, both in the cloud and on-premises.

The latest release of ONTAP, announced today, contains over twenty major innovations including a new tamper-proof snapshot feature and integrated AIOps-driven anti-ransomware protection, making ONTAP the leading option for secure data storage.

In addition, an expansion of NetApp’s innovative unified multi-protocol technology allows simultaneous NAS file and S3 object access to the same data, increasing the flexibility of ONTAP as a repository for massive data lakes used for today’s modern AI/ML pipelines.
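To make that multi-protocol idea concrete, here is a minimal sketch of reading, over the S3 protocol, data that was written to the same ONTAP volume as NAS files. It assumes an ONTAP bucket has been mapped onto that volume; the endpoint URL, bucket name, object key and credentials are all hypothetical placeholders, so check NetApp's ONTAP S3 documentation for the real setup:

```python
# Minimal sketch: reading, via the S3 protocol, data that lives on an ONTAP
# volume also exported as NAS storage. The endpoint, bucket, key and
# credentials are hypothetical placeholders, not real NetApp values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ontap-s3.example.internal",  # hypothetical endpoint
    aws_access_key_id="ONTAP_ACCESS_KEY",              # hypothetical
    aws_secret_access_key="ONTAP_SECRET_KEY",          # hypothetical
)

# A file written over NFS/SMB as /datalake/train/batch-0001.parquet would
# appear as an object key in the mapped bucket.
obj = s3.get_object(Bucket="datalake", Key="train/batch-0001.parquet")
print(obj["ContentLength"], "bytes read via S3")
```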

“Organisations today have increasingly moved to hybrid multicloud environments to accelerate their digital transformation and drive growth, even in uncertain times. But in managing these environments, companies face daunting challenges and inefficiencies that can impede innovation.”

“With BlueXP, NetApp is leading the way to a more ‘evolved cloud’ to simplify and automate critical operations across on-premises and public clouds in order to help organizations drive business impact and improve customer experience.”

“Today’s cloud environments are complex. Organizations are looking for a better cloud experience – simpler, streamlined, governed, and optimized for performance and cost across their entire hybrid multicloud environment,” said Archana Venkatraman, Research Director, Cloud Data Management, IDC. “An ‘evolved cloud’ is one of the most grounded, AI-driven, and practical approaches to cloud management as a whole – and organizations will quickly benefit from migration to operations, to FinOps, to innovation.”

“With the launch of BlueXP, NetApp is uniquely positioned to help organizations unlock the promise of the cloud by making infrastructure, applications and data true assets to their business,” said George Kurian, Chief Executive Officer at NetApp. “By taking an evolved cloud approach, customers can integrate cloud into their architecture and operations, eliminate complexity and increase their speed of innovation to deliver quickly on the business outcomes that matter most.”

 

NetApp BlueXP Capabilities

NetApp BlueXP capabilities include:

  • Unified Storage Management: The SaaS-delivered BlueXP global control plane gives a single point of visibility and management over wide-ranging hybrid multicloud environments. This includes the ability to manage NetApp AFF, FAS, StorageGRID, and E-Series on-premises storage, as well as the major clouds with Amazon FSx for NetApp ONTAP, Azure NetApp Files, Google Cloud Volumes Service and Cloud Volumes ONTAP, all in a single console.
  • AIOps-Driven Health: Integrated AI/ML-driven automation reduces manpower demands, resource loads, and risk profile, while AI-enabled health and status monitoring not only alerts users to infrastructure and workload issues, but also offers proactive guidance to avoid trouble scenarios. BlueXP integrates NetApp’s leading Active IQ technology for always-on telemetry across the hybrid multicloud.
  • Cyber Resilience: Unified control of data protection and security with an integrated zero-trust model. A single ransomware dashboard provides company-wide visibility into ransomware vulnerabilities with the ability to fix many issues automatically with a single click.
  • Governance at a Glance: A complete view of the digital estate to monitor compliance and permissions. The AI/ML capability audits both user and data level activity, immediately detecting anomalies and taking prescribed actions.
  • Seamless Mobility: Integrated data movers allow for copying, syncing, tiering, and caching data across all major clouds and the data center as easily as “drag and drop.” Integrated security and efficiencies ensure data is protected in transit and stored on the lowest-cost storage tier possible.
  • Flexible Consumption: BlueXP allows customers to only pay for the capabilities they need, billed on usage. A Digital Wallet allows for licenses for data services to be easily interchanged as an enterprise’s needs change. NetApp Keystone, the leading Storage-as-a-Service (STaaS) offering, is integrated into BlueXP to allow customers to manage their consumption-based data center storage side by side with their cloud storage.

 

NetApp BlueXP Offer

NetApp currently has a special BlueXP offer for new users (excluding current Cloud Manager users):

  • 30 days of free BlueXP data protection
  • 30 days of free BlueXP tiering
  • 1 TB of free BlueXP governance and ransomware protection.

Just register here for the NetApp BlueXP offer!

 


 


 


How Biren Got Its Own AI Chips Banned At TSMC!

TSMC stopped making artificial intelligence chips for China’s Biren Technology, and it was all Biren’s own fault!

 

TSMC Stops Making Biren AI Chips Over US Sanctions

TSMC – Taiwan Semiconductor Manufacturing Company – has suspended production of advanced artificial intelligence (AI) chips for China’s Biren Technology.

TSMC was forced to make this decision after public domain information revealed that the Biren BR100 and BR104 chips outperformed the NVIDIA A100 chip, which was used as the baseline of US sanctions.

While TSMC has not reached a conclusion on whether the top-of-the-line Biren BR100 or the slower BR104 meets or exceeds the US government’s threshold on advanced AI chip technology restrictions, it decided to stop production and supply of the Biren chips for now.

For TSMC to continue producing BR100 or BR104 chips, Biren must now prove that their chips do not offer “peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the [NVIDIA] A100“, or get an export licence from the US Department of Commerce.

And believe it or not – it was Biren Technology that created this mess for itself!

 

How Biren Screwed Up Its Own BR100 AI Chips

Biren, which is one of China’s most promising semiconductor design firms, earlier claimed that its AI chips being produced by TSMC were not covered by the latest US export restrictions.

However, its own website touts that the BR100 family of chips offers “world-class performance“, and has “improved by more than 3X” compared to mainstream rivals.

On top of that, Biren actually released a press statement on September 9, 2022, declaring that the slower BR104 was proven in MLPerf benchmarks to beat the NVIDIA A100!

Releasing such a statement less than 2 weeks after the US government ordered both AMD and NVIDIA to stop exporting their MI250 and A100 and faster AI chips to China is either amazing chutzpah, or a combination of hubris and idiocy.

Either way, the US government took notice, and TSMC came under pressure to comply with American export restrictions. Awesome PR, but stupid move, Biren…

Take a look at the benchmark results that Biren itself released into the public domain, showing that the slower BR104 chip was between 27% and 58% faster than the NVIDIA A100.

With such results, the BR104 would certainly fall under the latest US tech export restrictions. No wonder TSMC quickly stopped making and supplying Biren BR100 series chips.

As powerful as the BR100 and BR104 GPGPU chips may be, they are now dead in the water as TSMC will not manufacture them anymore, and Biren Technology has no plausible alternatives for 7nm fabrication.

Read more : US Targets Chinese Military With New Chip Export Ban!

 

Biren BR100 AI Chips That TSMC Stopped Producing

The Biren BR100 and slower BR104 are General Purpose GPU (GPGPU) chips, which are targeted at artificial intelligence applications.

They are both fabricated on the TSMC 7nm process technology, and use chiplet and 2.5D Chip-on-Wafer-on-Substrate (CoWoS) packaging technologies to achieve high yield and high performance.

The Biren BR100 family of GPGPU chips supports up to eight independent virtual instances (SVI) – each physically isolated with its own hardware resources, for improved security.

Their chips are designed with a proprietary Blink high-speed GPU interconnect bus offering bandwidth of up to 448 GB/s, with the ability to connect up to 8 cards in a single node, using state-of-the-art PCI Express 5.0.

Biren Technology offers two BR100-based products – the Bili 100P OCP Accelerator Module (OAM), and the Bili 104P PCI Express accelerator card.

 


 


 


Intel’s Habana Labs Cuts Over 10% Of Workforce!

Artificial intelligence chip developer Habana Labs just laid off over 10% of its workforce, even as Intel plans to lay off up to 20% of its own employees.

 

Intel’s Habana Labs Cuts Over 10% Of Workforce!

Intel acquired artificial intelligence chip developer Habana Labs in 2019 for $2 billion. The acquisition allowed Habana Labs to rapidly increase its workforce from 180 people to over 900.

That breakneck expansion has not only come to a shuddering halt, it is being reversed – Habana Labs is laying off 100 of its employees. According to an Intel statement:

Habana Labs assesses and updates its technical and business focus from time to time in order to adapt to the current business reality and to continue and improve its competitiveness.

As part of these processes, it makes adjustments to its workforce and the balance between different disciplines from time to time. This is a normal process which occurs constantly and allows Habana to continue and develop attractive and competitive products and solutions.

Habana Labs was founded by David Dahan and Ran Halutz in San Jose in 2016, for the purpose of developing processors optimised for Artificial Intelligence (AI) applications.

Both Dahan and Halutz were former executives of PrimeSense Limited, which was acquired by Apple for $360 million in 2013. Its first investor and chairman was Israeli tech entrepreneur Avigdor Willenz.

Recommended : Intel Planning Major Layoffs, Amid Market Downturn!

Habana Labs Cuts Employees Before Major Intel Layoffs

Habana Labs functions as an independent unit within Intel, which is why it announced its layoffs before Intel announced its own major layoffs later this month.

According to Bloomberg News, Intel is planning major layoffs of its staff, “as early as this month” – around the same time its third quarter earnings report is announced on October 27.

This reduction in headcount will be substantial, hitting Intel’s sales and marketing team the hardest – around 20% of its members are expected to receive pink slips.

This decision comes after two years of booming sales during the COVID-19 pandemic, and just as Intel is set to receive billions in funding from the US government under the CHIPS Act.

The PC market is currently struggling due to high inflation, new US-Chinese tech restrictions, and the Russian invasion of Ukraine.

Gartner recently announced that worldwide PC shipments totalled just 68 million units in the third quarter of 2022 – a 19.5% drop from a year ago. That was the steepest decline it had recorded since it started tracking the market in the mid-1990s.

 


 


 


Did James Earl Jones Sign Over Rights To His Voice?!

James Earl Jones did NOT sign over the Darth Vader voice rights to Lucasfilm, or a Ukrainian AI company!

Take a look at the viral claims, and find out what the facts really are!

 

Claim : James Earl Jones Signed Over Darth Vader Voice Rights!

Many media outlets breathlessly reported that James Earl Jones, who voiced Darth Vader, had apparently signed over the rights to his own voice!

They claimed that he was retiring from voicing Darth Vader, and signed over the rights to his voice to Respeecher, a Ukrainian AI technology company.

Gizmodo : James Earl Jones is Retiring from Darth Vader, But an AI Firm Owns His Voice

New York Post : James Earl Jones gives rights to Darth Vader voice to Ukrainian AI company

Mashable : James Earl Jones signs over rights to voice of Darth Vader to be replaced by AI

Deadline : James Earl Jones Signs Over Rights To Voice Of Darth Vader, Signalling Retirement From Legendary Role

Verge : James Earl Jones lets AI take over the voice of Darth Vader

Recommended : Was Donald Trump Queen Elizabeth II’s Favourite President?!

 

Truth : James Earl Jones Did NOT Sign Over Darth Vader Voice Rights!

It is peculiar that so many mainstream media outlets FALSELY reported that James Earl Jones had signed over the rights to his own voice.

That is categorically FALSE, and here are the reasons why…

Fact #1 : Vanity Fair Never Said He Signed Over Rights To Darth Vader Voice

The claims that James Earl Jones signed over the rights to his voice were based on a Vanity Fair article titled “Darth Vader’s Voice Emanated From War-Torn Ukraine“.

The story was about Respeecher, a Ukrainian voice cloning company that uses artificial intelligence to create new dialogue from archival recordings.

Respeecher had earlier helped Lucasfilm generate the voice of a young Luke Skywalker for The Book of Boba Fett, as well as making Darth Vader sound 45 years younger in the Obi-Wan Kenobi series.

Anthony Breznican of Vanity Fair never said, in his article, that Jones signed over the rights to the Darth Vader voice, to Respeecher.

In fact, the word “rights” was never mentioned in the entire article!

Fact #2 : JEJ Approved Of The AI-Generated Dialogue

What Anthony wrote was that when James Earl Jones heard the new dialogue created by Respeecher, he signed off on (approved) the work. (my emphasis in bold)

When he ultimately presented Jones with Respeecher’s work, the actor signed off on using his archival voice recordings to keep Vader alive and vital even by artificial means.

The term “signed off” in this context meant “to approve or acknowledge something”.

It does not mean Jones “signed away” his rights – “to give (something, such as rights or property) to someone”.

Recommended : Did BTS RM Beat Henry Cavill As World’s Most Handsome Man?!

Fact #3 : Respeecher Does Not Own Rights To Darth Vader’s Voice

Respeecher was contracted by Lucasfilm to generate new Darth Vader dialogue based on James Earl Jones’ archival voice recordings.

James Earl Jones never signed over the rights to his voice to Respeecher, and Respeecher certainly does NOT own the rights to his voice.

Ukrainian start-up Respeecher … uses archival recordings and a proprietary A.I. algorithm to create new dialogue with the voices of performers from long ago.

The company worked with Lucasfilm to generate the voice of a young Luke Skywalker for Disney+’s The Book of Boba Fett, and the recent Obi-Wan Kenobi series tasked them with making Darth Vader sound like James Earl Jones’s dark side villain from 45 years ago.

Fact #4 : Lucasfilm Owns The Rights To Darth Vader

The Darth Vader character, including his voice, is owned by Lucasfilm.

James Earl Jones does not actually own the Darth Vader voice, and is unable to legally “sign away” those rights to anyone, even if he wanted to.

It’s like how he doesn’t own the rights to his famous CNN announcements – “This is CNN” and “This is CNN International”, which belong to… you guessed it – CNN.

Recommended : Did Justin Bieber Wear Balenciaga Bottle Slippers?

Fact #5 : James Earl Jones Did Not Officially Retire

Many media outlets also falsely claimed that Jones had retired. Not once was it mentioned in the Vanity Fair article that he retired.

In fact, the Vanity Fair article pointed out that Jones helped Lucasfilm and the Respeecher team with the Darth Vader speech in Obi-Wan Kenobi :

Jones is credited for guiding the performance on Obi-Wan Kenobi, and Wood describes his contribution as “a benevolent godfather.”

They inform the actor about their plans for Vader and heed his advice on how to stay on the right course.

Now, Jones is 91 years old, and has not appeared in public in a long time. However, he still appears to be working in some fashion, and has not officially retired, as alleged.

 


 


 


AMD, NVIDIA Banned From Selling AI Chips To China!

Both AMD and NVIDIA have just been banned from selling high-performance AI chips to both China and Russia!

Here is what you need to know…

 

AMD, NVIDIA Banned From Selling AI Chips To China + Russia!

On Friday, 26 August 2022, NVIDIA and AMD were both ordered by the US government to stop exporting high-performance AI chips to both China and Russia.

In its regulatory filing to the SEC, NVIDIA stated that the US government introduced this ban to prevent these chips from being used for military purposes, or by the Chinese or Russian militaries.

The ban uses the A100 Tensor Core GPU as the baseline for NVIDIA, and the 3rd Gen Instinct MI250 accelerator chip for AMD.

Effective immediately, anything equal to, or faster than, the NVIDIA A100 or the AMD Instinct MI250 can no longer be exported to either China or Russia. That includes the upcoming H100 chip.

On August 26, 2022, the U.S. government, or USG, informed NVIDIA Corporation, or the Company, that the USG has imposed a new license requirement, effective immediately, for any future export to China (including Hong Kong) and Russia of the Company’s A100 and forthcoming H100 integrated circuits.

DGX or any other systems which incorporate A100 or H100 integrated circuits and the A100X are also covered by the new license requirement.

The license requirement also includes any future NVIDIA integrated circuit achieving both peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the A100, as well as any system that includes those circuits. A license is required to export technology to support or develop covered products.

The USG indicated that the new license requirement will address the risk that the covered products may be used in, or diverted to, a ‘military end use’ or ‘military end user’ in China and Russia. The Company does not sell products to customers in Russia.

I should point out that the NVIDIA A100 was launched two years ago – on June 22, 2020, while the AMD Instinct MI250 was introduced on November 8, 2021.

So it appears that the US government wants to maintain at least a 1-year advantage in high-performance AI chips.
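To make the quoted licensing test concrete, here is a tiny sketch that encodes its logic: a licence is triggered only when both peak performance and chip-to-chip I/O bandwidth meet or exceed A100-level thresholds. The numeric values below are assumptions taken from the A100’s public specifications; the actual regulatory thresholds are defined by the US Department of Commerce:

```python
# Sketch of the export-licence test described above. Threshold values are
# assumptions based on the A100's public specs, not the legal definition.
A100_PEAK_TOPS = 624   # INT8 Tensor Core throughput of the A100
A100_IO_GBS    = 600   # A100 NVLink chip-to-chip bandwidth

def licence_required(peak_tops: float, io_gbs: float) -> bool:
    """Licence needed only if BOTH thresholds are met or exceeded."""
    return peak_tops >= A100_PEAK_TOPS and io_gbs >= A100_IO_GBS

print(licence_required(624, 600))  # A100 itself         -> True
print(licence_required(624, 400))  # A800: slower NVLink  -> False
```

This “both conditions” logic is also what later allowed NVIDIA to ship the A800 (see the first article above), which keeps A100-level compute but drops the chip-to-chip bandwidth below the line.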

 

AMD, NVIDIA Set To Lose Billions From AI Chip Sales To China!

Both AMD and NVIDIA stopped selling chips to Russia, after the invasion of Ukraine, so there is no material loss from such a ban on sales to Russia.

However, they both stand to lose billions of dollars worth of sales to China. NVIDIA alone estimates that the ban will affect US$400 million of sales, just in the third fiscal quarter.

Even more worrying was the likelihood that this ban may delay the H100 chip’s launch, and force NVIDIA to move “certain operations” out of China.

The new license requirement may impact the Company’s ability to complete its development of H100 in a timely manner or support existing customers of A100 and may require the Company to transition certain operations out of China.

AMD, however, claims that the ban will not have a material impact on its business, because shipments of its less powerful MI100 chips are not affected.

At this time, we do not believe that shipments of MI100 integrated circuits are impacted by the new requirements.

We do not currently believe it is a material impact on our business.

Both AMD and NVIDIA will now have to talk to their customers in China about using alternative (less powerful) chips, or obtaining special licences from the US government.

 


 


 


2022 Samsung Neo QLED 8K TV + Soundbar Showcase!

Samsung just launched their 2022 Neo QLED 8K televisions, with a new 2022 Ultra Slim Soundbar to complement them!

Take a look at both of them in our video showcase!

 

2022 Samsung Neo QLED 8K TV + Soundbar Showcase!

We were given the opportunity to view Samsung’s new 2022 Neo QLED 8K televisions, as well as the new 2022 Ultra Slim Soundbar that complements them.

The 2022 Samsung Neo QLED 8K televisions come in a variety of sizes, from 55 inches all the way to 85 inches. Take a look!

Every model is built around a new Neural Quantum Processor 8K, which has 20 independent neural networks, each analysing the content’s characteristics to deliver the best picture quality, regardless of the source.

It also powers a new feature called Real Depth Enhancer, which scans each frame and maximises the object’s contrast with the background.

The new processor also uses Shape Adaptive Light Control technology to create precise lighting and pure blacks using Samsung’s best Quantum mini LED display panel.

In fact, the 2022 Neo QLED 8K televisions are the world’s first Pantone-validated displays, which means they will authentically reproduce more than 2,000 colours, as well as 110 new skin tone shades.

These new models also use AI technology to automatically adjust the brightness and colour temperature of the display to match the room’s lighting condition.

The 2022 Neo QLED 8K televisions boast Samsung’s new Smart Hub user interface, powered by Tizen. They are also designed to provide the ultimate gaming experience through features like :

  • four HDMI 2.1 ports
  • Motion Xcelerator Turbo Pro 4K 144 Hz gaming
  • Super Ultrawide GameView, and
  • Game Bar

These televisions are also more eco-friendly, featuring eco-packaging and a more efficient SolarCell Remote that is completely battery-free, and can recharge through 2.4 GHz radio frequency harvesting – drawing power from your WiFi router’s signals!

The Neural Quantum Processor 8K also uses artificial intelligence to create a truly immersive soundscape by analysing the picture in real time, to deliver positional audio cues through the Adaptive Sound technology.

The flagship QN900B model even comes with a 90-watt 6.2.4-channel audio system, featuring Dolby Atmos with Object Tracking Sound Pro.

Finally, Samsung added Wireless Dolby Atmos to these models, ensuring that you can connect them to the new 2022 Ultra Slim Soundbar without using an HDMI cable!

The new Ultra Slim Soundbar delivers true 3.1.2 sound with two up-firing channels, and is now much thinner and smaller.

 

2022 Samsung Neo QLED 8K TV : Price + Models

The 2022 Samsung Neo QLED 8K televisions are available in a variety of sizes, at these price points :

  • 85″ Neo QLED 8K QN900B : RM49,999 (about US$11,385)
  • 75″ Neo QLED 8K QN900B : RM34,999 (about US$7,969)
  • 65″ Neo QLED 8K QN900B : RM23,999 (about US$5,465)
  • 85″ Neo QLED 8K QN800B : RM33,999 (about US$7,742)
  • 75″ Neo QLED 8K QN800B : RM24,999 (about US$5,693)
  • 65″ Neo QLED 8K QN800B : RM18,999 (about US$4,326)
  • 55″ Neo QLED 8K QN800B : RM12,499 (about US$2,846)


 


 


 


Samsung Galaxy S22 Ultra Photography Review!

Take a look at the Samsung Galaxy S22 Ultra’s photography performance, and find out why it’s such an awesome smartphone for photographers!

 

Samsung Galaxy S22 Ultra : Flagship Class Photography!

The Samsung Galaxy S22 Ultra is their flagship smartphone for the year 2022, and it boasts the best cameras money can buy.

Headlining its top-of-the-line quad-camera system is a Dual Pixel camera, built around the 108 MP ISOCELL HM3 sensor and an optically-stabilised lens with a large f/1.8 aperture.

You can seamlessly switch to its ultra-wide camera, with a 12 MP sensor and an f/2.2 aperture; or optically zoom up to 10X with its dual telephoto and periscope cameras.

In this review, we are going to take a look at the photography capability of Galaxy S22 Ultra, and show you just how well its cameras perform!

 

Samsung Galaxy S22 Ultra Photography Review

The Samsung Galaxy S22 Ultra boasts a super-high resolution 108 MP main camera, built around the same ISOCELL HM3 sensor used in the Galaxy S21 Ultra.

However, this camera benefits from improved AI optimisations. Thanks to the faster GPU and Neural Processing Unit (NPU), it now runs approximately 60 new AI models to optimise the final output.

Here are six shots I took around Ipoh Old Town, using the 108 MP main camera, and the 12 MP ultra-wide angle camera.

They are 4000 x 3000 pixels in size, with a JPEG file size of between 4 MB and 15 MB – it all depends on the amount of detail in the picture. To save space, I definitely recommend that you enable HEIF – the High Efficiency Image Format.

By default, the 108 MP camera takes 12 MP photos, combining nine pixels into one super-large 2.4 µm pixel. Coupled with the new AI models, the final photo is not only much brighter, but has significantly lower noise.
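Here is a minimal numpy sketch of the idea behind that nine-to-one binning: averaging each 3×3 block of pixels into one output pixel trades resolution for noise. It is a deliberate simplification – the phone’s actual pipeline (Samsung calls it nona-binning) works on raw Bayer data with far more processing – and the sensor size and noise level below are hypothetical:

```python
# Simplified 3x3 pixel binning: average each 3x3 block into one "super pixel".
# Real nona-binning works on raw Bayer data; this only illustrates the idea.
import numpy as np

rng = np.random.default_rng(0)
h, w = 1200, 900                            # stand-in for a 12000 x 9000 sensor
scene = rng.uniform(0, 1, (h, w))           # hypothetical true per-pixel signal
noisy = scene + rng.normal(0, 0.1, (h, w))  # add simulated read noise

def bin3x3(img):
    return img.reshape(img.shape[0] // 3, 3, img.shape[1] // 3, 3).mean(axis=(1, 3))

print("noise before binning:", np.std(noisy - scene))                  # ~0.100
print("noise after binning :", np.std(bin3x3(noisy) - bin3x3(scene)))  # ~0.033

# Averaging 9 samples cuts random noise by sqrt(9) = 3x, which is why the
# 12 MP output looks brighter and cleaner than a straight 108 MP capture.
```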

When you take shots at night, the Galaxy S22 Ultra combines images taken using both the 108 MP main camera, and the 12 MP ultra-wide camera. The 12 MP camera provides the rough details, while the fine details are provided by the 108 MP camera.

Here are four shots to show you the incredible amount of detail you can capture at just 12 MP. If you need greater detail, you can always increase the resolution to the full 108 MP, but frankly – these pictures should show you that you don’t really need to.

Its digital zoom has also been improved by the new AI models and the faster processor. For images taken above 30X digital zoom, the camera actually takes 10-20 shots at once, and processes them to improve the details in the final photo.

Finally, the Galaxy S22 Ultra features a new AI portrait model that uses two cameras to accurately differentiate the depth between the person and the background.

This is critical because the 108 MP camera has a very wide f/1.8 aperture, which makes for very narrow depth-of-field (DOF). If you do not accurately focus on the subject, you could end up with a blurry photo even though it looks sharp on the phone.

In these four shots, you can see just how narrow a depth of field you have to work with. This makes for better photos – you just need to make sure the subject is really in focus.
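For the curious, the standard thin-lens depth-of-field formulas show just how thin that zone of sharpness is. The focal length and circle of confusion below are rough assumptions for a phone camera of this class, not official Samsung figures; only the f/1.8 aperture comes from the spec sheet:

```python
# Rough depth-of-field estimate using the standard thin-lens formulas.
# Focal length and circle of confusion are assumptions for a phone camera
# of this class; only the f/1.8 aperture comes from the article.
f = 6.7     # focal length in mm (assumed)
N = 1.8     # aperture, f/1.8 (per the article)
c = 0.003   # circle of confusion in mm (assumed for a small sensor)
s = 500.0   # subject distance in mm (a 0.5 m portrait)

H = f * f / (N * c) + f                  # hyperfocal distance
near = s * (H - f) / (H + s - 2 * f)     # nearest sharp point
far  = s * (H - f) / (H - s)             # farthest sharp point
print(f"Sharp from {near:.0f} mm to {far:.0f} mm (~{far - near:.0f} mm of DOF)")
# -> roughly a 6 cm deep zone of sharpness at half a metre
```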

 

Samsung Galaxy S22 Ultra Photography Summary

If you want the best cameras on a smartphone money can buy, you should definitely consider the Samsung Galaxy S22 Ultra.

Ignore the 108 MP marketing hoohah. What you are really getting is an awesome 12 MP camera with super-large pixels for really great noise-free images, and better low-light photos.

You may be wary about Samsung reusing the ISOCELL HM3 sensor from last year, but it has better photographic capabilities, thanks mostly to software and AI improvements.

For one thing, I noticed that it is now much better at accurately focusing on the subject, resulting in far fewer wrongly-focused shots.

It also does a better job of correcting for disparate lighting conditions, although some of you may find the photos oversaturated.

But there is no doubt that the Samsung Galaxy S22 Ultra has some of the best smartphone cameras I’ve tried this year.

 

Samsung Galaxy S22 Ultra : Price + Availability

Samsung is offering the Galaxy S22 Ultra in four colour options – Burgundy, Green, Phantom Black and Phantom White.

I had the opportunity to attend an exclusive Samsung preview event, to check out all four colours for myself.

Note : Samsung only allowed us to use digital cameras, so the video is not as stabilised as it would be with the smartphones I generally use.

Here in Malaysia, it is available starting 10 February 2022 in three variants, at these price points :

  • 8 GB + 128 GB : RM 5,099 (about US$1,219 / £899 / A$1,699 / S$1,629)
  • 12 GB + 256 GB : RM 5,499 (about US$1,314 / £969 / A$1,829 / S$1,759)
  • 12 GB + 512 GB : RM 5,899 (about US$1,409 / £1,039 / A$1,959 / S$1,889)


 


 


 


IBM z16 : Industry’s First Quantum-Safe System Explained!

IBM just introduced the z16 system, powered by their new Telum processor with an integrated AI accelerator!

Take a look at the z16, and find out why it is the industry’s first quantum-safe system!

 

IBM z16 : Industry’s First Quantum-Safe System!

On 25 April 2022, IBM officially unveiled their new z16 system in Malaysia – the industry’s first quantum-safe system.

IBM Vice President for Worldwide Sales of IBM Z and LinuxONE, Jose Castano, flew to Kuala Lumpur, to give us an exclusive briefing on the new z16 system, and tell us why it is the industry’s first quantum-safe system.

IBM Z and LinuxONE Security CTO Michael Jordan also briefed us on why quantum-safe computing will be critical for enterprises, as quantum computing improves.

Thanks to its Telum processor, the IBM z16 system delivers low and consistent latency for embedding AI into response time-sensitive transactions. This can enable customers to leverage AI inference to better control the outcome of transactions before they complete.

For example, they can leverage AI inference to mitigate risk in Clearing & Settlement applications, to predict which transactions have high risk exposure, and highlight questionable transactions, to prevent costly consequences.

In a use-case example, one international bank uses AI on IBM Z as part of their credit card authorization process instead of using an off-platform inference solution. As a result, the bank can detect fraud during its credit card transaction authorisation processing.

The IBM z16 will offer better AI inference capacity, thanks to its integrated AI accelerator delivering latency as low as 1 ms, expanding use cases that include :

  • tax fraud and organised retail theft detection
  • real-time payments and alternative payment methods, including cryptocurrencies
  • faster business or consumer loan approvals

As the industry’s first quantum-safe system, the IBM z16 is protected by lattice-based cryptography – an approach for constructing security primitives that helps protect data and systems against current and future threats.

 

IBM z16 : Powered By The New Telum Processor!

The IBM z16 is built around the new IBM Telum processor, which is specifically designed for secure processing, and real-time AI inference.

Here are the key features of the IBM Telum processor that powers the new IBM z16 system :

  • Fabricated on the 7 nm process technology
  • Has 8 processor cores, clocked at over 5 GHz
  • Each processor core has a dedicated 32 MB private L2 cache
  • The eight 32 MB L2 caches can form a virtual 256 MB L3 cache, and a virtual 2 GB L4 cache.
  • Transparent encryption of main memory, with 8-channel fault tolerant memory interface
  • Integrated AI accelerator with 6 TFLOPS compute capacity
  • Centralised AI accelerator architecture, with direct connection to the cache infrastructure

The Telum processor is designed to enable extremely low latency inference for response-time sensitive workloads. With planned system support for up to 200 TFLOPS, the AI acceleration is also designed to scale up to the requirements of the most demanding workloads.

Thanks to the Telum processor, the IBM z16 can process 300 billion inference requests per day, with just one millisecond of latency.
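As a quick sanity check on that headline figure, the arithmetic works out like this (note that the 1 ms latency is per request; the huge aggregate rate comes from many requests being processed in parallel):

```python
# Sanity-check IBM's z16 headline figure: 300 billion inferences per day.
requests_per_day = 300e9
seconds_per_day = 24 * 60 * 60   # 86,400 seconds

print(f"{requests_per_day / seconds_per_day:,.0f} inference requests per second")
# -> 3,472,222 requests per second, sustained around the clock.
# The 1 ms figure is per-request latency; throughput this high is only
# possible because requests run in parallel across cores and accelerators.
```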

 


 


 


Why Xiaomi’s ROIDMI Eve Plus Is So Awesome!

The Xiaomi ROIDMI Eve Plus robot vacuum is really, REALLY AWESOME!

Find out why every home should have a little ROIDMI Eve Plus running around!

 

Why Xiaomi’s ROIDMI Eve Plus Is So Awesome!

One of the hottest gadgets for homeowners is a robot vacuum, which can be incredibly useful or a real pain in the ass.

What most homeowners in the market for a robot vacuum don’t understand is that many models don’t clean well, and are a hassle to maintain.

That’s why Xiaomi’s ROIDMI Eve Plus robot vacuum is so awesome – it is not only a cleaning powerhouse, it does away with almost all of the maintenance hassle!

Self-Emptying System

One of the biggest hassles in maintaining a robot vacuum is the constant need to empty the dust box.

This limitation prevents ordinary robot vacuums from automatically and continuously cleaning your house for days on end.

This is not a problem for the ROIDMI Eve Plus. When it returns to its base station to recharge its battery, its dust box is automatically emptied into the base station!

This allows the Eve Plus to continuously recharge and empty its dust box, and clean your house for weeks on end!

The base station’s large 3L dust bag can hold enough dust for 60 days, before you need to clear it.

Self-Sanitising System

The ROIDMI Eve Plus not only comes with a HEPA filter – a common feature in robot vacuums – it also boasts a pioneering Deodorizing Particle Generator in the base station.

The Deodorizing Particle Generator uses Active Oxygen technology to sterilise the collected dust, eliminating dust mites and mould, as well as microbes like E. coli, Candida albicans and Staphylococcus aureus.

It can also eliminate toxic chemicals like formaldehyde, ammonia, benzene and TVOC, as well as remove the smell of cigarettes and perfume.

4th Generation Super-Sensing LDS Sensor

The ROIDMI Eve Plus comes with a 4th generation laser distance sensor, which allows for highly-precise scanning of its environment.

It is constantly scanning for obstacles, so it will quickly detect people or pets nearby and avoid them.

AI Smart Room Mapping

The ROIDMI Eve Plus can precisely map and later remember every room in your house, across multiple levels!

Its built-in AI mapping and path planning algorithms allow it to clean rooms better and faster, and to resume its cleaning duties after recharging or emptying its dust box.

The room maps it generates also allow you to demarcate forbidden areas, or schedule more frequent cleaning for certain rooms.

Separate maps can be saved for each floor, and the settings are automatically matched and switched, when switching floors.

4+ Hours Of Vacuum + Mopping!

The ROIDMI Eve Plus has a powerful digital brushless vacuum with 2,700 Pa of suction power, with side brushes and a flexible inlet that adjusts to the floor condition.

And thanks to its large 5,200 mAh battery, it can keep on cleaning for over 4 hours (250 minutes) before it needs to return to the base station!

If you like squeaky clean floors, the Eve Plus has a mopping module and a 250 ml water tank, to give you just that.

It simulates hand-mopping with its 3-stage Y-route (not available in the US) or U-route mopping.

Obstacle Avoidance

The ROIDMI Eve Plus will never get stuck under your bed, sofa or cabinets, because it can automatically sense their height.

It is also smart enough to detect low obstacles like door strips or cables (up to 2 cm in height) and crawl over them.

It can also sense and climb up ramps with slopes of up to 20°, which lesser robot vacuums will balk at.

 

Xiaomi ROIDMI Eve Plus : What’s Inside The Box?

The Xiaomi ROIDMI Eve Plus comes in two separate boxes – one for the robot vacuum cleaner, and the other for its base station.

In this video, we take a look at what’s inside both boxes!

Once unboxed, you should find these items :

  • ROIDMI Eve Plus robot vacuum cleaner + mop module
  • ROIDMI Eve Plus dust collector base station
  • User guide + warranty card
  • Disposable mop wipes + filters + dust bags
  • Power cord

For more information, you can visit the official ROIDMI Eve Plus website.

 

Xiaomi ROIDMI Eve Plus : Specifications

Specifications          : Xiaomi ROIDMI Eve Plus
Type                    : Robot vacuum
Voice Assistant         : Alexa, Google Assistant
Cleaning Modes          : Sweep + vacuum + mop
Suction Power           : 2,700 Pa (maximum)
Recommended Coverage    : 250 m² (2,690 square feet)
Robot Container         : Dust 300 ml / Water 250 ml
Base Station Container  : Dust 3 L
Robot Noise Level       : 60 dB(A)
Base Station Noise Level: < 82 dB(A)
Hygiene Features        : HEPA filter + Active Oxygen technology
Navigation              : 4th Gen LDS SLAM
Automatic Partition     : Yes
Where To Clean          : Yes
Virtual Wall            : App-based virtual wall
Obstacle Clearance      : Slope up to 20° / Height up to 20 mm
Battery                 : 5,200 mAh lithium-ion
Battery Life            : 250 minutes
Charging Time           : 250 minutes
Power Consumption       : 50 W (robot) / 850 W (base station)
Robot Dimensions        : 350 x 350 x 98 mm
Robot Weight            : 3.6 kg
Base Station Dimensions : 358 x 350 x 175 mm
Base Station Weight     : 2.7 kg

 

Xiaomi ROIDMI Eve Plus : Price + Deals

The ROIDMI Eve Plus + dust collector base station kit is surprisingly affordable.

You can now purchase the full ROIDMI Eve Plus set in Malaysia at these incredible prices :

  • Standard 1 Year Warranty : RM1,588
  • 18 Month Warranty : RM1,688
  • 2 Year Warranty : RM1,788
  • Premium 1 Year Warranty : RM1,988 (1 to 1 Exchange)

In addition, you can save more by purchasing these Add-on Deals :

  • Replacement parts like dust bag, mop wipes, roller brush, side brush, filter, etc.
  • Extended 1 year warranty on the battery

But on 11/11, the ROIDMI Eve Plus goes on sale at only RM1,458 for just TWO HOURS – from 12 AM until 2 AM!

So don’t miss this offer. Grab it during those two hours!

 


 


 


Dell UltraSharp Webcam (WB7022) : All You Need To Know!

Dell just introduced the UltraSharp Webcam (WB7022) for the new Work From Home normal!

Here is what you need to know about the new Dell WB7022 UltraSharp Webcam!

 

Dell UltraSharp Webcam (WB7022) : All You Need To Know!

More than 18 months into the COVID-19 pandemic, the Work From Home (WFH) normal appears to be here to stay, at least for now.

To ensure that those who need to work remotely have the best possible video conferencing webcam, Dell just introduced the UltraSharp Webcam.

With nine patent-pending technologies, the Dell UltraSharp Webcam offers 4K image quality, built-in AI tracking and other video conferencing features.

4K Webcam With World’s Best Image Quality

Dell claims that the UltraSharp Webcam offers the world’s best image quality, courtesy of the large 4K Sony STARVIS CMOS sensor and the multi-element lens that captures more light to deliver “crystal-clear video”.

It supports frame rates of 24 and 30 fps at the full 4K UHD resolution. It also supports the higher 60 fps frame rate, at the lower 1080p and 720p resolutions.

It’s not just about the hardware. It’s also about the software embedded into the UltraSharp Webcam.

Its Digital Overlap HDR capability lets the Dell UltraSharp Webcam preserve true-to-life colours and balance exposure.

Meanwhile, its 2D/3D video noise reduction capability eliminates grainy images, making you look good even in low light conditions.

AI Auto-Framing

The Dell UltraSharp Webcam has AI Auto-Framing capability built-in. This feature uses Artificial Intelligence to keep you focused and centred in the frame.

You don’t have to manually adjust the webcam. It will track your movements and shift the frame to keep you centered, even if you stand up!
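Conceptually, auto-framing reduces to recentring a crop window on the detected face every frame. Here is a heavily simplified sketch of that step; the real webcam does this in firmware, with face detection and temporal smoothing, and the face coordinates below are hypothetical:

```python
# Simplified auto-framing: centre a crop window on a detected face box,
# clamped to the full frame. Face coordinates below are hypothetical;
# the webcam does this in firmware with tracking and temporal smoothing.
FRAME_W, FRAME_H = 3840, 2160   # 4K sensor frame
CROP_W, CROP_H   = 1920, 1080   # output window (a 2x digital zoom)

def framing_crop(face_x, face_y, face_w, face_h):
    """Return (left, top) of a crop window centred on the face box."""
    cx = face_x + face_w / 2
    cy = face_y + face_h / 2
    left = min(max(cx - CROP_W / 2, 0), FRAME_W - CROP_W)
    top  = min(max(cy - CROP_H / 2, 0), FRAME_H - CROP_H)
    return int(left), int(top)

print(framing_crop(2600, 700, 400, 400))  # face right of centre -> (1840, 360)
```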

5X Digital Zoom + Three Fields of View

The Dell UltraSharp Webcam supports up to 5X digital zoom, as well as three fields of view – 65°, 78° and 90°.

This lets you fit multiple people into the frame on the same conference call, or keep the focus exclusively on you in a crowded place.

Smart Security Features

The Dell UltraSharp Webcam supports Windows Hello, allowing you to sign in quickly and securely using facial recognition.

It also has Dell ExpressSign-in capability built in, allowing Dell PCs to detect your presence and log you in as you approach, and to automatically lock when you step away, offering you an extra level of security.

Finally, it comes with a magnetic privacy cover that securely snaps onto the lens, or the back of the webcam.

Webcam Mounting Solutions

The Dell UltraSharp Webcam comes with two mounts in the box :

  • a magnetic mount
  • a tripod adaptor

The magnetic mount makes it easy to attach to the top of any desktop or laptop monitor, without obscuring the display.

And if you need to mount it on a tripod, simply slide the webcam off the magnetic mount and slide it into the tripod adaptor!

 

Dell UltraSharp Webcam (WB7022) : Specifications

Specifications           : Dell UltraSharp Webcam
Model                    : WB7022
Resolution + Frame Rates : 4K @ 24/30 fps; 1080p @ 24/30/60 fps; 720p @ 24/30/60 fps
Sensor                   : 8.3 MP Sony STARVIS CMOS
Field of View            : 65° / 78° / 90°
HD Digital Zoom          : Up to 5X
Autofocus                : Yes
Auto-Light Correction    : Advanced Digital Overlap (DOL) HDR;
                           3D temporal (3DNR) + 2D spatial (2DNR) video noise reduction
Auto White Balance       : Yes
AI Auto-Framing          : Yes
Microphone               : No
Lens Cap                 : Yes (magnetic)
Windows Hello            : Yes
Dell ExpressSign-In      : Yes (on Dell PCs)
Certification            : Microsoft Teams, Zoom
Other Optimised Apps     : Skype for Business, GoToMeeting, Google Meet,
                           Google Hangouts, BlueJeans, Slack, Lifesize
OS Support               : Windows 10, macOS
Plug & Play Support      : Yes
Connectivity             : USB-C (USB-A to USB-C cable included)
Material                 : Anodised aluminium
Webcam Dimensions        : 42 mm wide x 90 mm long
Mount Dimensions         : 32 mm wide x 64 mm deep x 9.4 mm high
Warranty                 : 3 years

 

Dell UltraSharp Webcam (WB7022) : Price + Availability

The Dell UltraSharp Webcam (WB7022) is available for purchase at the official price of US$199 / RM 1,122 (about £144 / A$266 / S$268).

 


 


NVIDIA RTX A6000 + Omniverse : Specs + Details!

At GTC 2020, NVIDIA announced both the new Ampere-based RTX A6000 GPU and the Omniverse 3D platform.

Here are the specifications of the NVIDIA RTX A6000, as well as details and a demo of NVIDIA Omniverse!

 

NVIDIA RTX A6000 : Powered By Ampere!

Like the recently-announced GeForce RTX 30 Series, the NVIDIA RTX A6000 is based on the new NVIDIA Ampere architecture.

But unlike the gaming-centric GeForce RTX 30 Series, the RTX A6000 is a flagship GPU for creators, with a massive 48 GB GDDR6 memory buffer.

With dedicated RT Cores and improved CUDA performance, NVIDIA claims the RTX A6000 will deliver up to 2X better rendering performance than the previous generation.

Its second-generation RT Cores add hardware acceleration for ray-traced motion blur rendering, offering up to 7X faster performance than the last generation.

Its third-generation Tensor Cores provide up to 5X the training throughput of the previous generation. In addition, hardware support for structural sparsity doubles the throughput for inferencing.
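“Structural sparsity” here refers to Ampere’s 2:4 sparse Tensor Core mode: weights are pruned so that every group of four consecutive values keeps at most two non-zeros, which the hardware can then skip. A minimal numpy sketch of that pruning step:

```python
# Minimal 2:4 structured-sparsity pruning: in every group of 4 weights,
# keep the 2 largest magnitudes and zero the rest. Ampere's sparse Tensor
# Cores can then skip the zeros, roughly doubling inference throughput.
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    w = weights.reshape(-1, 4).copy()
    # Indices of the 2 smallest-magnitude weights in each group of 4.
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

rng = np.random.default_rng(0)
dense  = rng.normal(size=(4, 8)).astype(np.float32)
sparse = prune_2_of_4(dense)
print(sparse)                  # exactly 2 non-zeros per group of 4
print((sparse != 0).mean())    # 0.5
```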

The NVIDIA RTX A6000 also comes with the PCI Express 4.0 interface, doubling the transfer rate between the GPU and the PC.

 

NVIDIA RTX A6000 : Specifications

Specifications NVIDIA RTX A6000
GPU GA102
CUDA Cores 10,752
TMUs 336
Tensor Cores 336
RT Cores 84
ROPs 112
Base Clock 1.455 GHz
Boost Clock 1.86 GHz
Max. Texture Rate 625 GTexel/s
Max. Pixel Rate 208 GPixel/s
Graphics Memory GDDR6 with ECC
Memory Size 48 GB
Memory Speed 16 Gbps
Memory Bus 384-bit
Memory Bandwidth 768 GB/s
Display Interface 4 x DisplayPort 1.4
Graphics Interface PCI Express Gen 4 x16
NVLink 2-way
Form Factor Dual Slot
Size 4.4-inch high
10.5-inch wide
Power Consumption 300 W max.

 

NVIDIA RTX A6000 : Availability

The RTX A6000 will be available through NVIDIA sales channels starting in mid-December 2020, and through their OEM partners early 2021.

 

NVIDIA Omniverse : Powered By Ampere!

Together with the RTX A6000, NVIDIA also announced Omniverse – their new RTX-based simulation, collaboration and rendering platform for 3D workflows.

NVIDIA Omniverse users can collaborate on their 3D workflows live between applications, without imports and exports, and then render them using NVIDIA RTX.

Even though their Marbles At Night demo above highlights the performance of the RTX A6000, Omniverse will work on all NVIDIA RTX hardware, including NVIDIA Studio laptops and RTX-powered workstations and servers.

You will be able to sign up later in Q4 2020 for the open beta.

 

Recommended Reading

Go Back To > Computer | Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Dell EMC Ready Solutions for AI + vHPC on VMware vSphere!

Dell Technologies just introduced Dell EMC Ready Solutions for both AI and virtualised HPC workloads on VMware vSphere 7!

Join us for the tech briefing on both new Dell EMC computing solutions for VMware, and find out how they can simplify your advanced computing needs!

 

Simplified Advanced Computing With Dell EMC Ready Solutions

Let’s start with the Dell Technologies briefing on the two new Dell EMC Ready solutions for both AI and virtualised HPC workloads.

Based on VMware Cloud Foundation, they are designed to make AI easier to deploy and consume, with new features from VMware vSphere 7, including Bitfusion.

 

Dell EMC Ready Solutions for AI : GPU-as-a-Service (GaaS)

GPUs in individual workstations or servers are often under-utilised at less than 15% of capacity. The new Dell EMC Ready Solutions for AI : GPU-as-a-Service fixes that and maximises your investment with virtual GPU pools.

The newest design includes the latest VMware vSphere 7 with Bitfusion, making it possible to virtualise GPUs on-premises. Factory-installed by Dell, VMware vSphere 7 with Bitfusion will let developers and data scientists pool IT resources and share them across datacenters.

Dell EMC Ready Solutions for AI : GPU-as-a-Service also uses the latest VMware Cloud Foundation with VMware vSphere 7 support for Kubernetes and containerised applications to run AI workloads anywhere. Containers make it easier to bring cloud-native applications into production, with the ability to move workloads.
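
As a rough sketch of how this looks from a data scientist's desk – assuming the Bitfusion client's documented `bitfusion run` convention and a hypothetical `train.py` training script – a job borrows GPUs from the shared pool only for the duration of the command :

```python
import subprocess

# Hedged sketch : the Bitfusion client is typically invoked as
#   bitfusion run -n <gpus> -- <command>
# which attaches <gpus> remote GPUs from the shared pool for the duration
# of <command>, then returns them. Exact flags may differ between releases.
subprocess.run(
    ["bitfusion", "run", "-n", "2", "--", "python", "train.py"],
    check=True,  # raise if the job (or the GPU attach) fails
)
```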

 

Dell EMC Ready Solutions for Virtualised HPC

Most HPC workloads run on dedicated systems that require specialised skills to deploy and manage. Dell EMC Ready Solutions for Virtualised HPC can include VMware Cloud Foundation with VMware vSphere 7 featuring Bitfusion.

That should make it simpler and more economical to use VMware environments for HPC and AI applications in computational chemistry, bioinformatics and computer-aided engineering. IT teams can quickly provision hardware as needed, speeding up initial deployment and configuration, and saving time with simpler centralised management and security.

For very large HPC implementations, Dell EMC Ready Solutions for vHPC can include VMware vSphere Scale-Out Edition for additional cost savings.

 

Dell EMC OpenManage for Dell EMC Ready Solutions

The new Dell EMC Ready Solutions for AI and Virtualised HPC ship with the Dell EMC OpenManage systems management software, which helps administrators improve system uptime, keep data insights flowing and prepare for AI operations.

New Dell EMC OpenManage improvements include :

  • OpenManage Integration for VMware vCenter, supporting vSphere Lifecycle Manager, automates software, driver and firmware updates holistically to save time and simplify operations.
  • The enhanced OpenManage Mobile app gives administrators the ability to view power and thermal policies, perform emergency power reduction and monitor internal storage from anywhere in the world.

 

Recommended Reading

Go Back To > Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Cyber Crime WhatsApp Warning Hoax Debunked!

A Cyber Crime hoax is circulating on WhatsApp, warning people that they are being monitored by the government.

Well, not to worry – unless you are living in China – this is yet another Internet hoax. Here is why we know that…

 

The Cyber Crime WhatsApp Warning Hoax

This is the Cyber Crime warning hoax that has been circulating on WhatsApp :

From tomorrow onwards there are new communication regulations.

All calls are recorded

All phone call recordings saved

WhatsApp is monitored

Twitter is monitored

Facebook is monitored

All social media and forums are monitored

Inform those who do not know.

Your devices are connected to ministry systems.

Take care not to send unnecessary messages

Inform your children, Relatives and friends about this to take care

​​Don’t forward any posts or videos etc., you receive regarding politics/present situation about Government/PM etc.​​

Police have put out a notification termed ..Cyber Crime … and action will be taken…just don’t delete …

Inform your friends & others too.

Writing or forwarding any msg on any political & religious debate is an offence now….arrest without warrant…

This is very serious, plz let it be known to all our groups and individual members as group admin can b in deep trouble.

Take care not to send unnecessary messages.
Inform everyone about this to take care.

Please share it; it’s very much true. Groups please be careful.

Note that it’s generic enough that it can apply to almost any government in the world.

 

The Cyber Crime WhatsApp Warning Hoax Debunked!

And here is why this is nothing more than yet another Internet hoax :

Only China Is Capable Of Doing This

The only country that has accomplished most of what was shared above is China, but it took them decades to erect the Great Firewall of China.

It’s not just the massive infrastructure that needs to be created; it also requires legislation to be enacted, and considerable manpower and resources to maintain such a system.

That’s why China is leaning heavily on AI and cloud computing capabilities to automatically and quickly censor information deemed “sensitive”.

However, no other country has come close to spending the money and resources on a similar scale, although Cuba, Vietnam, Zimbabwe and Belarus have imported some surveillance technology from China.

WhatsApp, Instagram + Facebook Messenger Have End-to-End Encryption

All three Facebook-owned apps are now running on the same common platform, which provides end-to-end encryption.

End-to-end encryption protects messages as they travel through the Internet, and specifically prevents anyone (bad guys or your friendly government censor) from snooping into your conversation.
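
WhatsApp itself implements the Signal protocol, but the core idea is plain public-key cryptography : each device holds a private key, and anything in transit is ciphertext that only the intended recipient's key can open. Here is a minimal sketch with the PyNaCl library (purely illustrative, not WhatsApp's actual code) :

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # private keys never leave each device
bob_key = PrivateKey.generate()

alice_box = Box(alice_key, bob_key.public_key)  # built from the peer's PUBLIC key
bob_box = Box(bob_key, alice_key.public_key)

ciphertext = alice_box.encrypt(b"meet at 7pm")  # all a server or snooper sees
print(bob_box.decrypt(ciphertext))              # only Bob's key can read it
```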

That is also why all three are banned in China…

The Police Cannot Enact Laws

There are cybercrime laws in most, if not every, country in the world. But they are all enacted by legislative bodies of some sort, not the police.

The police are the executive arm of a country, empowered to enforce the law. They do not have the power to create a law and then act on it.

Even The Government Has Debunked It!

Just in case you are still not convinced, even the Malaysian government issued a fact check on this hoax, debunking it as fake news :

Basically, it states “The Ministry of Home Affairs has NEVER recorded telephone calls or monitored social media in this country”.

 

Recommended Reading

Go Back To > Cybersecurity | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!

Dell Technologies COO and Vice Chairman, Jeff Clarke, reveals his tech predictions for 2020, the start of what Dell Technologies considers as the Next Data Decade!

 

Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!

It’s hard to believe that we’re heading into the year 2020 – a year that many have marked as a milestone in technology. Autonomous cars lining our streets, virtual assistants predicting our needs and taking our requests, connected and intelligent everything across every industry.

When I stop to think about what has been accomplished over the last decade – it’s quite remarkable.  While we don’t have fully autonomous cars zipping back and forth across major freeways with ease, automakers are getting closer to deploying autonomous fleets in the next few years.

Many of the everyday devices, systems and applications we use are connected and intelligent – including healthcare applications, industrial machines and financial systems – forming what is now deemed “the edge.”

At the root of all that innovation and advancement are massive amounts of data and compute power, and the capacity across edge, cloud and core data center infrastructure to put data through its paces. And with the amount of data coming our way in the next 10 years – we can only imagine what the world around us will look like in 2030, with apps and services we haven’t even thought of yet.

2020 marks the beginning of what we at Dell Technologies are calling the Next Data Decade, and we are no doubt entering this era with new – and rather high – expectations of what technology can make possible for how we live, work and play. So what new breakthroughs and technology trends will set the tone for what’s to come over the next 10 years? Here are my top predictions for the year ahead.

2020 proves it’s time to keep IT simple

We’ve got a lot of data on our hands…big data, meta data, structured and unstructured data – data living in clouds, in devices at the edge, in core data centers…it’s everywhere. But organisations are struggling to ensure the right data is moving to the right place at the right time. They lack data visibility – the ability for IT teams to quickly access and analyse the right data – because there are too many systems and services woven throughout their IT infrastructure. As we kick off 2020, CIOs will make data visibility a top IT imperative because, after all, data is what makes the flywheel of innovation spin.

We’ll see organisations accelerate their digital transformation by simplifying and automating their IT infrastructure and consolidating systems and services into holistic solutions that enable more control and clarity. Consistency in architectures, orchestration and service agreements will open new doors for data management – and that ultimately gives data the ability to be used as part of AI and Machine Learning to fuel IT automation. And all of that enables better, faster business outcomes that the innovation of the next decade will thrive on.

Cloud co-existence sees rolling thunder

The idea that public and private clouds can and will co-exist becomes a clear reality in 2020. Multi-cloud IT strategies supported by hybrid cloud architectures will play a key role in ensuring organisations have better data management and visibility, while also ensuring that their data remains accessible and secure. In fact, IDC predicted that by 2021, over 90% of enterprises worldwide will rely on a mix of on-premises/dedicated private clouds, several public clouds, and legacy platforms to meet their infrastructure needs.

But private clouds won’t simply exist within the heart of the data center. As 5G and edge deployments continue to roll out, private hybrid clouds will exist at the edge to ensure the real-time visibility and management of data everywhere it lives.

That means organisations will expect more of their cloud and service providers to ensure they can support their hybrid cloud demands across all environments. Further, we’ll see security and data protection become deeply integrated as part of hybrid cloud environments, notably where containers and Kubernetes continue to gain momentum for app development. Bolting security measures onto cloud infrastructure will be a non-starter…it’s got to be inherently built into the fiber of the overall data management strategy, from edge to core to cloud.

What you get is what you pay for

One of the biggest hurdles for IT decision makers driving transformation is resources. CapEx and OpEx can often be limiting factors when trying to plan and predict compute and consumption needs for the year ahead…never mind the next three to five years. SaaS and cloud consumption models have increased in adoption and popularity, providing organisations with the flexibility to pay for what they use, as they go.

In 2020, flexible consumption and as-a-service options will accelerate rapidly as organisations seize the opportunity to transform into software-defined and cloud-enabled IT. As a result – they’ll be able to choose the right economic model for their business to take advantage of end-to-end IT solutions that enable data mobility and visibility, and crunch even the most intensive AI and Machine Learning workloads when needed.

“The Edge” rapidly expands into the enterprise

The “Edge” continues to evolve – with many working hard to define exactly what it is and where it exists. Once limited to the Internet of Things (IoT), the edge has grown to the point where it’s hard to find any systems, applications, services – people and places – that aren’t connected. The edge is emerging in many places and it’s going to expand with enterprise organisations leading the way, delivering the IT infrastructure to support it.

5G connectivity is creating new use cases and possibilities for healthcare, financial services, education and industrial manufacturing. As a result, SD-WAN and software-defined networking solutions become a core thread of a holistic IT infrastructure solution – ensuring massive data workloads can travel at speed – securely – between edge, core and cloud environments. Open networking solutions will prevail over proprietary ones as organisations recognise that successfully managing and securing data for the long haul requires the flexibility and agility that only open software-defined networking can deliver.

Intelligent devices change the way you work and collaborate

PC innovation continues to push new boundaries every year – screens are more immersive and bigger than ever, yet the form factor becomes smaller and thinner. But more and more, it’s what is running at the heart of that PC that is more transformational than ever. Software applications that use AI and machine learning create systems that now know where and when to optimise power and compute based on your usage patterns. With biometrics, PCs know it’s you from the moment you gaze at the screen. And now, AI and machine learning applications are smart enough to give your system the ability to dial up the sound and colour based on the content you’re watching or the game you’re playing.

Over the next year, these advancements in AI and machine learning will turn our PCs into even smarter and more collaborative companions. They’ll have the ability to optimise power and battery life for our most productive moments – and even become self-sufficient machines that can self-heal and self-advocate for repair – reducing the burden on the user and of course, reducing the number of IT incidents filed. That’s a huge increase in happiness and productivity for both the end users and the IT groups that support them.

Innovating with integrity, sourcing sustainably

Sustainable innovation will continue to take center stage, as organisations like ours want to ensure the impact they have on the world doesn’t come at a cost to the planet. Greater investments in reuse and recycling for closed-loop innovation will accelerate – hardware becomes smaller and more efficient, built with recycled and reclaimed goods – minimising e-waste and maximising already existing materials. At Dell Technologies, we met our Legacy of Good 2020 goals ahead of schedule – so we’ve retired them and set new goals for 2030 : recycle an equivalent product for every product a customer buys, lead the circular economy with more than half of all product content being made from recycled or renewable material, and use 100% recycled or renewable material in all packaging.

As we enter the Next Data Decade, I’m optimistic and excited about what the future holds. The steps our customers will take in the next year to get the most out of their data will set forth new breakthroughs in technology that everyone will experience in some way – whether it’s a more powerful device, faster medical treatment, more accessible education, less waste and cleaner air. And before we know it, we’ll be looking forward to what the following 10 years will have in store.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Here Are The Top 10 Tech Trends In 2020 From Alibaba!

Alibaba, specifically its research institute – the Alibaba DAMO Academy – just published its top 10 tech trends for 2020.

Here are the highlights from those top 10 tech trends that they are predicting will go big in 2020!

 

Here Are The Top 10 Tech Trends In 2020 From Alibaba!

Tech Trend #1 : AI Evolves From Perceptual Intelligence To Cognitive Intelligence

Artificial intelligence has reached or surpassed humans in areas of perceptual intelligence such as speech-to-text, natural language processing and video understanding; but in the field of cognitive intelligence – which requires external knowledge, logical reasoning, or domain migration – it is still in its infancy.

Cognitive intelligence will draw inspiration from cognitive psychology, brain science, and human social history, combined with techniques such as cross-domain knowledge graphs, causal inference, and continual learning, to establish effective mechanisms for the stable acquisition and expression of knowledge.

These will enable machines to understand and utilise knowledge, achieving key breakthroughs from perceptual intelligence to cognitive intelligence.

Tech Trend #2 : In-Memory Computing Addresses Memory Wall Challenge In AI Computing

In the Von Neumann architecture, memory and processor are separate, and computation requires data to be moved back and forth between them.

With the rapid development of data-driven AI algorithms in recent years, hardware has become the bottleneck in the exploration of more advanced algorithms.

In contrast to the Von Neumann architecture, a Processing-in-Memory (PIM) architecture fuses memory and processor together, so computations are performed where the data is stored, with minimal data movement.

As such, computation parallelism and power efficiency can be significantly improved. We believe the innovations on PIM architecture are the tickets to next-generation AI.

Tech Trend #3 : Industrial IoT Will Power Digital Transformation

In 2020, 5G, the rapid development of IoT devices, cloud computing and edge computing will accelerate the fusion of information, communication, and industrial control systems.

Through advanced Industrial IoT, manufacturing companies can achieve automation of machines, in-factory logistics, and production scheduling, as a way to realise C2B smart manufacturing.

In addition, interconnected industrial systems can adjust and coordinate the production capability of both upstream and downstream vendors.

Ultimately, it will significantly increase manufacturers’ productivity and profitability. For manufacturers whose production goods are valued at hundreds of trillions of RMB, a 5-10% increase in productivity translates into additional trillions of RMB.

Tech Trend #4 : Large Scale Collaboration Between Machines Become Possible

Traditional single-agent intelligence cannot meet the real-time perception and decision-making needs of large-scale intelligent devices.

The development of collaborative IoT sensing technology and 5G communication technology will enable collaboration among multiple agents – machines that cooperate and compete with each other to complete target tasks.

The group intelligence brought by the cooperation of multiple intelligent agents will further amplify the value of the intelligent system:

  • large-scale intelligent traffic light dispatching will realise dynamic and real-time adjustment,
  • warehouse robots will work together to complete cargo sorting more efficiently,
  • driverless cars can perceive the overall traffic conditions on the road, and
  • swarms of unmanned aerial vehicles (UAVs) will collaborate to complete last-mile delivery more efficiently.

Tech Trend #5 : Modular Chiplet Design Makes Chips Easier & Faster To Create

The traditional model of chip design cannot efficiently respond to the fast-evolving, fragmented and customised needs of chip production.

Open-source SoC design based on RISC-V, high-level hardware description languages, and IP-based modular chip design methods have accelerated the rapid development of agile design methods and an ecosystem of open-source chips.

In addition, the modular design method based on chiplets uses advanced packaging methods to package the chiplets with different functions together, which can quickly customise and deliver chips that meet specific requirements of different applications.

Tech Trend #6 : Large Scale Blockchain Applications Will Gain Mass Adoption

BaaS (Blockchain-as-a-Service) will further reduce the barriers to entry for enterprise blockchain applications.

A variety of hardware chips embedded with core algorithms used in edge, cloud and designed specifically for blockchain will also emerge, allowing assets in the physical world to be mapped to assets on blockchain, further expanding the boundaries of the Internet of Value and realising “multi-chain interconnection”.

In the future, a large number of innovative blockchain application scenarios with multi-dimensional collaboration across different industries and ecosystems will emerge, and large-scale production-grade blockchain applications with more than 10 million DAI (Daily Active Items) will gain mass adoption.

Tech Trend #7 : A Critical Period Before Large-Scale Quantum Computing

In 2019, the race to achieve “Quantum Supremacy” brought the focus back to quantum computing. The demonstration, using superconducting circuits, boosts overall confidence in superconducting quantum computing for the realisation of a large-scale quantum computer.

In 2020, the field of quantum computing will receive increasing investment, which comes with intensified competition. The field is also expected to experience a speed-up in industrialisation and the gradual formation of an ecosystem.

In the coming years, the next milestones will be the realisation of fault-tolerant quantum computing and the demonstration of quantum advantage in real-world problems. Either is a great challenge given present knowledge. Quantum computing is entering a critical period.

Tech Trend #8 : New Materials Will Revolutionise Semiconductor Devices

Under the pressure of both Moore’s Law and the explosive demand for computing power and storage, it is difficult for classic silicon-based transistors to sustain the development of the semiconductor industry.

Until now, major semiconductor manufacturers still have no clear answer or option for chips beyond 3 nm. New materials will enable new logic, storage, and interconnection devices through new physical mechanisms, driving continuous innovation in the semiconductor industry.

For example, topological insulators and two-dimensional superconducting materials that can achieve lossless transport of electrons and spin can become the basis for new high-performance logic and interconnect devices; while new magnetic materials and new resistive-switching materials can realise high-performance magnetic memory such as SOT-MRAM and resistive memory.

Tech Trend #9 : Growing Adoption Of AI Technologies That Protect Data Privacy

The compliance costs imposed by recent data protection laws and regulations related to data transfer are higher than ever before.

In light of this, there have been growing interests in using AI technologies to protect data privacy. The essence is to enable the data user to compute a function over input data from different data providers while keeping those data private.

Such AI technologies promise to solve the problems of data silos and lack of trust in today’s data sharing practices, and will truly unleash the value of data in the foreseeable future.
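
One family of techniques behind this trend is secure multi-party computation. As a hedged illustration of the basic idea – additive secret sharing over a prime field, a textbook building block rather than any specific Alibaba product – three data providers can jointly compute a sum without any compute node ever seeing a raw input :

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split a private value into n random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

inputs = [42, 17, 99]                      # three providers' private inputs
all_shares = [share(v, 3) for v in inputs]

# Node j only ever sees share j of each input, which looks like random noise
node_sums = [sum(col) % PRIME for col in zip(*all_shares)]
print(sum(node_sums) % PRIME)              # 158 = 42 + 17 + 99, inputs stay private
```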

Tech Trend #10 : Cloud Becomes The Center Of IT Innovation

With the ongoing development of cloud computing technology, the cloud has grown far beyond the scope of IT infrastructure, and gradually evolved into the center of all IT technology innovations.

Cloud has close relationship with almost all IT technologies, including new chips, new databases, self-driving adaptive networks, big data, AI, IoT, blockchain, quantum computing and so forth.

Meanwhile, it creates new technologies, such as serverless computing, cloud-native software architecture, software-hardware integrated design, as well as intelligent automated operation.

Cloud computing is redefining every aspect of IT, making new IT technologies more accessible for the public. Cloud has become the backbone of the entire digital economy.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


NVIDIA TensorRT 7 with Real-Time Conversational AI!

NVIDIA just launched TensorRT 7, introducing the capability for Real-Time Conversational AI!

Here is a primer on the NVIDIA TensorRT 7, and the new real-time conversational AI capability!

 

NVIDIA TensorRT 7 with Real-Time Conversational AI

NVIDIA TensorRT 7 is their seventh-generation inference software development kit. It introduces the capability for real-time conversational AI, opening the door to human-to-AI interactions.

TensorRT 7 features a new deep learning compiler designed to automatically optimise and accelerate the increasingly complex recurrent and transformer-based neural networks needed for AI speech applications.

This boosts the performance of conversational AI components by more than 10X, compared to running them on CPUs. This drives down the latency below the 300 millisecond (0.3 second) threshold considered necessary for real-time interactions.

 

TensorRT 7 Targets Recurrent Neural Networks

TensorRT 7 is designed to speed up AI models used to make predictions on time-series and sequence data with recurrent neural networks (RNNs).

RNNs are used not only in conversational AI speech networks; they also help with arrival-time planning for cars and satellites, the prediction of events in electronic medical records, financial asset forecasting and fraud detection.

The use of RNNs has hitherto been limited to a few companies with the talent and manpower to hand-optimise the code to meet real-time performance requirements.

With TensorRT 7’s new deep learning compiler, developers now have the ability to automatically optimise these neural networks to deliver the best possible performance and lowest latencies.

The new compiler also optimises transformer-based models like BERT for natural language processing.
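
In practice, this optimisation is a build step : the developer hands TensorRT a trained model and gets back a tuned inference engine. Here is a minimal sketch using the TensorRT 7 Python API, assuming a hypothetical `bert_base.onnx` export :

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path):
    """Parse an ONNX model and let TensorRT's compiler optimise it into an engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30    # 1 GB scratch space for tactic search
    config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision on Tensor Cores
    return builder.build_engine(network, config)

engine = build_engine("bert_base.onnx")    # hypothetical exported model file
```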

 

TensorRT 7 Availability

NVIDIA TensorRT 7 will be made available in the coming days, free of charge for development and deployment, to members of the NVIDIA Developer program.

The latest versions of plug-ins, parsers and samples are also available as open source from the TensorRT GitHub repository.

 

Recommended Reading

Go Back To > Software | Business | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


NVIDIA DRIVE Deep Neural Networks : Access Granted!

NVIDIA just announced that they will be providing the transportation industry access to their NVIDIA DRIVE Deep Neural Networks (DNNs) for autonomous vehicle development! Here are the details!

 

NVIDIA DRIVE Deep Neural Networks : Access Granted!

To accelerate the adoption of NVIDIA DRIVE by the transportation industry for autonomous vehicle development, NVIDIA is providing access to the NVIDIA DRIVE Deep Neural Networks.

What this means is autonomous vehicle developers will now be able to access all of NVIDIA’s pre-trained AI models and training code, and use them to improve their self-driving systems.

AI is central to the development of safe self-driving cars. It lets autonomous vehicles perceive and react to obstacles and potential dangers, or even changes in their surroundings.

Powering every self-driving car are dozens of Deep Neural Networks (DNNs) that tackle redundant and diverse tasks, to ensure accurate perception, localisation and path planning.

These DNNs cover tasks like traffic light and sign detection, object detection for vehicles, pedestrians and bicycles, and path perception, as well as gaze detection and gesture recognition within the vehicle.

 

Advanced NVIDIA DRIVE Tools

In addition to providing access to their DRIVE DNNs, NVIDIA also made available a suite of advanced NVIDIA DRIVE tools.

These NVIDIA DRIVE tools allow autonomous vehicle developers to customise and enhance the NVIDIA DRIVE DNNs using their own datasets and target feature set.

  • Active Learning improves model accuracy and reduces data collection costs by automating data selection using AI, rather than manual curation (see the sketch after this list).
  • Federated Learning lets developers utilise datasets across countries, and with other developers while maintaining data privacy and protecting their own intellectual property.
  • Transfer Learning gives NVIDIA DRIVE customers the ability to speed up development of their own perception software by leveraging NVIDIA’s own autonomous vehicle development.
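
NVIDIA has not published the internals of its Active Learning tool, but a common approach it likely resembles is uncertainty sampling : send only the frames the model is least confident about to human annotators. A generic sketch (our illustration, not NVIDIA’s code) :

```python
import numpy as np

def select_for_labelling(probs, budget):
    """Pick the frames with the highest predictive entropy (most model uncertainty)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:budget]

# probs : per-frame softmax outputs of a perception DNN, shape (n_frames, n_classes)
probs = np.random.dirichlet(np.ones(4), size=10_000)
print(select_for_labelling(probs, budget=100))  # indices worth annotating first
```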

 

Recommended Reading

Go Back To > Automotive | Business | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

At GTC China 2019, DiDi announced that they will adopt NVIDIA GPUs and AI technologies to develop self-driving cars, as well as their cloud computing solutions.

 

DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

This announcement comes after DiDi spun off its autonomous driving unit as an independent company in August 2019.

In their announcement, DiDi confirmed that they will use NVIDIA technologies in both their data centres and onboard their self-driving cars :

  • NVIDIA GPUs will be used to train machine learning algorithms in the data center
  • NVIDIA DRIVE will be used for inference in their Level 4 self-driving cars

NVIDIA DRIVE will fuse data from all types of sensors – cameras, LIDAR, radar, etc. – and use numerous deep neural networks (DNNs) to understand the surrounding area, so the self-driving car can plan a safe way forward.

Those DNNs will first require training on NVIDIA GPU data centre servers, using machine learning algorithms.

Recommended : NVIDIA DRIVE AGX Orin for Autonomous Vehicles Revealed!

 

DiDi Cloud Computing Will Use NVIDIA Tech Too

DiDi also announced that DiDi Cloud will adopt and launch new vGPU (virtual GPU) cloud servers based on NVIDIA GPUs.

The new vGPU licence mode will offer more affordable and flexible GPU cloud computing services for remote computing, rendering and gaming.

 

Recommended Reading

Go Back To > Automotive | Business | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel oneAPI Unified Programming Model Overview!

At Supercomputing 2019, Intel unveiled their oneAPI initiative for heterogenous computing, promising to deliver a unified programming experience for developers.

Here is an overview of the Intel oneAPI unified programming model, and what it means for programmers!

 

The Need For Intel oneAPI

The modern computing environment is now a lot less CPU-centric, with the greater adoption of GPUs, FPGAs and custom-built accelerators (like the Alibaba Hanguang 800).

Their different scalar, vector, matrix and spatial architectures require different APIs and code bases, which complicates attempts to utilise a mix of those capabilities.

 

Intel oneAPI For Heterogenous Computing

Intel oneAPI promises to change all that, offering a unified programming model for those different architectures.

It allows developers to create workloads and applications for multiple architectures on their platform of choice, without the need to develop and maintain separate code bases, tools and workflow.

Intel oneAPI comprises two components – the open industry initiative, and the Intel oneAPI beta toolkit :

oneAPI Initiative

This is a cross-architecture development model based on industry standards, and an open specification, to encourage broader adoption.

Intel oneAPI Beta Toolkit

This beta toolkit offers the Intel oneAPI specification components with direct programming (Data Parallel C++), API-based programming with performance libraries, and advanced analysis and debugging tools.

Developers can test code and workloads in the Intel DevCloud for oneAPI on multiple Intel architectures.

 

What Processors + Accelerators Are Supported By Intel oneAPI?

The beta Intel oneAPI reference implementation currently supports these Intel platforms :

  • Intel Xeon Scalable processors
  • Intel Core and Atom processors
  • Intel processor graphics (as a proxy for future Intel discrete data centre GPUs)
  • Intel FPGAs (Intel Arria, Stratix)

The oneAPI specification is designed to support a broad range of CPUs and accelerators from multiple vendors. However, it is up to those vendors to create their own oneAPI implementations and optimise them for their own hardware.

 

Are oneAPI Elements Open-Sourced?

Many oneAPI libraries and components are already open sourced, or soon will be.

 

What Companies Are Participating In The oneAPI Initiative?

According to Intel, more than 30 vendors and research organisations support the oneAPI initiative, including CERN openlab, SAP and the University of Cambridge.

Companies that create their own implementation of oneAPI and complete a self-certification process will be allowed to use the oneAPI initiative brand and logo.

 

Available Intel oneAPI Toolkits

At the time of its launch (17 November 2019), here are the toolkits that Intel has made available for developers to download and use :

Intel oneAPI Base Toolkit (Beta)

This foundational kit enables developers of all types to build, test, and deploy performance-driven, data-centric applications across CPUs, GPUs, and FPGAs. Comes with :

  • Intel oneAPI Data Parallel C++ Compiler
  • Intel Distribution for Python
  • Multiple optimized libraries
  • Advanced analysis and debugging tools

Domain Specific oneAPI Toolkits for Specialised Workloads :

  • oneAPI HPC Toolkit (beta) : Deliver fast C++, Fortran, OpenMP, and MPI applications that scale.
  • oneAPI DL Framework Developer Toolkit (beta) : Build deep learning frameworks or customize existing ones.
  • oneAPI IoT Toolkit (beta) : Build high-performing, efficient, reliable solutions that run at the network’s edge.
  • oneAPI Rendering Toolkit (beta) : Create high-performance, high-fidelity visualization applications.

Additional Toolkits, Powered by oneAPI

  • Intel AI Analytics Toolkit (beta) : Speed AI development with tools for DL training, inference, and data analytics.
  • Intel Distribution of OpenVINO Toolkit : Deploy high-performance inference applications from device to cloud.
  • Intel System Bring-Up Toolkit (beta) : Debug and tune systems for power and performance.

You can download all of those toolkits here.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Dell Forecasts The Future of Connected Living In 2030!

Dell Technologies just shared with us the key findings from their research that explores the future of connected living by the year 2030!

Find out how emerging technologies will transform our lives by the year 2030!

 

Dell On The Future of Connected Living In 2030!

Dell Technologies conducted their research in partnership with the Institute for the Future (IFTF) and Vanson Bourne, surveying 1,100 business leaders across ten countries in Asia Pacific and Japan.

Let’s take a look at their key findings, and find out why they believe the future is brimming with opportunity thanks to emerging technologies.

 

Technological Shifts Transforming The Future By 2030

IFTF and a forum of global experts forecast that emerging technologies like edge computing, 5G, AI, Extended Reality (XR) and IoT will create these five major shifts in society :

1. Networked Reality

Over the next decade, the line between the virtual and the real will vanish. Cyberspace will become an overlay on top of our existing reality as our digital environment extends beyond televisions, smartphones and other displays.

This transformation will be driven by the deployment of 5G networks that enable high bandwidth, low-latency connections for streaming, interactive services, and multi-user media content.

2. Connected Mobility and Networked Matter

The vehicles of tomorrow will essentially be mobile computers, with the transportation system resembling packet-switched networks that power the Internet.

We will trust them to take us where we need to go in the physical world as we interact in the virtual spaces available to us wherever we are.

3. From Digital Cities to Sentient Cities

More than half of the world’s population live in urban areas. This will increase to 68% over the next three decades, according to the United Nations.

This level of growth presents both huge challenges and great opportunities for businesses, governments and citizens.

Cities will quite literally come to life through their own networked infrastructure of smart objects, self-reporting systems and AI-powered analytics.

4. Agents and Algorithms

Our 2030 future will see everyone supported by a highly personalised “operating system for living” that is able to anticipate our needs and proactively support our day-to-day activities to free up time.

Such a Life Operating System (Life OS) will be context-aware, anticipating our needs and behaving proactively.

Instead of interacting with different apps today, the intelligent agent of the future will understand what you need and liaise with various web services, other bots and networked objects to get the job done.

5. Robots with Social Lives

Within 10 years, we will have personal robots that will become our partners in life – enhancing our skills and extending our abilities.

In some cases, they will replace us, but this can mean freeing us to do the things we are good at, and enjoy.

In most cases, they can become our collaborators, helping to crowdsource innovations and accelerate progress through robot social networks.

 

Preparing For The Future Of Connected Living By 2030

Anticipating Change

Many businesses in APJ are already preparing for these shifts, with business leaders expressing these perceptions :

  • 80% (82% in Malaysia) will restructure the way they spend their time by automating more tasks
  • 70% (83% in Malaysia) welcome people partnering with machines/robots to surpass our human limitations
  • More than half of businesses anticipate Networked Reality becoming commonplace
    – 63% (67% in Malaysia) say they welcome day-to-day immersion in virtual and augmented realities
    – 62% (63% in Malaysia) say they welcome people being fitted with brain computer interfaces

Navigating Challenges

These technological shifts are seismic in nature, leaving people and organisations grappling with change. Organisations that want to harness these emerging technologies will need to collect, process and make use of the data, while addressing public concerns about data privacy.

APJ business leaders are already anticipating some of these challenges :

  • 78% (88% in Malaysia) will be more concerned about their own privacy by 2030 than they are today
  • 74% (83% in Malaysia) consider data privacy to be a top societal-scale challenge that must be solved
  • 49% (56% in Malaysia) would welcome self-aware machines
  • 49% (43% in Malaysia) call for regulation and clarity on how AI is used
  • 84% (85% in Malaysia) believe that digital transformation should be more widespread throughout their organisation

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel Nervana NNP-I1000 PCIe + M.2 Cards Revealed!

The new Intel Nervana NNP-I1000 neural network processor comes in PCIe and M.2 card options designed for AI inference acceleration.

Here is EVERYTHING you need to know about the Intel Nervana NNP-I1000 PCIe and M.2 card options!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

Recommended : Intel Nervana AI Accelerators : Everything You Need To Know!

 

Intel Nervana NNP-I1000

The Intel Nervana NNP-I1000, on the other hand, is optimised for near-real-time, high-volume, multi-modal inferencing.

Each Nervana NNP-I1000 features 12 Inference Compute Engines (ICE), which are paired with two Intel CPU cores, a large on-die 75 MB SRAM cache and an on-die Network-on-Chip (NoC).

It offers mixed-precision support, with a special focus on low-precision applications for near-real-time performance.
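
Low-precision inference typically means doing the matrix math in int8 rather than FP32. As a generic illustration of the idea – symmetric linear quantisation, a textbook scheme rather than Intel’s actual stack – the mapping looks like this :

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantisation : map FP32 values onto the int8 range."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

x = np.random.randn(4).astype(np.float32)
q, scale = quantize_int8(x)
print(x)
print(q * scale)  # dequantised values are close to the FP32 originals
```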

Like the NNP-T, the NNP-I comes with a full software stack that is built with open components, including direct integration with deep learning frameworks.

 

Intel Nervana NNP-I1000 Models

The Nervana NNP-I1000 comes in an M.2 form factor, or a PCI Express card, to accommodate exponentially larger and more complex models, or to run dozens of models and networks in parallel.

Specifications Intel Nervana NNP-I1100 Intel Nervana NNP-I1300
Form Factor M.2 Card PCI Express Card
Compute 1 x Intel Nervana NNP-I1000 2 x Intel Nervana NNP-I1000
SRAM 75 MB 2 x 75 MB
Int8 Performance Up to 50 TOPS Up to 170 TOPS
TDP 12 W 75 W

 

Intel Nervana NNP-I1000 PCIe Card

This is what the Intel Nervana NNP-I1000 (also known as the NNP-I1300) PCIe card looks like :

 

Intel Nervana NNP-I1000 M.2 Card

This is what the Intel Nervana NNP-I1000 (also known as the NNP-I1100) M.2 card looks like :

 

Recommended Reading


Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel Nervana NNP-T1000 PCIe + Mezzanine Cards Revealed!

The new Intel Nervana NNP-T1000 neural network processor comes in PCIe and Mezzanine card options designed for AI training acceleration.

Here is EVERYTHING you need to know about the Intel Nervana NNP-T1000 PCIe and Mezzanine card options!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

Recommended : Intel Nervana AI Accelerators : Everything You Need To Know!

 

Intel Nervana NNP-T1000

The Intel Nervana NNP-T1000 is not only capable of training even the most complex deep learning models; it is also highly scalable – offering near-linear scaling and efficiency.

By combining compute, memory and networking capabilities in a single ASIC, it allows for maximum efficiency with flexible and simple scaling.

Each Nervana NNP-T1000 is powered by up to 24 Tensor Processing Clusters (TPCs), and comes with 16 bi-directional Inter-Chip Links (ICL).

Its TPCs support 32-bit floating point (FP32) and brain floating point (bfloat16) formats, allowing for multiple deep learning primitives with maximum processing efficiency.
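
bfloat16 is essentially FP32 with the bottom 16 mantissa bits removed – it keeps the same 8-bit exponent range, so values rarely overflow, but carries far less precision. A small numpy sketch of the conversion (simplified to truncation; real hardware usually rounds) :

```python
import numpy as np

def fp32_to_bfloat16(x):
    """Keep FP32's sign + 8 exponent bits + top 7 mantissa bits, zero the rest."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & 0xFFFF0000).view(np.float32)  # drop the low 16 mantissa bits

x = np.array([3.14159265, 1e30, 1e-30], dtype=np.float32)
print(fp32_to_bfloat16(x))  # the range survives; precision past ~3 digits is lost
```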

Its high-speed ICL communication fabric allows for near-linear scaling, directly connecting multiple NNP-T cards within servers, between servers and even inside and across racks.

  • High compute utilisation using Tensor Processing Clusters (TPC) with bfloat16 numeric format
  • Both on-die SRAM and on-package High-Bandwidth Memory (HBM) keep data local, reducing movement
  • Its Inter-Chip Links (ICL) glueless fabric architecture and fully-programmable router achieve near-linear scaling across multiple cards, systems and PODs
  • Available in PCIe and OCP Open Accelerator Module (OAM) form factors
  • Offers a programmable Tensor-based instruction set architecture (ISA)
  • Supports common open-source deep learning frameworks like TensorFlow, PaddlePaddle and PyTorch

 

Intel Nervana NNP-T1000 Models

The Intel Nervana NNP-T1000 is currently available in two form factors – a dual-slot PCI Express card, and an OAM Mezzanine Card, with these specifications :

Specifications Intel Nervana NNP-T1300 Intel Nervana NNP-T1400
Form Factor Dual-slot PCIe Card OAM Mezzanine Card
Compliance PCIe CEM OAM 1.0
Compute Cores 22 TPCs 24 TPCs
Frequency 950 MHz 1100 MHz
SRAM 55 MB on-chip, with ECC 60 MB on-chip, with ECC
Memory 32 GB HBM2, with ECC 32 GB HBM2, with ECC
Memory Bandwidth 2.4 Gbps (300 MB/s) per pin
Inter-Chip Link (ICL) 16 x 112 Gbps (448 GB/s)
ICL Topology Ring Ring, Hybrid Cube Mesh,
Fully Connected
Multi-Chassis Scaling Yes Yes
Multi-Rack Scaling Yes Yes
I/O to Host CPU PCIe Gen3 / Gen4 x16
Thermal Solution Passive, Integrated Passive Cooling
TDP 300 W 375 W
Dimensions 265.32 mm x 111.15 mm 165 mm x 102 mm

 

Intel Nervana NNP-T1000 PCIe Card

This is what the Intel Nervana NNP-T1000 (also known as the NNP-T1300) PCIe card looks like :

 

Intel Nervana NNP-T1000 OAM Mezzanine Card

This is what the Intel Nervana NNP-T1000 (also known as NNP-T1400) Mezzanine card looks like :

 

Recommended Reading


Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel Nervana AI Accelerators : Everything You Need To Know!

Intel just introduced their Nervana AI accelerators – the Nervana NNP-T1000 for training, and Nervana NNP-I1000 for inference!

Here is EVERYTHING you need to know about these two new Intel Nervana AI accelerators!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

 

Nervana NNP-T For Training

The Intel Nervana NNP-T1000 is not only capable of training even the most complex deep learning models; it is also highly scalable – offering near-linear scaling and efficiency.

By combining compute, memory and networking capabilities in a single ASIC, it allows for maximum efficiency with flexible and simple scaling.

Recommended : Intel NNP-T1000 PCIe + Mezzanine Cards Revealed!

Each Nervana NNP-T1000 is powered by up to 24 Tensor Processing Clusters (TPCs), and comes with 16 bi-directional Inter-Chip Links (ICL).

Its TPCs support 32-bit floating point (FP32) and brain floating point (bfloat16) formats, allowing for multiple deep learning primitives with maximum processing efficiency.

Its high-speed ICL communication fabric allows for near-linear scaling, directly connecting multiple NNP-T cards within servers, between servers and even inside and across racks.

  • High compute utilisation using Tensor Processing Clusters (TPC) with bfloat16 numeric format
  • Both on-die SRAM and on-package High-Bandwidth Memory (HBM) keep data local, reducing movement
  • Its Inter-Chip Links (ICL) glueless fabric architecture and fully-programmable router achieve near-linear scaling across multiple cards, systems and PODs
  • Available in PCIe and OCP Open Accelerator Module (OAM) form factors
  • Offers a programmable Tensor-based instruction set architecture (ISA)
  • Supports common open-source deep learning frameworks like TensorFlow, PaddlePaddle and PyTorch

 

Intel Nervana NNP-T Accelerator Models

The Intel Nervana NNP-T is currently available in two form factors – a dual-slot PCI Express card, and an OAM Mezzanine Card, with these specifications :

Specifications Intel Nervana NNP-T1300 Intel Nervana NNP-T1400
Form Factor Dual-slot PCIe Card OAM Mezzanine Card
Compliance PCIe CEM OAM 1.0
Compute Cores 22 TPCs 24 TPCs
Frequency 950 MHz 1100 MHz
SRAM 55 MB on-chip, with ECC 60 MB on-chip, with ECC
Memory 32 GB HBM2, with ECC 32 GB HBM2, with ECC
Memory Bandwidth 2.4 Gbps (300 MB/s) per pin
Inter-Chip Link (ICL) 16 x 112 Gbps (448 GB/s)
ICL Topology Ring Ring, Hybrid Cube Mesh,
Fully Connected
Multi-Chassis Scaling Yes Yes
Multi-Rack Scaling Yes Yes
I/O to Host CPU PCIe Gen3 / Gen4 x16
Thermal Solution Passive, Integrated Passive Cooling
TDP 300 W 375 W
Dimensions 265.32 mm x 111.15 mm 165 mm x 102 mm

 

Nervana NNP-I For Inference

The Intel Nervana NNP-I1000, on the other hand, is optimised for near-real-time, high-volume, multi-modal inferencing.

Each Nervana NNP-I1000 features 12 Inference Compute Engines (ICE), which are paired with two Intel CPU cores, a large on-die 75 MB SRAM cache and an on-die Network-on-Chip (NoC).

Recommended : Intel NNP-I1000 PCIe + M.2 Cards Revealed!

It offers mixed-precision support, with a special focus on low-precision applications for near-real-time performance.

Like the NNP-T, the NNP-I comes with a full software stack that is built with open components, including direct integration with deep learning frameworks.

Intel Nervana NNP-I Accelerator Models

The NNP-I1000 comes in a 12 W M.2 form factor, or a 75 W PCI Express card, to accommodate exponentially larger and more complex models, or to run dozens of models and networks in parallel.

Specifications Intel Nervana NNP-I1100 Intel Nervana NNP-I1300
Form Factor M.2 Card PCI Express Card
Compute 1 x Intel Nervana NNP-I1000 2 x Intel Nervana NNP-I1000
SRAM 75 MB 2 x 75 MB
Int8 Performance Up to 50 TOPS Up to 170 TOPS
TDP 12 W 75 W

 

Recommended Reading


Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

On 7 November 2019, NVIDIA introduced the Jetson Xavier NX – the world’s smallest AI supercomputer designed for robotics and embedded computing applications at the edge!

Here is EVERYTHING you need to know about the new NVIDIA Jetson Xavier NX!

 

NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

At just 70 x 45 mm, the new NVIDIA Jetson Xavier NX is smaller than a credit card. Yet it delivers server-class AI performance at up to 21 TOPS, while consuming as little as 10 watts of power.

Short for Nano Xavier, the NX is a low-power version of the Xavier SoC that came up tops in the MLPerf Inference benchmarks.

Recommended : NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

With its small size and low power draw, it opens up the possibility of adding AI-on-the-edge computing capabilities to small commercial robots, drones, industrial IoT systems, network video recorders and portable medical devices.

The Jetson Xavier NX can be configured to deliver up to 14 TOPS at 10 W, or 21 TOPS at 15 W. It is powerful enough to run multiple neural networks in parallel, and process data from multiple high-resolution sensors simultaneously.
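
On shipping Jetson boards, power envelopes are selected with NVIDIA’s nvpmodel utility; assuming the Xavier NX follows the same convention (the mode ID below is a placeholder – check `nvpmodel -q` on real hardware), switching profiles looks something like this :

```python
import subprocess

def set_power_mode(mode_id):
    """Switch the Jetson power profile, e.g. between the 10 W and 15 W envelopes."""
    subprocess.run(["sudo", "nvpmodel", "-m", str(mode_id)], check=True)

def query_power_mode():
    """Report the currently active power profile."""
    out = subprocess.run(["nvpmodel", "-q"], check=True,
                         capture_output=True, text=True)
    return out.stdout

set_power_mode(0)          # placeholder ID for the 15 W / 21 TOPS profile
print(query_power_mode())
```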

The NVIDIA Jetson Xavier NX runs on the same CUDA-X AI software architecture as all other Jetson processors, and is supported by the NVIDIA JetPack software development kit.

It is pin-compatible with the Jetson Nano, offering up to 15X higher performance than the Jetson TX2 in a smaller form factor.

It will not be available for a few more months, but developers can begin development today using the Jetson AGX Xavier Developer Kit, with a software patch to emulate the Jetson Xavier NX.

 

NVIDIA Jetson Xavier NX Specifications

Specifications NVIDIA Jetson Xavier NX
CPU NVIDIA Carmel
– 6 x Arm 64-bit cores
– 6 MB L2 + 4 MB L3 caches
GPU NVIDIA Volta
– 384 CUDA cores, 48 Tensor cores, 2 NVDLA cores
AI Performance 21 TOPS @ 15 W
14 TOPS @ 10 W
Memory Support 128-bit LPDDR4x-3200
– Up to 8 GB, 51.2 GB/s
Video Support Encoding : Up to 2 x 4K30 streams
Decoding : Up to 2 x 4K60 streams
Camera Support Up to six CSI cameras (32 via virtual channels)
Up to 12 lanes (3×4 or 6×2) MIPI CSI-2
Connectivity Gigabit Ethernet
OS Support Ubuntu-based Linux
Module Size 70 x 45 mm (Nano)

 

NVIDIA Jetson Xavier NX Price + Availability

The NVIDIA Jetson Xavier NX will be available in March 2020 from NVIDIA’s distribution channels, priced at US$399.

 

Recommended Reading

Go Back To > Enterprise | Software | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

The MLPerf Inference 0.5 benchmarks are officially released today, with NVIDIA declaring that they aced them for both datacenter and edge computing workloads.

Find out how well NVIDIA did, and why it matters!

 

The MLPerf Inference Benchmarks

MLPerf Inference 0.5 is the industry’s first independent suite of five AI inference benchmarks.

Applied across a range of form factors and four inference scenarios, the new MLPerf Inference Benchmarks test the performance of established AI applications like image classification, object detection and translation.

 

NVIDIA Wins MLPerf Inference Benchmarks For Datacenter + Edge

Thanks to the programmability of its computing platforms to cater to diverse AI workloads, NVIDIA was the only company to submit results for all five MLPerf Inference Benchmarks.

According to NVIDIA, their Turing GPUs topped all five benchmarks for both datacenter scenarios (server and offline) among commercially-available processors.

Meanwhile, their Jetson Xavier scored highest among commercially-available edge and mobile SoCs under both edge-focused scenarios – single stream and multi-stream.

The new NVIDIA Jetson Xavier NX that was announced today is a low-power version of the Xavier SoC that won the MLPerf Inference 0.5 benchmarks.

All of NVIDIA’s MLPerf Inference Benchmark results were achieved using NVIDIA TensorRT 6 deep learning inference software.

 

Recommended Reading

Go Back To > Enterprise | Software | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Samsung – IBM AI IoT Cloud Platform For 5G Mobile Solutions!

At the Samsung Developer Conference 2019, Samsung and IBM announced a joint platform that leverages Samsung Galaxy devices and IBM cloud technologies to introduce new 5G, AI-powered mobile solutions!

Here is what you need to know about this new Samsung-IBM AI IoT cloud platform, and the 5G AI-powered mobile solutions it’s powering for governments and enterprises.

 

Samsung – IBM AI IoT Cloud Platform For 5G Mobile Solutions!

Built using IBM Cloud technologies and Samsung Galaxy mobile devices, the new platform will help improve the work environment for employees in high-stress or high-risk occupations.

This is critical, because nearly 3 million people die each year due to occupational accidents.

This new, unnamed Samsung-IBM platform will help governments and enterprises track their employees’ vitals, including heart rate and physical activity, to determine if an employee is in distress and requires help.

 

The Samsung – IBM AI IoT Cloud Platform In Use

5G mobile solutions based on the new Samsung-IBM AI IoT platform are being piloted by multiple police forces to monitor the health of their officers in real time, and to provide situational awareness insights to first responders and their managers.

The platform can track in real time, the safety and wellness indicators of first responders equipped with Samsung Galaxy Watches and Galaxy smartphones with 5G connectivity.

It can instantly alert emergency managers if there is a significant change in the safety parameters, which may indicate that the first responder is at risk of a heart attack, heat exhaustion or another life-threatening event.

This allows them to anticipate potential dangers, and quickly send assistance. This should greatly reduce the risk of death and injuries to their employees.
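Neither company has published the platform’s APIs, but the alerting behaviour described above boils down to threshold checks on streamed vitals. A purely hypothetical Python sketch of that idea, with illustrative field names and thresholds:

```python
# Purely hypothetical sketch of the threshold-alert idea described above.
# The Samsung-IBM platform's actual APIs, field names and thresholds have
# not been published; everything below is illustrative only.
from dataclasses import dataclass

@dataclass
class VitalsReading:
    officer_id: str
    heart_rate_bpm: int    # streamed from a Galaxy Watch, per the article
    body_temp_c: float

def check_for_distress(reading: VitalsReading) -> list[str]:
    """Return alert messages if any vital crosses an illustrative threshold."""
    alerts = []
    if reading.heart_rate_bpm > 180:  # illustrative threshold, not official
        alerts.append(f"{reading.officer_id}: dangerously high heart rate")
    if reading.body_temp_c > 39.5:    # illustrative heat-exhaustion cutoff
        alerts.append(f"{reading.officer_id}: possible heat exhaustion")
    return alerts

print(check_for_distress(VitalsReading("unit-7", 192, 38.2)))
```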

 



Key NVIDIA EGX Announcements @ MWC Los Angeles 2019!

At MWC Los Angeles 2019, NVIDIA announced major partnerships on their EGX edge computing platform with Microsoft, Ericsson and Red Hat to accelerate AI, IoT and 5G at the Edge.

Catch the official NVIDIA EGX briefing on these new updates, and find out what it means for edge computing!

 

The Official NVIDIA EGX Briefing @ MWC Los Angeles 2019

Before the official announcement, NVIDIA gave us an exclusive briefing on their EGX edge computing platform. We are sharing it with you, so you can hear it directly from Justin Boitano, General Manager of NVIDIA’s Enterprise & Edge Computing division.

Here is a summary of what NVIDIA unveiled at MWC Los Angeles 2019 :

NVIDIA EGX Early Adopters

NVIDIA actually announced EGX at Computex Taipei in June 2019 – a combination of the NVIDIA CUDA-X software and NVIDIA-certified GPU servers and devices.

Now, they are announcing that Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East have adopted the EGX platform, and so have the cities of San Francisco and Las Vegas.

  • Samsung Electronics — The Korean electronics giant is using AI at the edge for highly complex semiconductor design and manufacturing processes.
  • BMW — The German automaker is using intelligent video analytics in its South Carolina manufacturing facility to automate inspection. With EGX gathering data from multiple cameras and other sensors in inspection lines, BMW is helping ensure only the highest quality automobiles leave the factory floor.
  • NTT East — The Japanese telecom services giant is using EGX in its data centers to develop new AI-powered services in remote areas through its broadband access network. Using the EGX platform, NTT East will provide remote populations the computing power and connectivity required to build and deploy a wide range of AI applications at the edge.
  • Procter & Gamble — The world’s leading consumer goods company is working with NVIDIA to develop AI-enabled applications on top of the EGX platform for the inspection of products and packaging to help ensure they meet the highest safety and quality standards. P&G is using NVIDIA EGX to analyze thousands of hours of footage from inspection lines and immediately flag imperfections.
  • Las Vegas — The city is using EGX to capture vehicle and pedestrian data to ensure safer streets and expand economic opportunity. Las Vegas plans to use the data to autonomously manage signal timing and other operational capabilities.
  • San Francisco — The city’s Union Square Business Improvement District is using EGX to capture real-time pedestrian counts for local retailers, providing them a powerful business intelligence tool for engaging with their customers more effectively.

NVIDIA EGX Case Study : Walmart

Walmart is a pioneer user of EGX, deploying it in its Levittown, New York, Intelligent Retail Lab — a fully-operational grocery store where it’s exploring the ways AI can further improve in-store shopping experiences.

Using EGX’s advanced AI and edge capabilities, Walmart is able to process, in real time, the more than 1.6 terabytes of data the store generates each second, and can use AI to :

  • automatically alert associates to restock shelves,
  • open up new checkout lanes,
  • retrieve shopping carts, and
  • ensure product freshness in meat and produce departments.

NVIDIA EGX Partnership With Microsoft

NVIDIA announced a collaboration with Microsoft to enable the closer integration between Microsoft Azure and the NVIDIA EGX platform, to advance edge-to-cloud AI computing capabilities for their clients.

The NVIDIA Metropolis video analytics application framework, which runs on EGX, has been optimized to work with Microsoft’s Azure IoT Edge, Azure Machine Learning solutions and a new form factor of the Azure Data Box Edge appliance powered by NVIDIA T4 GPUs.

In addition, NVIDIA-certified off-the-shelf servers — optimised to run Azure IoT Edge and ML services — are now available from more than a dozen leading OEMs, including Dell, Hewlett Packard Enterprise and Lenovo.

NVIDIA EGX Partnership With Ericsson

NVIDIA also announced that they are collaborating with Ericsson on developing virtualised RAN technologies.

Their ultimate goal is to commercialise those virtualised RAN technologies to deliver 5G networks with flexibility and shorter time-to-market for new services like augmented reality, virtual reality and gaming.

NVIDIA EGX Partnership With Red Hat

Finally, NVIDIA announced a collaboration with Red Hat to deliver software-defined 5G wireless infrastructure running on Red Hat OpenShift to the telecom industry.

Their customers will be able to use NVIDIA EGX and Red Hat OpenShift to deploy NVIDIA GPUs to accelerate AI, data science and machine learning at the edge.

The critical element enabling 5G providers to move to cloud-native infrastructure is NVIDIA Aerial. This software development kit, also announced today, allows providers to build and deliver high-performance, software-defined 5G wireless RANs by delivering two essential advancements.

They are a low-latency data path directly from Mellanox network interface cards to GPU memory, and a 5G physical layer signal-processing engine that keeps all data within the GPU’s high-performance memory.





The Alibaba Hanguang 800 (含光 800) AI NPU Explained!

At the Apsara Computing Conference 2019, Alibaba Group unveiled details of their first AI inference NPU – the Hanguang 800 (含光 800).

Here is EVERYTHING you need to know about the Alibaba Hanguang 800 AI inference NPU!

Updated @ 2019-09-27 : Added more details, including a performance comparison against its main competitors.

Originally posted @ 2019-09-25

 

What Is The Alibaba Hanguang 800?

The Alibaba Hanguang 800 is a neural processing unit (NPU) for AI inference applications. It was specifically designed to accelerate machine learning and AI inference tasks.

 

What Does Hanguang Mean?

The name 含光 (Hanguang) literally means “contains light”.

While the name may suggest that it uses photonics, it does not; light-based computing is still at least a decade away from commercialisation.

 

What Are The Hanguang 800 Specifications?

Not much is known about the Hanguang 800, other than that it has 17 billion transistors, and is fabricated on a 12 nm process.

Also, it is designed for inferencing only, unlike the HUAWEI Ascend 910 AI chip which can handle both training and inference.

Recommended : 3rd Gen X-Dragon Architecture by Alibaba Cloud Explained!

 

Who Designed The Hanguang 800?

The Hanguang 800 was developed by Alibaba’s research unit, T-Head, over a period of seven months, followed by a three-month tape-out.

T-Head, whose Chinese name Pingtouge (平头哥) means honey badger in English, is responsible for designing chips for cloud and edge computing under Alibaba Cloud / Aliyun.

Earlier this year, T-Head revealed a high-performance IoT processor called XuanTie 910.

Based on the open-source RISC-V instruction set, the 16-core XuanTie 910 is targeted at heavy-duty IoT applications like edge servers, networking gateways, and self-driving automobiles.

 

How Fast Is Hanguang 800?

Alibaba claims that the Hanguang 800 “largely” outpaces the industry average performance, with image processing efficiency about 12X better than GPUs :

  • Single-chip performance : 78,563 images per second (IPS)
  • Computational efficiency : 500 IPS per watt (ResNet-50 inference test)
Specifications                Hanguang 800   Habana Goya   Cambricon MLU270   NVIDIA T4      NVIDIA P4
Fab Process                   12 nm          16 nm         16 nm              12 nm          16 nm
Transistors                   17 billion     NA            NA                 13.6 billion   7.2 billion
Performance (ResNet-50)       78,563 IPS     15,433 IPS    10,000 IPS         5,402 IPS      1,721 IPS
Peak Efficiency (ResNet-50)   500 IPS/W      150 IPS/W     143 IPS/W          78 IPS/W       52 IPS/W
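As a quick sanity check, the chip-to-chip ratios can be worked out directly from the table above:

```python
# Quick arithmetic on the ResNet-50 figures from the table above.
throughput_ips = {            # images per second
    "Hanguang 800": 78_563,
    "Habana Goya": 15_433,
    "Cambricon MLU270": 10_000,
    "NVIDIA T4": 5_402,
    "NVIDIA P4": 1_721,
}
efficiency_ips_per_w = {      # images per second per watt
    "Hanguang 800": 500,
    "Habana Goya": 150,
    "Cambricon MLU270": 143,
    "NVIDIA T4": 78,
    "NVIDIA P4": 52,
}

for chip in throughput_ips:
    if chip == "Hanguang 800":
        continue
    print(f"vs {chip}: "
          f"{throughput_ips['Hanguang 800'] / throughput_ips[chip]:.1f}x throughput, "
          f"{efficiency_ips_per_w['Hanguang 800'] / efficiency_ips_per_w[chip]:.1f}x efficiency")
```

Run as-is, this puts the Hanguang 800 at roughly 14.5X the throughput and 6.4X the efficiency of the NVIDIA T4, and about 9.6X the efficiency of the P4, so Alibaba’s “about 12X better than GPUs” claim presumably averages over a broader set of workloads or hardware.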

Recommended : 2nd Gen EPYC – Everything You Need To Know Summarised!

 

Where Will Hanguang 800 Be Used?

The Hanguang 800 chip will be used exclusively by Alibaba to power their own business operations, especially in product search and automatic translation, personalised recommendations and advertising.

According to Alibaba, merchants upload a billion product images to Taobao every day. It used to take their previous platform an hour to categorise those pictures, and then tailor search and personalise recommendations for millions of Taobao customers.

With the Hanguang 800, they claim that the Taobao platform now takes just 5 minutes to complete the same task, a 12X reduction in time!

Alibaba Cloud will also be using it in their smart city projects. They are already using it in Hangzhou, where they previously used 40 GPUs to process video feeds with a latency of 300 ms.

After migrating to just four Hanguang 800 NPUs, they were able to process the same video feeds with half the latency, just 150 ms. In other words, one-tenth the number of chips now delivers twice the responsiveness.

 

Can We Buy Or Rent The Hanguang 800?

No, Alibaba will not be selling the Hanguang 800 NPU. Instead, they are offering it as a new AI cloud computing service.

Developers can now request a Hanguang 800 cloud compute quota, which Alibaba Cloud claims is 100% more cost-effective than traditional GPUs.

 

Are There No Other Alternatives For Alibaba?

In our opinion, this is Alibaba’s way of preparing for an escalation of the US-Chinese trade war that has already savaged HUAWEI.

While Alibaba certainly has a few AI inference accelerator alternatives, from AMD and NVIDIA for example, it makes sense for the company to spend money and time developing its own AI inference chip.

In the long term, the Chinese government wants to build a domestic capability to design and fabricate their own computer chips for national security reasons.

Recommended : The HUAWEI Trump Ban – Everything You Need To Know!

 



Kambyan ManUsIA + AleX Laser Cutting Drone Technology!

Kambyan Network recently invited us to a demonstration of their AleX laser cutting drone, which is designed to harvest oil palm fruits.

They also invited David Cirulli from the Embry-Riddle Aeronautical University, and Associate Professor Sagaya Amalathas from Taylor’s University, to talk about the ManUsIA digital agriculture technology and the future jobs available to young teens today.

 

Kambyan ManUsIA Digital Agriculture

Kambyan Network has been working with David Cirulli of the Embry-Riddle Aeronautical University in Singapore to develop what they call the ManUsIA digital agriculture technology.

Manusia is actually the Malay word for “human”, and it is an apt moniker because, according to David Cirulli, ManUsIA stands for Man Using Intelligent Applications.

ManUsIA is a digital agriculture platform that Kambyan is developing as a SPaaS (Solution Platform as a Service) offering to improve yield and reduce manpower in agriculture.

It combines drones with cloud-based artificial intelligence and machine learning, using surveillance data and weather information to maximise yield and reduce manpower requirements for dirty, difficult and dangerous jobs.

ManUsIA will start with drones that are remotely piloted through mobile devices, and eventually hopes to integrate intelligent drones that operate autonomously.

 

Future Jobs For Teens Today

Kambyan also invited Associate Professor Dr. Sagaya Amalathas, a Programme Director at Taylor’s University, to talk about the future jobs that teens today should consider.

She pointed out that the future will be highly dependent on new digital skills in the areas of Big Data Analytics and Artificial Intelligence, as well as Blockchain technology and the Internet of Things.

She also shared some really useful information on which careers will remain stable in these fast-changing times, which jobs will be lost, and what new opportunities will arise.

 

Kambyan invited her because its training arm, Adroit College, offers a Drone Operator & Robotics course.

The Professional Certificate in Robotic Process Automation – Field Operations (RPA-FO) course combines a 5-week intensive workshop with an apprenticeship and internship program at Kambyan, allowing the student to graduate with a Professional Certificate in 11 months.

 

Kambyan AleX Laser Cutting Drone Demonstration

The star of the event was the Kambyan AleX laser cutting drone – the Airborne Laser Cutter Mark 1.

Designed to be a laser harvesting drone for the oil palm industry, it weighs 3 kilograms and is approximately 70 cm in diameter.

Powered by a 150 watt pulsed laser in the operational model, it is capable of cutting through 6 inches of plant material.

Piloted remotely by a drone operator in the current iteration, it will be used to trim the fronds of the oil palm trees and cut through the stem of oil palm fruit bunches to harvest them.

Using drones will not only reduce manpower, it will also allow plantations to let their oil palm trees grow much taller, reducing the need to cut them down so often.

This will increase profits over the long term, while reducing the oil palm industry’s impact on the environment, in particular its contribution to the slash-and-burn activity that causes the terrible haze in Southeast Asia.

In the demo, they used a less powerful laser for safety reasons. But as this video shows, that itself is a danger!


Fortunately, the operational drone uses a much more powerful laser, which lets it cut from a safer distance. This prevents the drone from being hit by falling oil palm fruits or flying debris.

 
