
Intel oneAPI Unified Programming Model Overview!

At Supercomputing 2019, Intel unveiled its oneAPI initiative for heterogeneous computing, promising to deliver a unified programming experience for developers.

Here is an overview of the Intel oneAPI unified programming model, and what it means for programmers!

 

The Need For Intel oneAPI

The modern computing environment is now a lot less CPU-centric, with the greater adoption of GPUs, FPGAs and custom-built accelerators (like the Alibaba Hanguang 800).

Their different scalar, vector, matrix and spatial architectures require different APIs and code bases, which complicates attempts to utilise a mix of those capabilities.

 

Intel oneAPI For Heterogeneous Computing

Intel oneAPI promises to change all that, offering a unified programming model for those different architectures.

It allows developers to create workloads and applications for multiple architectures on their platform of choice, without the need to develop and maintain separate code bases, tools and workflow.

Intel oneAPI comprises two components – the open industry initiative, and the Intel oneAPI beta toolkit :

oneAPI Initiative

This is a cross-architecture development model based on industry standards, and an open specification, to encourage broader adoption.

Intel oneAPI Beta Toolkit

This beta toolkit offers the Intel oneAPI specification components, with direct programming (Data Parallel C++), API-based programming with performance libraries, and advanced analysis and debug tools.

Developers can test code and workloads in the Intel DevCloud for oneAPI on multiple Intel architectures.

 

What Processors + Accelerators Are Supported By Intel oneAPI?

The beta Intel oneAPI reference implementation currently supports these Intel platforms :

  • Intel Xeon Scalable processors
  • Intel Core and Atom processors
  • Intel processor graphics (as a proxy for future Intel discrete data centre GPUs)
  • Intel FPGAs (Intel Arria, Stratix)

The oneAPI specification is designed to support a broad range of CPUs and accelerators from multiple vendors. However, it is up to those vendors to create their own oneAPI implementations and optimise them for their own hardware.

 

Are oneAPI Elements Open-Sourced?

Many oneAPI libraries and components are already open sourced, or soon will be.

 

What Companies Are Participating In The oneAPI Initiative?

According to Intel, more than 30 vendors and research organisations support the oneAPI initiative, including CERN openlab, SAP and the University of Cambridge.

Companies that create their own implementation of oneAPI and complete a self-certification process will be allowed to use the oneAPI initiative brand and logo.

 

Available Intel oneAPI Toolkits

At the time of its launch (17 November 2019), here are the toolkits that Intel has made available for developers to download and use :

Intel oneAPI Base Toolkit (Beta)

This foundational kit enables developers of all types to build, test, and deploy performance-driven, data-centric applications across CPUs, GPUs, and FPGAs. Comes with :

  • Intel oneAPI Data Parallel C++ Compiler
  • Intel Distribution for Python
  • Multiple optimized libraries
  • Advanced analysis and debugging tools

Domain Specific oneAPI Toolkits for Specialised Workloads :

  • oneAPI HPC Toolkit (beta) : Deliver fast C++, Fortran, OpenMP, and MPI applications that scale.
  • oneAPI DL Framework Developer Toolkit (beta) : Build deep learning frameworks or customize existing ones.
  • oneAPI IoT Toolkit (beta) : Build high-performing, efficient, reliable solutions that run at the network’s edge.
  • oneAPI Rendering Toolkit (beta) : Create high-performance, high-fidelity visualization applications.

Additional Toolkits, Powered by oneAPI

  • Intel AI Analytics Toolkit (beta) : Speed AI development with tools for DL training, inference, and data analytics.
  • Intel Distribution of OpenVINO Toolkit : Deploy high-performance inference applications from device to cloud.
  • Intel System Bring-Up Toolkit (beta) : Debug and tune systems for power and performance.

You can download all of those toolkits here.

 


 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


The New NVIDIA TITAN RTX Graphics Card Revealed!

The new NVIDIA TITAN RTX is the latest and most powerful desktop GPU to date. Find out what’s new in the NVIDIA TITAN RTX graphics card!

The TITAN RTX Graphics Card Revealed!

NVIDIA just introduced TITAN RTX, the world’s most powerful desktop GPU which provides performance for AI research, data science and creative applications.

Driven by the new NVIDIA Turing architecture, the TITAN RTX, also dubbed T-Rex, delivers 130 teraflops of deep learning performance and 11 GigaRays of ray-tracing performance. It is aimed at the most demanding users.

The NVIDIA TITAN RTX features new RT Cores to accelerate ray tracing, plus new multi-precision Tensor Cores for AI training and inferencing.

These two engines along with more powerful compute and enhanced rasterization enable capabilities that will transform the work of millions of developers, designers and artists across multiple industries.

Designed for a variety of computationally demanding applications, TITAN RTX provides an unbeatable combination of AI, real-time ray-traced graphics, next-gen virtual reality and high performance computing.

 

TITAN RTX Performance

  • 576 multi-precision Turing Tensor Cores, providing up to 130 teraflops of deep learning performance.
  • 72 Turing RT Cores, delivering up to 11 GigaRays per second of real-time ray-tracing performance.
  • 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — 2x the memory of previous generation TITAN GPUs — to fit larger models and datasets.
  • 100GB/s NVIDIA NVLink can pair two TITAN RTX GPUs to scale memory and compute.
  • Incredible performance and memory bandwidth for real-time 8K video editing.
  • VirtualLink port provides the performance and connectivity required by next-gen VR headsets.
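The headline figures above can be sanity-checked from the card's underlying specifications. The bus width, memory data rate and boost clock below are not stated in this article; they are assumptions based on the published TITAN RTX spec sheet (384-bit bus, 14 Gbps GDDR6, ~1770 MHz boost):

```python
# Back-of-the-envelope check of the quoted TITAN RTX figures.
# Assumed (not in the article): 384-bit bus, 14 Gbps GDDR6,
# ~1.77 GHz boost clock, 64 FP16 FMAs per Tensor Core per clock.

bus_width_bits = 384
data_rate_gbps = 14                       # per memory pin
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gb_s)                     # 672.0 -> the quoted 672 GB/s

tensor_cores = 576
fmas_per_core_per_clock = 64              # FP16 multiply-accumulates
boost_clock_hz = 1.77e9                   # each FMA counts as 2 ops
tflops = tensor_cores * fmas_per_core_per_clock * 2 * boost_clock_hz / 1e12
print(round(tflops, 1))                   # ~130.5 -> the quoted "130 teraflops"
```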

 

Built For AI Researchers And Deep Learning Developers

TITAN RTX transforms the PC into a supercomputer for AI researchers and developers.

It provides multi-precision Turing Tensor Cores for breakthrough performance across FP32, FP16, INT8 and INT4 precisions.

This allows faster training and inference of neural networks.

It also offers twice the memory capacity of previous generation TITAN GPUs, along with NVLink to allow researchers to experiment with larger neural networks and data sets.
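Why does multi-precision matter? Lower-precision formats trade a little accuracy for much higher throughput. A small stdlib-only sketch of the trade-off, round-tripping a value through IEEE half precision (FP16), the narrower of the training formats listed above:

```python
# Demonstrating FP16 rounding error with only the standard library.
# struct's 'e' format packs/unpacks IEEE 754 half-precision floats.
import struct

def to_fp16(x: float) -> float:
    """Round a Python float (FP64) to the nearest representable FP16 value."""
    return struct.unpack('e', struct.pack('e', x))[0]

x = 0.1
print(to_fp16(x))                   # 0.0999755859375 (FP16 keeps ~11 significand bits)
print(abs(to_fp16(x) - x) < 1e-3)   # True: the error is tiny, which is why
                                    # neural network training can tolerate it
```

This is why mixed-precision training works: individual weights lose a few bits, but the network's statistics absorb the noise, and the hardware runs the narrow format far faster.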

 

Perfect For Data Scientists!

A powerful tool for data scientists, TITAN RTX accelerates data analytics with NVIDIA RAPIDS.

RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.

 

Great For Content Creators!

TITAN RTX brings the power of real-time ray tracing and AI to creative applications, so 5 million PC-based creators can iterate faster.

It also delivers the computational horsepower and memory bandwidth needed for real-time 8K video editing.

 

TITAN RTX Price + Availability

TITAN RTX will be available later this month in the US and Europe for US$2,499.

This amounts to a staggering RM 10,325 and there is no word if or when it will be launched here in Malaysia.

 


 



NVIDIA To Accelerate Research At Monash University

MELBOURNE, Australia — Feb. 29, 2016 — NVIDIA today announced it is collaborating with Monash University to power a new wave of GPU-accelerated research, marking the first step toward a deeply integrated research and development program linking academic research and industry innovation in Australia.

At a ceremony today at Monash’s Clayton campus in Melbourne, NVIDIA’s Chief Technology Officer of Accelerated Computing, Steve Oberlin, announced that Monash will join the NVIDIA Technology Centre Asia Pacific. The centre is dedicated to driving scientific research and development work in the region.

NVIDIA and Monash will jointly fund research students, facilitate access to GPU-accelerated computing technologies and leverage their worldwide network of experts to provide industry-relevant training and knowledge exchange.

Also announced at the ceremony by the Australian Chief Scientist Alan Finkel AO was the M3 supercomputer, the third-generation supercomputer available through the MASSIVE (Multimodal Australian ScienceS Imaging and Visualisation Environment) facility. Powered by ultra-high-performance NVIDIA Tesla K80 GPU accelerators, M3 will provide new simulation and real-time data processing capabilities to a wide selection of Australian researchers.

“Monash University has used GPU-accelerated computing to drive discovery in key areas like medicine, robotics, visualisation, mathematics, engineering, and computational chemistry for years,” said Oberlin. “We’re deepening our long-standing relationship with Monash, and look forward to working with them to expand their success using the latest GPU-accelerated computing technologies to drive insight and innovation.”

“Our collaboration with NVIDIA will take Monash research to new heights. By coupling some of Australia’s best researchers with NVIDIA’s accelerated computing technology we’re going to see some incredible impact. Our scientists will produce code that runs faster, but more significantly, their focus on deep learning algorithms will produce outcomes that are smarter,” said Professor Ian Smith, Vice Provost (Research and Research Infrastructure), Monash University.

 
