The Volkswagen Group is committed to artificial intelligence (AI) as a key future-oriented digital technology. This is why Volkswagen IT is cooperating with US technology company NVIDIA to expand its competence in the field of deep learning. At the Volkswagen Data Lab, IT experts are developing advanced AI systems with deep learning.
Volkswagen & NVIDIA In Deep Learning Partnership
At Volkswagen, the Data Lab has been named the Group’s center of excellence for AI and data analysis. Specialists are exploring possibilities to use deep learning in corporate processes and in the field of mobility services. For example, they are developing new procedures for optimizing traffic flow in cities. Advanced AI systems are also among the prerequisites for developments such as intelligent human-robot cooperation.
Dr. Martin Hofmann, CIO of the Volkswagen Group, says: “Artificial intelligence is the key to the digital future of the Volkswagen Group. We want to develop and deploy high-performance AI systems ourselves. This is why we are expanding the expert knowledge we require. Cooperation with NVIDIA will be a major step in this direction.”
“AI is the most powerful technological force of our era,” says Jensen Huang, CEO of NVIDIA. “Thanks to AI, data centers are changing dramatically and enterprise computing is being reinvented. NVIDIA’s deep learning solutions will enable Volkswagen to turn the enormous amounts of information in its data centers into valuable insight, and transform its business.”
In addition, Volkswagen has established a startup support program at its Data Lab. The program will provide technical and financial support for international startups developing machine learning and deep learning applications for the automotive industry. Together with NVIDIA, Volkswagen will be admitting five startups to the support program from this fall.
Both partners will also be launching a “Summer of Code” camp where high-performing students with qualifications in IT, mathematics or physics will have an opportunity to develop deep learning methods in teams and to implement them in a robotics environment.
Just before we flew to Computex 2017, we attended the AWS Masterclass on Artificial Intelligence. It offered us an in-depth look at AI concepts like machine learning, deep learning and neural networks. We also saw how Amazon Web Services (AWS) uses all that to create easy-to-use tools that let developers build their own AI applications at low cost and with virtually no capital outlay.
The AWS Masterclass on Artificial Intelligence
AWS Malaysia flew in Olivier Klein, the AWS Asia Pacific Solutions Architect, to conduct the AWS Masterclass. During the two-hour session, he showed how easily the various AWS services and tools allow virtually anyone to create their own AI applications, at low cost and with no capital outlay.
Artificial intelligence is a wide-ranging topic, and the masterclass covered everything from basic AI concepts to demonstrations of how to use AWS services like Amazon Polly and Amazon Rekognition to easily and quickly create AI applications. We present to you the complete AWS Masterclass on Artificial Intelligence!
The AWS Masterclass on AI is actually made up of 5 main topics. Here is a summary of those topics :
AWS Cloud and An Introduction to Artificial Intelligence, Machine Learning, Deep Learning (15 minutes)
An overview of Amazon Web Services and the latest innovations in the data analytics, machine learning, deep learning and AI space.

The Road to Artificial Intelligence (20 minutes)
Demystifying AI concepts and related terminologies, as well as the underlying technologies. We dive deeper into the concepts of machine learning and deep learning models, such as neural networks, and how they lead to artificial intelligence.

Connecting Things and Sensing the Real World (30 minutes)
As part of an AI that aligns with our physical world, we need to understand how the Internet-of-Things (IoT) space helps to create natural interaction channels. We walk through real-world examples and demonstrations that include voice interaction through Amazon Lex, Amazon Polly and the Alexa Voice Service, as well as visual recognition with services such as Amazon Rekognition. We also bridge this with real-time data sensed from the physical world via AWS IoT.

Retrospective and Real-Time Data Analytics (30 minutes)
Every AI must continuously "learn" and be "trained" through past performance and feedback data. Retrospective and real-time data analytics are crucial to building intelligent models. We dive into some of the new trends and concepts that AWS customers are using to perform fast and cost-effective analytics.
In the next two pages, we will dissect the video and share with you the key points from each segment of this AWS Masterclass.
The AWS Masterclass on AI Key Points (Part 1)
Here is an exhaustive list of key takeaway points from the AWS Masterclass on Artificial Intelligence, with their individual timestamps in the video :
Introduction To AWS Cloud
AWS has 16 regions around the world (0:51), with two or more availability zones per region (1:37), and 76 edge locations (1:56) to accelerate end connectivity to AWS services.
AWS offers 90+ cloud services (3:45), all of which use the On-Demand Model (4:38) – you pay only for what you use, whether that’s a GB of storage or transfer, or execution time for a computational process.
You don’t even need to plan for your requirements or inform AWS how much capacity you need (5:05). Just use what you need, and pay for what you use.
AWS has a practice of passing their cost savings to their customers (5:59), cutting prices 61 times since 2006.
AWS keeps adding new services over the years (6:19), with over a thousand new services and features introduced in 2016 (7:03).
Introduction to Artificial Intelligence, Machine Learning, Deep Learning
Artificial intelligence is based on unsupervised machine learning (7:45), specifically deep learning models.
Insurance companies like AON use it for actuarial calculations (7:59), and services like Netflix use it to generate recommendations (8:04).
A lot of AI models have been built specifically around natural language understanding, and using vision to interact with customers, as well as predicting and understanding customer behaviour (9:23).
Here is a quick look at what the AWS Management Console looks like (9:58).
This is how you launch 10 compute instances (virtual servers) in AWS (11:40).
The ability to access multiple instances quickly is very useful for AI training (12:40), because it gives the user access to large amounts of computational power, which can be quickly terminated (13:10).
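As a rough illustration of how this looks outside the web console, here is a minimal sketch using the boto3 Python SDK (our own example, not from the masterclass); the region, AMI ID and instance type are placeholder assumptions:

```python
import boto3

# Connect to EC2 in a region of your choice (placeholder region).
ec2 = boto3.resource("ec2", region_name="ap-southeast-1")

# Launch 10 on-demand compute instances. The AMI ID below is a
# hypothetical placeholder - substitute a real image ID.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=10,
    MaxCount=10,
)

# ... run the workload ...

# Terminate the instances when done, so billing stops immediately.
for instance in instances:
    instance.terminate()
```

Because billing is on-demand, the cost ends the moment the instances are terminated.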
Machine learning, and artificial intelligence in general, is not new to Amazon.com, the parent company of AWS (14:14).
Amazon.com uses a lot of AI models (14:34) for recommendations and demand forecasting.
The visual search feature in the Amazon app uses visual recognition and AI models to identify a picture you take (15:33).
Olivier introduces Amazon Go (16:07), a prototype grocery store in Seattle.
The Road to Artificial Intelligence
The first component of any artificial intelligence is the “ability to sense the real world” (18:46), connecting everything together.
Cheaper bandwidth (19:26) now allows more devices to be connected to the cloud, allowing more data to be collected for the purpose of training AI models.
Cloud computing platforms like AWS allow the storage and processing of all that sensor data in real time (19:53).
All of that information can be used in deep learning models (20:14) to create an artificial intelligence that understands, in a natural way, what we are doing, and what we want or need.
Olivier shows how machine learning can quickly solve a Rubik’s cube (20:47), which has 43 quintillion unique combinations.
You can even build a Raspberry Pi-powered machine (24:33) that can solve a Rubik’s cube puzzle in 0.9 seconds.
Some of these deep learning models are available on Amazon AI (25:11), which is a combination of different services (25:44).
Olivier shows what it means to “train a deep learning model” (28:19) using a neural network (29:15).
Deep learning is computationally-intensive (30:39), but once it derives a model that works well, the predictive aspect is not computationally-intensive (30:52).
A pre-trained AI model can be loaded into a low-powered device (31:02), allowing it to perform AI functions without requiring large amounts of bandwidth or computational power.
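To make the train-once, predict-cheaply point concrete, here is a minimal Keras sketch of our own (not from the masterclass); the toy data, model architecture and file name are all assumptions:

```python
import numpy as np
from tensorflow import keras

# Toy training data: learn whether the sum of 8 inputs is positive.
x_train = np.random.randn(10000, 8).astype("float32")
y_train = (x_train.sum(axis=1) > 0).astype("float32")

# A small neural network - training is the computationally heavy part.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# Save the trained model, then load it elsewhere (e.g. on a small device).
model.save("model.h5")
deployed = keras.models.load_model("model.h5")

# Prediction with the trained model is comparatively cheap.
print(deployed.predict(np.random.randn(1, 8).astype("float32")))
```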
Olivier demonstrates the YOLO (You Only Look Once) project, whose AI model was pre-trained with pictures of objects (31:58), allowing it to detect objects in any video.
The identification of objects is the baseline for autonomous driving systems (34:19), as used by Tu Simple.
Tu Simple also used a similar model to train a drone to detect and follow a person (35:28).
The AWS Masterclass on AI Key Points (Part 2)
Connecting Things and Sensing the Real World
Cloud services like AWS IoT (37:35) allow you to securely connect billions of IoT (Internet of Things) devices.
Olivier prefers to think of IoT as Intelligent Orchestrated Technology (37:52).
Olivier demonstrates how the combination of multiple data sources (maps, vehicle GPS, real-time weather reports) in Bangkok can be used to predict traffic as well as road conditions to create optimal routes (39:07), reducing traffic congestion by 30%.
The PetaBencana service in Jakarta uses picture recognition and IoT sensors to identify flooded roads (42:21) for better emergency response and disaster management.
Olivier demonstrates how easy it is to connect IoT devices to the AWS IoT service (43:46), and to use them to sense and interact with the environment.
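As a rough sketch of what connecting a sensor to AWS IoT can look like programmatically (our own illustration; the MQTT topic and the reading are hypothetical):

```python
import json
import boto3

# Publish a sensor reading to AWS IoT over its MQTT message broker.
iot = boto3.client("iot-data", region_name="us-east-1")

iot.publish(
    topic="sensors/room1/temperature",      # hypothetical MQTT topic
    qos=1,
    payload=json.dumps({"celsius": 27.4}),  # hypothetical sensor reading
)
```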
Olivier shows how the capabilities of the Amazon Echo can be extended by creating an Alexa Skill backed by an AWS Lambda function (59:07).
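For a sense of what such a skill involves, here is a minimal sketch of an AWS Lambda handler that returns an Alexa-formatted response; the HelloIntent intent name is a hypothetical example:

```python
# A minimal Alexa Skill backend as an AWS Lambda handler.
# Alexa sends a JSON request; the handler returns JSON containing
# the speech for the Echo to say.
def lambda_handler(event, context):
    intent = (event.get("request", {})
                   .get("intent", {})
                   .get("name", "UnknownIntent"))

    if intent == "HelloIntent":  # hypothetical intent name
        speech = "Hello from a custom Alexa skill!"
    else:
        speech = "Sorry, I did not understand that."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```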
Developers can create and publish Alexa Skills for sale in the Amazon marketplace (1:03:30).
Amazon Polly (1:04:10) renders life-like speech, while the Amazon Lex conversational engine (1:04:17) has natural language understanding and automatic speech recognition. Amazon Rekognition (1:04:29) performs image analysis.
Amazon Polly (1:04:50) turns text into life-like speech using deep learning to change the pitch and intonation according to the context. Olivier demonstrates Amazon Polly’s capabilities at 1:06:25.
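Calling Polly from code is straightforward; here is a minimal boto3 sketch of our own (the region is an assumption; Joanna is one of Polly's built-in voices):

```python
import boto3

# Ask Amazon Polly to synthesise speech from text.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Hello from Amazon Polly!",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)

# Write the returned audio stream straight to an MP3 file.
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```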
Amazon Lex (1:11:06) is a web service that allows you to build conversational interfaces using the same natural language understanding (NLU) and automatic speech recognition (ASR) models that power Alexa.
Amazon Lex does not just support spoken natural language understanding; it also recognises text (1:12:09), which makes it useful for chatbots.
Olivier demonstrates those text recognition capabilities in a chatbot demo (1:13:50) of a customer applying for a credit card through Facebook.
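Programmatically, a text chatbot built on Lex boils down to posting the user's text to the bot; here is a minimal boto3 sketch (the bot name, alias and user ID are hypothetical):

```python
import boto3

# Send a line of user text to an Amazon Lex bot and print its reply.
lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="CreditCardBot",  # hypothetical bot name
    botAlias="prod",          # hypothetical alias
    userId="demo-user-1",     # hypothetical user/session ID
    inputText="I would like to apply for a credit card",
)
print(response["message"])    # the bot's conversational reply
```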
Amazon Rekognition (1:21:37) is an image recognition and analysis service, which uses deep learning to identify objects in pictures.
Amazon Rekognition can even detect facial landmarks and sentiments (1:22:41), as well as image quality and other attributes.
You can actually try Amazon Rekognition out (1:23:24) by uploading photos at CodeFor.Cloud/image.
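For the programmatically inclined, here is a minimal boto3 sketch of our own that exercises both label detection and face analysis (the photo file name is a placeholder):

```python
import boto3

# Analyse a local photo with Amazon Rekognition.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as f:  # placeholder file name
    image_bytes = f.read()

# Identify objects and scenes in the picture.
labels = rekognition.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=5)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Detect facial landmarks and sentiments (emotions).
faces = rekognition.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
for face in faces["FaceDetails"]:
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print("Detected emotion:", top_emotion["Type"])
```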
Retrospective and Real-Time Data Analytics
AI is a combination of three types of data analytics (1:28:10): retrospective analysis and reporting, real-time processing, and predictions that enable smart apps.
Cloud computing is extremely useful for machine learning (1:29:57) because it allows you to decouple storage and compute requirements for much lower costs.
Amazon Athena (1:31:56) allows you to query data stored in Amazon S3, without creating a compute instance to do it. You pay only per TB of data scanned by that query.
Best of all, you will get the same fast results even if your data set grows (1:32:31), because Amazon Athena will automatically parallelise your queries across your data set internally.
Olivier demonstrates (1:33:14) how Amazon Athena can be used to run queries on data stored in Amazon S3, as well as generate reports using Amazon QuickSight.
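Here is a minimal boto3 sketch of what such a query looks like (our own example; the database, table and results bucket are hypothetical):

```python
import boto3

# Run a SQL query over data sitting in Amazon S3 - no servers to provision.
athena = boto3.client("athena", region_name="us-east-1")

query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "demo_db"},    # hypothetical database
    ResultConfiguration={
        "OutputLocation": "s3://my-athena-results/",  # hypothetical bucket
    },
)
print("Query started:", query["QueryExecutionId"])
```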
When it comes to data analytics, cloud computing allows you to quickly bring massive computing power to bear, achieving much faster results without additional cost (1:41:40).
The insurance company AON used this ability (1:42:44) to reduce an actuarial simulation that would normally take 10 days to just 10 minutes.
Amazon Kinesis and Amazon Kinesis Analytics (1:45:10) allow the processing of real-time data.
A company called Dash is using this capability to analyse OBD (on-board diagnostics) data in real time (1:47:23) to help improve fuel efficiency and predict potential breakdowns. It also notifies emergency services in case of a crash.
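To illustrate the ingestion side, here is a minimal boto3 sketch of our own that pushes an OBD-style reading into a Kinesis stream (the stream name and record fields are hypothetical):

```python
import json
import boto3

# Feed one real-time telemetry record into an Amazon Kinesis stream.
kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical OBD-style reading.
reading = {"vehicle_id": "demo-1", "rpm": 2400, "speed_kmh": 88}
kinesis.put_record(
    StreamName="obd-telemetry",  # hypothetical stream name
    Data=json.dumps(reading),
    PartitionKey=reading["vehicle_id"],
)
```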
NVIDIA just announced the Jetson TX2 embedded AI supercomputer, based on the latest NVIDIA Pascal microarchitecture. It promises to offer twice the performance of the previous-generation Jetson TX1, in the same package. In this tech report, we will share with you the full details of the new Pascal-based NVIDIA Jetson TX2!
GPUs In Artificial Intelligence
Artificial intelligence is the new frontier in GPU compute technology. Whether they are used to power training or inference engines, AI research has benefited greatly from the massive amounts of compute power in modern GPUs.
The market is led by NVIDIA with their Tesla accelerators that run on their proprietary CUDA platform. AMD, on the other hand, is a relative newcomer with their Radeon Instinct accelerators designed to run on the open-source ROCm (Radeon Open Compute) platform.
The NVIDIA Jetson
GPUs today offer so much compute performance that NVIDIA has been able to create the NVIDIA Jetson family of embedded AI supercomputers. They differ from their Tesla big brothers in size, power efficiency and purpose. The NVIDIA Jetson modules are specifically built for “inference at the edge” or “AI at the edge”.
Unlike AI processing in datacenters or in the cloud, AI at the edge refers to autonomous artificial intelligence processing in the field, where there is poor or no Internet access, or where access must be restricted for privacy or security reasons. The processor must therefore be powerful enough for the AI application to run autonomously.
Whether it is to automate robots in a factory, or to tackle industrial accidents like the one at the Fukushima Daiichi nuclear plant, AI at the edge is meant to allow for at least some autonomous capability right in the field. AI-at-the-edge processors must also be frugal with power, as power or battery life is often limited.
Hence, processors designed for AI-at-the-edge applications must be small, power-efficient and yet fast enough to run AI inference in real time. The NVIDIA Jetson family of embedded AI supercomputers promises to tick all of those boxes. Let’s take a look :
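As a generic illustration of inference at the edge (our own sketch, not NVIDIA-specific; the model file is an assumed, already-trained TensorFlow Lite model):

```python
import numpy as np
import tensorflow as tf

# Load a compact, pre-trained model and run inference locally,
# with no cloud round-trip - the essence of AI at the edge.
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # assumed model file
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A stand-in input frame shaped to whatever the model expects.
frame = np.zeros(inp["shape"], dtype=np.float32)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```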
The NVIDIA Jetson TX2
The NVIDIA Jetson TX2 is the second-generation Jetson embedded AI module, based on the latest NVIDIA Pascal microarchitecture. It supersedes (but does not replace) the previous-generation Jetson TX1, which was built on the NVIDIA Maxwell microarchitecture and released in November 2015.
Thanks to the faster and more power-efficient Pascal microarchitecture, the NVIDIA Jetson TX2 promises to be twice as energy-efficient as the Jetson TX1.
This means developers switching to the Jetson TX2 can opt to maximise power efficiency, or to maximise performance. In the Max-Q mode, the Jetson TX2 will use less than 7.5 W and offer Jetson TX1-equivalent performance. In the Max-P mode, the Jetson TX2 will use less than 15 W and offer up to twice the performance of the Jetson TX1.
NVIDIA Jetson Specification Comparison
The NVIDIA Jetson modules are actually built around NVIDIA Tegra SoCs, rather than GeForce GPUs. Tegra is a system-on-a-chip (SoC), which integrates an ARM CPU, an NVIDIA GPU, a chipset and a memory controller in a single package.
The Tegra SoC and the other components on a 50 x 87 mm board constitute the NVIDIA Jetson module. The Jetson TX1 uses the Tegra X1 SoC, while the new Jetson TX2 uses the Tegra P1 SoC.
For those who have been following our coverage of the AMD Radeon Instinct, and its support for packed math, the NVIDIA Jetson TX2 and TX1 modules support FP16 operations too.
NVIDIA Jetson TX2 Price & Availability
The NVIDIA Jetson TX2 Developer Kit is available for pre-order in the US and Europe right now, with a US$ 599 retail price and a US$ 299 education price. Shipping will start on March 14, 2017. The developer kit will be made available in APAC and other regions in April 2017.
The NVIDIA Jetson TX2 module itself will only be made available in the second quarter of 2017. It will be priced at US$ 399 per module, in quantities of 1,000 modules or more.
Note that the Jetson TX2 module is exactly the same size as the Jetson TX1 and uses the same 400-pin connector, making it a drop-in replacement for the Jetson TX1 module.
NVIDIA Jetson TX1 Price Adjustments
With the launch of the Jetson TX2, NVIDIA is adjusting the price of the Jetson TX1, which will continue to be sold alongside the new Jetson TX2.
The price of the NVIDIA Jetson TX1 production module has been reduced to US$ 299, down from US$ 399. Again, this is in quantities of 1,000 modules or more.
NVIDIA Jetpack 3.0
The NVIDIA Jetson is more than just a processor module. It is a platform made up of developer tools, code and APIs. Just as AMD offers its MIOpen deep learning library, NVIDIA offers Jetpack.
In conjunction with the launch of the Jetson TX2, NVIDIA also announced the NVIDIA Jetpack 3.0. It promises to offer twice the system performance of Jetpack 2.3.
Jetpack 3.0 is not just for the new Jetson TX2. It also offers a nice performance boost for existing Jetson TX1 users and applications.
The Presentation Slides
For those who want the full set of NVIDIA Jetson TX2 slides, here they are :
Support Tech ARP!
If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!