At GTC 2020, NVIDIA announced both the new Ampere-based RTX A6000 GPU and the Omniverse 3D platform.
Here are the specifications of the NVIDIA RTX A6000, as well as details and a demo of NVIDIA Omniverse!
NVIDIA RTX A6000 : Powered By Ampere!
Like the recently-announced GeForce RTX 30 Series, NVIDIA RTX A6000 is based on the new NVIDIA Ampere architecture.
But unlike the gaming-centric GeForce RTX 30 Series, RTX A6000 is a flagship GPU for creators with a massive 48 GB GDDR6 memory buffer.
With dedicated RT Cores and improved CUDA performance, NVIDIA claims the RTX A6000 will deliver up to 2X the rendering performance of the previous generation.
Its second-generation RT Cores add hardware acceleration for ray-traced motion blur rendering, offering up to 7X faster performance than the last generation.
Its third-generation Tensor Cores provide up to 5X the training throughput of the previous generation. In addition, hardware support for structural sparsity doubles the throughput for inference.
The NVIDIA RTX A6000 also comes with the PCI Express 4.0 interface, doubling the transfer rate between the GPU and the PC.
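As a rough illustration (not from NVIDIA's materials), the theoretical bandwidth of a PCIe x16 link can be computed from the per-lane transfer rate and the line-encoding overhead; PCIe 4.0 doubles the per-lane rate from 8 GT/s to 16 GT/s:

```python
# Illustrative sketch: theoretical one-direction PCIe x16 bandwidth, Gen 3 vs Gen 4.
# Assumes 128b/130b line encoding (used by PCIe 3.0 and later) and 16 lanes.

def pcie_x16_bandwidth_gbps(transfer_rate_gt_s: float, lanes: int = 16) -> float:
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    encoding_efficiency = 128 / 130   # 128b/130b line encoding overhead
    bits_per_second = transfer_rate_gt_s * 1e9 * encoding_efficiency * lanes
    return bits_per_second / 8 / 1e9  # convert bits/s to GB/s

gen3 = pcie_x16_bandwidth_gbps(8.0)   # PCIe 3.0: 8 GT/s per lane
gen4 = pcie_x16_bandwidth_gbps(16.0)  # PCIe 4.0: 16 GT/s per lane

print(f"PCIe 3.0 x16: {gen3:.2f} GB/s")  # ~15.75 GB/s
print(f"PCIe 4.0 x16: {gen4:.2f} GB/s")  # ~31.51 GB/s
print(f"Ratio: {gen4 / gen3:.1f}x")      # 2.0x
```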
NVIDIA RTX A6000 : Specifications
NVIDIA RTX A6000
Memory : 48 GB GDDR6 with ECC
Display Outputs : 4 x DisplayPort 1.4
Interface : PCI Express Gen 4 x16
Max. Power : 300 W
NVIDIA RTX A6000 : Availability
The RTX A6000 will be available through NVIDIA sales channels starting in mid-December 2020, and through their OEM partners in early 2021.
NVIDIA Omniverse : Powered By Ampere!
Together with the RTX A6000, NVIDIA also announced Omniverse – their new RTX-based simulation, collaboration and rendering platform for 3D workflows.
NVIDIA Omniverse users can collaborate on their 3D workflows live between applications, without imports and exports, and then render them using NVIDIA RTX.
Even though their Marbles At Night demo above highlights the performance of the RTX A6000, Omniverse will work on all NVIDIA RTX hardware, including NVIDIA Studio laptops and RTX-powered workstations and servers.
You will be able to sign up later in Q4 2020 for the open beta.
Dell Technologies just introduced Dell EMC Ready solutions for both AI and virtualised HPC workloads on VMware vSphere 7!
Join us for the tech briefing on both new Dell EMC computing solutions for VMware, and find out how they can simplify your advanced computing needs!
Simplified Advanced Computing With Dell EMC Ready Solutions
Let’s start with the Dell Technologies briefing on the two new Dell EMC Ready solutions for both AI and virtualised HPC workloads.
Based on VMware Cloud Foundation, they are designed to make AI easier to deploy and consume, with new features from VMware vSphere 7, including Bitfusion.
Dell EMC Ready Solutions for AI : GPU-as-a-Service (GaaS)
GPUs in individual workstations or servers are often under-utilised at less than 15% of capacity. The new Dell EMC Ready Solutions for AI : GPU-as-a-Service fixes that and maximises your investment with virtual GPU pools.
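A back-of-the-envelope sketch shows why pooling helps. Only the sub-15% utilisation figure comes from the article; the fleet size and headroom factor below are hypothetical:

```python
import math

# Illustrative sketch: how many pooled GPUs could serve the same aggregate
# demand as a fleet of dedicated, under-utilised GPUs.
# The ~15% utilisation figure is from the article; the fleet size of 20
# and the headroom factor are hypothetical.

dedicated_gpus = 20
avg_utilisation = 0.15  # under 15% of capacity, per the article

aggregate_demand = dedicated_gpus * avg_utilisation  # GPU-equivalents of real work
headroom = 1.5  # hypothetical over-provisioning factor to absorb bursts
pooled_gpus = math.ceil(aggregate_demand * headroom)

print(f"Aggregate demand: {aggregate_demand:.1f} GPU-equivalents")  # 3.0
print(f"Pooled GPUs needed (with headroom): {pooled_gpus}")         # 5
```

Even with generous headroom, a pool of 5 shared GPUs could absorb the work of 20 dedicated ones in this toy scenario.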
The newest design includes the latest VMware vSphere 7 with Bitfusion, making it possible to virtualise GPUs on-premise. Factory-installed by Dell, VMware vSphere 7 with Bitfusion will let developers and data scientists pool IT resources and share them across datacenters.
Dell EMC Ready Solutions for AI : GPU-as-a-Service also uses the latest VMware Cloud Foundation with VMware vSphere 7 support for Kubernetes and containerised applications to run AI workloads anywhere. Containers make it easier to bring cloud-native applications into production, with the ability to move workloads.
Dell EMC Ready Solutions for Virtualised HPC
Most HPC workloads run on dedicated systems that require specialised skills to deploy and manage. Dell EMC Ready Solutions for Virtualised HPC can include VMware Cloud Foundation with VMware vSphere 7 featuring Bitfusion.
That should make it simpler and more economical to use VMware environments for HPC and AI applications in computational chemistry, bioinformatics and computer-aided engineering. IT teams can quickly provision hardware as needed and speed up initial deployment and configuration, saving time through simpler centralised management and security.
For very large HPC implementations, Dell EMC Ready Solutions for vHPC can include VMware vSphere Scale-Out Edition for additional cost savings.
Dell EMC OpenManage for Dell EMC Ready Solutions
The new Dell EMC Ready Solutions for AI and Virtualised HPC ship with the Dell EMC OpenManage systems management software, which helps administrators improve system uptime, keep data insights flowing and prepare for AI operations.
New Dell EMC OpenManage improvements include :
OpenManage Integration for VMware vCenter, supporting vSphere Lifecycle Manager, automates software, driver and firmware updates holistically to save time and simplify operations.
The enhanced OpenManage Mobile app gives administrators the ability to view power and thermal policies, perform emergency power reduction and monitor internal storage from anywhere in the world.
Dell Technologies COO and Vice Chairman, Jeff Clarke, reveals his tech predictions for 2020, the start of what Dell Technologies considers as the Next Data Decade!
Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!
It’s hard to believe that we’re heading into the year 2020 – a year that many have marked as a milestone in technology. Autonomous cars lining our streets, virtual assistants predicting our needs and taking our requests, connected and intelligent everything across every industry.
When I stop to think about what has been accomplished over the last decade – it’s quite remarkable. While we don’t have fully autonomous cars zipping back and forth across major freeways with ease, automakers are getting closer to deploying autonomous fleets in the next few years.
Many of the everyday devices, systems and applications we use are connected and intelligent – including healthcare applications, industrial machines and financial systems – forming what is now deemed “the edge.”
At the root of all that innovation and advancement are massive amounts of data and compute power, and the capacity across edge, cloud and core data center infrastructure to put data through its paces. And with the amount of data coming our way in the next 10 years – we can only imagine what the world around us will look like in 2030, with apps and services we haven’t even thought of yet.
2020 marks the beginning of what we at Dell Technologies are calling the Next Data Decade, and we are no doubt entering this era with new – and rather high – expectations of what technology can make possible for how we live, work and play. So what new breakthroughs and technology trends will set the tone for what’s to come over the next 10 years? Here are my top predictions for the year ahead.
2020 proves it’s time to keep IT simple
We’ve got a lot of data on our hands…big data, meta data, structured and unstructured data – data living in clouds, in devices at the edge, in core data centers…it’s everywhere. But organisations are struggling to ensure the right data is moving to the right place at the right time. They lack data visibility – the ability for IT teams to quickly access and analyse the right data – because there are too many systems and services woven throughout their IT infrastructure. As we kick off 2020, CIOs will make data visibility a top IT imperative because after all, data is what makes the flywheel of innovation spin.
We’ll see organisations accelerate their digital transformation by simplifying and automating their IT infrastructure and consolidating systems and services into holistic solutions that enable more control and clarity. Consistency in architectures, orchestration and service agreements will open new doors for data management – and that ultimately gives data the ability to be used as part of AI and Machine Learning to fuel IT automation. And all of that enables better, faster business outcomes that the innovation of the next decade will thrive on.
Cloud co-existence sees rolling thunder
The idea that public and private clouds can and will co-exist becomes a clear reality in 2020. Multi-cloud IT strategies supported by hybrid cloud architectures will play a key role in ensuring organisations have better data management and visibility, while also ensuring that their data remains accessible and secure. In fact, IDC predicted that by 2021, over 90% of enterprises worldwide will rely on a mix of on-premises/dedicated private clouds, several public clouds, and legacy platforms to meet their infrastructure needs.
But private clouds won’t simply exist within the heart of the data center. As 5G and edge deployments continue to roll out, private hybrid clouds will exist at the edge to ensure the real-time visibility and management of data everywhere it lives.
That means organisations will expect more of their cloud and service providers to ensure they can support their hybrid cloud demands across all environments. Further, we’ll see security and data protection become deeply integrated as part of hybrid cloud environments, notably where containers and Kubernetes continue to gain momentum for app development. Bolting security measures onto cloud infrastructure will be a non-starter…it’s got to be inherently built into the fabric of the overall data management strategy, from edge to core to cloud.
What you get is what you pay
One of the biggest hurdles for IT decision makers driving transformation is resources. CapEx and OpEx can often be limiting factors when trying to plan and predict for compute and consumption needs for the year ahead…never mind the next three to five years. SaaS and cloud consumption models have increased in adoption and popularity, providing organisations with the flexibility to pay for what they use, as they go.
In 2020, flexible consumption and as-a-service options will accelerate rapidly as organisations seize the opportunity to transform into software-defined and cloud-enabled IT. As a result – they’ll be able to choose the right economic model for their business to take advantage of end-to-end IT solutions that enable data mobility and visibility, and crunch even the most intensive AI and Machine Learning workloads when needed.
“The Edge” rapidly expands into the enterprise
The “Edge” continues to evolve – with many working hard to define exactly what it is and where it exists. Once limited to the Internet of Things (IoT), the edge now spans almost every connected system, application, service, person and place. It is emerging in many places, and it’s going to expand with enterprise organisations leading the way, delivering the IT infrastructure to support it.
5G connectivity is creating new use cases and possibilities for healthcare, financial services, education and industrial manufacturing. As a result, SD-WAN and software-defined networking solutions become a core thread of a holistic IT infrastructure solution – ensuring massive data workloads can travel at speed – securely – between edge, core and cloud environments. Open networking solutions will prevail over proprietary as organisations recognise the only way to successfully manage and secure data for the long haul requires the flexibility and agility that only open software defined networking can deliver.
Intelligent devices change the way you work and collaborate
PC innovation continues to push new boundaries every year – screens are more immersive and bigger than ever, yet the form factor becomes smaller and thinner. But more and more, it’s what is running at the heart of that PC that is more transformational than ever. Software applications that use AI and machine learning create systems that now know where and when to optimise power and compute based on your usage patterns. With biometrics, PCs know it’s you from the moment you gaze at the screen. And now, AI and machine learning applications are smart enough to give your system the ability to dial up the sound and colour based on the content you’re watching or the game you’re playing.
Over the next year, these advancements in AI and machine learning will turn our PCs into even smarter and more collaborative companions. They’ll have the ability to optimise power and battery life for our most productive moments – and even become self-sufficient machines that can self-heal and self-advocate for repair – reducing the burden on the user and of course, reducing the number of IT incidents filed. That’s a huge increase in happiness and productivity for both the end users and the IT groups that support them.
Innovating with integrity, sourcing sustainably
Sustainable innovation will continue to take center stage, as organisations like ours want to ensure that their impact on the world doesn’t come at a cost to the planet. Greater investments in reuse and recycling for closed-loop innovation will accelerate – hardware will become smaller, more efficient, and built with recycled and reclaimed goods – minimising eWaste and maximising already existing materials. At Dell Technologies, we met our Legacy of Good 2020 goals ahead of schedule – so we’ve retired them and set new goals for 2030: to recycle an equivalent product for every product a customer buys, lead the circular economy with more than half of all product content being made from recycled or renewable material, and use 100% recycled or renewable material in all packaging.
As we enter the Next Data Decade, I’m optimistic and excited about what the future holds. The steps our customers will take in the next year to get the most out of their data will set forth new breakthroughs in technology that everyone will experience in some way – whether it’s a more powerful device, faster medical treatment, more accessible education, less waste and cleaner air. And before we know it, we’ll be looking forward to what the following 10 years will have in store.
Alibaba, specifically its research institute – the Alibaba DAMO Academy, just published their top 10 tech trends in 2020.
Here are the highlights from those top 10 tech trends that they are predicting will go big in 2020!
Here Are The Top 10 Tech Trends In 2020 From Alibaba!
Tech Trend #1 : AI Evolves From Perceptual Intelligence To Cognitive Intelligence
Artificial intelligence has reached or surpassed humans in areas of perceptual intelligence such as speech-to-text, natural language processing and video understanding; but in the field of cognitive intelligence – which requires external knowledge, logical reasoning, or domain migration – it is still in its infancy.
Cognitive intelligence will draw inspiration from cognitive psychology, brain science, and human social history, combined with techniques such as cross domain knowledge graph, causality inference, and continuous learning to establish effective mechanisms for stable acquisition and expression of knowledge.
These will enable machines to understand and utilise knowledge, achieving key breakthroughs from perceptual intelligence to cognitive intelligence.
Tech Trend #2 : In-Memory Computing Addresses Memory Wall Challenge In AI Computing
In Von Neumann architecture, memory and processor are separate and the computation requires data to be moved back and forth.
With the rapid development of data-driven AI algorithms in recent years, it has come to a point where the hardware becomes the bottleneck in the explorations of more advanced algorithms.
In Processing-in-Memory (PIM) architecture, in contrast to the Von Neumann architecture, memory and processor are fused together and computations are performed where data is stored with minimal data movement.
As such, computation parallelism and power efficiency can be significantly improved. We believe the innovations on PIM architecture are the tickets to next-generation AI.
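A toy timing model (all numbers below are hypothetical, purely for intuition) shows why removing data movement matters: in a Von Neumann design, operand traffic across the memory bus can dominate total time, while PIM computes where the data lives.

```python
# Toy model (hypothetical numbers, for intuition only): total time for a
# memory-bound operation under a Von Neumann design vs a PIM design.

def von_neumann_time_s(data_bytes: float, bus_gb_s: float, compute_s: float) -> float:
    # Data crosses the memory bus twice: read operands, write results.
    movement_s = 2 * data_bytes / (bus_gb_s * 1e9)
    return movement_s + compute_s

def pim_time_s(compute_s: float, pim_overhead: float = 1.2) -> float:
    # PIM computes where the data is stored; assume a modest (hypothetical)
    # compute-side overhead factor and negligible data movement.
    return compute_s * pim_overhead

data_bytes = 1e9      # 1 GB of operands (hypothetical workload)
bus_gb_s = 50.0       # hypothetical memory bus bandwidth
compute_s = 0.005     # hypothetical raw compute time

vn = von_neumann_time_s(data_bytes, bus_gb_s, compute_s)
pim = pim_time_s(compute_s)
print(f"Von Neumann: {vn*1e3:.1f} ms, PIM: {pim*1e3:.1f} ms, speed-up: {vn/pim:.1f}x")
```

In this sketch the memory-bus term (40 ms) dwarfs the compute term (5 ms), which is exactly the "memory wall" that PIM architectures target.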
Tech Trend #3 : Industrial IoT Will Power Digital Transformation
In 2020, 5G, the rapid development of IoT devices, cloud computing and edge computing will accelerate the fusion of information systems, communication systems, and industrial control systems.
Through advanced Industrial IoT, manufacturing companies can achieve automation of machines, in-factory logistics, and production scheduling, as a way to realise C2B smart manufacturing.
In addition, interconnected industrial systems can adjust and coordinate the production capability of both upstream and downstream vendors.
Ultimately, this will significantly increase manufacturers’ productivity and profitability. For manufacturers whose production goods are valued at hundreds of trillions of RMB, a 5-10% increase in productivity would translate into additional trillions of RMB.
Tech Trend #4 : Large Scale Collaboration Between Machines Becomes Possible
Traditional single-agent intelligence cannot meet the real-time perception and decision-making needs of large-scale intelligent devices.
The development of collaborative IoT sensing technology and 5G communications will enable collaboration among multiple agents – machines cooperating and competing with each other to complete target tasks.
The swarm intelligence that emerges from this multi-agent cooperation will further amplify the value of the intelligent system:
large-scale intelligent traffic light dispatching will realise dynamic and real-time adjustment,
warehouse robots will work together to complete cargo sorting more efficiently,
driverless cars can perceive the overall traffic conditions on the road, and
group unmanned aerial vehicle (UAV) collaboration will get through the last-mile delivery more efficiently.
Tech Trend #5 : Modular Chiplet Design Makes Chips Easier & Faster To Create
The traditional model of chip design cannot efficiently respond to the fast-evolving, fragmented and customised needs of chip production.
The open source SoC chip design based on RISC-V, high-level hardware description language, and IP-based modular chip design methods have accelerated the rapid development of agile design methods and the ecosystem of open source chips.
In addition, the modular design method based on chiplets uses advanced packaging methods to package the chiplets with different functions together, which can quickly customise and deliver chips that meet specific requirements of different applications.
Tech Trend #6 : Large Scale Blockchain Applications Will Gain Mass Adoption
BaaS (Blockchain-as-a-Service) will further reduce the barriers of entry for enterprise blockchain applications.
A variety of hardware chips embedded with core algorithms – deployed at the edge and in the cloud, and designed specifically for blockchain – will also emerge, allowing assets in the physical world to be mapped to assets on the blockchain, further expanding the boundaries of the Internet of Value and realising “multi-chain interconnection”.
In the future, a large number of innovative blockchain application scenarios with multi-dimensional collaboration across different industries and ecosystems will emerge, and large-scale production-grade blockchain applications with more than 10 million DAI (Daily Active Items) will gain mass adoption.
Tech Trend #7 : A Critical Period Before Large-Scale Quantum Computing
In 2019, the race to reach “Quantum Supremacy” brought the focus back to quantum computing. The demonstration, using superconducting circuits, boosted overall confidence in superconducting quantum computing as a path to a large-scale quantum computer.
In 2020, the field of quantum computing will receive increasing investment, which comes with intensified competition. The field is also expected to see a speed-up in industrialisation and the gradual formation of an ecosystem.
In the coming years, the next milestones will be the realisation of fault-tolerant quantum computing and the demonstration of quantum advantage on real-world problems. Either is a great challenge given present knowledge. Quantum computing is entering a critical period.
Tech Trend #8 : New Materials Will Revolutionise Semiconductor Devices
Under the pressure of both Moore’s Law and the explosive demand for computing power and storage, it is difficult for classic Si-based transistors to sustain the continued development of the semiconductor industry.
Until now, major semiconductor manufacturers still have no clear answer and option to chips beyond 3nm. New materials will make new logic, storage, and interconnection devices through new physical mechanisms, driving continuous innovation in the semiconductor industry.
For example, topological insulators and two-dimensional superconducting materials that can achieve lossless transport of electrons and spin could become the basis for new high-performance logic and interconnect devices, while new magnetic materials and new resistive-switching materials could realise high-performance magnetic memory such as SOT-MRAM, as well as resistive memory.
Tech Trend #9 : Growing Adoption Of AI Technologies That Protect Data Privacy
The compliance costs imposed by recent data protection laws and regulations governing data transfer are higher than ever before.
In light of this, there have been growing interests in using AI technologies to protect data privacy. The essence is to enable the data user to compute a function over input data from different data providers while keeping those data private.
Such AI technologies promise to solve the problems of data silos and lack of trust in today’s data sharing practices, and will truly unleash the value of data in the foreseeable future.
Tech Trend #10 : Cloud Becomes The Center Of IT Innovation
With the ongoing development of cloud computing technology, the cloud has grown far beyond the scope of IT infrastructure, and gradually evolved into the center of all IT technology innovations.
The cloud has a close relationship with almost all IT technologies, including new chips, new databases, self-driving adaptive networks, big data, AI, IoT, blockchain, quantum computing and so forth.
Meanwhile, it creates new technologies, such as serverless computing, cloud-native software architecture, software-hardware integrated design, as well as intelligent automated operation.
Cloud computing is redefining every aspect of IT, making new IT technologies more accessible for the public. Cloud has become the backbone of the entire digital economy.
NVIDIA just launched TensorRT 7, introducing the capability for Real-Time Conversational AI!
Here is a primer on the NVIDIA TensorRT 7, and the new real-time conversational AI capability!
NVIDIA TensorRT 7 with Real-Time Conversational AI
NVIDIA TensorRT 7 is their seventh-generation inference software development kit. It introduces the capability for real-time conversational AI, opening the door for human-to-AI interactions.
TensorRT 7 features a new deep learning compiler designed to automatically optimise and accelerate the increasingly complex recurrent and transformer-based neural networks needed for AI speech applications.
This boosts the performance of conversational AI components by more than 10X, compared to running them on CPUs. This drives down the latency below the 300 millisecond (0.3 second) threshold considered necessary for real-time interactions.
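A simple latency-budget check illustrates the claim. Only the >10X speed-up and the 300 ms threshold come from the article; the per-component CPU latencies below are hypothetical:

```python
# Illustrative latency budget for a conversational AI pipeline.
# Per-component CPU latencies are hypothetical; the >10X speed-up and the
# 300 ms real-time threshold are from the article.

cpu_latency_ms = {
    "speech recognition (ASR)": 1000.0,
    "language understanding (NLU)": 500.0,
    "speech synthesis (TTS)": 1400.0,
}
SPEEDUP = 10.0          # "more than 10X" with TensorRT 7 on GPU
THRESHOLD_MS = 300.0    # threshold considered necessary for real-time interaction

cpu_total = sum(cpu_latency_ms.values())
gpu_total = cpu_total / SPEEDUP

print(f"CPU pipeline: {cpu_total:.0f} ms (interactive? {cpu_total < THRESHOLD_MS})")
print(f"GPU pipeline: {gpu_total:.0f} ms (interactive? {gpu_total < THRESHOLD_MS})")
```

With these assumed numbers, the CPU pipeline misses the interactive threshold by an order of magnitude, while a 10X speed-up brings the whole pipeline just under 300 ms.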
TensorRT 7 Targets Recurrent Neural Networks
TensorRT 7 is designed to speed up AI models that are used to make predictions on time-series, sequence-data scenarios that use recurrent loop structures (RNNs).
RNNs are used not only for conversational AI speech networks; they also help with arrival-time planning for cars and satellites, prediction of events in electronic medical records, financial asset forecasting and fraud detection.
The use of RNNs has hitherto been limited to the few companies with the talent and manpower to hand-optimise code to meet real-time performance requirements.
With TensorRT 7’s new deep learning compiler, developers now have the ability to automatically optimise these neural networks to deliver the best possible performance and lowest latencies.
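As a reminder of the recurrent loop structure such a compiler has to optimise, here is a minimal Elman-style RNN step in plain Python (toy scalar weights, no training; a generic sketch, not TensorRT code):

```python
import math

# Minimal Elman-style RNN cell (toy sketch, not TensorRT code).
# h_t = tanh(w_x * x_t + w_h * h_{t-1} + b), kept scalar for clarity.

def rnn_step(x_t: float, h_prev: float, w_x: float, w_h: float, b: float) -> float:
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_sequence(xs, w_x=0.5, w_h=0.8, b=0.0):
    """Thread the hidden state through the whole sequence - this serial
    dependency is what makes RNN inference hard to parallelise."""
    h = 0.0
    states = []
    for x in xs:
        h = rnn_step(x, h, w_x, w_h, b)
        states.append(h)
    return states

states = run_sequence([1.0, 0.5, -0.2])
print([f"{h:.3f}" for h in states])
```

Because each step consumes the previous hidden state, the loop cannot simply be unrolled across many cores, which is why compiler-level optimisation of these recurrences matters so much for latency.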
The new compiler also optimises transformer-based models like BERT for natural language processing.
TensorRT 7 Availability
NVIDIA TensorRT 7 will be made available in the coming days for development and deployment for free to members of the NVIDIA Developer program.
NVIDIA just announced that they will be providing the transportation industry access to their NVIDIA DRIVE Deep Neural Networks (DNNs) for autonomous vehicle development! Here are the details!
NVIDIA DRIVE Deep Neural Networks : Access Granted!
To accelerate the adoption of NVIDIA DRIVE by the transportation industry for autonomous vehicle development, NVIDIA is providing access to the NVIDIA DRIVE Deep Neural Networks.
What this means is that autonomous vehicle developers will now be able to access all of NVIDIA’s pre-trained AI models and training code, and use them to improve their self-driving systems.
Using AI is central to the development of safe, self-driving cars. AI lets autonomous vehicles perceive and react to obstacles and potential dangers, or even changes in their surroundings.
Powering every self-driving car are dozens of Deep Neural Networks (DNNs) that tackle redundant and diverse tasks, to ensure accurate perception, localisation and path planning.
These DNNs cover tasks like traffic light and sign detection, object detection for vehicles, pedestrians and bicycles, and path perception, as well as gaze detection and gesture recognition within the vehicle.
Advanced NVIDIA DRIVE Tools
In addition to providing access to their DRIVE DNNs, NVIDIA also made available a suite of advanced NVIDIA DRIVE tools.
These NVIDIA DRIVE tools allow autonomous vehicle developers to customise and enhance the NVIDIA DRIVE DNNs using their own datasets and target feature set.
Active Learning improves model accuracy and reduces data collection costs by automating data selection using AI, rather than manual curation.
Federated Learning lets developers utilise datasets across countries, and with other developers while maintaining data privacy and protecting their own intellectual property.
Transfer Learning gives NVIDIA DRIVE customers the ability to speed up development of their own perception software by leveraging NVIDIA’s own autonomous vehicle development.
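Of the three, federated learning is the easiest to sketch: each party trains on its own data, and only model parameters – never the raw data – are shared and averaged. A minimal sample-weighted federated-averaging step (generic FedAvg, not NVIDIA's implementation; all names and numbers are hypothetical):

```python
# Minimal federated averaging (FedAvg) sketch - generic, not NVIDIA's code.
# Each participant contributes locally-trained weights plus its sample count;
# only parameters travel, the raw driving data stays private.

def federated_average(local_models):
    """local_models: list of (weights, num_samples) tuples.
    Returns the sample-weighted average of the weight vectors."""
    total = sum(n for _, n in local_models)
    dim = len(local_models[0][0])
    global_weights = [0.0] * dim
    for weights, n in local_models:
        for i, w in enumerate(weights):
            global_weights[i] += w * (n / total)
    return global_weights

# Hypothetical example: three developers with differently-sized datasets.
models = [
    ([0.10, 0.20], 1000),
    ([0.30, 0.10], 3000),
    ([0.20, 0.40], 1000),
]
print(federated_average(models))  # approximately [0.24, 0.18]
```

Weighting by sample count means the developer with the largest dataset moves the shared model the most, while no participant ever sees another's data.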
At GTC China 2019, DiDi announced that they will adopt NVIDIA GPUs and AI technologies to develop self-driving cars, as well as their cloud computing solutions.
DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!
This announcement comes after DiDi spun off their autonomous driving unit as an independent company in August 2019.
In their announcement, DiDi confirmed that they will use NVIDIA technologies in both their data centres and onboard their self-driving cars :
NVIDIA GPUs will be used to train machine learning algorithms in the data center
NVIDIA DRIVE will be used for inference in their Level 4 self-driving cars
NVIDIA DRIVE will fuse data from all types of sensors – cameras, LIDAR, radar, etc – and use numerous deep neural networks (DNNs) to understand the surrounding area, so the self-driving car can plan a safe way forward.
Those DNNs (deep neural networks) will require prior training using NVIDIA GPU data centre servers, and machine learning algorithms.
At Supercomputing 2019, Intel unveiled their oneAPI initiative for heterogeneous computing, promising to deliver a unified programming experience for developers.
Here is an overview of the Intel oneAPI unified programming model, and what it means for programmers!
The Need For Intel oneAPI
The modern computing environment is now a lot less CPU-centric, with the greater adoption of GPUs, FPGAs and custom-built accelerators (like the Alibaba Hanguang 800).
Their different scalar, vector, matrix and spatial architectures require different APIs and code bases, which complicates attempts to utilise a mix of those capabilities.
Intel oneAPI For Heterogeneous Computing
Intel oneAPI promises to change all that, offering a unified programming model for those different architectures.
It allows developers to create workloads and applications for multiple architectures on their platform of choice, without the need to develop and maintain separate code bases, tools and workflow.
Intel oneAPI comprises two components – the open industry initiative, and the Intel oneAPI beta toolkit :
Intel oneAPI Open Industry Initiative
This is a cross-architecture development model based on industry standards, and an open specification, to encourage broader adoption.
Intel oneAPI Beta Toolkit
This beta toolkit offers the Intel oneAPI specification components with direct programming (Data Parallel C++), API-based programming with performance libraries, and advanced analysis and debugging tools.
Developers can test code and workloads in the Intel DevCloud for oneAPI on multiple Intel architectures.
What Processors + Accelerators Are Supported By Intel oneAPI?
The beta Intel oneAPI reference implementation currently supports these Intel platforms :
Intel Xeon Scalable processors
Intel Core and Atom processors
Intel processor graphics (as a proxy for future Intel discrete data centre GPUs)
Intel FPGAs (Intel Arria, Stratix)
The oneAPI specification is designed to support a broad range of CPUs and accelerators from multiple vendors. However, it is up to those vendors to create their own oneAPI implementations and optimise them for their own hardware.
Are oneAPI Elements Open-Sourced?
Many oneAPI libraries and components are already open sourced, or soon will be.
What Companies Are Participating In The oneAPI Initiative?
According to Intel, more than 30 vendors and research organisations support the oneAPI initiative, including CERN openlab, SAP and the University of Cambridge.
Companies that create their own implementation of oneAPI and complete a self-certification process will be allowed to use the oneAPI initiative brand and logo.
Available Intel oneAPI Toolkits
At the time of its launch (17 November 2019), here are the toolkits that Intel has made available for developers to download and use :
Intel oneAPI Base Toolkit (Beta)
This foundational kit enables developers of all types to build, test, and deploy performance-driven, data-centric applications across CPUs, GPUs, and FPGAs. Comes with :
Intel oneAPI Data Parallel C++ Compiler
Intel Distribution for Python
Multiple optimized libraries
Advanced analysis and debugging tools
Domain Specific oneAPI Toolkits for Specialised Workloads :
oneAPI HPC Toolkit (beta) : Deliver fast C++, Fortran, OpenMP, and MPI applications that scale.
oneAPI DL Framework Developer Toolkit (beta) : Build deep learning frameworks or customize existing ones.
oneAPI IoT Toolkit (beta) : Build high-performing, efficient, reliable solutions that run at the network’s edge.
Dell Technologies just shared with us the key findings from their research that explore the future of connected living by the year 2030!
Find out how emerging technologies will transform how our lives will change by the year 2030!
Dell On The Future of Connected Living In 2030!
Dell Technologies conducted their research in partnership with the Institute for the Future (IFTF) and Vanson Bourne, surveying 1,100 business leaders across ten countries in Asia Pacific and Japan.
Let’s take a look at their key findings, and find out why they believe the future is brimming with opportunity thanks to emerging technologies.
Technological Shifts Transforming The Future By 2030
IFTF and a forum of global experts forecast that emerging technologies like edge computing, 5G, AI, Extended Reality (XR) and IoT will create these five major shifts in society :
1. Networked Reality
Over the next decade, the line between the virtual and the real will vanish. Cyberspace will become an overlay on top of our existing reality as our digital environment extends beyond televisions, smartphones and other displays.
This transformation will be driven by the deployment of 5G networks that enable high bandwidth, low-latency connections for streaming, interactive services, and multi-user media content.
2. Connected Mobility and Networked Matter
The vehicles of tomorrow will essentially be mobile computers, with the transportation system resembling packet-switched networks that power the Internet.
We will trust them to take us where we need to go in the physical world as we interact in the virtual spaces available to us wherever we are.
3. From Digital Cities to Sentient Cities
More than half of the world’s population lives in urban areas. According to the United Nations, this will increase to 68% over the next three decades.
This level of growth presents both huge challenges and great opportunities for businesses, governments and citizens.
Cities will quite literally come to life through their own networked infrastructure of smart objects, self-reporting systems and AI-powered analytics.
4. Agents and Algorithms
Our 2030 future will see everyone supported by a highly personalised “operating system for living” that frees up time by proactively supporting our day-to-day activities.
Such a Life Operating System (Life OS) will be context-aware, anticipating our needs before we even express them.
Instead of making us juggle different apps as we do today, the intelligent agent of the future will understand what we need, and liaise with various web services, other bots and networked objects to get the job done.
5. Robots with Social Lives
Within 10 years, we will have personal robots that will become our partners in life – enhancing our skills and extending our abilities.
In some cases, they will replace us, but this can mean freeing us to do the things we are good at, and enjoy.
In most cases, they can become our collaborators, helping to crowdsource innovations and accelerate progress through robot social networks.
Preparing For The Future Of Connected Living By 2030
Many businesses in APJ are already preparing for these shifts, with business leaders expressing these perceptions :
80% (82% in Malaysia) will restructure the way they spend their time by automating more tasks
70% (83% in Malaysia) welcome people partnering with machines/robots to surpass our human limitations
More than half of businesses anticipate Networked Reality becoming commonplace
– 63% (67% in Malaysia) say they welcome day-to-day immersion in virtual and augmented realities
– 62% (63% in Malaysia) say they welcome people being fitted with brain computer interfaces
These technological shifts are seismic in nature, leaving people and organisations grappling with change. Organisations that want to harness these emerging technologies will need to collect, process and make use of the data, while addressing public concerns about data privacy.
APJ business leaders are already anticipating some of these challenges :
78% (88% in Malaysia) will be more concerned about their own privacy by 2030 than they are today
74% (83% in Malaysia) consider data privacy to be a top societal-scale challenge that must be solved
49% (56% in Malaysia) would welcome self-aware machines
49% (43% in Malaysia) call for regulation and clarity on how AI is used
84% (85% in Malaysia) believe that digital transformation should be more widespread throughout their organisation
With its small size and low power consumption, it opens up the possibility of adding AI edge-computing capabilities to small commercial robots, drones, industrial IoT systems, network video recorders and portable medical devices.
The Jetson Xavier NX can be configured to deliver up to 14 TOPS at 10 W, or 21 TOPS at 15 W. It is powerful enough to run multiple neural networks in parallel, and process data from multiple high-resolution sensors simultaneously.
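Incidentally, both quoted operating points work out to the same efficiency of 1.4 TOPS per watt. A quick arithmetic check, using nothing beyond NVIDIA's quoted figures :

```python
# TOPS-per-watt at the two quoted Jetson Xavier NX operating points.
modes = {"15 W mode": (21, 15), "10 W mode": (14, 10)}  # (TOPS, watts)

for name, (tops, watts) in modes.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")  # both modes: 1.4 TOPS/W
```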
The NVIDIA Jetson Xavier NX runs on the same CUDA-X AI software architecture as all other Jetson processors, and is supported by the NVIDIA JetPack software development kit.
It is pin-compatible with the Jetson Nano, offering up to 15X higher performance than the Jetson TX2 in a smaller form factor.
It will not be available for a few more months, but developers can begin development today using the Jetson AGX Xavier Developer Kit, with a software patch to emulate the Jetson Xavier NX.
NVIDIA Jetson Xavier NX Specifications
NVIDIA Jetson Xavier NX
CPU : 6 x Arm 64-bit cores, with 6 MB L2 + 4 MB L3 caches
GPU : 384 CUDA cores, 48 Tensor cores, 2 NVDLA cores
AI Performance : 21 TOPS at 15 watts, or 14 TOPS at 10 watts
Memory : Up to 8 GB at 51.2 GB/s
Video : Encoding up to 2 x 4K30 streams; Decoding up to 2 x 4K60 streams
Camera : Up to six CSI cameras (32 via virtual channels), over up to 12 lanes (3×4 or 6×2) of MIPI CSI-2
Size : 70 x 45 mm (same form factor as the Jetson Nano)
NVIDIA Jetson Xavier NX Price + Availability
The NVIDIA Jetson Xavier NX will be available in March 2020 from NVIDIA’s distribution channels, priced at US$399.
The MLPerf Inference 0.5 benchmarks are officially released today, with NVIDIA declaring that they aced them for both datacenter and edge computing workloads.
Find out how well NVIDIA did, and why it matters!
The MLPerf Inference Benchmarks
MLPerf Inference 0.5 is the industry’s first independent suite of five AI inference benchmarks.
Applied across a range of form factors and four inference scenarios, the new MLPerf Inference Benchmarks test the performance of established AI applications like image classification, object detection and translation.
NVIDIA Wins MLPerf Inference Benchmarks For Datacenter + Edge
Thanks to the programmability of its computing platforms to cater to diverse AI workloads, NVIDIA was the only company to submit results for all five MLPerf Inference Benchmarks.
According to NVIDIA, their Turing GPUs topped all five benchmarks for both datacenter scenarios (server and offline) among commercially-available processors.
Meanwhile, their Jetson Xavier scored highest among commercially-available edge and mobile SoCs under both edge-focused scenarios – single stream and multi-stream.
The new NVIDIA Jetson Xavier NX that was announced today is a low-power version of the Xavier SoC that won the MLPerf Inference 0.5 benchmarks.
All of NVIDIA’s MLPerf Inference Benchmark results were achieved using NVIDIA TensorRT 6 deep learning inference software.
At the Samsung Developer Conference 2019, Samsung and IBM announced a joint platform that leverages Samsung Galaxy devices and IBM cloud technologies to introduce new 5G, AI-powered mobile solutions!
Here is what you need to know about this new Samsung-IBM AI IoT cloud platform, and the 5G AI-powered mobile solutions it’s powering for governments and enterprises.
Samsung – IBM AI IoT Cloud Platform For 5G Mobile Solutions!
Built using IBM Cloud technologies and Samsung Galaxy mobile devices, the new platform will help improve the work environment for employees in high-stress or high-risk occupations.
This new, unnamed Samsung-IBM platform will help governments and enterprises track their employees’ vitals, including heart rate and physical activity, to determine if an employee is in distress and requires help.
This will help reduce the risks to public employees who work in dangerous and high-stress situations. That is critical, because nearly 3 million deaths occur each year due to occupational accidents.
The Samsung – IBM AI IoT Cloud Platform In Use
5G mobile solutions based on the new Samsung-IBM AI IoT platform are being piloted by multiple police forces to monitor the health of first responders in real time, and provide situational awareness insights to them and their managers.
The platform can track, in real time, the safety and wellness indicators of first responders equipped with Samsung Galaxy Watches and Galaxy smartphones with 5G connectivity.
It can instantly alert emergency managers if there is a significant change in the safety parameters, which may indicate the first responder is in danger of a heart attack, heat exhaustion or other life-threatening events.
This allows them to anticipate potential dangers, and quickly send assistance. This should greatly reduce the risk of death and injuries to their employees.
At MWC Los Angeles 2019, NVIDIA announced major partnerships on their EGX edge computing platform with Microsoft, Ericsson and Red Hat to accelerate AI, IoT and 5G at the Edge.
Catch the official NVIDIA EGX briefing on these new updates, and find out what it means for edge computing!
The Official NVIDIA EGX Briefing @ MWC Los Angeles 2019
Before the official announcement, NVIDIA gave us an exclusive briefing on their EGX edge computing platform. We are sharing it with you, so you can hear it directly from Justin Boitano, General Manager of NVIDIA’s Enterprise & Edge Computing division.
Here is a summary of what NVIDIA unveiled at MWC Los Angeles 2019 :
NVIDIA EGX Early Adopters
NVIDIA actually announced EGX at Computex Taipei in June 2019 – a combination of the NVIDIA CUDA-X software and NVIDIA-certified GPU servers and devices.
Now, they are announcing that Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East have adopted the EGX platform, and so have the cities of San Francisco and Las Vegas.
Samsung Electronics — The Korean electronics giant is using AI at the edge for highly complex semiconductor design and manufacturing processes.
BMW — The German automaker is using intelligent video analytics in its South Carolina manufacturing facility to automate inspection. With EGX gathering data from multiple cameras and other sensors in inspection lines, BMW is helping ensure only the highest quality automobiles leave the factory floor.
NTT East — The Japanese telecom services giant is using EGX in its data centers to develop new AI-powered services in remote areas through its broadband access network. Using the EGX platform, NTT East will provide remote populations the computing power and connectivity required to build and deploy a wide range of AI applications at the edge.
Procter & Gamble — The world’s leading consumer goods company is working with NVIDIA to develop AI-enabled applications on top of the EGX platform for the inspection of products and packaging to help ensure they meet the highest safety and quality standards. P&G is using NVIDIA EGX to analyze thousands of hours of footage from inspection lines and immediately flag imperfections.
Las Vegas — The city is using EGX to capture vehicle and pedestrian data to ensure safer streets and expand economic opportunity. Las Vegas plans to use the data to autonomously manage signal timing and other operational capabilities.
San Francisco — The city’s Union Square Business Improvement District is using EGX to capture real-time pedestrian counts for local retailers, providing them a powerful business intelligence tool for engaging with their customers more effectively.
NVIDIA EGX Case Study : Walmart
Walmart is a pioneer user of EGX, deploying it in its Intelligent Retail Lab in Levittown, New York – a fully-operational grocery store where it’s exploring the ways AI can further improve in-store shopping experiences.
Using EGX’s advanced AI and edge capabilities, Walmart is able to process, in real time, the more than 1.6 terabytes of data generated each second, and can use AI to :
automatically alert associates to restock shelves,
open up new checkout lanes,
retrieve shopping carts, and
ensure product freshness in meat and produce departments.
NVIDIA EGX Partnership With Microsoft
NVIDIA announced a collaboration with Microsoft to enable closer integration between Microsoft Azure and the NVIDIA EGX platform, advancing edge-to-cloud AI computing capabilities for their clients.
In addition, NVIDIA-certified off-the-shelf servers — optimised to run Azure IoT Edge and ML services — are now available from more than a dozen leading OEMs, including Dell, Hewlett Packard Enterprise and Lenovo.
NVIDIA EGX Partnership With Ericsson
NVIDIA also announced that they are collaborating with Ericsson on developing virtualised RAN technologies.
Their ultimate goal is to commercialise those virtualised RAN technologies to deliver 5G networks with flexibility and shorter time-to-market for new services like augmented reality, virtual reality and gaming.
NVIDIA EGX Partnership With Red Hat
Finally, NVIDIA announced a collaboration with Red Hat to deliver software-defined 5G wireless infrastructure running on Red Hat OpenShift to the telecom industry.
Their customers will be able to use NVIDIA EGX and Red Hat OpenShift to deploy NVIDIA GPUs to accelerate AI, data science and machine learning at the edge.
The critical element enabling 5G providers to move to cloud-native infrastructure is NVIDIA Aerial. This software developer kit, also announced today, allows providers to build and deliver high-performance, software-defined 5G wireless RAN by delivering two essential advancements.
They are a low-latency data path directly from Mellanox network interface cards to GPU memory, and a 5G physical layer signal-processing engine that keeps all data within the GPU’s high-performance memory.
The Hanguang 800 chip will be used exclusively by Alibaba to power their own business operations, especially in product search and automatic translation, personalised recommendations and advertising.
According to Alibaba, merchants upload a billion product images to Taobao every day. It used to take their previous platform an hour to categorise those pictures, and then tailor search and personalise recommendations for millions of Taobao customers.
With Hanguang 800, they claim that the Taobao platform now takes just 5 minutes to complete the same task – a 12X reduction in time!
Alibaba Cloud will also be using it in their smart city projects. They are already using it in Hangzhou, where they previously used 40 GPUs to process video feeds with a latency of 300 ms.
After migrating to four Hanguang 800 NPUs, they were able to process the same video feeds with half the latency – just 150 ms.
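Both claims are straightforward to sanity-check. Here is a quick Python sketch using only the figures Alibaba quoted :

```python
# Sanity-check of Alibaba's quoted Hanguang 800 speed-up figures.

# Taobao image categorisation: 1 hour on the old platform vs 5 minutes now.
old_minutes, new_minutes = 60, 5
speedup = old_minutes / new_minutes
print(f"Taobao categorisation speed-up: {speedup:.0f}X")  # 12X, as claimed

# Hangzhou video analytics: 40 GPUs at 300 ms vs 4 Hanguang 800 NPUs at 150 ms.
gpu_count, npu_count = 40, 4
gpu_latency_ms, npu_latency_ms = 300, 150
print(f"Latency reduction: {gpu_latency_ms / npu_latency_ms:.0f}X, "
      f"with {gpu_count // npu_count}X fewer chips")
```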
Can We Buy Or Rent The Hanguang 800?
No, Alibaba will not be selling the Hanguang 800 NPU. Instead, they are offering it as a new AI cloud computing service.
Developers can now make a request for a Hanguang 800 cloud compute quota, which Alibaba Cloud claims is 100% more cost-effective than traditional GPUs.
Are There No Other Alternatives For Alibaba?
In our opinion, this is Alibaba’s way of preparing for an escalation of the US-Chinese trade war that has already savaged HUAWEI.
While Alibaba certainly has a few AI inference accelerator alternatives, from AMD and NVIDIA for example, it makes sense for them to spend the money and time to develop their own AI inference chip.
In the long term, the Chinese government wants to build a domestic capability to design and fabricate their own computer chips for national security reasons.
Kambyan Network recently invited us to a demonstration of their AleX laser cutting drone, which is designed to harvest oil palm fruits.
They also invited David Cirulli from the Embry Riddle Aeronautical University, and Associate Professor Sagaya Amalathas from Taylors University, to talk about the ManUsIA digital agriculture technology and the future jobs available to young teens today.
Kambyan ManUsIA Digital Agriculture
Kambyan Network has been working with David Cirulli of the Embry Riddle Aeronautical University in Singapore to develop what they call the ManUsIA digital agriculture technology.
Manusia is actually a Malay word for human, and it is an apt moniker because according to David Cirulli, ManUsIA stands for Man Using Intelligent Applications.
ManUsIA is a digital agriculture platform that Kambyan is developing as a SPaaS (Solution Platform as a Service) offering to improve yield and reduce manpower in agriculture.
It combines drones with cloud-based artificial intelligence and machine learning, using surveillance data and weather information to maximise yield and reduce manpower requirements for dirty, difficult and dangerous jobs.
ManUsIA will start with drones that are remotely piloted through mobile device integration, with the eventual goal of integrating intelligent drones that operate autonomously.
Future Jobs For Teens Today
Kambyan also invited Associate Professor Dr. Sagaya Amalathas, a Programme Director at Taylors University to talk about future jobs that teens today should consider.
She points out that the future will be highly dependent on new digital skills in the areas of Big Data Analytics and Artificial Intelligence, as well as Blockchain technology, and the Internet of Things.
She also shared some really useful insights into which careers will remain stable in these fast-changing times, which jobs will be lost, and what new opportunities will arise.
Kambyan invited her because their training arm, Adroit College, offers a Drone Operator & Robotics course.
The Professional Certificate in Robotic Process Automation – Field Operations (RPA-FO) course combines a 5-week intensive workshop with an apprenticeship and internship program at Kambyan, allowing students to graduate with a Professional Certificate in 11 months.
Kambyan AleX Laser Cutting Drone Demonstration
The star of the event was the Kambyan AleX laser cutting drone – the Airborne Laser Cutter Mark 1.
Designed to be a laser harvesting drone for the oil palm industry, it weighs 3 kilograms and is approximately 70 cm in diameter.
Powered by a 150 watt pulsed laser in the operational model, it is capable of cutting through 6 inches of plant material.
Piloted remotely by a drone operator in the current iteration, it will be used to trim the fronds of the oil palm trees and cut through the stem of oil palm fruit bunches to harvest them.
Using drones will not only reduce manpower, but also allow plantations to let their oil palm trees grow much taller, reducing the need to cut them down so often.
This will increase profit over the long term, while reducing the oil palm industry’s impact on the environment… in particular their contribution to the slash and burn activity that results in terrible haze in Southeast Asia.
In the demo, they used a less powerful laser for safety reasons. But as this video shows, that itself is a danger, because the weaker laser forces the drone to cut at close range!
Fortunately, the operational drone uses a much more powerful laser that can cut from a safer distance. This would prevent the drone from getting hit by falling oil palm fruit bunches or flying debris.
Microsoft Asia and IDC Asia Pacific just released findings of a study which suggests that higher education institutions in APAC can double their rate of innovation with AI (artificial intelligence)!
APAC Higher Education Can Double Innovation With AI!
The Microsoft-IDC study – Future Ready Skills : Assessing APAC Education Sector’s Use of AI – found that AI (artificial intelligence) will help double the rate of innovation for higher education institutions.
This involves using AI to better manage student performance and enhance student engagements, while optimising operations to reduce work amongst the faculty and administrative staff.
Based on the study, the top business drivers to adopt AI in higher education include better student engagement, higher funding, and accelerated innovation.
Institutions that have already adopted AI say that they are seeing improvements of 11% to 28% in those areas.
By 2021, Microsoft and IDC predict that institutions using AI will experience the biggest jump in funding – 3.7X, which is higher than most industry sectors in Asia Pacific.
AI In Higher Education Case Study
Developing a globally engaged citizenry is one of Japan’s key priorities. However, many students avoid studying or going abroad, as doing so can delay them from taking classes they need to graduate.
The Faculty of Engineering at Hokkaido University, for example, has chosen to implement AI as part of its mission to encourage students to study abroad.
They developed a Microsoft Azure-based e-learning system that leverages AI and automation capabilities. This system lets students keep up with coursework back home, with course preparation streamlined from days to mere hours.
AI Skills Required For The Future Of Higher Education
Both education leaders and their staff are equally positive about the impact of AI on higher education jobs.
A majority (61%) of both segments believe that AI will either help them do their jobs better, or reduce repetitive tasks.
21% of education leaders, and 13% of their staff also agree that AI will help create new jobs in higher education.
However, the requisite skills for an AI future are currently in shortage. The top three skills that education leaders believe will face a shortage in the next three years include :
IT skills and programming
Quantitative, analytical and statistical skills
The study also noted a disconnect between education leaders’ perception and their staff’s actual willingness to reskill for an AI future.
26% of education leaders believe that their staff have no interest to reskill, but in reality, only 11% of their staff had no interest to reskill.
NVIDIA just announced a FREE course on getting started with AI on the Jetson Nano!
Here is everything you need to know about this new Jetson Nano AI course – the first to be offered for FREE by the Deep Learning Institute!
The FREE AI Course For NVIDIA Jetson Nano
Looking to get started with AI, but don’t know how? The NVIDIA Deep Learning Institute has just published a new self-paced course that uses the newly released Jetson Nano Developer Kit to get up and running fast.
Best of all – this AI course for the NVIDIA Jetson Nano is FREE. This is the first Deep Learning Institute course to be offered for free.
In the course, students will learn to collect image data and use it to train, optimize, and deploy AI models for custom tasks like recognizing hand gestures, and image regression for locating a key point in an image.
Set up your Jetson Nano and camera
Collect image data for classification models
Annotate image data for regression models
Train a neural network on your data to create your own models
Run inference on the Jetson Nano with the models you create
Upon completion, you’ll be able to create your own deep learning classification and regression models with the Jetson Nano.
Some experience with Python is helpful but not required. You will need the NVIDIA Jetson Nano Developer Kit, of course.
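To make the collect-train-infer workflow concrete, here is a toy, framework-free Python sketch. A nearest-centroid classifier stands in for the course's actual neural network, and the 4-pixel "images" and gesture labels are invented for illustration – none of this is the course's own code :

```python
# Toy illustration of the collect -> train -> infer workflow the course teaches.
# A nearest-centroid classifier stands in for the neural network; the 4-pixel
# brightness vectors and gesture labels below are invented for this sketch.

def train(samples):
    """Compute one centroid (mean image) per class label."""
    centroids = {}
    for label, images in samples.items():
        n = len(images)
        centroids[label] = [sum(px) / n for px in zip(*images)]
    return centroids

def infer(centroids, image):
    """Return the label whose centroid is closest to the image."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], image))

# Step 1: collect labelled image data (here: tiny 4-pixel vectors).
data = {
    "thumbs_up":   [[0.9, 0.8, 0.1, 0.1], [0.8, 0.9, 0.2, 0.1]],
    "thumbs_down": [[0.1, 0.2, 0.9, 0.8], [0.2, 0.1, 0.8, 0.9]],
}

# Step 2: train a model on the collected data.
model = train(data)

# Step 3: run inference on a new image.
print(infer(model, [0.85, 0.8, 0.15, 0.1]))  # -> thumbs_up
```

On the Jetson Nano itself, the same three steps are carried out with a camera, a real dataset and a GPU-trained neural network, but the shape of the workflow is the same.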
The FREE Jetson Nano AI Course Requirements
Duration : 8 hours
Prerequisites: Basic familiarity with Python (helpful, not required)
In his first prediction for Earth 2050, Eugene Kaspersky believes that AI digital intuition will deliver cyberimmunity by 2050. Do YOU agree?
What Is Earth 2050
Earth 2050 is a Kaspersky social media project – an open crowdsourced platform, where everyone can share their visions of the future.
So far, there are nearly 400 predictions from 70+ visionaries, including futurologist Ian Pearson, astrophysicist Martin Rees, venture capitalist Steven Hoffman, architect-engineer Carlo Ratti, writer James Kunstler and sci-fi writer David Brin.
Eugene himself dabbles in cyberdivination, and shares with us a future of cyberimmunity created by AI digital intuition!
Eugene Kaspersky : From Digital Intuition To Cyberimmunity!
In recent years, digital systems have moved up to a whole new level. No longer assistants making life easier for us mere mortals, they’ve become the basis of civilization — the very framework keeping the world functioning properly in 2050.
This quantum leap forward has generated new requirements for the reliability and stability of artificial intelligence. Although some cyberthreats still haven’t become extinct since the romantic era around the turn of the century, they’re now dangerous only to outliers who for some reason reject modern standards of digital immunity.
The situation in many ways resembles the fight against human diseases. Thanks to the success of vaccines, the terrible epidemics that once devastated entire cities in the twentieth century are a thing of the past.
However, that’s where the resemblance ends. For humans, diseases like the plague or smallpox have been replaced by new, highly resistant “post-vaccination” diseases; but for the machines, things have turned out much better.
This is largely because the initial designers of digital immunity made all the right preparations for it in advance. In doing so, what helped them in particular was borrowing the systemic approaches of living systems and humans.
One of the pillars of cyber-immunity today is digital intuition, the ability of AI systems to make the right decisions in conditions where the source data are clearly insufficient to make a rational choice.
But there’s no mysticism here: Digital intuition is merely the logical continuation of the idea of machine learning. When the number and complexity of related self-learning systems exceeds a certain threshold, the quality of decision-making rises to a whole new level — a level that’s completely elusive to rational understanding.
An “intuitive solution” results from the superimposition of the experience of a huge number of machine-learning models, much like the result of the calculations of a quantum computer.
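That "superimposition of many models" idea corresponds to a familiar machine-learning pattern : ensemble voting, where many individually unreliable models combine into a reliable aggregate. A minimal sketch, with deliberately trivial noisy "models" invented purely for illustration :

```python
import random

# Ensemble voting: many weak models each give an opinion, and the majority
# decision is far more reliable than any individual one. The "models" here
# are trivial noisy rules, invented for illustration only.

random.seed(42)

def make_noisy_model(error_rate):
    """A 'model' that returns the true answer, except error_rate of the time."""
    def model(truth):
        return truth if random.random() > error_rate else not truth
    return model

# 101 models, each wrong ~30% of the time.
ensemble = [make_noisy_model(error_rate=0.3) for _ in range(101)]

def ensemble_correct(truth):
    """True if the majority of models report the true answer."""
    votes_for_truth = sum(model(truth) == truth for model in ensemble)
    return votes_for_truth > len(ensemble) // 2

trials = 1000
accuracy = sum(ensemble_correct(True) for _ in range(trials)) / trials
print(f"Individual model accuracy: ~70%; majority-vote accuracy: {accuracy:.1%}")
```

The aggregate is right almost every time even though each member is frequently wrong – a small-scale analogue of decision quality rising to "a whole new level" once enough learning systems are superimposed.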
So, as you can see, it is digital intuition, with its ability to instantly and correctly respond to unknown challenges, that has helped build the digital security standards of this new era.
In partnership with Intel, Dell Technologies announced the launch of five Dell AI Experience Zones across the APJ region!
Here is a quick primer on the new Dell AI Experience Zones, and what they mean for organisations in the APJ region!
The APJ Region – Ripe For Artificial Intelligence
According to the Dell Technologies Digital Transformation Index, Artificial Intelligence (AI) will be amongst the top spending priorities for business leaders in APJ.
Half of those surveyed plan to invest in AI in the next one to three years, as part of their digital transformation strategy. However, 95% of companies face a lack of in-house expertise in AI.
This is where the five new Dell AI Experience Zones come in…
The Dell AI Experience Zones
The new AI Experience Zones are designed to offer both customers and partners a comprehensive look at the latest AI technologies and solutions.
Built into the existing Dell Technologies Customer Solution Centres, they will showcase how the Dell EMC High-Performance Computing (HPC) and AI ecosystem can help them address business challenges and seize opportunities.
All five AI Experience Zones are equipped with technology demonstrations built around the latest Dell EMC PowerEdge servers. Powered by the latest Intel Xeon Scalable processors, they are paired with advanced, open-source AI software like Intel’s OpenVINO toolkit, as well as Dell EMC networking and storage technologies.
Customers and partners who choose to leverage the new AI Experience Zones will receive help in kickstarting their AI initiatives, from design and AI expert engagements, to masterclass training, installation and maintenance.
“The timely adoption of AI will create new opportunities that will deliver concrete business advantages across all industries and business functions,” says Chris Kelly, vice president, Infrastructure Solutions Group, Dell Technologies, APJ.
“Companies looking to thrive in a data-driven era need to understand that investments in AI are no longer optional – they are business critical. Whilst complex in nature, it is imperative that companies quickly start moving from theoretical AI strategies to practical deployments to stay ahead of the curve.”
Dell AI Experience Zones In APJ
The five new AI Experience Zones that Dell Technologies and Intel announced are located within the Dell Technologies Customer Solution Centres in these cities :
At the Dell Technologies World 2019, we were lucky enough to snag a seat at the talk on human-machine partnership by MIT Professor Erik Brynjolfsson, and MIT alumna and Affectiva CEO Rana el Kaliouby.
We managed to record the incredibly insightful session for everyone who could not make it for this exclusive guru session. This is a video you must not miss!
The DTW 2019 Guru Sessions
One of the best reasons to attend Dell Technologies World 2019 is the guru sessions. If you are lucky enough to reserve a seat, you will have the opportunity to listen to some of the world’s most brilliant thinkers and doers.
The Human-Machine Partnership
The talk on human-machine partnership by Professor Brynjolfsson and Rana el Kaliouby was the first of several guru sessions at Dell Technologies World 2019.
Entitled “How Emerging Technologies & Human Machine Partnerships Will Transform the Economy“, it focused on how technology changed human society, and what the burgeoning efforts in artificial intelligence will mean for humanity.
Here are the key points from their guru session on the human-machine partnership :
Erik Brynjolfsson (00:05 to 22:05) on the Human-Machine Partnership
You cannot replace old technologies with new technologies without rethinking the organisation or institution.
We are now undergoing a triple revolution :
– a rebalancing of mind and machine through Big Data and Artificial Intelligence
– a shift from products to (digital) platforms
– a shift from the core to crowd-based decision making
Shifting to data-driven decision-making based on Big Data results in higher productivity and greater profitability.
Since 2015, computers can now recognise objects better than humans, thanks to rapid advances in machine learning.
Even machine-based speech recognition has become as accurate as humans from 2017 onwards.
While new AI capabilities are opening up new possibilities in many fields, they are also drastically reducing or eliminating the need for humans.
Unlike platforms of the past, the new digital networks leverage “two-sided networks“. In many cases, one network is used to subsidise the other network, or make it free-to-use.
Shifting to crowd-based decision-making introduces diversity in the ways of thinking, gaining new perspectives and breakthroughs in problem-solving.
Digital innovations have greatly expanded the economy, but it doesn’t mean that everyone will benefit. In fact, there has been a great decoupling between the productivity and median income of the American worker in the past few decades.
Rana el Kaliouby (22:08 to 45:05) on the Human-Machine Partnership
Human communication is mostly conveyed indirectly – 93% is non-verbal. Half of that is facial expressions and gestures; the other half is vocal intonation.
Affectiva has the world’s largest emotion repository, with 5 billion frames of 8 million faces from 87 countries.
Facial expressions are largely universal, but there is a need for diversity in their data to avoid bias in their models. For example, there are gender differences that vary by culture.
They use computer vision, machine learning and deep learning to create an Emotional AI model that learns from all those facial expressions to accurately determine a person’s emotions.
Emotional artificial intelligence has many real-world and potential uses :
– detecting dangerous driving, allowing proactive measures to be taken
– personalising the ride in a future robot-taxi or autonomous car
– creating more engaging and effective social robots in the retail and hospitality industries
– helping autistic children understand how facial expressions correspond to emotions, and learn social cues
Erik Brynjolfsson + Rana el Kaliouby
Professor Erik Brynjolfsson holds many hats. He is currently :
Professor at the MIT Sloan School of Management,
Director of the MIT Initiative on the Digital Economy,
Director of the MIT Center for Digital Business, and
Research Associate at the National Bureau of Economic Research
Rana el Kaliouby was formerly a computer scientist at MIT, helping to form their Autism & Communication Technology Initiative. She currently serves as CEO of Affectiva, a spin-off from MIT’s Media Lab that focuses on emotion recognition technology.
AMD and Cray just unveiled the Frontier supercomputer, which will deliver exascale performance! Here is a primer on the world’s fastest supercomputer!
The Frontier Supercomputer – Designed By Cray, Powered By AMD
AMD announced that it is joining Cray, the U.S. Department of Energy and Oak Ridge National Laboratory to develop the Frontier supercomputer. It will be the fastest in the world, delivering exascale performance.
Developed at a cost of over US$600 million, the Frontier supercomputer will deliver over 1.5 exaflops of processing power when it comes online in the year 2021!
AMD Contributions To The Frontier Supercomputer
AMD is not just a provider of hardware – the CPUs and GPUs – for the Frontier supercomputer. Here is what AMD will contribute :
Experience in High Performance Computing (HPC) and Artificial Intelligence (AI)
Custom AMD EPYC CPU
Purpose-built Radeon Instinct GPU
High Bandwidth Memory (HBM)
Tightly integrated 4:1 GPU to CPU ratio
Custom, high speed coherent Infinity Fabric connection
Enhanced, open ROCm programming environment supporting AMD CPUs and GPUs
Frontier Supercomputer And The Future Of Exascale Computing
With the development of the Frontier supercomputer, AMD and Cray will usher in a new era of exascale computing. It will lay the foundation for advanced, high-performance Artificial Intelligence (AI), analytics and simulation.
The use of this supercomputer by the U.S. Department of Energy will further push the limits of scientific discovery for the U.S. and the world.
Microsoft and IDC Asia Pacific just unveiled the results of their latest study of the AI growth potential for Malaysia. Here is a video of their briefing, and a summary of their key findings!
The Microsoft-IDC Report On AI Growth Potential For Malaysia
The Microsoft-IDC report on AI growth potential for Malaysia is based on their 2018 survey of 100 business leaders and 100 workers in Malaysia.
Presenting the key findings were K Raman, Managing Director of Microsoft Malaysia, and Jun-Fwu Chin, Research Director for IDC Asia Pacific Datacenter Group.
Increased Innovation + Productivity
Titled “Future Ready Business: Assessing Asia Pacific’s Growth Potential Through AI”, it revealed that Artificial Intelligence (AI) will almost double the rate of innovation, and boost employee productivity by 60%, by 2021.
Low Uptake Of AI So Far
Even though 70% of the business leaders surveyed believe that AI is instrumental for their organisation’s competitiveness, only 26% of organisations in Malaysia have begun their AI initiatives.
The Top 5 Reasons For Adopting AI
For those companies who have already started their AI initiatives, these are their top 5 reasons :
Better customer engagements (31%)
Higher competitiveness (31%)
Accelerated innovation (12%)
Improved efficiency (12%)
More productive employees (8%)
Initial Results Of AI Initiatives
For those companies, their AI initiatives have resulted in tangible improvements of between 17% and 34% in those 5 areas. They forecast a further boost of 60% to 130% over a three-year horizon.
Malaysia Not Prepared
The study also evaluated the six dimensions critical to developing Malaysia’s AI growth potential, and found them wanting. In particular, Malaysia is weak in data and investments.
Top Three Challenges
Business leaders who are already adopting AI cited these three top challenges in realising their companies’ AI growth potential :
Lack of thought leadership and commitment to invest in AI
Lack of skills, resources and continuous learning programs
Lack of advanced analytics or infrastructure and tools to develop actionable insights
Leaders + Workers Are Positive About AI
The study also found that 67% of business leaders and 64% of workers in Malaysia are positive about AI’s impact on the future of jobs.
In addition, the study claims that workers are MORE optimistic than business leaders that AI will create jobs rather than replace them!
Editor’s Note : We find the workers’ high favourability towards AI to be questionable, and have requested more information about the type of workers surveyed by IDC.
It is possible that the workers they surveyed are high-level executives who see AI as a useful tool that will enhance their jobs, rather than the job killers that many low-level executives and blue-collar workers are worried about.
The 2019 Dell EMC Global Data Protection Index is out! Here is a summary of its key findings!
The 2019 Dell EMC Global Data Protection Index
The 2019 Dell EMC Global Data Protection Index is the third survey conducted by Dell EMC in collaboration with Vanson Bourne.
The survey involved 2,200 IT decision makers from public and private organisations (of 250+ employees) across 18 countries and 11 industries. It was designed to reveal the state of data protection in the Asia Pacific and Japan region.
What Did The 2019 Data Protection Index Reveal?
The 2019 Dell EMC Global Data Protection Index revealed a large increase in the amount of data managed – from 1.68 petabytes in 2016 to a staggering 8.13 petabytes in 2018.
They also saw a corresponding increase in awareness of the value of data, with 90% of respondents aware of the value of the data they manage. However, only 35% are monetising their data.
The Index also noted that despite an impressive jump in the number of data protection leaders (from 1% to 13%) and “adopters” (from 8% to 53%) since 2016, most of the survey respondents still face challenges in implementing the right data protection measures.
Organisations in Asia Pacific & Japan managed 8.13 PB of data in 2018 – an explosive growth of 384% compared to the 1.68 PB managed in 2016
90% of businesses see the potential value of data but only 35% are monetising it
94% face data protection challenges, and 43% struggle to find suitable data protection solutions for newer technologies like artificial intelligence and machine learning
More than a third (34%) of respondents are very confident that their data protection infrastructure is compliant with regional regulations, but only 18% believe their data protection solutions will meet all future challenges
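The headline growth figure can be checked directly from the Index’s own numbers. This is a quick sketch using only the 1.68 PB and 8.13 PB figures quoted above:

```python
# Check the reported 384% growth in data managed (2016 -> 2018),
# using the averages quoted in the Dell EMC Index.
data_2016_pb = 1.68   # average data managed in 2016, in petabytes
data_2018_pb = 8.13   # average data managed in 2018, in petabytes

growth_pct = (data_2018_pb - data_2016_pb) / data_2016_pb * 100
print(f"Growth: {growth_pct:.0f}%")  # → Growth: 384%
```

So the “explosive growth of 384%” is simply the percentage increase over the 2016 baseline.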
The State Of Data Protection In APJ
Data disruptions and data loss happen more frequently in APJ organisations than the global average. Some 80% of the APJ respondents reported experiencing some type of disruption over the last 12 months.
This is higher than the global average of 76%. Even worse – 32% were unable to recover their data using existing data protection solutions.
Although system downtime is a problem, the loss of data is particularly expensive. On average, 20 hours of downtime cost businesses US$ 494,869. The average data loss of 2.04 terabytes, on the other hand, costs nearly twice as much at US$ 939,703.
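Those averages imply some sobering unit costs. This quick sketch derives them using only the figures quoted above:

```python
# Derive per-hour and per-terabyte costs from the Index's averages.
downtime_cost_usd = 494_869   # average cost of 20 hours of downtime
downtime_hours = 20
data_loss_cost_usd = 939_703  # average cost of losing 2.04 TB of data
data_loss_tb = 2.04

cost_per_hour = downtime_cost_usd / downtime_hours   # ≈ US$24,743 per hour of downtime
cost_per_tb = data_loss_cost_usd / data_loss_tb      # ≈ US$460,639 per terabyte lost
ratio = data_loss_cost_usd / downtime_cost_usd       # ≈ 1.9 – the "nearly twice" above
print(round(cost_per_hour), round(cost_per_tb), round(ratio, 1))
```

In other words, every lost terabyte cost respondents nearly half a million US dollars on average.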
Challenges To Data Protection In APJ
The vast majority of respondents (some 94%) report that they encounter at least one barrier to data protection. The top three challenges in APJ were determined to be :
The inability to keep track of and protect all data because of the growth of DevOps and cloud development – 46% agree
The complexity of configuring and operating data protection software/hardware – 45.6% agree
The lack of data protection solutions for emerging technologies – 43.4% agree
They also struggled to find adequate data protection solutions for newer technologies :
Artificial intelligence and machine learning data – 54% agree
Cloud-native applications – 49% agree
Internet of Things – 40% agree
Cloud Is Changing The Data Protection Landscape
According to the 2019 Dell EMC Global Data Protection Index, organisations have increased their use of public cloud services – up from 27% in 2016 to 41% in 2018.
Nearly all of the organisations (99%) using public cloud are leveraging it as part of their data protection strategy. The top use case – backup or snapshot services to protect data and workloads.
More than 60% of the respondents also consider the scalability of data protection solutions important, in anticipation of the inevitable boom of cloud workloads.
Regulation Is Not A Key Concern
Compliance with data privacy regulations like the EU’s General Data Protection Regulation (GDPR) is not a key concern for most of these organisations. Only 36% listed it as a top data protection challenge.
At its first artificial intelligence (AI) developer conference in Beijing on Nov. 14 and 15, Intel kicked off the event by launching the Intel Neural Compute Stick 2 (Intel NCS 2), which is designed for building smarter AI algorithms and prototyping computer vision applications at the network edge.
Intel Neural Compute Stick 2 (NCS 2) Launched!
Based on the Intel Movidius Myriad X vision processing unit (VPU) and supported by the Intel Distribution of OpenVINO toolkit, the Intel NCS 2 speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick.
The Intel NCS 2 enables deep neural network testing, tuning and prototyping. Developers can go from prototyping into production leveraging a range of Intel vision accelerator form factors in real-world applications.
“The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn’t exist before. We’re excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2.” – Naveen Rao, Intel corporate vice president and general manager of the AI Products Group
What Does The Intel Neural Compute Stick 2 Do?
Connecting computer vision and AI to Internet of Things (IoT) and edge device prototypes is made easy with the enhanced capabilities of the Intel NCS 2. It will also enable developers working on a smart camera, a drone, an industrial robot or the next must-have smart home device to prototype faster and smarter.
It may look like a standard USB thumb drive, but there is so much more inside. The Intel NCS 2 is powered by the latest generation of Intel VPU – the Intel Movidius Myriad X VPU. It is the first VPU to feature a neural compute engine – a dedicated hardware neural network inference accelerator delivering additional performance.
Combined with the Intel Distribution of the OpenVINO toolkit supporting more networks, the Intel NCS 2 offers developers greater prototyping flexibility. Additionally, the Intel AI: In Production ecosystem allows developers to port their Intel NCS 2 prototypes to other form factors and productize their designs.
How The Intel Neural Compute Stick 2 Works
With just a laptop and the Intel NCS 2, developers can have their AI and computer vision applications running in minutes. The Intel NCS 2 runs on a standard USB 3.0 port and requires no additional hardware. This enables users to seamlessly convert and then deploy PC-trained models to a wide range of devices natively and without internet or cloud connectivity.
The first-generation Intel NCS, launched in July 2017, has fueled a community of tens of thousands of developers. It has been featured in more than 700 developer videos and has been utilized in dozens of research papers. With the NCS 2, Intel is empowering the AI community to develop many more ambitious applications.
Where To Buy Intel Neural Compute Stick 2
Here are some direct links to purchase the Intel Neural Compute Stick 2 :
The #FindDali marketing campaign has left many netizens in Kuala Lumpur puzzled for weeks. Who is Dali??? Why should we find Dali? All that was revealed at a smashing party last night. Check out the full story below!
Find Dali – The SOCAR-Powered Transportation Aide
For weeks, Kuala Lumpur has been inundated with advertisements to find Dali. People wondered about the ads asking, “Would you rather brake in traffic or take a break?” or “Who would free you from your car loans?“.
Leon Foong, CEO of SOCAR, revealed last night that Dali is Kuala Lumpur’s first smart transportation aide. Designed to give you access to your mobility-related information – from car loans to the cheapest car parks and best eating places – it will help you get around KL, whether it’s with public transport, car sharing, e-hailing or even your own car!
Dali is powered by the car-sharing startup, SOCAR and its multi-flex community, and backed by an AI engine. Dali is more than just an AI bot. It’s meant to offer the multi-flex community the ability to start threads on various topics, and use their travel experiences to guide other commuters asking the same questions, or facing the same situations.
“To put it simply, Dali is YOU and every other person who is a part of the multiflex transportation matrix. It is a community first platform built for the community of multiflexers and is powered by the community” said Leon Foong, CEO of SOCAR. “We spend a sizeable amount of time each day lamenting on why we waste so much time on commuting and getting around. However things aren’t going to get better if we just internalise this problem.”
“With Dali, we want to give people a voice, a voice to change things, a voice to shape the future of mobility by expressing what we truly want and allowing the community to come together to build solutions that will make our cities more liveable. SOCAR believes in the power of the collective, and FindDali.my is the tool that allows us to harness the power of that collective.”
The #FindDali platform was officially launched at a smashing party, going live at the stroke of midnight. Starting today (7 September 2018), everyone can use FindDali. Not just SOCAR members. EVERYONE.
Samsung Research, the advanced Research & Development (R&D) hub of Samsung Electronics, has dedicated substantial effort to creating ground-breaking AI technologies. And it has succeeded with the ConZNet algorithm. Here’s the lowdown!
Samsung ConZNet Algorithm Tops Two AI Challenges
The Samsung Research R&D team used their ConZNet algorithm to rank first in Microsoft’s MAchine Reading COmprehension (MS MARCO) competition, and won “Best Performance” in TriviaQA, which is hosted by the University of Washington.
MS MARCO and TriviaQA are among the most actively researched and used machine reading comprehension competitions in the world. In these competitions, AI algorithms are tested on their ability to process natural language questions and answers, drawing on written text from various types of documents such as news articles and blog posts.
Competitions such as MS MARCO and TriviaQA allow contestants to participate at any time, and rankings are altered according to real-time test results.
What Is The ConZNet Algorithm?
Samsung Research’s ConZNet algorithm advances machine intelligence by giving reasonable feedback on outcomes, similar to a carrot-and-stick (or reinforcement) strategy in the learning process. ConZNet takes into account how people deliver queries and answers online in natural language – the key factor in determining the winners of these competitions.
What Are The Potential Uses Of ConZNet?
With this win, there is high potential for introducing Samsung Research’s AI algorithm to other departments in Samsung Electronics, such as Home Appliances and Smartphones.
Apart from that, departments dealing with customer services are also showing high interest in the AI, especially since AI-based customer services like chatbots have emerged as hot topics in recent times.
Samsung AI Centers
Samsung also revealed that they have begun launching global AI Centers, to collaborate with leading AI experts. Eventually, they hope the AI technologies developed by Samsung Research will be adopted and integrated into Samsung Electronics products and services.