Tag Archives: Machine learning

Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!


Dell Technologies COO and Vice Chairman, Jeff Clarke, reveals his tech predictions for 2020, the start of what Dell Technologies considers as the Next Data Decade!

 

Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!

It’s hard to believe that we’re heading into the year 2020 – a year that many have marked as a milestone in technology. Autonomous cars lining our streets, virtual assistants predicting our needs and taking our requests, connected and intelligent everything across every industry.

When I stop to think about what has been accomplished over the last decade – it’s quite remarkable.  While we don’t have fully autonomous cars zipping back and forth across major freeways with ease, automakers are getting closer to deploying autonomous fleets in the next few years.

Many of the everyday devices, systems and applications we use are connected and intelligent – including healthcare applications, industrial machines and financial systems – forming what is now deemed “the edge.”

At the root of all that innovation and advancement are massive amounts of data and compute power, and the capacity across edge, cloud and core data center infrastructure to put data through its paces. And with the amount of data coming our way in the next 10 years – we can only imagine what the world around us will look like in 2030, with apps and services we haven’t even thought of yet.

2020 marks the beginning of what we at Dell Technologies are calling the Next Data Decade, and we are no doubt entering this era with new – and rather high – expectations of what technology can make possible for how we live, work and play. So what new breakthroughs and technology trends will set the tone for what’s to come over the next 10 years? Here are my top predictions for the year ahead.

2020 proves it’s time to keep IT simple

We’ve got a lot of data on our hands…big data, metadata, structured and unstructured data – data living in clouds, in devices at the edge, in core data centers…it’s everywhere. But organisations are struggling to ensure the right data is moving to the right place at the right time. They lack data visibility – the ability for IT teams to quickly access and analyse the right data – because there are too many systems and services woven throughout their IT infrastructure. As we kick off 2020, CIOs will make data visibility a top IT imperative because, after all, data is what makes the flywheel of innovation spin.

We’ll see organisations accelerate their digital transformation by simplifying and automating their IT infrastructure and consolidating systems and services into holistic solutions that enable more control and clarity. Consistency in architectures, orchestration and service agreements will open new doors for data management – and that ultimately gives data the ability to be used as part of AI and Machine Learning to fuel IT automation. And all of that enables the better, faster business outcomes that the innovation of the next decade will thrive on.

Cloud co-existence sees rolling thunder

The idea that public and private clouds can and will co-exist becomes a clear reality in 2020. Multi-cloud IT strategies supported by hybrid cloud architectures will play a key role in ensuring organisations have better data management and visibility, while keeping their data accessible and secure. In fact, IDC predicted that by 2021, over 90% of enterprises worldwide will rely on a mix of on-premises/dedicated private clouds, several public clouds, and legacy platforms to meet their infrastructure needs.

But private clouds won’t simply exist within the heart of the data center. As 5G and edge deployments continue to roll out, private hybrid clouds will exist at the edge to ensure the real-time visibility and management of data everywhere it lives.

That means organisations will expect more of their cloud and service providers to ensure they can support their hybrid cloud demands across all environments. Further, we’ll see security and data protection become deeply integrated as part of hybrid cloud environments, notably where containers and Kubernetes continue to gain momentum for app development. Bolting security measures onto cloud infrastructure will be a non-starter…it’s got to be inherently built into the fabric of the overall data management strategy, from edge to core to cloud.

What you get is what you pay for

One of the biggest hurdles for IT decision makers driving transformation is resources. CapEx and OpEx can often be limiting factors when trying to plan and predict compute and consumption needs for the year ahead…never mind the next three to five years. SaaS and cloud consumption models have increased in adoption and popularity, providing organisations with the flexibility to pay for what they use, as they go.

In 2020, flexible consumption and as-a-service options will accelerate rapidly as organisations seize the opportunity to transform into software-defined and cloud-enabled IT. As a result – they’ll be able to choose the right economic model for their business to take advantage of end-to-end IT solutions that enable data mobility and visibility, and crunch even the most intensive AI and Machine Learning workloads when needed.

“The Edge” rapidly expands into the enterprise

The “Edge” continues to evolve – with many working hard to define exactly what it is and where it exists. Once limited to the Internet of Things (IoT), the edge now touches almost every system, application and service – and the people and places that use them. It is emerging in many places, and it’s going to expand with enterprise organisations leading the way, delivering the IT infrastructure to support it.

5G connectivity is creating new use cases and possibilities for healthcare, financial services, education and industrial manufacturing. As a result, SD-WAN and software-defined networking solutions become a core thread of a holistic IT infrastructure solution – ensuring massive data workloads can travel at speed – securely – between edge, core and cloud environments. Open networking solutions will prevail over proprietary ones as organisations recognise that successfully managing and securing data for the long haul requires the flexibility and agility that only open software-defined networking can deliver.

Intelligent devices change the way you work and collaborate

PC innovation continues to push new boundaries every year – screens are more immersive and bigger than ever, yet the form factor becomes smaller and thinner. But more and more, it’s what is running at the heart of that PC that is more transformational than ever. Software applications that use AI and machine learning create systems that now know where and when to optimise power and compute based on your usage patterns. With biometrics, PCs know it’s you from the moment you gaze at the screen. And now, AI and machine learning applications are smart enough to give your system the ability to dial up the sound and colour based on the content you’re watching or the game you’re playing.

Over the next year, these advancements in AI and machine learning will turn our PCs into even smarter and more collaborative companions. They’ll have the ability to optimise power and battery life for our most productive moments – and even become self-sufficient machines that can self-heal and self-advocate for repair – reducing the burden on the user and of course, reducing the number of IT incidents filed. That’s a huge increase in happiness and productivity for both the end users and the IT groups that support them.

Innovating with integrity, sourcing sustainably

Sustainable innovation will continue to take center stage, as organisations like ours want to ensure the impact they have on the world doesn’t come at a dangerous cost to the planet. Greater investments in reuse and recycling for closed-loop innovation will accelerate – hardware becomes smaller and more efficient, and is built with recycled and reclaimed goods – minimising eWaste and maximising already existing materials. At Dell Technologies, we met our Legacy of Good 2020 goals ahead of schedule – so we’ve retired them and set new goals for 2030: recycle an equivalent product for every product a customer buys, lead the circular economy with more than half of all product content made from recycled or renewable material, and use 100% recycled or renewable material in all packaging.

As we enter the Next Data Decade, I’m optimistic and excited about what the future holds. The steps our customers will take in the next year to get the most out of their data will set forth new breakthroughs in technology that everyone will experience in some way – whether it’s a more powerful device, faster medical treatment, more accessible education, less waste and cleaner air. And before we know it, we’ll be looking forward to what the following 10 years will have in store.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

At GTC China 2019, DiDi announced that they will adopt NVIDIA GPUs and AI technologies to develop self-driving cars, as well as their cloud computing solutions.

 

DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

This announcement comes after DiDi spun off their autonomous driving unit as an independent company in August 2019.

In their announcement, DiDi confirmed that they will use NVIDIA technologies in both their data centres and onboard their self-driving cars :

  • NVIDIA GPUs will be used to train machine learning algorithms in the data center
  • NVIDIA DRIVE will be used for inference in their Level 4 self-driving cars

NVIDIA DRIVE will fuse data from all types of sensors – cameras, LIDAR, radar, etc – and use numerous deep neural networks (DNNs) to understand the surrounding area, so the self-driving car can plan a safe way forward.

Those DNNs will first be trained on NVIDIA GPU data centre servers, using machine learning algorithms.

Recommended : NVIDIA DRIVE AGX Orin for Autonomous Vehicles Revealed!

 

DiDi Cloud Computing Will Use NVIDIA Tech Too

DiDi also announced that DiDi Cloud will adopt and launch new vGPU (virtual GPU) cloud servers based on NVIDIA GPUs.

The new vGPU licence mode will offer more affordable and flexible GPU cloud computing services for remote computing, rendering and gaming.

 



The Alibaba Hanguang 800 (含光 800) AI NPU Explained!

At the Apsara Computing Conference 2019, Alibaba Group unveiled details of their first AI inference NPU – the Hanguang 800 (含光 800).

Here is EVERYTHING you need to know about the Alibaba Hanguang 800 AI inference NPU!

Updated @ 2019-09-27 : Added more details, including a performance comparison against its main competitors.

Originally posted @ 2019-09-25

 

What Is The Alibaba Hanguang 800?

The Alibaba Hanguang 800 is a neural processing unit (NPU) for AI inference applications. It was specifically designed to accelerate machine learning and AI inference tasks.

 

What Does Hanguang Mean?

The name 含光 (Hanguang) literally means “contains light”.

While the name may suggest that it uses photonics, it does not; that light-based technology is still at least a decade from commercialisation.

 

What Are The Hanguang 800 Specifications?

Not much is known about the Hanguang 800, other than that it has 17 billion transistors, and is fabricated on the 12 nm process technology.

Also, it is designed for inference only, unlike the HUAWEI Ascend 910 AI chip, which can handle both training and inference.

Recommended : 3rd Gen X-Dragon Architecture by Alibaba Cloud Explained!

 

Who Designed The Hanguang 800?

The Hanguang 800 was developed in just 7 months by Alibaba’s research unit, T-Head, followed by a 3-month tape-out.

T-Head, whose Chinese name Pingtouge (平头哥) means honey badger in English, is responsible for designing chips for cloud and edge computing under Alibaba Cloud / Aliyun.

Earlier this year, T-Head revealed a high-performance IoT processor called XuanTie 910.

Based on the open-source RISC-V instruction set, the 16-core XuanTie 910 is targeted at heavy-duty IoT applications like edge servers, networking gateways, and self-driving automobiles.

 

How Fast Is Hanguang 800?

Alibaba claims that the Hanguang 800 “largely” outpaces the industry average performance, with image processing efficiency about 12X better than GPUs :

  • Single chip performance : 78,563 images per second (IPS)
  • Computational efficiency : 500 IPS per watt (Resnet-50 Inference Test)
                              Hanguang 800   Habana Goya   Cambricon MLU270   NVIDIA T4      NVIDIA P4
Fab Process                   12 nm          16 nm         16 nm              12 nm          16 nm
Transistors                   17 billion     NA            NA                 13.6 billion   7.2 billion
Performance (ResNet-50)       78,563 IPS     15,433 IPS    10,000 IPS         5,402 IPS      1,721 IPS
Peak Efficiency (ResNet-50)   500 IPS/W      150 IPS/W     143 IPS/W          78 IPS/W       52 IPS/W
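To put those figures in perspective, here is a quick, purely illustrative Python sketch that normalises each chip against the NVIDIA T4, using only the numbers quoted above:

```python
# ResNet-50 inference figures quoted above: (images per second, IPS per watt)
chips = {
    "Hanguang 800": (78_563, 500),
    "Habana Goya": (15_433, 150),
    "Cambricon MLU270": (10_000, 143),
    "NVIDIA T4": (5_402, 78),
    "NVIDIA P4": (1_721, 52),
}

t4_ips, t4_eff = chips["NVIDIA T4"]

for name, (ips, eff) in chips.items():
    # Relative throughput and power efficiency versus the T4 baseline
    print(f"{name}: {ips / t4_ips:.1f}x throughput, {eff / t4_eff:.1f}x efficiency")
```

By this tally, the Hanguang 800 delivers roughly 14.5X the T4’s throughput and about 6.4X its efficiency; Alibaba’s “about 12X better than GPUs” efficiency claim presumably uses a different GPU baseline.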

Recommended : 2nd Gen EPYC – Everything You Need To Know Summarised!

 

Where Will Hanguang 800 Be Used?

The Hanguang 800 chip will be used exclusively by Alibaba to power their own business operations, especially in product search, automatic translation, personalised recommendations and advertising.

According to Alibaba, merchants upload a billion product images to Taobao every day. It used to take their previous platform an hour to categorise those pictures, and then tailor search and personalise recommendations for millions of Taobao customers.

With Hanguang 800, they claim that the Taobao platform now takes just 5 minutes to complete the task – a 12X reduction in time!

Alibaba Cloud will also be using it in their smart city projects. They are already using it in Hangzhou, where they previously used 40 GPUs to process video feeds with a latency of 300 ms.

After migrating to four Hanguang 800 NPUs, they were able to process the same video feeds with half the latency – just 150 ms.
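Assuming the four NPUs handle the same aggregate video workload as the forty GPUs did (an assumption – Alibaba did not publish the exact workload figures), the implied per-accelerator gain can be sketched as:

```python
# Published Hangzhou figures: 40 GPUs at 300 ms vs 4 Hanguang 800 NPUs at 150 ms
gpu_count, gpu_latency_ms = 40, 300
npu_count, npu_latency_ms = 4, 150

chip_reduction = gpu_count / npu_count          # 10x fewer accelerators
latency_gain = gpu_latency_ms / npu_latency_ms  # 2x lower latency

# Rough reading: if the workload is identical, each NPU does ~10x the work
# at twice the speed - an implied ~20x per-accelerator improvement.
implied_per_chip_gain = chip_reduction * latency_gain
print(chip_reduction, latency_gain, implied_per_chip_gain)  # 10.0 2.0 20.0
```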

 

Can We Buy Or Rent The Hanguang 800?

No, Alibaba will not be selling the Hanguang 800 NPU. Instead, they are offering it as a new AI cloud computing service.

Developers can now make a request for a Hanguang 800 cloud compute quota, which Alibaba Cloud claims is 100% more cost-effective than traditional GPUs.

 

Are There No Other Alternatives For Alibaba?

In our opinion, this is Alibaba’s way of preparing for an escalation of the US-Chinese trade war that has already savaged HUAWEI.

While Alibaba certainly have a few AI inference accelerator alternatives, from AMD and NVIDIA for example, it makes sense for them to spend money and time developing their own AI inference chip.

In the long term, the Chinese government wants to build a domestic capability to design and fabricate their own computer chips for national security reasons.

Recommended : The HUAWEI Trump Ban – Everything You Need To Know!

 



Kambyan ManUsIA + AleX Laser Cutting Drone Technology!

Kambyan Network recently invited us to a demonstration of their AleX laser cutting drone, which is designed to harvest oil palm fruits.

They also invited David Cirulli from the Embry Riddle Aeronautical University, and Associate Professor Sagaya Amalathas from Taylor’s University, to talk about the ManUsIA digital agriculture technology and the future jobs available to young teens today.

 

Kambyan ManUsIA Digital Agriculture

Kambyan Network has been working with David Cirulli of the Embry Riddle Aeronautical University in Singapore to develop what they call the ManUsIA digital agriculture technology.

Manusia is actually a Malay word for human, and it is an apt moniker because according to David Cirulli, ManUsIA stands for Man Using Intelligent Applications.

ManUsIA is a digital agriculture platform that Kambyan is developing as a SPaaS (Solution Platform as a Service) offering to improve yield and reduce manpower in agriculture.

It combines the use of drones with artificial intelligence and machine learning capabilities on the cloud to make use of surveillance data and weather information to maximise yield and reduce manpower requirements for dirty, difficult and dangerous jobs.

ManUsIA will start with drones that are remotely piloted through mobile device integration, and will eventually incorporate intelligent drones that work independently.

 

Future Jobs For Teens Today

Kambyan also invited Associate Professor Dr. Sagaya Amalathas, a Programme Director at Taylor’s University, to talk about future jobs that teens today should consider.

She points out that the future will be highly dependent on new digital skills in the areas of Big Data Analytics and Artificial Intelligence, as well as Blockchain technology, and the Internet of Things.

She also shared some really useful information on which careers will remain stable in these fast-changing times, which jobs will be lost, and what new opportunities will arise.

 

Kambyan invited her because their training arm, Adroit College, offers a Drone Operator & Robotics course.

The Professional Certificate in Robotic Process Automation – Field Operations (RPA-FO) course combines a 5-week intensive workshop with an apprenticeship and internship program at Kambyan, allowing the student to graduate with a Professional Certificate in 11 months.

 

Kambyan AleX Laser Cutting Drone Demonstration

The star of the event was the Kambyan AleX laser cutting drone – the Airborne Laser Cutter Mark 1.

Designed to be a laser harvesting drone for the oil palm industry, it weighs 3 kilograms and is approximately 70 cm in diameter.

Powered by a 150 watt pulsed laser in the operational model, it is capable of cutting through 6 inches of plant material.

Piloted remotely by a drone operator in the current iteration, it will be used to trim the fronds of the oil palm trees and cut through the stem of oil palm fruit bunches to harvest them.

Using drones will not only reduce manpower, it will allow plantations to let their oil palm trees grow much higher, reducing the need to cut them down so often.

This will increase profit over the long term, while reducing the oil palm industry’s impact on the environment… in particular, its contribution to the slash-and-burn activity that results in terrible haze in Southeast Asia.

In the demo, they used a less powerful laser for safety reasons. But as the demonstration video showed, even that posed a danger!


Fortunately, the operational drone uses a much more powerful laser to cut at a safer distance. This would prevent the drone from getting hit by falling oil palm fruits or flying debris.

 



Why AI Digital Intuition Will Deliver Cyberimmunity By 2050!

In his first prediction for Earth 2050, Eugene Kaspersky believes that AI digital intuition will deliver cyberimmunity by 2050. Do YOU agree?

 

What Is Earth 2050

Earth 2050 is a Kaspersky social media project – an open crowdsourced platform, where everyone can share their visions of the future.

So far, there are nearly 400 predictions from 70+ visionaries, including futurologist Ian Pearson, astrophysicist Martin Rees, venture capitalist Steven Hoffman, architect-engineer Carlo Ratti, writer James Kunstler and sci-fi writer David Brin.

Eugene himself dabbles in cyberdivination, and shares with us a future of cyberimmunity created by AI digital intuition!

 

Eugene Kaspersky : From Digital Intuition To Cyberimmunity!

In recent years, digital systems have moved up to a whole new level. No longer assistants making life easier for us mere mortals, they’ve become the basis of civilization — the very framework keeping the world functioning properly in 2050.

This quantum leap forward has generated new requirements for the reliability and stability of artificial intelligence. Although some cyberthreats from the romantic era around the turn of the century still haven’t become extinct, they’re now dangerous only to outliers who for some reason reject modern standards of digital immunity.

The situation in many ways resembles the fight against human diseases. Thanks to the success of vaccines, the terrible epidemics that once devastated entire cities in the twentieth century are a thing of the past.

 

However, that’s where the resemblance ends. For humans, diseases like the plague or smallpox have been replaced by new, highly resistant “post-vaccination” diseases; but for the machines, things have turned out much better.

This is largely because the initial designers of digital immunity made all the right preparations for it in advance. In doing so, what helped them in particular was borrowing the systemic approaches of living systems and humans.

One of the pillars of cyber-immunity today is digital intuition, the ability of AI systems to make the right decisions in conditions where the source data are clearly insufficient to make a rational choice.

But there’s no mysticism here: Digital intuition is merely the logical continuation of the idea of machine learning. When the number and complexity of related self-learning systems exceeds a certain threshold, the quality of decision-making rises to a whole new level — a level that’s completely elusive to rational understanding.

An “intuitive solution” results from the superimposition of the experience of a huge number of machine-learning models, much like the result of the calculations of a quantum computer.

So, as you can see, it has been digital intuition, with its ability to instantly, correctly respond to unknown challenges that has helped build the digital security standards of this new era.

 



The Human-Machine Partnership by Erik Brynjolfsson + Rana el Kaliouby

At the Dell Technologies World 2019, we were lucky enough to snag a seat at the talk by MIT Professor Erik Brynjolfsson; and MIT alumna and Affectiva CEO, Rana el Kaliouby, on human-machine partnership.

We managed to record the incredibly insightful session for everyone who could not make it for this exclusive guru session. This is a video you must not miss!

 

The DTW 2019 Guru Sessions

One of the best reasons to attend Dell Technologies World 2019 is the guru sessions. If you are lucky enough to reserve a seat, you will have the opportunity to listen to some of the world’s most brilliant thinkers and doers.

 

The Human-Machine Partnership

The talk on human-machine partnership by Professor Brynjolfsson and Ms. el Kaliouby was the first of several guru sessions at Dell Technologies World 2019.

Entitled “How Emerging Technologies & Human Machine Partnerships Will Transform the Economy“, it focused on how technology changed human society, and what the burgeoning efforts in artificial intelligence will mean for humanity.

Here are the key points from their guru session on the human-machine partnership :

Erik Brynjolfsson (00:05 to 22:05) on the Human-Machine Partnership

  • You cannot replace old technologies with new technologies, without rethinking the organisation or institution.
  • We are now undergoing a triple revolution
    – a rebalancing of mind and machine through Big Data and Artificial Intelligence
    – a shift from products to (digital) platforms
    – a shift from the core to crowd-based decision making
  • Shifting to data-driven decision-making based on Big Data results in higher productivity and greater profitability.
  • Since 2015, computers can now recognise objects better than humans, thanks to rapid advances in machine learning.
  • Even machine-based speech recognition has become as accurate as humans from 2017 onwards.
  • While new AI capabilities are opening up new possibilities in many fields, they are also drastically reducing or eliminating the need for humans.
  • Unlike platforms of the past, the new digital networks leverage “two-sided networks”. In many cases, one network is used to subsidise the other network, or make it free-to-use.
  • Shifting to crowd-based decision-making introduces diversity in the ways of thinking, gaining new perspectives and breakthroughs in problem-solving.
  • Digital innovations have greatly expanded the economy, but it doesn’t mean that everyone will benefit. In fact, there has been a great decoupling between the productivity and median income of the American worker in the past few decades.

Rana el Kaliouby (22:08 to 45:05) on the Human-Machine Partnership

  • Human communication is mostly conveyed indirectly – 93% is non-verbal. Half of that is facial expressions and gestures; the other half is vocal intonation.
  • Affectiva has the world’s largest emotion repository, with 5 billion frames of 8 million faces from 87 countries.
  • Facial expressions are largely universal, but there is a need for diversity in their data to avoid bias in their models. For example, there are gender differences that vary by culture.
  • They use computer vision, machine learning and deep learning to create an Emotional AI model that learns from all those facial expressions to accurately determine a person’s emotions.
  • Emotional artificial intelligence has many real-world or potential uses
    – detecting dangerous driving, allowing for proactive measures to be taken
    – personalising the ride in a future robot-taxi or autonomous car
    – creating more engaging and effective social robots in the retail and hospitality industries
    – helping autistic children understand how facial expressions correspond to emotions, and learn social cues

 

Erik Brynjolfsson + Rana el Kaliouby

Professor Erik Brynjolfsson wears many hats. He is currently :

  • Professor at the MIT Sloan School of Management,
  • Director of the MIT Initiative on the Digital Economy,
  • Director of the MIT Center for Digital Business, and
  • Research Associate at the National Bureau of Economic Research

Rana el Kaliouby was formerly a computer scientist at MIT, helping to form their Autism & Communication Technology Initiative. She currently serves as CEO of Affectiva, a spin-off from MIT’s Media Lab that focuses on emotion recognition technology.

 



Microsoft Build 2019 : New Azure Technologies Unveiled!

A host of new Microsoft Azure technologies for developers have been announced at the Microsoft Build 2019 conference, which took place in Seattle. Here is a primer on what they announced!

 

Microsoft Build 2019 : New Azure Technologies Unveiled!

With nearly 6,000 developers and content creators attending Microsoft Build 2019 in Seattle, Microsoft announced a series of new Azure services – spanning hybrid cloud and edge computing – to support them. They include advanced technologies such as :

  • Artificial Intelligence (AI)
  • Mixed reality
  • IoT (Internet of Things)
  • Blockchain

 

Microsoft Build 2019 : New Azure AI Technologies

First of all, they unveiled a new set of Microsoft Azure AI technologies to help developers and data scientists utilize AI as a solution :

  • Azure Cognitive Services, which will enable applications to see, hear, respond, translate, reason and more.
  • Microsoft will add the “Decision” function to Cognitive Services to help users make decisions through highly specific and customized recommendations.
  • Azure Search will also be further enhanced with an AI feature.

 

Microsoft Build 2019 : New Microsoft Azure Machine Learning Innovations

Microsoft Azure Machine Learning has been enhanced with new machine learning innovations designed to simplify the building, training and deployment of machine learning models. They include :

  • MLOps capabilities with Azure DevOps
  • Automated ML advancements
  • Visual machine learning interface

Microsoft Build 2019 : New Edge Computing Solutions

Microsoft also aims to boost edge computing by introducing these new solutions:

  • Azure SQL Database Edge
  • IoT Plug and Play
  • HoloLens 2 Developer Bundle
  • Unreal Engine 4

Microsoft Build 2019 : Azure Blockchain Service

The Azure Blockchain Workbench, which Microsoft released last year to support development of blockchain applications, has been further enhanced this year with the Azure Blockchain Service.

Azure Blockchain Service is a tool that simplifies the formation and management of consortium blockchain networks so companies only need to focus on app development.

J.P. Morgan’s Ethereum platform was introduced by Microsoft as the first ledger available in the Azure Blockchain Service.

 



The 2019 Dell EMC Global Data Protection Index Summarised!

The 2019 Dell EMC Global Data Protection Index is out! Here is a summary of its key findings!

 

The 2019 Dell EMC Global Data Protection Index

The 2019 Dell EMC Global Data Protection Index is the third survey conducted by Dell EMC in collaboration with Vanson Bourne.

The survey involved 2,200 IT decision makers from public and private organisations (of 250+ employees) across 18 countries and 11 industries. It was designed to reveal the state of data protection in the Asia Pacific and Japan region.

 

What Did The 2019 Data Protection Index Reveal?

The 2019 Dell EMC Global Data Protection Index revealed a large increase in the amount of data managed – from 1.68 petabytes in 2016 to a staggering 8.13 petabytes in 2018.

They also saw a corresponding increase in awareness about the value of data, with 90% of the respondents aware of the value of the data they manage. However, only 35% are monetising their data.

The Index also noted that despite an impressive jump in the number of data protection leaders (from 1% to 13%) and “adopters” (from 8% to 53%) since 2016, most of the survey respondents still face challenges in implementing the right data protection measures.

  • Organisations in Asia Pacific & Japan managed 8.13 PB of data in 2018 – an explosive growth of 384% compared to the 1.68 PB managed in 2016
  • 90% of businesses see the potential value of data but only 35% are monetising it
  • 94% face data protection challenges, and 43% struggle to find suitable data protection solutions for newer technologies like artificial intelligence and machine learning
  • More than a third (34%) of respondents are very confident that their data protection infrastructure is compliant with regional regulations, but only 18% believe their data protection solutions will meet all future challenges
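The 384% growth figure in the first bullet can be checked from the raw numbers, noting that percentage growth is measured against the 2016 baseline (a quick illustrative sketch):

```python
# Data managed by APJ organisations, per the 2019 Index
data_2016_pb = 1.68  # petabytes in 2016
data_2018_pb = 8.13  # petabytes in 2018

# Growth relative to the 2016 baseline
growth_pct = (data_2018_pb - data_2016_pb) / data_2016_pb * 100
print(f"{growth_pct:.0f}% growth")  # 384% growth - matching the Index's figure
```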

 

The State Of Data Protection In APJ

Data disruptions and data loss happen more frequently in APJ organisations than the global average. Some 80% of the APJ respondents reported experiencing some type of disruption over the last 12 months.

This is higher than the global average of 76%. Even worse – 32% were unable to recover their data using existing data protection solutions.

Although system downtime is a problem, the loss of data is particularly expensive. On average, 20 hours of downtime cost businesses US$ 494,869. The average data loss of 2.04 terabytes, on the other hand, costs nearly twice as much at US$ 939,703.
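The unit costs implied by those averages can be worked out with simple arithmetic (this is our own back-of-the-envelope calculation, not a figure reported by the Index itself):

```python
# Averages reported by the survey
downtime_cost_usd, downtime_hours = 494_869, 20    # US$, hours of downtime
data_loss_cost_usd, data_loss_tb = 939_703, 2.04   # US$, terabytes lost

# Implied unit costs (illustrative arithmetic only)
cost_per_hour = downtime_cost_usd / downtime_hours  # ≈ US$24,743 per hour of downtime
cost_per_tb = data_loss_cost_usd / data_loss_tb     # ≈ US$460,639 per terabyte lost
```

In other words, while a data loss incident costs roughly twice as much as a downtime incident in total, the per-terabyte cost of lost data is far higher than the per-hour cost of downtime.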

 

Challenges To Data Protection In APJ

The vast majority of respondents (some 94%) report that they encounter at least one barrier to data protection. The top three challenges in APJ were determined to be :

  1. The inability to keep track of and protect all data because of the growth of DevOps and cloud development – 46% agree
  2. The complexity of configuring and operating data protection software/hardware – 45.6% agree
  3. The lack of data protection solutions for emerging technologies – 43.4% agree

They also struggled to find adequate data protection solutions for newer technologies :

  • Artificial intelligence and machine learning data – 54% agree
  • Cloud-native applications – 49% agree
  • Internet of Things – 40% agree

 

Cloud Is Changing The Data Protection Landscape


According to the 2019 Dell EMC Global Data Protection Index, organisations have increased their use of public cloud services – up from 27% in 2016 to 41% in 2018.

Nearly all (99%) of the organisations using public cloud are leveraging it as part of their data protection strategy. The top use case – backup or snapshot services to protect data and workloads.

More than 60% of the respondents also consider the scalability of data protection solutions important, in anticipation of the inevitable boom of cloud workloads.

 

Regulation Is Not A Key Concern

Compliance with data privacy regulations like the EU’s General Data Protection Regulation (GDPR) is not a key concern for most of these organisations. Only 36% listed it as a top data protection challenge.

 



SUFECS – Imagine Cup Asia 2019 People’s Choice Winner!

Team Caeli from India may have won the 2019 Imagine Cup Asia, but another team stood out from the crowd of truly excellent teams – Team SUFECS from Malaysia. In fact, they impressed Microsoft Ignite Sydney participants so much, they gave the team an award!

Find out what the SUFECS project is all about, and why the team won the Microsoft Ignite People’s Choice Award!

Updated @ 2019-02-25 : Added a new video of Team SUFECS’s pitch at the Imagine Cup Asia 2019, with additional details on the SUFECS system.

Originally posted @ 2019-02-20

 

Team SUFECS From UTHM, Malaysia

The SUFECS team was formed by four students from Universiti Tun Hussein Onn Malaysia (UTHM).

  • Seah Choon Sen : Team Leader & Data Scientist
  • Tan Wei Yang : Technology Developer
  • Mek Zi Cong : Operations
  • Muhammad Adam bin Mazlan : Web Developer

 

What Is SUFECS All About?

Short for Smart Urban Farming with Automated Environmental Controlled System, SUFECS was developed to make urban farming easier and more accessible to both entrepreneurs and the common people.

Built to automate hydroponic vertical farming, it will continuously monitor and automatically adjust critical environmental parameters to ensure optimal growth. To do that, this enterprising team leveraged IoT devices, cloud computing and machine learning.

  • Multiple electronic sensors monitor conditions within the hydroponic system
  • A control module sends the environmental parameters to Microsoft Azure for processing and logging
  • Deviations from optimal conditions are automatically detected and corrected
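The monitor-and-correct loop described above can be sketched in a few lines of Python. This is purely illustrative – the parameter names, optimal ranges, readings and actuator commands are all hypothetical assumptions, not the actual SUFECS implementation:

```python
# Hypothetical optimal ranges for some of the monitored parameters
OPTIMAL_RANGES = {
    "water_temperature": (18.0, 24.0),  # °C
    "ph": (5.5, 6.5),
    "humidity": (50.0, 70.0),           # %
}

def read_sensors():
    # Placeholder for real sensor reads sent to the cloud for logging
    return {"water_temperature": 26.1, "ph": 6.0, "humidity": 65.0}

def correct(parameter, reading, low, high):
    # Placeholder for actuator commands (pumps, fans, dosing, etc.)
    action = "raise" if reading < low else "lower"
    return f"{action} {parameter}"

def control_cycle():
    """One pass: read sensors, detect deviations, issue corrections."""
    readings = read_sensors()
    actions = []
    for parameter, reading in readings.items():
        low, high = OPTIMAL_RANGES[parameter]
        if not (low <= reading <= high):
            actions.append(correct(parameter, reading, low, high))
    return readings, actions
```

In a real deployment, a loop like this would run continuously on the control module, with readings and actions logged to Microsoft Azure as the article describes.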

The team has demonstrated that SUFECS not only makes it easy for anyone to grow their own crops in an urban, indoor environment, it also significantly speeds up growth, reducing the time between planting and harvest.

They also showed that SUFECS successfully increased plant size and crop yield. This means their system will not only make it easier to set up and maintain an urban hydroponic farm of any size, it will also create bigger and better crops.

 

What Can SUFECS Monitor & Control?

SUFECS is currently able to monitor and control six environmental parameters :

  • ambient temperature,
  • water temperature,
  • humidity level,
  • water level,
  • pH value, and
  • ambient light.

It will also monitor plant height, and alert you when the plants are ready for harvest.

The team is working to add the ability to monitor and control two additional environmental parameters :

  • electrical conductivity
  • dissolved oxygen level

The team has, so far, developed datasets for five different plant types using machine learning.

 

SUFECS Wins People’s Choice Award @ Microsoft Ignite Sydney

Microsoft held Microsoft Ignite Sydney a day after Imagine Cup Asia, where they showcased the projects of the top 12 Asian teams.

At the end of Ignite, Team SUFECS received the most votes, and won the People’s Choice Award! Congratulations!

 



These 2019 Imagine Cup Asia Winners Will Change The World

Asia has chosen its representative to the 2019 Imagine Cup World Championships – Team Caeli from India! They also honoured Team RailinNova from China and Team AidUSC from the Philippines. Congratulations!

 

What Is The Imagine Cup?

Held and sponsored by Microsoft since 2003, the Imagine Cup is the world’s premier student technology competition.

Teams of students from across the globe work together with mentors and industry leaders to bring their biggest and boldest ideas to life.

The five Imagine Cup judges evaluated the merits of these 12 top teams in a three-step process :

  • a 3-minute product presentation
  • 5 minutes of hands-on time
  • a Q&A session

To avoid bias and group-think, the judges evaluated the teams independently, and even used an application to anonymously tabulate the results.

 

The 2019 Imagine Cup Asia Winners

The 2019 Imagine Cup Asia Champion : Team Caeli from India

Team Caeli won over the hearts of the judges with their automated Anti-Pollution and Drug delivery mask, specifically designed for patients suffering from asthma and other chronic respiratory conditions.

Their face mask had breakthrough features that will significantly improve the quality of life for patients with respiratory issues living in polluted areas.

Team Caeli will receive USD15,000 and head to the 2019 Imagine Cup World Championship, which will be held in Seattle in May.

If they win the World Championship, they will win USD100,000 in cash, a USD50,000 Azure grant, and a mentoring session with Microsoft CEO Satya Nadella!

1st Runner-Up : Team RailinNova from China

Team RailinNova developed the Rail Component Inspection Robot, which detects and identifies rail defects through multi-sensor monitoring. The solution strives to help railway companies solve issues more efficiently and more economically.

For their excellence, Team RailinNova will receive a USD5,000 cash prize.

2nd Runner-Up : Team AidUSC from the Philippines

Team AidUSC came up with Aqua Check, a water contamination mobile application that detects contamination from a photo of a water sample taken through a microscope. It won them third place, and a USD1,000 cash prize.

 

Special Mention : The Other 2019 Imagine Cup Asia Teams

Having watched all 12 teams in action, we must point out that all 12 Asian teams were truly exceptional. They impressed us, not only with their depth of knowledge, but also their poise and ability to think out-of-the-box in the conception and execution of their projects.

We also admire how their projects were created with social impact in mind. This gives us great hope in the new generation of entrepreneurs, who are seeking to improve society as a whole. We are all winners with young entrepreneurs of their caliber.

 



The 2019 Imagine Cup Asia Teams Introduce Themselves!

On the eve of the 2019 Imagine Cup Asia competition in Sydney, we met with the top 12 Asian teams that will compete for a coveted spot in the 2019 Imagine Cup World Championship!

Let’s take a look at the twelve awesome Asian teams, and see the innovative ideas they will be pitching in the 2019 Imagine Cup Asia Regional Finals!

 

What Is The Imagine Cup?

Held and sponsored by Microsoft since 2003, the Imagine Cup is the world’s premier student technology competition. Teams of students from across the globe work together with mentors and industry leaders to bring their biggest and boldest ideas to life.

 

The 2019 Imagine Cup Asia Regional Finals

This year, Microsoft organised the 2019 Imagine Cup Asia Regional Finals in Sydney, Australia. Hundreds of teams from 17 Asian countries submitted their projects, but just twelve great teams won a shot to participate in the Asia Regional Finals.

These twelve teams will compete for US$20,000 in prizes on 12 February, but only one team will win the ultimate prize – an all-expenses paid trip to the World Finals in Seattle!

There, the 2019 Imagine Cup Asia Regional Champion will compete alongside the best and brightest teams from across the globe to claim the title of World Champion, a US$100,000 cash prize, and the chance to take home the Imagine Cup!

 

The 2019 Imagine Cup Asia Regional Finalists


RailinNova

Country : China

Project : Rail Component Inspection Robot

Their Rail Component Inspection Robot, which combines AI and IoT, positions itself automatically and identifies various defects through multi-sensor fusion, aiming to replace human workers in rail inspection.


Alpha-India

Country : India

Project : Spot – AR Based Product Filtering

Spot allows you to recognise packaged foods and check if they contain a certain ingredient or exhibit a certain characteristic.

Tourists visiting India often cannot tell what they can eat, because packet information is written in an unfamiliar language.


Caeli

Country : India

Project : Caeli – Breathe Freely

Caeli is a smart automated Anti-Pollution and Drug delivery mask specifically designed for Asthmatic and Chronic Respiratory Patients.

Caeli implements breakthrough features to improve the quality of life for respiratory patients living in polluted areas.


RVSAFE

Country : India

Project : RVSAFE

Disasters often strike when we are least prepared to face them. They leave behind a trail of destruction, adversely affecting human life and property.

The loss caused by disasters can be significantly reduced with better communication and proper management. Keeping this in mind, we designed RVSAFE, a one-stop solution for effectively handling any kind of disaster (natural or man-made).


CodeSell

Country : Indonesia

Project : Sellution – Social Media

Sellution is a software as a service (SaaS) that helps SMEs perform social media marketing in a way that is not just easy, but also effective and efficient.

Sellution’s main features are optimizing marketing content, helping find the right audience, and providing recommendations.


Fhisherman

Country : Korea

Project : Fishing Phishing

Fishing Phishing by the Fhisherman team from Korea is a smartphone application that uses Machine Learning to analyse call voices in real-time.

It is designed to detect scam calls and warn the users!


SUFECS

Country : Malaysia

Project : Smart Urban Farming with Automated Environmental Controlled Systems (SUFECS)

SUFECS was developed to transform the farming experience of urban farmers.

With SUFECS, farmers can monitor and control the artificial environment to achieve the most suitable environment for crops.


LookUP

Country : New Zealand

Project : LookUP

It is estimated that one in five people in the world are dyslexic. However, most Q&A platforms are completely text-based.

LookUP is a medium in which the dyslexic and non-dyslexic communities can effectively collaborate and learn from one another.


AidUSC

Country : The Philippines

Project : Aqua Check – Water Contamination Mobile Application

Aqua Check utilises Microsoft Azure’s Custom Vision to empower anyone to analyse for contamination by taking a photo of a water sample through a microscope.

Using Azure Web and Azure Maps, we are able to map the contamination locations.


InclusiveAR

Country : Singapore

Project : Mobile Augmented Reality Navigation Application for Wheelchair Users

This project aims to develop a mobile application, InclusiveAR, to assist wheelchair users in travelling.

InclusiveAR will map out wheelchair-accessible routes and provide visual guidance to direct wheelchair users to their destinations using AR.


The Straw Hats

Country : Sri Lanka

Project : Mind Probe

Our project aims to help people with disabilities like ALS and DMD, which impair their ability to communicate.

We tap into their brain waves to predict the number they are thinking of, and use that information to interface with a smartphone.


Maker Playground

Country : Thailand

Project : Maker Playground

Maker Playground is a next-generation IDE for IoT project development, combining device firmware development, circuit diagram generation, device programming, and IoT dashboard design in a single application.

 

See You @ The 2019 Imagine Cup Asia!

Congratulations to the 12 awesome teams!

Later today, they will present their projects at the 2019 Imagine Cup Asia Regional Finals… and by 5 PM, we will find out who the 2019 Imagine Cup Asia Regional Champion will be!

 



Learn How Expedia Finds The Best Flights In Just 3 Seconds!

Expedia is the world’s largest travel company, and you might be wondering – how on earth do they scan through quadrillions of flight plans every day, to deliver the best possible flight in mere seconds? Let’s find out!

 

Learn How Expedia Finds The Best Flights In Just 3 Seconds!

You may be familiar with how easy it is to book a flight on Expedia. Just key in your flight requirements and in mere seconds, Expedia delivers the best flight options for you. But that simplicity is backed by serious data science.

Gabriel Garcia, Expedia’s Global Head of Mobile Apps Marketing and APAC Head of Marketing, flew into town to give us a look at how Expedia performs its magic. And this is magic of the scientific sort.

Here are some key points from his presentation and Q&A session with us :

  • The Expedia mobile app was downloaded more than 250 million times by Q4 2017.
  • Expedia gets about 30 billion searches per year, or about 82 million searches per day.
  • More than 50% of the traffic on Expedia is from mobile devices.
  • 1 in 3 bookings are made using mobile devices.
  • Expedia uses data science and machine learning to make sure every search delivers the best possible options, out of the billions of possibilities.
  • In particular, they use the Best Fare Search (BFS) algorithm to narrow down millions of possibilities for every search to the 1000 most relevant options… in just 3 seconds.
  • They have hundreds of data scientists and analysts who develop new analytical and predictive algorithms.
  • They have also migrated to an agile development process, with weekly incremental changes, instead of quarterly updates. This allows them to test new ideas and adapt to changes much faster.
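The exact Best Fare Search algorithm is proprietary, but the core idea of narrowing a huge candidate set down to the k most relevant options can be sketched with a simple top-k selection. The scoring function below is a hypothetical stand-in for whatever mix of price, duration and stops a real fare-search engine would use:

```python
import heapq

def top_k_fares(itineraries, k=1000):
    """Keep only the k lowest-scoring (most relevant) itineraries.

    'score' is an assumed stand-in for a real relevance metric;
    lower is better in this sketch.
    """
    return heapq.nsmallest(k, itineraries, key=lambda it: it["score"])

# Hypothetical candidate itineraries with synthetic scores
candidates = [{"id": i, "score": (i * 37) % 101} for i in range(10_000)]
best = top_k_fares(candidates, k=5)
```

A heap-based top-k selection like this runs in O(n log k) time, which is why the relevant few can be pulled from millions of candidates so quickly; the real system would additionally prune the search space before scoring anything at all.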

 



Samsung ConZNet Algorithm Tops Two AI Challenges!

Samsung Research, the advanced Research & Development (R&D) hub of Samsung Electronics, has dedicated substantial effort to creating ground-breaking AI technologies. And it has succeeded with the ConZNet algorithm. Here’s the low-down!

 

Samsung ConZNet Algorithm Tops Two AI Challenges

The Samsung Research R&D team used their ConZNet algorithm to rank first in the MAchine Reading COmprehension (MS MARCO, by Microsoft) competition, and won “Best Performance” in TriviaQA, which is hosted by the University of Washington.

MS MARCO and TriviaQA are among the most actively researched and widely used machine reading comprehension competitions in the world. In these competitions, AI algorithms are tested on their ability to process natural-language human questions and answers, drawing on written text from various types of documents such as news articles and blog posts.

Competitions such as MS MARCO and TriviaQA allow contestants to participate at any time, with rankings updated according to real-time test results.

What Is The ConZNet Algorithm?

Samsung Research’s ConZNet algorithm advances machine intelligence by giving reasonable feedback on outcomes, similar to a carrot-and-stick (or reinforcement) strategy in the learning process. ConZNet also accounts for natural language as people actually deliver queries and answers online, which was the key factor in determining the winners of these competitions.

What Are The Potential Uses Of ConZNet?


With this, there is strong potential to introduce Samsung Research’s AI algorithm to other departments in Samsung Electronics, such as Home Appliances and Smartphones.

Apart from that, departments dealing with customer services are also showing high interest in the AI, especially since AI-based customer services like chatbots have emerged as hot topics in recent times.

Samsung AI Centers

Samsung also revealed that they have begun launching global AI Centers, to collaborate with leading AI experts. Eventually, they hope the AI technologies developed by Samsung Research will be adopted and integrated into Samsung Electronics products and services.


The SAP Leonardo Digital Innovation System Explained!

Great companies are created on the back of innovative products or services. But to maintain their greatness, these companies need to keep innovating. This is becoming increasingly difficult because the development of new technologies is outpacing the ability of many companies to adapt, much less innovate. This is where SAP Leonardo comes in.

Join us for exclusive in-depth presentations on SAP Leonardo. Find out what SAP Leonardo is all about, and how it can help companies innovate faster and better!

 

What Is SAP Leonardo?

SAP introduced Leonardo at the 2017 SAPPHIRE NOW conference, combining six key digital innovation technologies into a single “digital innovation system” – machine learning, blockchain technology, data intelligence, Big Data, Internet of Things, and Analytics.

In addition to those technologies, Leonardo also offers Design Thinking methodologies, data intelligence tools, benchmarking and more. For speedy implementation, SAP even offers “Leonardo Accelerator Packages” that are tailored for specific industries and core functions, such as IoT.

 

SAP Leonardo Centers

Since its introduction in 2017, SAP has created five SAP Leonardo Centers worldwide, with SAP Leonardo Center Singapore as the latest addition. These are interconnected hubs that serve as points of contact for companies and startups interested in SAP Leonardo.

They are part showcase, and part collaborative centers. Their real-world examples of digital innovation using Leonardo serve as inspiration for SAP customers and partners. They also offer a place for businesses and startups to experiment and innovate, jointly or in collaborative efforts.

 

Innovating With Purpose

Scott Russell, President of SAP APJ, kicked things off with his keynote speech, Asia Soaring : Innovating with Purpose.

 

Tour & Real-World Examples

For those who missed our coverage of SAP Leonardo Center Singapore, check out this tour of the SAP Leonardo demos by Thorsten Vieth, Director of Industry Innovation at SAP South East Asia. These are actual real-world examples of Leonardo at work.


Becoming An Intelligent Enterprise With SAP Leonardo

Mala Anand, President of SAP Leonardo and Analytics, now explains how to become an Intelligent Enterprise with Leonardo. If you want to understand what SAP Leonardo is about, and how it can help your company, you must watch this!

 

Innovation, Integration & Scaling For CFOs

This presentation by Richard McLean, CFO of SAP APJ, is specific for CFOs (Chief Financial Officers). He brought in Anja Langhoff, Head of Finance Shared Services (SAP Singapore), who shared how machine learning technology helped improve the efficiency and employee satisfaction of her team. Manik Saha, CIO of SAP APJ, and McLean, also shared about the use of Leonardo analytics in the finance department.

 

Improving Customer Experience

Maggie Buggie, SVP and Global Head of SAP Leonardo Services, explains how SAP Leonardo technologies can help businesses improve customer experience.

 

The Hanon Systems Story

Hanon Systems, a manufacturer of automotive thermal and energy management solutions, was one of the first SAP Leonardo customers in South Korea. Robert Oh, their Global CIO & Business Transformation Executive, explained how Hanon Systems plans to launch a pilot program of Leonardo to achieve real-time visibility and monitoring capabilities, for better productivity as well as preventive maintenance.

Hanon Systems also plans to use Leonardo to automate data analysis of product defects to mitigate risks, and improve production quality. In short, Leonardo is expected to help them create smart factories through real-time monitoring of sensor data, while improving their products using predictive analytics of their product quality data.

 

Connecting Innovation To The Digital Core

Paul Marriott, SVP of Digital Core & Industry Solutions, SAP APJ, summed up the key points of the previous presentations on Leonardo, before demonstrating how artificial intelligence allows businesses to access business intelligence more easily through digital assistants.


Exclusive Tour Of SAP Leonardo Center Singapore

On 8 May 2018, SAP SE (NYSE: SAP) announced the establishment of the SAP Leonardo Center Singapore, to help its customers and partners innovate faster, and to foster innovation throughout the broader ecosystem of universities and start-ups across the Asia Pacific and Japan (APJ) region.

Join us for an exclusive look at the new SAP Leonardo Center Singapore, including a tour of the digital innovations that SAP Leonardo offers enterprises and businesses.

 

SAP Leonardo Center Singapore

SAP Leonardo Centers are interconnected hubs that serve as points of contact for companies and startups interested in SAP Leonardo. They are part showcase, and part collaborative centers. Their real-world examples of digital innovation using SAP Leonardo serve as inspiration for SAP customers and partners. They also offer a place for businesses and startups to experiment and innovate, jointly or in collaborative efforts.

Leonardo Center Singapore expands SAP’s presence in the APJ region, which already boasts three SAP Innovation Centers and four SAP Labs. It will be the fifth in the global network of SAP Leonardo Centers, serving as a “front-end” to assist SAP customers in transforming their businesses with SAP Leonardo and Design Thinking.

SAP Leonardo brings together the Internet of Things (IoT), Machine Learning, Blockchain technology, Big Data Analytics and Data Intelligence on the SAP Cloud Platform. It also applies SAP’s technological capabilities and deep knowledge of 25 industries to deliver the promise of Intelligent Enterprise.

“The SAP Leonardo Center in Singapore will showcase the art of the possible in digital innovation and help our customers scale quickly, easily and effectively,” said Scott Russell, President, SAP APJ. “Together with our customers and partners, we aim to leverage the SAP Leonardo Center Singapore as a think tank to drive purpose-led innovation that will ultimately improve the lives of one billion people and deliver the Intelligent Enterprise for over 70,000 customers in APJ by 2022. The SAP Leonardo Center in Singapore will play a key role in realizing our growth strategy and drive customer success in the new Intelligence era.”

 

Collaborative Hub


The SAP Leonardo Center Singapore aims to foster a collaborative environment for businesses, start-ups, small and medium-sized enterprises to experiment and innovate.

It will also serve as a hub for SAP’s broader digital technology ecosystem including universities, startups, tech communities and accelerators.

Through the SAP University Alliances program, SAP APJ helps to educate 1.7 million students in educational institutions across the region. SAP APJ has also established 13 labs in APJ with plans to open more in the future.

 

Tour Of SAP Leonardo Center Singapore

Thorsten Vieth, Director of Industry Innovation at SAP South East Asia, took us on a tour of SAP Leonardo Center Singapore. SAP created various booths to showcase real world examples of SAP Leonardo at work.

They include traffic management of an entire city, how a smart stadium manages crowds, predictive maintenance of smart trains, and ensuring safety of drivers and passengers in smart buses.

They also showed how machine learning can be used to simplify the maintenance of home appliances with IoT capability, while providing a transparent and secure service record using blockchain technology. Such capabilities not only improve the customer experience, they also reduce costs for businesses.


Sophos Intercept X with Predictive Protection Explained!

Sophos today announced the availability of Intercept X with malware detection powered by advanced deep learning neural networks. Join us for a briefing by Sumit Bansal, Sophos Managing Director for ASEAN and Korea!

 

Sophos Intercept X with Predictive Protection

Combined with new active-hacker mitigation, advanced application lockdown, and enhanced ransomware protection, this latest release of the Sophos Intercept X endpoint protection delivers previously unseen levels of detection and prevention.

Deep learning is the latest evolution of machine learning. It delivers a massively scalable detection model that is able to learn the entire observable threat landscape. With the ability to process hundreds of millions of samples, deep learning can make more accurate predictions at a faster rate with far fewer false-positives when compared to traditional machine learning.

This new version of Sophos Intercept X also includes innovations in anti-ransomware and exploit prevention, and active-hacker mitigations such as credential theft protection. As anti-malware has improved, attacks have increasingly focused on stealing credentials in order to move around systems and networks as a legitimate user, and Intercept X detects and prevents this behavior.

Deployed through the cloud-based management platform Sophos Central, Intercept X can be installed alongside existing endpoint security software from any vendor, immediately boosting endpoint protection. When used with the Sophos XG Firewall, Intercept X can introduce synchronized security capabilities to further enhance protection.

 

New Sophos Intercept X Features

Deep Learning Malware Detection

  • Deep learning model detects known and unknown malware and potentially unwanted applications (PUAs) before they execute, without relying on signatures
  • The model is less than 20 MB and requires infrequent updates

Active Adversary Mitigations

  • Credential theft protection – Prevents theft of authentication passwords and hash information from memory, registry, and persistent storage, as leveraged by attacks such as Mimikatz
  • Code cave utilization – Detects the presence of code deployed into another application, often used for persistence and antivirus avoidance
  • APC protection – Detects abuse of Asynchronous Procedure Calls (APC), often used as part of the AtomBombing code injection technique and more recently as the method of spreading the WannaCry worm and NotPetya wiper via EternalBlue and DoublePulsar (adversaries abuse these calls to get another process to execute malicious code)

New and Enhanced Exploit Prevention Techniques


  • Malicious process migration – Detects remote reflective DLL injection used by adversaries to move between processes running on the system
  • Process privilege escalation – Prevents a low-privilege process from being escalated to a higher privilege, a tactic used to gain elevated system access

Enhanced Application Lockdown

  • Browser behavior lockdown – Intercept X prevents the malicious use of PowerShell from browsers as a basic behavior lockdown
  • HTA application lockdown – HTML applications loaded by the browser will have the lockdown mitigations applied as if they were a browser


Dr. Min Wanli : Alibaba Cloud’s ET City Brain & Tianchi Platform

At the MDEC-Alibaba Cloud announcement of their collaboration to implement the Malaysia City Brain, Dr. Min Wanli gave a quick overview of the ET City Brain and the Tianchi platform for crowd intelligence.

 

Who Is Dr. Min Wanli?

Dr. Min Wanli (also known as Dr. Wanli Min) is the Chief Scientist of Machine Intelligence at Alibaba Cloud.

He holds a Ph.D in statistics from the University of Chicago, was a researcher at IBM’s T.J. Watson Research Center and a senior statistician at Google. He now oversees Alibaba Cloud’s artificial intelligence projects.

 

ET City Brain & The Tianchi Platform

ET is an Alibaba Cloud designation that refers to artificial intelligence services that “can be broadly applied to different areas in society”. It accomplishes this by leveraging the crowd intelligence capabilities of the Tianchi platform.

Dr. Min Wanli showcased how the Hangzhou City Brain master plan eases traffic congestion in the city, using an ambulance as an example. The Hangzhou City Brain will predict the traffic conditions for the next 30-60 minutes and determine the most efficient route for the ambulance.

It also synchronises the traffic signals so that they will turn green 10 seconds before the ambulance arrives, allowing it to pass without stopping. This not only cuts the ambulance’s arrival time, it also reduces risk of accidents.
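The green-wave mechanism described above can be sketched as a toy calculation. This is an illustration only, with hypothetical junction names and arrival times; the actual City Brain works from live traffic predictions:

```python
# Toy sketch of signal pre-emption: given a predicted arrival time (in seconds)
# at each junction, switch the light to green a fixed lead time beforehand.

GREEN_LEAD_SECONDS = 10  # the 10-second lead quoted in the example above

def green_switch_times(predicted_arrivals):
    """Map junction -> time (s from now) at which its signal should turn green."""
    return {junction: max(0, eta - GREEN_LEAD_SECONDS)
            for junction, eta in predicted_arrivals.items()}

# An ambulance predicted to reach three junctions 45 s, 90 s and 140 s from now:
plan = green_switch_times({"J1": 45, "J2": 90, "J3": 140})
# -> {"J1": 35, "J2": 80, "J3": 130}
```

Each junction simply subtracts the lead time from the predicted arrival, so the ambulance always meets a light that has just turned green.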

 

The Malaysia City Brain

Dr. Min Wanli was here as part of the Alibaba Cloud delegation to announce their collaboration with the Malaysia Digital Economy Corporation (MDEC) to introduce the Malaysia City Brain.


In the first phase of implementation, the Malaysia City Brain will be used in Kuala Lumpur’s traffic management. It will begin with a base of 382 cameras, and input from 281 traffic light junctions – all located in central Kuala Lumpur.

Using cloud computing and big data processing capabilities, the Malaysia City Brain will be able to optimise the flow of vehicles and timing of traffic signals.

It will also be able to generate structured summaries of data like traffic volume, and speed according to lanes, which can be used to facilitate other tasks, including traffic accident detection.
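The kind of per-lane structured summary described above can be sketched in a few lines of plain Python. This is a toy aggregation to show the idea, not the actual City Brain pipeline:

```python
# Aggregate per-lane vehicle observations into a structured summary of
# traffic volume and average speed per lane.
from collections import defaultdict

def summarise_by_lane(observations):
    """observations: iterable of (lane, speed_kmh) tuples."""
    totals = defaultdict(lambda: [0, 0.0])  # lane -> [count, speed_sum]
    for lane, speed in observations:
        totals[lane][0] += 1
        totals[lane][1] += speed
    return {lane: {"volume": n, "avg_speed": s / n}
            for lane, (n, s) in totals.items()}

summary = summarise_by_lane([("L1", 40), ("L1", 60), ("L2", 30)])
# -> {"L1": {"volume": 2, "avg_speed": 50.0}, "L2": {"volume": 1, "avg_speed": 30.0}}
```

Summaries like this are cheap to compute per junction, which is what makes downstream tasks such as accident detection feasible in real time.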


 


NVIDIA TITAN V – The First Desktop Volta Graphics Card!

NVIDIA CEO Jensen Huang (recently anointed as Fortune 2017 Businessperson of the Year) made a surprise reveal at the NIPS conference – the NVIDIA TITAN V. This is the first desktop graphics card to be built on the latest NVIDIA Volta microarchitecture, and the first to use HBM2 memory.

In this article, we will share with you everything we know about the NVIDIA TITAN V, and how it compares against its TITANic predecessors. We will also share with you what we think could be a future NVIDIA TITAN Vp graphics card!

Updated @ 2017-12-10 : Added a section on gaming with the NVIDIA TITAN V.

Originally posted @ 2017-12-09


 

NVIDIA Volta

NVIDIA Volta isn’t exactly new. Back in GTC 2017, NVIDIA revealed NVIDIA Volta, the NVIDIA GV100 GPU and the first NVIDIA Volta-powered product – the NVIDIA Tesla V100. Jensen even highlighted the Tesla V100 in his Computex 2017 keynote, more than 6 months ago!

Yet there has been no desktop GPU built around NVIDIA Volta. NVIDIA continued to churn out new graphics cards built around the Pascal architecture – GeForce GTX 1080 Ti and GeForce GTX 1070 Ti. That changed with the NVIDIA TITAN V.

 

NVIDIA GV100

The NVIDIA GV100 is the first NVIDIA Volta-based GPU, and the largest they have ever built. Even using the latest 12 nm FFN (FinFET NVIDIA) process, it is still a massive chip at 815 mm²! Compare that to the GP100 (610 mm² @ 16 nm FinFET) and GK110 (552 mm² @ 28 nm).

That’s because the GV100 is built using a whopping 21.1 billion transistors. In addition to 5376 CUDA cores and 336 Texture Units, it boasts 672 Tensor cores and 6 MB of L2 cache. All those transistors require a whole lot more power – to the tune of 300 W.


 

The NVIDIA TITAN V

That’s V for Volta… not the Roman numeral V or V for Vendetta. Powered by the NVIDIA GV100 GPU, the TITAN V has 5120 CUDA cores, 320 Texture Units, 640 Tensor cores, and a 4.5 MB L2 cache. It is paired with 12 GB of HBM2 memory (3 x 4GB stacks) running at 850 MHz.

The blowout picture of the NVIDIA TITAN V reveals even more details :

  • It has 3 DisplayPorts and one HDMI port.
  • It has 6-pin + 8-pin PCIe power inputs.
  • It has 16 power phases, and what appears to be the Founders Edition copper heatsink and vapour chamber cooler, with a gold-coloured shroud.
  • There is no SLI connector, only what appears to be an NVLink connector.

Here are more pictures of the NVIDIA TITAN V, courtesy of NVIDIA.

 

Can You Game On The NVIDIA TITAN V? New!

Right after Jensen announced the TITAN V, the inevitable question was raised on the Internet – can it run Crysis / PUBG?

The NVIDIA TITAN V is the most powerful GPU for the desktop PC, but that does not mean you can actually use it to play games. NVIDIA notably did not mention anything about gaming, only that the TITAN V is “ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.”


In fact, the TITAN V is not listed in their GeForce Gaming section. The most powerful graphics card in the GeForce Gaming section remains the TITAN Xp.

Then again, the TITAN V uses the same NVIDIA Game Ready Driver as GeForce gaming cards, starting with version 388.59. Even so, it is possible that some or many games may not run well or properly on the TITAN V.

Of course, all this is speculative. The only way to crack this mystery is for someone to buy the TITAN V and use it to play some games!


 


The NVIDIA TITAN V Specification Comparison

Let’s take a look at the known specifications of the NVIDIA TITAN V, compared to the TITAN Xp (launched earlier this year), and the TITAN X (launched late last year). We also inserted the specifications of a hypothetical NVIDIA TITAN Vp, based on a full GV100.

| Specifications | Future TITAN Vp? | NVIDIA TITAN V | NVIDIA TITAN Xp | NVIDIA TITAN X |
|---|---|---|---|---|
| Microarchitecture | NVIDIA Volta | NVIDIA Volta | NVIDIA Pascal | NVIDIA Pascal |
| GPU | GV100 | GV100 | GP102-400 | GP102-400 |
| Process Technology | 12 nm FinFET+ | 12 nm FinFET+ | 16 nm FinFET | 16 nm FinFET |
| Die Size | 815 mm² | 815 mm² | 471 mm² | 471 mm² |
| Tensor Cores | 672 | 640 | None | None |
| CUDA Cores | 5376 | 5120 | 3840 | 3584 |
| Texture Units | 336 | 320 | 240 | 224 |
| ROPs | NA | NA | 96 | 96 |
| L2 Cache Size | 6 MB | 4.5 MB | 3 MB | 4 MB |
| GPU Core Clock | NA | 1200 MHz | 1405 MHz | 1417 MHz |
| GPU Boost Clock | NA | 1455 MHz | 1582 MHz | 1531 MHz |
| Texture Fillrate | NA | 384.0 to 465.6 GT/s | 355.2 to 379.7 GT/s | 317.4 to 342.9 GT/s |
| Pixel Fillrate | NA | NA | 142.1 to 151.9 GP/s | 136.0 to 147.0 GP/s |
| Memory Type | HBM2 | HBM2 | GDDR5X | GDDR5X |
| Memory Size | NA | 12 GB | 12 GB | 12 GB |
| Memory Bus | 3072-bit | 3072-bit | 384-bit | 384-bit |
| Memory Clock | NA | 850 MHz | 1426 MHz | 1250 MHz |
| Memory Bandwidth | NA | 652.8 GB/s | 547.7 GB/s | 480.0 GB/s |
| TDP | 300 watts | 250 watts | 250 watts | 250 watts |
| Multi GPU Capability | NVLink | NVLink | SLI | SLI |
| Launch Price | NA | US$ 2999 | US$ 1200 | US$ 1200 |
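As a quick sanity check, the TITAN V’s texture fillrate and memory bandwidth figures follow directly from its other specifications: one texel per texture unit per clock, and double-data-rate transfers across the full 3072-bit HBM2 bus.

```python
# Deriving the TITAN V's fillrate and bandwidth figures from its base specs.

def texture_fillrate_gts(texture_units, clock_mhz):
    # one texel per texture unit per clock -> GTexels/s
    return texture_units * clock_mhz / 1000.0

def hbm2_bandwidth_gbs(clock_mhz, bus_bits):
    # HBM2 is double data rate: 2 transfers per clock across the full bus
    return clock_mhz * 1e6 * 2 * (bus_bits / 8) / 1e9  # GB/s

base  = texture_fillrate_gts(320, 1200)  # 384.0 GT/s at base clock
boost = texture_fillrate_gts(320, 1455)  # 465.6 GT/s at boost clock
bw    = hbm2_bandwidth_gbs(850, 3072)    # 652.8 GB/s
```

The same arithmetic with 336 texture units and a full GV100 would give the hypothetical TITAN Vp’s fillrate, once its clocks are known.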

 

The NVIDIA TITAN Vp?

In case you are wondering, the TITAN Vp does not exist. It is merely a hypothetical future model that we think NVIDIA may introduce mid-cycle, like the NVIDIA TITAN Xp.

Our TITAN Vp is based on the full capabilities of the NVIDIA GV100 GPU. That means it will have 5376 CUDA cores with 336 Texture Units, 672 Tensor cores and 6 MB of L2 cache. It will also have a higher TDP of 300 watts.


 

The Official NVIDIA TITAN V Press Release

December 9, 2017—NVIDIA today introduced TITAN V, the world’s most powerful GPU for the PC, driven by the world’s most advanced GPU architecture, NVIDIA Volta.

Announced by NVIDIA founder and CEO Jensen Huang at the annual NIPS conference, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency.

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

NVIDIA Supercomputing GPU Architecture, Now for the PC

TITAN V’s Volta architecture features a major redesign of the streaming multiprocessor that is at the center of the GPU. It doubles the energy efficiency of the previous generation Pascal design, enabling dramatic boosts in performance in the same power envelope.

New Tensor Cores designed specifically for deep learning deliver up to 9x higher peak teraflops. With independent parallel integer and floating-point data paths, Volta is also much more efficient on workloads with a mix of computation and addressing calculations. Its new combined L1 data cache and shared memory unit significantly improve performance while also simplifying programming.

Fabricated on a new TSMC 12-nanometer FFN high-performance manufacturing process customised for NVIDIA, TITAN V also incorporates Volta’s highly tuned 12GB HBM2 memory subsystem for advanced memory bandwidth utilisation.

 

Free AI Software on NVIDIA GPU Cloud


TITAN V’s incredible power is ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.

Users of TITAN V can gain immediate access to the latest GPU-optimised AI, deep learning and HPC software by signing up at no charge for an NVIDIA GPU Cloud account. This container registry includes NVIDIA-optimised deep learning frameworks, third-party managed HPC applications, NVIDIA HPC visualisation tools and the NVIDIA TensorRT inferencing optimiser.

More Details : Now Everyone Can Use NVIDIA GPU Cloud!

 

Immediate Availability

TITAN V is available to purchase today for US$2,999 from the NVIDIA store in participating countries.


 


The AWS Masterclass on Artificial Intelligence by Olivier Klein

Just before we flew to Computex 2017, we attended the AWS Masterclass on Artificial Intelligence. It offered us an in-depth look at AI concepts like machine learning, deep learning and neural networks. We also saw how Amazon Web Services (AWS) uses all that to create easy-to-use tools for developers to create their own AI applications at low cost and virtually no capital outlay.

 

The AWS Masterclass on Artificial Intelligence

AWS Malaysia flew in Olivier Klein, the AWS Asia Pacific Solutions Architect, to conduct the AWS Masterclass. During the two-hour session, he conveyed the ease by which the various AWS services and tools allow virtually anyone to create their own AI applications at lower cost and virtually no capital outlay.

The topic of artificial intelligence is rather wide-ranging, covering everything from basic AI concepts to demonstrations of how to use AWS services like Amazon Polly and Amazon Rekognition to quickly and easily create AI applications. We present to you – the complete AWS Masterclass on Artificial Intelligence!

The AWS Masterclass on AI is actually made up of 5 main topics. Here is a summary of those topics :

| Topic | Duration | Remark |
|---|---|---|
| AWS Cloud and An Introduction to Artificial Intelligence, Machine Learning, Deep Learning | 15 minutes | An overview of Amazon Web Services and the latest innovations in the data analytics, machine learning, deep learning and AI space. |
| The Road to Artificial Intelligence | 20 minutes | Demystifying AI concepts and related terminologies, as well as the underlying technologies. We dive deeper into the concepts of machine learning and deep learning models, such as neural networks, and how they lead to artificial intelligence. |
| Connecting Things and Sensing the Real World | 30 minutes | As part of an AI that aligns with our physical world, we need to understand how the Internet-of-Things (IoT) space helps create natural interaction channels. We walk through real-world examples and demonstrations, including voice interactions through Amazon Lex, Amazon Polly and the Alexa Voice Services, visual recognition with services such as Amazon Rekognition, and real-time data sensed from the physical world via AWS IoT. |
| Retrospective and Real-Time Data Analytics | 30 minutes | Every AI must continuously “learn” and be “trained” through past performance and feedback data. Retrospective and real-time data analytics are crucial to building intelligent models. We dive into some of the new trends and concepts our customers are using to perform fast and cost-effective analytics on AWS. |
In the next two pages, we will dissect the video and share with you the key points from each segment of this AWS Masterclass.



 


The AWS Masterclass on AI Key Points (Part 1)

Here is a list of the key takeaways from the AWS Masterclass on Artificial Intelligence, with their individual timestamps in the video :

Introduction To AWS Cloud

  • AWS has 16 regions around the world (0:51), with two or more availability zones per region (1:37), and 76 edge locations (1:56) to accelerate end connectivity to AWS services.
  • AWS offers 90+ cloud services (3:45), all of which use the On-Demand Model (4:38) – you pay only for what you use, whether that’s a GB of storage or transfer, or execution time for a computational process.
  • You don’t even need to plan for your requirements or inform AWS how much capacity you need (5:05). Just use and pay what you need.
  • AWS has a practice of passing their cost savings to their customers (5:59), cutting prices 61 times since 2006.
  • AWS keeps adding new services over the years (6:19), with over a thousand new services and features introduced in 2016 (7:03).
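The On-Demand Model described above can be made concrete with a toy cost calculation. The rates below are made up purely for illustration; they are not actual AWS prices:

```python
# Toy illustration of pay-for-what-you-use pricing: cost scales linearly with
# usage, with no upfront capacity commitment. All rates are hypothetical.

def on_demand_cost(gb_stored, gb_transferred, compute_seconds,
                   storage_rate=0.023, transfer_rate=0.09,
                   compute_rate=0.0000002):
    return (gb_stored * storage_rate
            + gb_transferred * transfer_rate
            + compute_seconds * compute_rate)

# 100 GB stored, 50 GB transferred, 1000 hours of a small compute process:
month = on_demand_cost(gb_stored=100, gb_transferred=50,
                       compute_seconds=3_600_000)
```

Halve the usage and the bill halves too; there is no minimum to plan for, which is the point made at 5:05 above.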


Introduction to Artificial Intelligence, Machine Learning, Deep Learning

  • Artificial intelligence is based on unsupervised machine learning (7:45), specifically deep learning models.
  • Insurance companies like AON use it for actuarial calculations (7:59), and services like Netflix use it to generate recommendations (8:04).
  • A lot of AI models have been built specifically around natural language understanding, and using vision to interact with customers, as well as predicting and understanding customer behaviour (9:23).
  • Here is a quick look at what the AWS services management console looks like (9:58).
  • This is how you launch 10 compute instances (virtual servers) in AWS (11:40).
  • The ability to access multiple instances quickly is very useful for AI training (12:40), because it gives the user access to large amounts of computational power, which can be quickly terminated (13:10).
  • Machine learning, or specifically artificial intelligence, is not new to Amazon.com, the parent company of AWS (14:14).
  • Amazon.com uses a lot of AI models (14:34) for recommendations and demand forecasting.
  • The visual search feature in Amazon app uses visual recognition and AI models to identify a picture you take (15:33).
  • Olivier introduces Amazon Go (16:07), a prototype grocery store in Seattle.


The Road to Artificial Intelligence

  • The first component of any artificial intelligence is the “ability to sense the real world” (18:46), connecting everything together.
  • Cheaper bandwidth (19:26) now allows more devices to be connected to the cloud, allowing more data to be collected for the purpose of training AI models.
  • Cloud computing platforms like AWS allow the storage and processing of all that sensor data in real time (19:53).
  • All of that information can be used in deep learning models (20:14) to create an artificial intelligence that understands, in a natural way, what we are doing, and what we want or need.
  • Olivier shows how machine learning can quickly solve a Rubik’s cube (20:47), which has 43 quintillion unique combinations.
  • You can even build a Raspberry Pi-powered machine (24:33) that can solve a Rubik’s cube puzzle in 0.9 seconds.
  • Some of these deep learning models are available on Amazon AI (25:11), which is a combination of different services (25:44).
  • Olivier shows what it means to “train a deep learning model” (28:19) using a neural network (29:15).
  • Deep learning is computationally-intensive (30:39), but once it derives a model that works well, the predictive aspect is not computationally-intensive (30:52).
  • A pre-trained AI model can be loaded into a low-powered device (31:02), allowing it to perform AI functions without requiring large amounts of bandwidth or computational power.
  • Olivier demonstrates the YOLO (You Only Look Once) project, which pre-trained an AI model with pictures of objects (31:58), which allows it to detect objects in any video.
  • The identification of objects is the baseline for autonomous driving systems (34:19), as used by Tu Simple.
  • Tu Simple also used a similar model to train a drone to detect and follow a person (35:28).
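The points above about training being expensive but inference being cheap can be sketched with a minimal example of what “training a model” actually means: repeatedly nudging weights to reduce prediction error. This single-neuron toy in plain Python is an illustration of the concept, not the masterclass’s actual demo:

```python
# Gradient descent on one weight, fitting y = 2x from a few samples.

def train(samples, lr=0.05, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y      # prediction error for this sample
            w -= lr * error * x    # gradient step for squared error
    return w

w = train([(1, 2), (2, 4), (3, 6)])
# w converges to ~2.0. Training loops over the data many times (costly),
# but inference afterwards is a single multiply (cheap) - which is why a
# pre-trained model can run on a low-powered device like a Raspberry Pi.
```

Real deep learning models do the same thing across millions of weights, which is where the cloud’s burst compute capacity comes in.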



 


The AWS Masterclass on AI Key Points (Part 2)

Connecting Things and Sensing the Real World

  • Cloud services like AWS IoT (37:35) allow you to securely connect billions of IoT (Internet of Things) devices.
  • Olivier prefers to think of IoT as Intelligent Orchestrated Technology (37:52).
  • Olivier demonstrates how the combination of multiple data sources (maps, vehicle GPS, real-time weather reports) in Bangkok can be used to predict traffic as well as road conditions to create optimal routes (39:07), reducing traffic congestion by 30%.
  • The PetaBencana service in Jakarta uses picture recognition and IoT sensors to identify flooded roads (42:21) for better emergency response and disaster management.
  • Olivier demonstrates how easy it is to connect an IoT device to the AWS IoT service (43:46), and use it to sense and interact with the environment.
  • Olivier shows how the capabilities of the Amazon Echo can be extended by creating an Alexa Skill using the AWS Lambda function (59:07).
  • Developers can create and publish Alexa Skills for sale in the Amazon marketplace (1:03:30).
  • Amazon Polly (1:04:10) renders life-like speech, while the Amazon Lex conversational engine (1:04:17) has natural language understanding and automatic speech recognition. Amazon Rekognition (1:04:29) performs image analysis.
  • Amazon Polly (1:04:50) turns text into life-like speech using deep learning to change the pitch and intonation according to the context. Olivier demonstrates Amazon Polly’s capabilities at 1:06:25.
  • Amazon Lex (1:11:06) is a web service that allows you to build conversational interfaces using natural language understanding (NLU) and automatic speech recognition (ASR) models like Alexa.
  • Amazon Lex does not just support spoken natural language understanding, it also recognises text (1:12:09), which makes it useful for chatbots.
  • Olivier demonstrates those text recognition capabilities in a chatbot demo (1:13:50) of a customer applying for a credit card through Facebook.
  • Amazon Rekognition (1:21:37) is an image recognition and analysis service, which uses deep learning to identify objects in pictures.
  • Amazon Rekognition can even detect facial landmarks and sentiments (1:22:41), as well as image quality and other attributes.
  • You can actually try Amazon Rekognition out (1:23:24) by uploading photos at CodeFor.Cloud/image.
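To make the chatbot idea above concrete, here is a toy intent matcher in plain Python. It bears no resemblance to the real Amazon Lex service (which uses deep learning NLU models); it only illustrates the idea of mapping free text to an intent:

```python
# Toy intent recognition: score each intent by keyword overlap with the
# utterance, and pick the best-scoring one. Intent names are hypothetical.

INTENTS = {
    "ApplyCard":    {"apply", "credit", "card"},
    "CheckBalance": {"balance", "account"},
}

def classify(utterance):
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

intent = classify("I want to apply for a credit card")
# -> "ApplyCard"
```

A production service would also extract slots (card type, income, etc.) and manage multi-turn dialogue, which is exactly what Lex adds on top of raw intent classification.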


Retrospective and Real-Time Data Analytics

  • AI is a combination of 3 types of data analytics (1:28:10) – retrospective analysis and reporting + real-time processing + predictions to enable smart apps.
  • Cloud computing is extremely useful for machine learning (1:29:57) because it allows you to decouple storage and compute requirements for much lower costs.
  • Amazon Athena (1:31:56) allows you to query data stored in Amazon S3, without creating a compute instance to do it. You only pay for the amount of data scanned by each query.
  • Best of all, you will get the same fast results even if your data set grows (1:32:31), because Amazon Athena will automatically parallelise your queries across your data set internally.
  • Olivier demonstrates (1:33:14) how Amazon Athena can be used to run queries on data stored in Amazon S3, as well as generate reports using Amazon QuickSight.
  • When it comes to data analytics, cloud computing allows you to quickly bring massive computing power to bear, achieving much faster results without additional cost (1:41:40).
  • The insurance company AON used this ability (1:42:44) to reduce an actuarial simulation that would normally take 10 days, to just 10 minutes.
  • Amazon Kinesis and Amazon Kinesis Analytics (1:45:10) allow the processing of real-time data.
  • A company called Dash is using this capability to analyse OBD data in real-time (1:47:23) to help improve fuel efficiency and predict potential breakdowns. It also notifies emergency services in case of a crash.
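The real-time analytics pattern behind the Dash example can be sketched as a rolling window over a stream of readings. This is an illustration of the concept only, not the Kinesis Analytics API:

```python
# Streaming aggregation sketch: keep a bounded window of recent readings
# and emit an up-to-date aggregate as each new event arrives.
from collections import deque

class RollingAverage:
    def __init__(self, window=3):
        self.buf = deque(maxlen=window)  # old readings fall off automatically

    def push(self, value):
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)

fuel = RollingAverage(window=3)
readings = [8.0, 6.0, 7.0, 9.0]   # e.g. litres/100 km derived from OBD data
averages = [fuel.push(r) for r in readings]
# -> [8.0, 7.0, 7.0, ~7.33]
```

A sustained jump in the rolling average (worsening fuel economy) is the kind of signal that can flag a potential breakdown before it happens.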



 


AMD Vega Memory Architecture Q&A With Jeffrey Cheng

At the AMD Computex 2017 Press Conference, AMD President & CEO Dr. Lisa Su announced that AMD will launch the Radeon Vega Frontier Edition on 27 June 2017, and the Radeon RX Vega graphics cards at the end of July 2017. We figured this is a great time to revisit the new AMD Vega memory architecture.

Now, who better to tell us all about it than AMD Senior Fellow Jeffrey Cheng, who built the AMD Vega memory architecture? Check out this exclusive Q&A session from the AMD Tech Summit in Sonoma!

Updated @ 2017-06-11 : We clarified the difference between the AMD Vega’s 64-bit flat address space, and the 512 TB addressable memory. We also added new key points, and time stamps for the key points.

Originally posted @ 2017-02-04

Don’t forget to also check out the following AMD Vega-related articles :

 

The AMD Vega Memory Architecture

Jeffrey Cheng is an AMD Senior Fellow in the area of memory architecture. The AMD Vega memory architecture refers to how the AMD Vega GPU manages memory utilisation and handles large datasets. It does not deal with the AMD Vega memory hardware design, which includes the High Bandwidth Cache and HBM2 technology.

 

AMD Vega Memory Architecture Q&A Summary

Here are the key takeaway points from the Q&A session with Jeffrey Cheng :

  • Large amounts of DRAM can be used to handle big datasets, but this is not the best solution because DRAM is costly and consumes lots of power (see 2:54).
  • AMD chose to design a heterogeneous memory architecture to support various memory technologies like HBM2 and even non-volatile memory (e.g. Radeon Solid State Graphics) (see 4:40 and 8:13).
  • At any given moment, the amount of data processed by the GPU is limited, so it doesn’t make sense to store a large dataset in DRAM. It would be better to cache the data required by the GPU on very fast memory (e.g. HBM2), and intelligently move them according to the GPU’s requirements (see 5:40).
  • The AMD Vega’s heterogeneous memory architecture allows for easy integration of future memory technologies like storage-class memory (flash memory that can be accessed in bytes, instead of blocks) (see 8:13).
  • The AMD Vega has a 64-bit flat address space for its shaders (see 12:08, 12:36 and 18:21), but like NVIDIA, AMD is (very likely) limiting the addressable memory to 49 bits, giving it 512 TB of addressable memory.
  • AMD Vega has full access to the CPU’s 48-bit address space, with additional bits beyond that used to handle its own internal memory, storage and registers (see 12:16). This ties back to the High Bandwidth Cache Controller and heterogeneous memory architecture, which allows the use of different memory and storage types.

  • Game developers currently try to manage data and memory usage, often extremely conservatively to support graphics cards with limited amounts of graphics memory (see 16:29).
  • With the introduction of AMD Vega, AMD wants game developers to leave data and memory management to the GPU. Its High Bandwidth Cache Controller and heterogeneous memory system will automatically handle it for them (see 17:19).
  • The memory architectural advantages of AMD Vega will initially have little impact on gaming performance (due to the current conservative approach of game developers). This will change when developers hand over data and memory management to the GPU (see 24:42).
  • The improved memory architecture in AMD Vega will mainly benefit AI applications (e.g. deep machine learning) with their large datasets (see 24:52).
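The 512 TB figure quoted above follows directly from a 49-bit address space, which is easy to verify:

```python
# 49 address bits -> 2^49 addressable bytes; divide by 2^40 to express in TB.
addressable_bytes = 2 ** 49
addressable_tb = addressable_bytes / 2 ** 40
# -> 512.0 TB
```

The remaining bits of the 64-bit shader address space are what Vega can use internally for its own memory, storage and registers, as noted above.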

Don’t forget to also check out the following AMD Vega-related articles :


 


The Complete AMD Radeon Instinct Tech Briefing Rev. 3.0

The AMD Tech Summit held in Sonoma, California from December 7-9, 2016 was not only very exclusive, it was highly secretive. The first major announcement we have been allowed to reveal is the new AMD Radeon Instinct heterogeneous computing platform.

In this article, you will hear from AMD what the Radeon Instinct platform is all about. As usual, we have a ton of videos from the event, so it will be as if you were there with us. Enjoy! 🙂

Originally published @ 2016-12-12

Updated @ 2017-01-11 : Two of the videos were edited to comply with the NDA. Now that the NDA on AMD Vega has been lifted, we replaced the two videos with their full, unedited versions. We also made other changes, including adding links to the other AMD Tech Summit articles.

Updated @ 2017-01-20 : Replaced an incorrect slide, and a video featuring that slide. Made other small updates to the article.

 

The AMD Radeon Instinct Platform Summarised

For those who want the quick low-down on AMD Radeon Instinct, here are the key takeaway points :

  • The AMD Radeon Instinct platform is made up of two components – hardware and software.
  • The hardware components are the AMD Radeon Instinct accelerators built around the current Polaris and the upcoming Vega GPUs.
  • The software component is the AMD Radeon Open Compute (ROCm) platform, which includes the new MIOpen open-source deep learning library.
  • The first three Radeon Instinct accelerator cards are the MI6, MI8 and MI25 Vega with NCU.
  • The AMD Radeon Instinct MI6 is a passively-cooled inference accelerator with 5.7 TFLOPS of FP16 processing power, 224 GB/s of memory bandwidth, and a TDP of <150 W. It will come with 16 GB of GDDR5 memory.
  • The AMD Radeon Instinct MI8 is a small form-factor (SFF) accelerator with 8.2 TFLOPS of processing power, 512 GB/s of memory bandwidth, and a TDP of <175 W. It will come with 4 GB of HBM memory.
  • The AMD Radeon Instinct MI25 Vega with NCU is a passively-cooled training accelerator with 25 TFLOPS of processing power, support for 2X packed math, a High Bandwidth Cache and Controller, and a TDP of <300 W.
  • The Radeon Instinct accelerators will all be built exclusively by AMD.
  • The Radeon Instinct accelerators will all support MxGPU SRIOV hardware virtualisation.
  • The Radeon Instinct accelerators are all passively cooled.
  • The Radeon Instinct accelerators will all have large BAR (Base Address Register) support for multiple GPUs.
  • The upcoming AMD Zen “Naples” server platform is designed to support multiple Radeon Instinct accelerators through a high-speed network fabric.
  • The ROCm platform is not only open source, it will support a multitude of standards in addition to MIOpen.
  • The MIOpen deep learning library is open source, and will be available in Q1 2017.
  • The MIOpen deep learning library is optimised for Radeon Instinct, allowing for 3X better performance in machine learning.
  • AMD Radeon Instinct accelerators will be significantly faster than NVIDIA Titan X GPUs based on the Maxwell and Pascal architectures.

In the subsequent pages, we will give you the full low-down on the Radeon Instinct platform, with the following presentations by AMD :


We also prepared the complete video and slides of the Radeon Instinct tech briefing for your perusal :


 


Why Is Heterogeneous Computing Important?

Dr. Lisa Su kicked things off with an inside look at her two-year journey as AMD President and CEO. Then she revealed why heterogeneous computing is an important part of AMD’s future going forward. She also mentioned the success of the recently-released Radeon Software Crimson ReLive Edition.

 

Here Are The New AMD Radeon Instinct Accelerators!

Next, Raja Koduri, Senior Vice President and Chief Architect of the Radeon Technologies Group, officially revealed the new AMD Radeon Instinct accelerators.

 

The MIOpen Deep Learning Library For Radeon Instinct

MIOpen is a new deep learning library optimised for Radeon Instinct. It is open source and will become part of the Radeon Open Compute (ROCm) platform. It will be available in Q1 2017.


 

The Performance Advantage Of Radeon Instinct & MIOpen

MIOpen is optimised for Radeon Instinct, offering 3X better performance in machine learning. It allows the Radeon Instinct accelerators to be significantly faster than NVIDIA Titan X GPUs based on the Maxwell and Pascal architectures.


 


The Radeon Instinct MI25 Training Demonstration

Raja Koduri roped in Ben Sander, Senior Fellow at AMD, to show off the Radeon Instinct MI25 running a training demo.

 

The Radeon Instinct MI8 Visual Inference Demonstration

The visual inference demo is probably much easier to grasp, as it is visual in nature. AMD used the Radeon Instinct MI8 in this example.

 

The Radeon Instinct On The Zen “Naples” Platform

The upcoming AMD Zen “Naples” server platform is designed to support multiple AMD Radeon Instinct accelerators through a high-speed network fabric.


 

The First Radeon Instinct Servers

This is not a vapourware launch. Raja Koduri revealed the first slew of Radeon Instinct servers that will hit the market in H1 2017.

 

The Radeon Open Compute (ROCm) Platform Discussion

To illustrate the importance of heterogeneous computing on Radeon Instinct, Greg Stoner (ROCm Senior Director at AMD) hosted a panel of AMD partners and early adopters of the Radeon Open Compute (ROCm) platform.


 


Closing Remarks On Radeon Instinct

Finally, Raja Koduri concluded the launch of the Radeon Instinct Initiative with some closing remarks on the recent Radeon Software Crimson ReLive Edition.

 

The Complete AMD Radeon Instinct Tech Briefing

This is the complete AMD Radeon Instinct tech briefing. Our earlier video was edited to comply with the AMD Vega NDA (which has now expired).


 

The Complete AMD Radeon Instinct Tech Briefing Slides

Here are the Radeon Instinct presentation slides for your perusal.

 


Becoming A Data Ninja With Microsoft SQL Server 2016

Malaysia is at the cusp of a data revolution. Cloud computing as well as big data analytics and intelligence will play a central role in this data revolution. Data science will transform entire industries in a new world in which data is the new currency. To support the transition to a data-centric world, the Microsoft SQL Server has evolved beyond its database roots, into a full-fledged business analytics and data management platform.

On the 8th of April, 2016, Microsoft Malaysia officially launched the new Microsoft SQL Server 2016 to support this transition to a data-centric world. Four key figures from Microsoft and the Multimedia Development Corporation came together to address the expansion of data science in Malaysia, and how SQL Server 2016 will help companies achieve their goals faster and easier than ever before.

 

Microsoft Malaysia On The Launch Of SQL Server 2016

In this video, Michal Golebiewski, Chief Marketing & Operations Officer at Microsoft Malaysia, explains Microsoft’s mission and how the company aims to help businesses transform their operations into a mobile-first, data-first environment with Microsoft SQL Server 2016.

 

Dr. Karl Ng (MDeC) On Data Science In Malaysia

Ir. Dr. Karl Ng Kah Hou, Director of the Innovation Capital Division at the Multimedia Development Corporation (MDeC), then spoke on the push for data science in Malaysia. He also addressed questions about MDeC’s data science initiatives. Check it out:

 

How To Become A Data Scientist

Microsoft even brought in a real-life data scientist – Julian Lee. An Advanced Analytics Technical Solutions Professional at Microsoft Asia, he gave us a riveting account of his experience as a data scientist in Malaysia. A must-watch for those thinking of taking this exciting new career path.

 

Microsoft SQL Server 2016’s Key Features

Ending this exclusive event was Darmadi Komo, Director of Data Group Marketing at Microsoft. He flew in all the way from the Microsoft campus in Redmond to share with us the key features of the new Microsoft SQL Server 2016.

He also shared with us a fabulous offer from Microsoft – free SQL Server 2016 licences for companies that want to migrate from Oracle. The offer runs until June 2016, so contact Microsoft Malaysia ASAP if you are interested!

The new Microsoft SQL Server 2016 supports hybrid transactional / analytical processing, advanced analytics and machine learning, mobile Business Intelligence, data integration, Always Encrypted query processing, and in-memory transactions with persistence.

  • Stretch Database technology is an industry first that allows customers to dynamically stretch warm and cold transactional data to Azure, so operational data is always accessible. It also enables a cost-effective data strategy while protecting sensitive customer data.
  • Advanced analytics using new R support that enables customers to perform real-time predictive analytics on both operational and analytic data.
  • Ground-breaking security encryption capabilities that keep data encrypted at rest, in motion and in-memory, for maximum security protection.
  • In-memory database support for every workload, with performance increases of up to 30-100x.
  • Business Intelligence for every employee on every device – including new mobile BI support for iOS, Android and Windows Phone devices.
  • Availability on Linux (in private preview), making SQL Server 2016 accessible to a broader set of users.
  • Unique cloud capabilities that enable customers to deploy hybrid architectures, partitioning data workloads across on-premises and cloud-based systems to save costs and increase agility.
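In SQL Server 2016, the R integration runs scripts in-database via `sp_execute_external_script`. As a language-neutral sketch of the kind of in-database predictive scoring this enables (illustrative Python with a hypothetical pre-trained model, not the actual T-SQL API):

```python
import numpy as np

# Hypothetical pre-trained linear model: predict monthly spend
# from (age, store visits). In SQL Server 2016 an equivalent R
# script would run in-database against operational rows.
weights = np.array([2.0, 15.0])
bias = 10.0

def score(rows):
    """Score a batch of operational rows with the trained model."""
    return rows @ weights + bias

# Two "operational" rows: (age, visits)
rows = np.array([[25.0, 3.0], [40.0, 6.0]])
print(score(rows))  # predicted spend: [105.0, 180.0]
```

The point of running this inside the database engine is avoiding the round-trip of exporting operational data to an external analytics system before scoring it.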


 


NVIDIA GPUs Power Facebook’s Deep Machine Learning

December 11, 2015 – NVIDIA today announced that Facebook will power its next-generation computing system with the NVIDIA® Tesla® Accelerated Computing Platform, enabling it to drive a broad range of machine learning applications.

While training complex deep neural networks to conduct machine learning can take days or weeks on even the fastest computers, the Tesla platform can slash this by 10-20x. As a result, developers can innovate more quickly and train networks that are more sophisticated, delivering improved capabilities to consumers.
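That speed-up comes largely from the dense matrix products that dominate neural network training. A minimal sketch (a toy one-layer model in Python, purely illustrative) showing where those matrix multiplies occur in each training step:

```python
import numpy as np

# Tiny one-layer model trained by gradient descent. The dense
# matrix products (X @ W forward, X.T @ err backward) dominate
# training cost at scale -- exactly the work GPUs accelerate.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))   # batch of 64 inputs, 8 features
true_W = rng.standard_normal((8, 1))
y = X @ true_W                     # synthetic targets

W = np.zeros((8, 1))
lr = 0.5
for _ in range(300):
    err = X @ W - y                # forward pass (matmul)
    W -= lr * (X.T @ err) / len(X) # backward pass (matmul)

print(float(np.mean((X @ W - y) ** 2)))  # final training loss
```

In a real deep network these matrices have millions of entries per layer, so shifting the matmuls to GPU hardware cuts days of training to hours.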

Facebook is the first company to adopt NVIDIA Tesla M40 GPU accelerators, introduced last month, to train deep neural networks. They will play a key role in the new “Big Sur” computing platform, Facebook AI Research’s (FAIR) purpose-built system designed specifically for neural network training.

“Deep learning has started a new era in computing,” said Ian Buck, vice president of accelerated computing at NVIDIA. “Enabled by big data and powerful GPUs, deep learning algorithms can solve problems never possible before. Huge industries from web services and retail to healthcare and cars will be revolutionised. We are thrilled that NVIDIA GPUs have been adopted as the engine of deep learning. Our goal is to provide researchers and companies with the most productive platform to advance this exciting work.”

In addition to reducing neural network training time, GPUs offer a number of other advantages. Their architectural compatibility from generation to generation provides seamless speed-ups for future GPU upgrades. And the Tesla platform’s growing global adoption facilitates open collaboration with researchers around the world, fuelling new waves of discovery and innovation in the machine learning field.

 

Big Sur Optimised for Machine Learning

NVIDIA worked with Facebook engineers on the design of Big Sur, optimising it to deliver maximum performance for machine learning workloads, including the training of large neural networks across multiple Tesla GPUs.

Twice as fast as Facebook’s existing system, Big Sur will enable the company to train twice as many neural networks – and to create neural networks that are twice as large – which will help develop more accurate models and new classes of advanced applications.

“The key to unlocking the knowledge necessary to develop more intelligent machines lies in the capability of our computing systems,” said Serkan Piantino, engineering director for FAIR. “Most of the major advances in machine learning and AI in the past few years have been contingent on tapping into powerful GPUs and huge data sets to build and train advanced models.”

The addition of Tesla M40 GPUs will help Facebook make new advancements in machine learning research and enable teams across its organisation to use deep neural networks in a variety of products and services.

 

First Open Sourced AI Computing Architecture

Big Sur represents the first time a computing system specifically designed for machine learning and artificial intelligence (AI) research will be released as an open source solution.

Committed to doing its AI work in the open and sharing its findings with the community, Facebook intends to work with its partners to open source Big Sur specifications via the Open Compute Project. This unique approach will make it easier for AI researchers worldwide to share and improve techniques, enabling future innovation in machine learning by harnessing the power of GPU accelerated computing.