Tag Archives: Artificial intelligence

Why Xiaomi’s ROIDMI Eve Plus Is So Awesome!

The Xiaomi ROIDMI Eve Plus robot vacuum is really, REALLY AWESOME!

Find out why every home should have a little ROIDMI Eve Plus running around!

 

Why Xiaomi’s ROIDMI Eve Plus Is So Awesome!

One of the hottest gadgets for homeowners is a robot vacuum, which can be incredibly useful or a real pain in the ass.

What most homeowners in the market for a robot vacuum don’t realise is that many models don’t clean well and are a hassle to maintain.

That’s why Xiaomi’s ROIDMI Eve Plus robot vacuum is so awesome – not only is it a cleaning powerhouse, it also does away with almost all of the maintenance hassle!

Self-Emptying System

One of the biggest hassles in maintaining a robot vacuum is the constant need to empty the dust box.

This limitation prevents ordinary robot vacuums from automatically and continuously cleaning your house for days on end.

This is not a problem for the ROIDMI Eve Plus. When it returns to its base station to recharge its battery, its dust box is automatically emptied into the base station!

This allows the Eve Plus to continuously recharge and empty its dust box, and clean your house for weeks on end!

The base station’s large 3L dust bag can hold up to 60 days of dust before you need to replace it.

Self-Sanitising System

The ROIDMI Eve Plus not only comes with a HEPA filter – a common feature in robot vacuums – it also boasts a pioneering Deodorizing Particle Generator in the base station.

The Deodorizing Particle Generator uses Active Oxygen technology to sterilise the collected dust, eliminating dust mites, bacteria like E. coli and Staphylococcus aureus, and fungi like Candida albicans.

It can also eliminate toxic chemicals like formaldehyde, ammonia, benzene and TVOC, as well as remove the smell of cigarettes and perfume.

4th Generation Super-Sensing LDS Sensor

The ROIDMI Eve Plus comes with a 4th-generation laser distance sensor (LDS), which allows for highly-precise scanning of its environment.

It is constantly scanning for obstacles, so it will quickly detect people or pets nearby and avoid them.

AI Smart Room Mapping

The ROIDMI Eve Plus can precisely map and later remember every room in your house, across multiple levels!

Its built-in AI mapping and path planning algorithms allow it to clean rooms faster and more thoroughly, and to resume its cleaning duties after charging or emptying its dust box.

The room maps it generates also allow you to demarcate forbidden areas, or schedule more frequent cleaning for certain rooms.

Separate maps can be saved for each floor, and the corresponding settings are automatically matched and applied when the robot moves between floors.

4+ Hours Of Vacuum + Mopping!

The ROIDMI Eve Plus has a powerful digital brushless vacuum motor that delivers 2,700 Pa of suction power, with side brushes and a flexible inlet that adjusts to the floor condition.

And thanks to its large 5,200 mAh battery, it can keep on cleaning for over 4 hours (250 minutes) before it needs to return to the base station!

If you like squeaky clean floors, the Eve Plus has a mopping module and a 250 ml water tank, to give you just that.

It simulates hand-mopping with its 3-stage Y-route (not available in the US) or U-route mopping patterns.

Obstacle Avoidance

The ROIDMI Eve Plus is designed not to get stuck under your bed, sofa or cabinets, because it automatically senses their height.

It is also smart enough to detect low obstacles like door strips or cables (up to 2 cm in height) and crawl over them.

It can also sense and climb up ramps with slopes of up to 20°, which lesser robot vacuums will balk at.

 

Xiaomi ROIDMI Eve Plus : What’s Inside The Box?

The Xiaomi ROIDMI Eve Plus comes in two separate boxes – one for the robot vacuum cleaner, and the other for its base station.

In this video, we take a look at what’s inside both boxes!

Once unboxed, you should find these items :

  • ROIDMI Eve Plus robot vacuum cleaner + mop module
  • ROIDMI Eve Plus dust collector base station
  • User guide + warranty card
  • Disposable mop wipes + filters + dust bags
  • Power cord

For more information, you can visit the official ROIDMI Eve Plus website.

 

Xiaomi ROIDMI Eve Plus : Specifications

Specifications : Xiaomi ROIDMI Eve Plus
Type : Robot vacuum
Voice Assistant : Alexa, Google Assistant
Cleaning Modes : Sweep + vacuum + mop
Suction Power : 2,700 Pa (maximum)
Recommended Coverage Area : 250 m² (2,690 square feet)
Robot Container : Dust 300 ml / Water 250 ml
Base Station Container : Dust 3 L
Robot Noise Level : 60 dB(A)
Base Station Noise Level : < 82 dB(A)
Hygiene Features : HEPA filter + Active Oxygen technology
Navigation : 4th Gen LDS SLAM
Automatic Partition : Yes
Where To Clean : Yes
Virtual Wall : App-based virtual wall
Obstacle Clearance : Slope up to 20° / Height up to 20 mm
Battery : 5,200 mAh lithium-ion
Battery Life : 250 minutes
Charging Time : 250 minutes
Power Consumption : 50 W (robot) / 850 W (base station)
Robot Dimensions : 350 x 350 x 98 mm
Robot Weight : 3.6 kg
Base Station Dimensions : 358 x 350 x 175 mm
Base Station Weight : 2.7 kg

 

Xiaomi ROIDMI Eve Plus : Price + Deals

The ROIDMI Eve Plus + dust collector base station kit is surprisingly affordable.

You can now purchase the full ROIDMI Eve Plus set in Malaysia at these incredible prices :

  • Standard 1 Year Warranty : RM1,588
  • 18 Month Warranty : RM1,688
  • 2 Year Warranty : RM1,788
  • Premium 1 Year Warranty : RM1,988 (1 to 1 Exchange)

In addition, you can save more by purchasing these Add-on Deals :

  • Replacement parts like dust bag, mop wipes, roller brush, side brush, filter, etc.
  • Extended 1 year warranty on the battery

But on 11/11, the ROIDMI Eve Plus goes on sale at only RM1,458 for just TWO HOURS – from 12 AM until 2 AM!

So don’t miss this offer. Grab it during those two hours!

 

Please Support My Work!

Support my work through a bank transfer / PayPal / credit card!

Name : Adrian Wong
Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp

Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.

He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.

 

Recommended Reading

Go Back To > Home Tech | Tech ARP

 

Support Tech ARP!

Please support us by visiting our sponsors, participating in the Tech ARP Forums, or donating to our fund. Thank you!

Dell UltraSharp Webcam (WB7022) : All You Need To Know!

Dell just introduced the UltraSharp Webcam (WB7022) for the new Work From Home normal!

Here is what you need to know about the new Dell WB7022 UltraSharp Webcam!

 

Dell UltraSharp Webcam (WB7022) : All You Need To Know!

More than 18 months into the COVID-19 pandemic, the Work From Home (WFH) normal appears to be here to stay, at least for now.

To ensure that those who need to work remotely have the best possible video conferencing webcam, Dell just introduced the UltraSharp Webcam.

With nine patent-pending technologies, the Dell UltraSharp Webcam offers 4K image quality, built-in AI tracking and other video conferencing features.

4K Webcam With World’s Best Image Quality

Dell claims that the UltraSharp Webcam offers the world’s best image quality, courtesy of the large 4K Sony STARVIS CMOS sensor and the multi-element lens that captures more light to deliver “crystal-clear video”.

It supports frame rates of 24 and 30 fps at the full 4K UHD resolution. It also supports the higher 60 fps frame rate, at the lower 1080p and 720p resolutions.

It’s not just about the hardware. It’s also about the software embedded into the UltraSharp Webcam.

Its Digital Overlap HDR capability lets the Dell UltraSharp Webcam preserve true-to-life colours and balance exposure.

Meanwhile, its 2D/3D video noise reduction capability eliminates grainy images, making you look good even in low light conditions.

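Dell does not disclose its actual algorithms, but the idea behind temporal (3D) and spatial (2D) noise reduction is easy to illustrate. Here is a minimal Python sketch – a conceptual toy, not Dell’s implementation – that blends each new frame with the previous output to average out random sensor noise over time (3DNR), then applies a small box blur within the frame (2DNR). The alpha blending factor and the 3 x 3 kernel are illustrative assumptions.

    import numpy as np

    def temporal_denoise(prev_out, frame, alpha=0.8):
        # 3DNR sketch : exponential moving average across frames.
        # Static regions converge to their true value as random noise
        # averages out; alpha trades noise suppression against ghosting.
        return alpha * prev_out + (1.0 - alpha) * frame

    def spatial_denoise(frame, k=3):
        # 2DNR sketch : simple k x k box blur within a single frame.
        pad = k // 2
        padded = np.pad(frame, pad, mode="edge")
        out = np.zeros_like(frame, dtype=np.float64)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        return out / (k * k)

    # Simulate a noisy static shot : the temporal filter recovers the scene.
    rng = np.random.default_rng(0)
    scene = np.full((4, 4), 100.0)
    output = scene + rng.normal(0, 10, scene.shape)  # first noisy frame
    for _ in range(50):
        output = temporal_denoise(output, scene + rng.normal(0, 10, scene.shape))
    print(np.round(spatial_denoise(output), 1))      # values close to 100
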
AI Auto-Framing

The Dell UltraSharp Webcam has AI Auto-Framing capability built-in. This feature uses Artificial Intelligence to keep you focused and centred in the frame.

You don’t have to manually adjust the webcam. It will track your movements and shift the frame to keep you centred, even if you stand up!

5X Digital Zoom + Three Fields of View

The Dell UltraSharp Webcam supports up to 5X digital zoom, as well as three fields of view – 65°, 78° and 90°.

This allows you to fit multiple people in the same conference call, or focus exclusively on you in a crowded place.

Smart Security Features

The Dell UltraSharp Webcam supports Windows Hello, allowing you to sign in quickly and securely using facial recognition.

It also has Dell ExpressSign-in capability built-in, allowing Dell PCs to detect your presence as you approach and automatically log you out when you step away, offering you an extra level of security.

Finally, it comes with a magnetic privacy cover that securely snaps onto the lens, or the back of the webcam.

Webcam Mounting Solutions

The Dell UltraSharp Webcam comes with two mounts in the box :

  • a magnetic mount
  • a tripod adaptor

The magnetic mount makes it easy to attach to the top of any desktop or laptop monitor, without obscuring the display.

And if you need to mount it on a tripod, simply slide the webcam off the magnetic mount and slide it into the tripod adaptor!

 

Dell UltraSharp Webcam (WB7022) : Specifications

Specifications : Dell UltraSharp Webcam
Model : WB7022
Camera Resolution + Frame Rates : 4K @ 24 / 30 fps | 1080p @ 24 / 30 / 60 fps | 720p @ 24 / 30 / 60 fps
Sensor : 8.3 MP Sony STARVIS CMOS
Field of View : 65° / 78° / 90°
HD Digital Zoom : Up to 5X
Autofocus : Yes
Auto-Light Correction : Advanced Digital Overlap (DOL) HDR
Video Noise Reduction : Temporal (3DNR) + Spatial (2DNR)
Auto White Balance : Yes
AI Auto-Framing : Yes
Microphone : No
Lens Cap : Yes (magnetic)
Windows Hello : Yes
Dell ExpressSign-in : Yes (Dell on Dell)
Certification : Microsoft Teams, Zoom
Other Optimised Apps : Skype for Business, GoToMeeting, Google Meet, Google Hangouts, BlueJeans, Slack, Lifesize
OS Support : Windows 10, macOS
Plug & Play Support : Yes
Connectivity : USB-C (comes with USB-A to USB-C cable)
Material : Anodised aluminium
Webcam Dimensions : 42 mm (wide) x 90 mm (long)
Mount Dimensions : 32 mm (W) x 64 mm (D) x 9.4 mm (H)
Warranty : 3 years

 

Dell UltraSharp Webcam (WB7022) : Price + Availability

The Dell UltraSharp Webcam (WB7022) is available for purchase at the official price of US$199 / RM 1,122 (about £144 / A$266 / S$268).

 

Recommended Reading

Go Back To > Computer | Business | Tech ARP

 

Support Tech ARP!

If you like our work, please support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Dell EMC Ready Solutions for AI + vHPC on VMware vSphere!

Dell Technologies just introduced Dell EMC Ready Solutions for both AI and virtualised HPC workloads on VMware vSphere 7!

Join us for the tech briefing on both new Dell EMC computing solutions for VMware, and find out how they can simplify your advanced computing needs!

 

Simplified Advanced Computing With Dell EMC Ready Solutions

Let’s start with the Dell Technologies briefing on the two new Dell EMC Ready Solutions for AI and virtualised HPC workloads.

Based on VMware Cloud Foundation, they are designed to make AI easier to deploy and consume, with new features from VMware vSphere 7, including Bitfusion.

 

 

Dell EMC Ready Solutions for AI : GPU-as-a-Service (GaaS)

GPUs in individual workstations or servers are often under-utilised at less than 15% of capacity. The new Dell EMC Ready Solutions for AI : GPU-as-a-Service fixes that and maximises your investment with virtual GPU pools.

The newest design includes the latest VMware vSphere 7 with Bitfusion, making it possible to virtualise GPUs on-premises. Factory-installed by Dell, VMware vSphere 7 with Bitfusion lets developers and data scientists pool IT resources and share them across datacenters.

Dell EMC Ready Solutions for AI : GPU-as-a-Service also uses the latest VMware Cloud Foundation with VMware vSphere 7 support for Kubernetes and containerised applications to run AI workloads anywhere. Containers make it easier to bring cloud-native applications into production, with the ability to move workloads.

 

Dell EMC Ready Solutions for Virtualised HPC

Most HPC workloads run on dedicated systems that require specialised skills to deploy and manage. Dell EMC Ready Solutions for Virtualised HPC can include VMware Cloud Foundation with VMware vSphere 7 featuring Bitfusion.

That should make it simpler and more economical to use VMware environments for HPC and AI applications in computational chemistry, bioinformatics and computer-aided engineering. IT teams can quickly provision hardware as needed and speed up initial deployment and configuration, saving time through simpler centralised management and security.

For very large HPC implementations, Dell EMC Ready Solutions for vHPC can include VMware vSphere Scale-Out Edition for additional cost savings.

 

Dell EMC OpenManage for Dell EMC Ready Solutions

The new Dell EMC Ready Solutions for AI and Virtualised HPC ship with the Dell EMC OpenManage systems management software, which helps administrators improve system uptime, keep data insights flowing and prepare for AI operations.

New Dell EMC OpenManage improvements include :

  • OpenManage Integration for VMware vCenter, supporting vSphere Lifecycle Manager, automates software, driver and firmware updates holistically to save time and simplify operations.
  • The enhanced OpenManage Mobile app gives administrators the ability to view power and thermal policies, perform emergency power reduction and monitor internal storage from anywhere in the world.

 

Recommended Reading

Go Back To > Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Cyber Crime WhatsApp Warning Hoax Debunked!

The Cyber Crime hoax on WhatsApp is circulating, warning people that they are being monitored by the government.

Well, not to worry – unless you are living in China – this is yet another Internet hoax. Here is why we know that…

 

The Cyber Crime WhatsApp Warning Hoax

This is the Cyber Crime warning hoax that has been circulating on WhatsApp :

From tomorrow onwards there are new communication regulations.

All calls are recorded

All phone call recordings saved

WhatsApp is monitored

Twitter is monitored

Facebook is monitored

All social media and forums are monitored

Inform those who do not know.

Your devices are connected to ministry systems.

Take care not to send unnecessary messages

Inform your children, Relatives and friends about this to take care

​​Don’t forward any posts or videos etc., you receive regarding politics/present situation about Government/PM etc.​​

Police have put out a notification termed ..Cyber Crime … and action will be taken…just don’t delete …

Inform your friends & others too.

Writing or forwarding any msg on any political & religious debate is an offence now….arrest without warrant…

This is very serious, plz let it be known to all our groups and individual members as group admin can b in deep trouble.

Take care not to send unnecessary messages.
Inform everyone about this to take care.

Please share it; it’s very much true. Groups please be careful.

Note that it’s generic enough that it can apply to almost any government in the world.

 

The Cyber Crime WhatsApp Warning Hoax Debunked!

And here is why this is nothing more than yet another Internet hoax :

Only China Is Capable Of Doing This

The only country that has accomplished most of what was shared above is China, but it took them decades to erect the Great Firewall of China.

It’s not just the massive infrastructure that needs to be created; such a system also requires legislation to be enacted, and considerable manpower and resources to maintain.

That’s why China is leaning heavily on AI and cloud computing capabilities to automatically and quickly censor information deemed “sensitive”.

However, no other country has come close to spending the money and resources on a similar scale, although Cuba, Vietnam, Zimbabwe and Belarus have imported some surveillance technology from China.

WhatsApp, Instagram + Facebook Messenger Have End-to-End Encryption

All three Facebook-owned apps are now running on the same common platform, which provides end-to-end encryption.

End-to-end encryption protects messages as they travel through the Internet, and specifically prevents anyone (bad guys or your friendly government censor) from snooping into your conversation.

That is also why all three are banned in China…

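For a concrete sense of what end-to-end encryption means, here is a minimal Python sketch using the PyNaCl library. It is a conceptual illustration only – WhatsApp actually uses the more elaborate Signal protocol – but the core property is the same : the private keys never leave the two devices, so any relay server or wiretap in between sees only ciphertext.

    # pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each device generates its own key pair; private keys never leave it.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"Meet at noon")

    # This is all a relay server (or an eavesdropper) ever sees in transit.
    print(ciphertext.hex()[:32], "...")

    # Only Bob, holding his private key, can decrypt the message.
    receiving_box = Box(bob_key, alice_key.public_key)
    print(receiving_box.decrypt(ciphertext))  # b'Meet at noon'
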
The Police Cannot Enact Laws

There are cybercrime laws in most, if not all, countries in the world. But they are all enacted by legislative bodies of some sort, not the police.

The police are the executive arm of a country, empowered to enforce the law. They do not have the power to create a law and then act on it.

Even The Government Has Debunked It!

Just in case you are still not convinced, even the Malaysian government issued a fact check on this hoax, debunking it as fake news :

Basically, it states, “The Ministry of Home Affairs has NEVER recorded telephone calls or monitored social media in this country”.

 

Recommended Reading

Go Back To > Cybersecurity | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!

Dell Technologies COO and Vice Chairman, Jeff Clarke, reveals his tech predictions for 2020, the start of what Dell Technologies considers as the Next Data Decade!

 

Jeff Clarke : Tech Predictions For 2020 + Next Data Decade!

It’s hard to believe that we’re heading into the year 2020 – a year that many have marked as a milestone in technology. Autonomous cars lining our streets, virtual assistants predicting our needs and taking our requests, connected and intelligent everything across every industry.

When I stop to think about what has been accomplished over the last decade – it’s quite remarkable.  While we don’t have fully autonomous cars zipping back and forth across major freeways with ease, automakers are getting closer to deploying autonomous fleets in the next few years.

Many of the every-day devices, systems and applications we use are connected and intelligent – including healthcare applications, industrial machines and financial systems – forming what is now deemed as “the edge.”

At the root of all that innovation and advancement are massive amounts of data and compute power, and the capacity across edge, cloud and core data center infrastructure to put data through its paces. And with the amount of data coming our way in the next 10 years – we can only imagine what the world around us will look like in 2030, with apps and services we haven’t even thought of yet.

2020 marks the beginning of what we at Dell Technologies are calling the Next Data Decade, and we are no doubt entering this era with new – and rather high – expectations of what technology can make possible for how we live, work and play. So what new breakthroughs and technology trends will set the tone for what’s to come over the next 10 years? Here are my top predictions for the year ahead.

2020 proves it’s time to keep IT simple

We’ve got a lot of data on our hands…big data, metadata, structured and unstructured data – data living in clouds, in devices at the edge, in core data centers…it’s everywhere. But organisations are struggling to ensure the right data is moving to the right place at the right time. They lack data visibility – the ability for IT teams to quickly access and analyse the right data – because there are too many systems and services woven throughout their IT infrastructure. As we kick off 2020, CIOs will make data visibility a top IT imperative because, after all, data is what makes the flywheel of innovation spin.

We’ll see organisations accelerate their digital transformation by simplifying and automating their IT infrastructure and consolidating systems and services into holistic solutions that enable more control and clarity. Consistency in architectures, orchestration and service agreements will open new doors for data management – and that ultimately gives data the ability to be used as part of AI and Machine Learning to fuel IT automation. And all of that enables better, faster business outcomes that the innovation of the next decade will thrive on.

Cloud co-existence sees rolling thunder

The idea that public and private clouds can and will co-exist becomes a clear reality in 2020. Multi-cloud IT strategies supported by hybrid cloud architectures will play a key role in ensuring organisations have better data management and visibility, while also ensuring that their data remains accessible and secure. In fact, IDC predicted that by 2021, over 90% of enterprises worldwide will rely on a mix of on-premises/dedicated private clouds, several public clouds, and legacy platforms to meet their infrastructure needs.

But private clouds won’t simply exist within the heart of the data center. As 5G and edge deployments continue to rollout, private hybrid clouds will exist at the edge to ensure the real-time visibility and management of data everywhere it lives.

That means organisations will expect more of their cloud and service providers to ensure they can support their hybrid cloud demands across all environments. Further, we’ll see security and data protection become deeply integrated as part of hybrid cloud environments, notably where containers and Kubernetes continue to gain momentum for app development. Bolting security measures onto cloud infrastructure will be a non-starter…it’s got to be inherently built into the fiber of the overall data management strategy, from edge to core to cloud.

What you get is what you pay for

One of the biggest hurdles for IT decision makers driving transformation is resources. CapEx and OpEx can often be limiting factors when trying to plan and predict compute and consumption needs for the year ahead…never mind the next three to five years. SaaS and cloud consumption models have increased in adoption and popularity, providing organisations with the flexibility to pay for what they use, as they go.

In 2020, flexible consumption and as-a-service options will accelerate rapidly as organisations seize the opportunity to transform into software-defined and cloud-enabled IT. As a result – they’ll be able to choose the right economic model for their business to take advantage of end-to-end IT solutions that enable data mobility and visibility, and crunch even the most intensive AI and Machine Learning workloads when needed.

“The Edge” rapidly expands into the enterprise

The “Edge” continues to evolve – with many working hard to define exactly what it is and where it exists. Once limited to the Internet of Things (IoT), the edge now touches nearly every system, application and service – and the people and places that use them. The edge is going to keep expanding, with enterprise organisations leading the way and delivering the IT infrastructure to support it.

5G connectivity is creating new use cases and possibilities for healthcare, financial services, education and industrial manufacturing. As a result, SD-WAN and software-defined networking solutions become a core thread of a holistic IT infrastructure solution – ensuring massive data workloads can travel at speed, and securely, between edge, core and cloud environments. Open networking solutions will prevail over proprietary ones, as organisations recognise that the only way to successfully manage and secure data for the long haul is with the flexibility and agility that only open, software-defined networking can deliver.

Intelligent devices change the way you work and collaborate

PC innovation continues to push new boundaries every year – screens are more immersive and bigger than ever, yet the form factor becomes smaller and thinner. But more and more, it’s what is running at the heart of that PC that is more transformational than ever. Software applications that use AI and machine learning create systems that now know where and when to optimise power and compute based on your usage patterns. With biometrics, PCs know it’s you from the moment you gaze at the screen. And now, AI and machine learning applications are smart enough to give your system the ability to dial up the sound and colour based on the content you’re watching or the game you’re playing.

Over the next year, these advancements in AI and machine learning will turn our PCs into even smarter and more collaborative companions. They’ll have the ability to optimise power and battery life for our most productive moments – and even become self-sufficient machines that can self-heal and self-advocate for repair – reducing the burden on the user and of course, reducing the number of IT incidents filed. That’s a huge increase in happiness and productivity for both the end users and the IT groups that support them.

Innovating with integrity, sourcing sustainably

Sustainable innovation will continue to take center stage, as organisations like ours want to ensure that the impact they have on the world doesn’t come at a dangerous cost to the planet. Greater investments in reuse and recycling for closed-loop innovation will accelerate – hardware will become smaller and more efficient, and will be built with recycled and reclaimed goods – minimising e-waste and maximising existing materials. At Dell Technologies, we met our Legacy of Good 2020 goals ahead of schedule – so we’ve retired them and set new goals for 2030 : to recycle an equivalent product for every product a customer buys, to lead the circular economy with more than half of all product content being made from recycled or renewable material, and to use 100% recycled or renewable material in all packaging.

As we enter the Next Data Decade, I’m optimistic and excited about what the future holds. The steps our customers will take in the next year to get the most out of their data will set forth new breakthroughs in technology that everyone will experience in some way – whether it’s a more powerful device, faster medical treatment, more accessible education, less waste and cleaner air. And before we know it, we’ll be looking forward to what the following 10 years will have in store.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Here Are The Top 10 Tech Trends In 2020 From Alibaba!

Alibaba, specifically its research institute – the Alibaba DAMO Academy – just published their top 10 tech trends for 2020.

Here are the highlights from those top 10 tech trends that they are predicting will go big in 2020!

 

Here Are The Top 10 Tech Trends In 2020 From Alibaba!

Tech Trend #1 : AI Evolves From Perceptual Intelligence To Cognitive Intelligence

Artificial intelligence has reached or surpassed humans in areas of perceptual intelligence such as speech-to-text, natural language processing and video understanding; but in the field of cognitive intelligence – which requires external knowledge, logical reasoning, or domain migration – it is still in its infancy.

Cognitive intelligence will draw inspiration from cognitive psychology, brain science, and human social history, combined with techniques such as cross-domain knowledge graphs, causal inference, and continuous learning, to establish effective mechanisms for the stable acquisition and expression of knowledge.

These will enable machines to understand and utilise knowledge, achieving key breakthroughs from perceptual intelligence to cognitive intelligence.

Tech Trend #2 : In-Memory Computing Addresses Memory Wall Challenge In AI Computing

In the Von Neumann architecture, memory and processor are separate, and computation requires data to be moved back and forth between them.

With the rapid development of data-driven AI algorithms in recent years, it has come to a point where the hardware becomes the bottleneck in the explorations of more advanced algorithms.

In Processing-in-Memory (PIM) architecture, in contrast to the Von Neumann architecture, memory and processor are fused together and computations are performed where data is stored with minimal data movement.

As such, computation parallelism and power efficiency can be significantly improved. We believe the innovations on PIM architecture are the tickets to next-generation AI.

Tech Trend #3 : Industrial IoT Will Power Digital Transformation

In 2020, 5G, the rapid development of IoT devices, cloud computing and edge computing will accelerate the fusion of information systems, communication systems, and industrial control systems.

Through advanced Industrial IoT, manufacturing companies can achieve automation of machines, in-factory logistics, and production scheduling, as a way to realise C2B smart manufacturing.

In addition, interconnected industrial systems can adjust and coordinate the production capability of both upstream and downstream vendors.

Ultimately, this will significantly increase manufacturers’ productivity and profitability. For manufacturers whose production goods are worth hundreds of trillions of RMB, a 5-10% productivity increase would translate into additional trillions of RMB.

Tech Trend #4 : Large-Scale Collaboration Between Machines Becomes Possible

Traditional single-agent intelligence cannot meet the real-time perception and decision-making needs of large-scale intelligent devices.

The development of collaborative IoT sensing technology and 5G communication technology will enable collaboration among multiple agents – machines cooperating, and competing, with one another to complete target tasks.

The group intelligence that emerges from the cooperation of multiple intelligent agents will further amplify the value of the intelligent system:

  • large-scale intelligent traffic light dispatching will realise dynamic and real-time adjustment,
  • warehouse robots will work together to complete cargo sorting more efficiently,
  • driverless cars can perceive the overall traffic conditions on the road, and
  • group unmanned aerial vehicle (UAV) collaboration will get through the last-mile delivery more efficiently.

Tech Trend #5 : Modular Chiplet Design Makes Chips Easier & Faster To Create

The traditional model of chip design cannot efficiently respond to the fast-evolving, fragmented and customised needs of chip production.

Open source SoC chip design based on RISC-V, high-level hardware description languages, and IP-based modular chip design methods have accelerated the development of agile design methodologies and the open source chip ecosystem.

In addition, the modular design method based on chiplets uses advanced packaging methods to package the chiplets with different functions together, which can quickly customise and deliver chips that meet specific requirements of different applications.

Tech Trend #6 : Large Scale Blockchain Applications Will Gain Mass Adoption

BaaS (Blockchain-as-a-Service) will further reduce the barriers of entry for enterprise blockchain applications.

A variety of hardware chips embedded with core algorithms used in edge, cloud and designed specifically for blockchain will also emerge, allowing assets in the physical world to be mapped to assets on blockchain, further expanding the boundaries of the Internet of Value and realising “multi-chain interconnection”.

In the future, a large number of innovative blockchain application scenarios with multi-dimensional collaboration across different industries and ecosystems will emerge, and large-scale production-grade blockchain applications with more than 10 million DAI (Daily Active Items) will gain mass adoption.

Tech Trend #7 : A Critical Period Before Large-Scale Quantum Computing

In 2019, the race in reaching “Quantum Supremacy” brought the focus back to quantum computing. The demonstration, using superconducting circuits, boosts the overall confidence on superconducting quantum computing for the realisation of a large-scale quantum computer.

In 2020, the field of quantum computing will receive increasing investment, which comes with intensified competition. The field is also expected to see a speed-up in industrialisation and the gradual formation of an ecosystem.

In the coming years, the next milestones will be the realisation of fault-tolerant quantum computing and the demonstration of quantum advantages in real-world problems. Either is a great challenge given present knowledge. Quantum computing is entering a critical period.

Tech Trend #8 : New Materials Will Revolutionise Semiconductor Devices

Under the pressure of both Moore’s Law and the explosive demand for computing power and storage, it is difficult for classic Si-based transistors to sustain the development of the semiconductor industry.

Until now, major semiconductor manufacturers have had no clear answer or option for chips beyond 3 nm. New materials will enable new logic, storage, and interconnection devices through new physical mechanisms, driving continuous innovation in the semiconductor industry.

For example, topological insulators and two-dimensional superconducting materials that can achieve lossless transport of electrons and spin can become the basis for new high-performance logic and interconnect devices, while new magnetic materials and new resistive switching materials can realise high-performance magnetic memory such as SOT-MRAM, as well as resistive memory.

Tech Trend #9 : Growing Adoption Of AI Technologies That Protect Data Privacy

The compliance costs imposed by recent data protection laws and regulations related to data transfer are higher than ever before.

In light of this, there have been growing interests in using AI technologies to protect data privacy. The essence is to enable the data user to compute a function over input data from different data providers while keeping those data private.

Such AI technologies promise to solve the problems of data silos and the lack of trust in today’s data sharing practices, and will truly unleash the value of data in the foreseeable future.

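One of the simplest building blocks behind such privacy-preserving computation is additive secret sharing, sketched below in Python. Each provider splits its private value into random shares that individually reveal nothing, yet the aggregator can still recover the exact sum. This is a textbook toy with made-up parties and values, not any specific vendor's protocol.

    import secrets

    PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

    def share(value, n_parties):
        # Split `value` into n random shares that sum to it mod PRIME.
        shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    # Three hospitals each secret-share a private patient count.
    private_inputs = [120, 345, 78]
    all_shares = [share(v, 3) for v in private_inputs]

    # Each party sums the shares it receives -- each sum is still random noise.
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

    # Combining the partial sums reveals only the total, never the inputs.
    print(sum(partial_sums) % PRIME)  # 543 = 120 + 345 + 78
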
Tech Trend #10 : Cloud Becomes The Center Of IT Innovation

With the ongoing development of cloud computing technology, the cloud has grown far beyond the scope of IT infrastructure, and gradually evolved into the center of all IT technology innovations.

Cloud has close relationship with almost all IT technologies, including new chips, new databases, self-driving adaptive networks, big data, AI, IoT, blockchain, quantum computing and so forth.

Meanwhile, it creates new technologies, such as serverless computing, cloud-native software architecture, software-hardware integrated design, as well as intelligent automated operation.

Cloud computing is redefining every aspect of IT, making new IT technologies more accessible for the public. Cloud has become the backbone of the entire digital economy.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


NVIDIA TensorRT 7 with Real-Time Conversational AI!

NVIDIA just launched TensorRT 7, introducing the capability for Real-Time Conversational AI!

Here is a primer on the NVIDIA TensorRT 7, and the new real-time conversational AI capability!

 

NVIDIA TensorRT 7 with Real-Time Conversational AI

NVIDIA TensorRT 7 is their seventh-generation inference software development kit. It introduces the capability for real-time conversational AI, opening the door for human-to-AI interactions.

TensorRT 7 features a new deep learning compiler designed to automatically optimise and accelerate the increasingly complex recurrent and transformer-based neural networks needed for AI speech applications.

This boosts the performance of conversational AI components by more than 10X, compared to running them on CPUs. This drives down the latency below the 300 millisecond (0.3 second) threshold considered necessary for real-time interactions.

 

TensorRT 7 Targets Recurrent Neural Networks

TensorRT 7 is designed to speed up AI models that make predictions on time-series and sequence data using recurrent neural networks (RNNs).

RNNs are used not only for conversational AI speech networks; they also help with arrival time planning for cars and satellites, prediction of events in electronic medical records, financial asset forecasting and fraud detection.

The use of RNNs has hitherto been limited to a few companies with the talent and manpower to hand-optimise the code to meet real-time performance requirements.

With TensorRT 7’s new deep learning compiler, developers now have the ability to automatically optimise these neural networks to deliver the best possible performance and lowest latencies.

The new compiler also optimises transformer-based models like BERT for natural language processing.

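To give a feel for the workflow, here is a hedged Python sketch of compiling a trained model into an optimised inference engine with the TensorRT Python API. The calls follow the TensorRT 7-era API – exact names and signatures vary between releases – and the ONNX file name is a hypothetical placeholder.

    import tensorrt as trt  # NVIDIA TensorRT Python bindings

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    def build_engine(onnx_path):
        # Parse an ONNX model and compile it into a TensorRT engine.
        builder = trt.Builder(TRT_LOGGER)
        # Explicit-batch networks are required for ONNX models.
        flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
        network = builder.create_network(flags)
        parser = trt.OnnxParser(network, TRT_LOGGER)

        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None

        config = builder.create_builder_config()
        config.max_workspace_size = 1 << 30        # 1 GB of scratch space
        if builder.platform_has_fast_fp16:
            config.set_flag(trt.BuilderFlag.FP16)  # reduced precision, if supported
        # Layer fusion and kernel auto-tuning happen inside this call.
        return builder.build_engine(network, config)

    engine = build_engine("bert_model.onnx")  # hypothetical model file
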
 

TensorRT 7 Availability

NVIDIA TensorRT 7 will be made available in the coming days, free of charge, to members of the NVIDIA Developer program for development and deployment.

The latest versions of plug-ins, parsers and samples are also available as open source from the TensorRT GitHub repository.

 

Recommended Reading

Go Back To > Software | Business | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


NVIDIA DRIVE Deep Neural Networks : Access Granted!

NVIDIA just announced that they will be providing the transportation industry access to their NVIDIA DRIVE Deep Neural Networks (DNNs) for autonomous vehicle development! Here are the details!

 

NVIDIA DRIVE Deep Neural Networks : Access Granted!

To accelerate the adoption of NVIDIA DRIVE by the transportation industry for autonomous vehicle development, NVIDIA is providing access to the NVIDIA DRIVE Deep Neural Networks.

What this means is that autonomous vehicle developers will now be able to access all of NVIDIA’s pre-trained AI models and training code, and use them to improve their self-driving systems.

Using AI is central to the development of safe, self-driving cars. AI lets autonomous vehicles perceive and react to obstacles and potential dangers, or even changes in their surroundings.

Powering every self-driving car are dozens of Deep Neural Networks (DNNs) that tackle redundant and diverse tasks, to ensure accurate perception, localisation and path planning.

These DNNs cover tasks like traffic light and sign detection, object detection for vehicles, pedestrians and bicycles, and path perception, as well as gaze detection and gesture recognition within the vehicle.

 

Advanced NVIDIA DRIVE Tools

In addition to providing access to their DRIVE DNNs, NVIDIA also made available a suite of advanced NVIDIA DRIVE tools.

These NVIDIA DRIVE tools allow autonomous vehicle developers to customise and enhance the NVIDIA DRIVE DNNs using their own datasets and target feature set.

  • Active Learning improves model accuracy and reduces data collection costs by automating data selection using AI, rather than manual curation.
  • Federated Learning lets developers utilise datasets across countries, and collaborate with other developers, while maintaining data privacy and protecting their own intellectual property (see the sketch after this list).

  • Transfer Learning gives NVIDIA DRIVE customers the ability to speed up development of their own perception software by leveraging NVIDIA’s own autonomous vehicle development.
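
To make the federated learning idea concrete, here is a minimal federated-averaging (FedAvg-style) sketch in Python. Each client fits a model on data that never leaves it and shares only the resulting weights, which a coordinator then averages. This is a generic textbook illustration, not NVIDIA's implementation.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, steps=10):
        # One client's training pass (linear model, squared loss).
        # The raw data (X, y) never leaves the client.
        w = weights.copy()
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    rng = np.random.default_rng(42)
    true_w = np.array([3.0, -1.0])

    # Two developers' private datasets, generated and kept locally.
    datasets = []
    for _ in range(2):
        X = rng.normal(size=(100, 2))
        y = X @ true_w + rng.normal(0, 0.1, 100)
        datasets.append((X, y))

    global_w = np.zeros(2)
    for _ in range(20):
        # Each client trains locally; only the weights are shared.
        client_ws = [local_update(global_w, X, y) for X, y in datasets]
        # The coordinator averages the weights (FedAvg).
        global_w = np.mean(client_ws, axis=0)

    print(np.round(global_w, 2))  # close to [ 3. -1.]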

 

Recommended Reading

Go Back To > Automotive | Business | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

At GTC China 2019, DiDi announced that they will adopt NVIDIA GPUs and AI technologies to develop self-driving cars, as well as their cloud computing solutions.

 

DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

This announcement comes after DiDi spun off their autonomous driving unit as an independent company in August 2019.

In their announcement, DiDi confirmed that they will use NVIDIA technologies in both their data centres and onboard their self-driving cars :

  • NVIDIA GPUs will be used to train machine learning algorithms in the data center
  • NVIDIA DRIVE will be used for inference in their Level 4 self-driving cars

NVIDIA DRIVE will fuse data from all types of sensors – cameras, LIDAR, radar, etc – and use numerous deep neural networks (DNNs) to understand the surrounding area, so the self-driving car can plan a safe way forward.

Those DNNs will require prior training in the data centre, using machine learning algorithms running on NVIDIA GPU servers.

Recommended : NVIDIA DRIVE AGX Orin for Autonomous Vehicles Revealed!

 

DiDi Cloud Computing Will Use NVIDIA Tech Too

DiDi also announced that DiDi Cloud will adopt and launch new vGPU (virtual GPU) cloud servers based on NVIDIA GPUs.

The new vGPU licence mode will offer more affordable and flexible GPU cloud computing services for remote computing, rendering and gaming.

 

Recommended Reading

Go Back To > Automotive | Business | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel oneAPI Unified Programming Model Overview!

At Supercomputing 2019, Intel unveiled their oneAPI initiative for heterogeneous computing, promising to deliver a unified programming experience for developers.

Here is an overview of the Intel oneAPI unified programming model, and what it means for programmers!

 

The Need For Intel oneAPI

The modern computing environment is now a lot less CPU-centric, with the greater adoption of GPUs, FPGAs and custom-built accelerators (like the Alibaba Hanguang 800).

Their different scalar, vector, matrix and spatial architectures require different APIs and code bases, which complicates attempts to utilise a mix of those capabilities.

 

Intel oneAPI For Heterogeneous Computing

Intel oneAPI promises to change all that, offering a unified programming model for those different architectures.

It allows developers to create workloads and applications for multiple architectures on their platform of choice, without the need to develop and maintain separate code bases, tools and workflow.

Intel oneAPI comprises two components – the open industry initiative, and the Intel oneAPI beta toolkit :

oneAPI Initiative

This is a cross-architecture development model based on industry standards, and an open specification, to encourage broader adoption.

Intel oneAPI Beta Toolkit

This beta toolkit offers the Intel oneAPI specification components with direct programming (Data Parallel C++), API-based programming with performance libraries, advanced analysis and debug tools.

Developers can test code and workloads in the Intel DevCloud for oneAPI on multiple Intel architectures.

 

What Processors + Accelerators Are Supported By Intel oneAPI?

The beta Intel oneAPI reference implementation currently supports these Intel platforms :

  • Intel Xeon Scalable processors
  • Intel Core and Atom processors
  • Intel processor graphics (as a proxy for future Intel discrete data centre GPUs)
  • Intel FPGAs (Intel Arria, Stratix)

The oneAPI specification is designed to support a broad range of CPUs and accelerators from multiple vendors. However, it is up to those vendors to create their own oneAPI implementations and optimise them for their own hardware.

 

Are oneAPI Elements Open-Sourced?

Many oneAPI libraries and components are already open source, or soon will be.

 

What Companies Are Participating In The oneAPI Initiative?

According to Intel, more than 30 vendors and research organisations support the oneAPI initiative, including CERN openlab, SAP and the University of Cambridge.

Companies that create their own implementation of oneAPI and complete a self-certification process will be allowed to use the oneAPI initiative brand and logo.

 

Available Intel oneAPI Toolkits

At the time of its launch (17 November 2019), here are the toolkits that Intel has made available for developers to download and use :

Intel oneAPI Base Toolkit (Beta)

This foundational kit enables developers of all types to build, test, and deploy performance-driven, data-centric applications across CPUs, GPUs, and FPGAs. It comes with :

  • Intel oneAPI Data Parallel C++ Compiler
  • Intel Distribution for Python
  • Multiple optimized libraries
  • Advanced analysis and debugging tools

Domain Specific oneAPI Toolkits for Specialised Workloads :

  • oneAPI HPC Toolkit (beta) : Deliver fast C++, Fortran, OpenMP, and MPI applications that scale.
  • oneAPI DL Framework Developer Toolkit (beta) : Build deep learning frameworks or customize existing ones.
  • oneAPI IoT Toolkit (beta) : Build high-performing, efficient, reliable solutions that run at the network’s edge.
  • oneAPI Rendering Toolkit (beta) : Create high-performance, high-fidelity visualization applications.

Additional Toolkits, Powered by oneAPI

  • Intel AI Analytics Toolkit (beta) : Speed AI development with tools for DL training, inference, and data analytics.
  • Intel Distribution of OpenVINO Toolkit : Deploy high-performance inference applications from device to cloud.
  • Intel System Bring-Up Toolkit (beta) : Debug and tune systems for power and performance.

You can download all of those toolkits here.

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Dell Forecasts The Future of Connected Living In 2030!

Dell Technologies just shared with us the key findings from their research that explore the future of connected living by the year 2030!

Find out how emerging technologies will transform how our lives will change by the year 2030!

 

Dell On The Future of Connected Living In 2030!

Dell Technologies conducted their research in partnership with the Institute for the Future (IFTF) and Vanson Bourne, surveying 1,100 business leaders across ten countries in Asia Pacific and Japan.

Let’s take a look at their key findings, and find out why they believe the future is brimming with opportunity thanks to emerging technologies.

 

Technological Shifts Transforming The Future By 2030

IFTF and a forum of global experts forecast that emerging technologies like edge computing, 5G, AI, Extended Reality (XR) and IoT will create these five major shifts in society :

1. Networked Reality

Over the next decade, the line between the virtual and the real will vanish. Cyberspace will become an overlay on top of our existing reality as our digital environment extends beyond televisions, smartphones and other displays.

This transformation will be driven by the deployment of 5G networks that enable high bandwidth, low-latency connections for streaming, interactive services, and multi-user media content.

2. Connected Mobility and Networked Matter

The vehicles of tomorrow will essentially be mobile computers, with the transportation system resembling the packet-switched networks that power the Internet.

We will trust them to take us where we need to go in the physical world as we interact in the virtual spaces available to us wherever we are.

3. From Digital Cities to Sentient Cities

More than half of the world’s population lives in urban areas. This will increase to 68% over the next three decades, according to the United Nations.

This level of growth presents both huge challenges and great opportunities for businesses, governments and citizens.

Cities will quite literally come to life through their own networked infrastructure of smart objects, self-reporting systems and AI-powered analytics.

4. Agents and Algorithms

Our 2030 future will see everyone supported by a highly personalised “operating system for living” that is able to anticipate our needs and proactively support our day-to-day activities to free up time.

Such a Life Operating System (Life OS) will be context-aware, anticipating our needs and behaving proactively.

Instead of interacting with different apps today, the intelligent agent of the future will understand what you need and liaise with various web services, other bots and networked objects to get the job done.

5. Robots with Social Lives

Within 10 years, we will have personal robots that will become our partners in life – enhancing our skills and extending our abilities.

In some cases, they will replace us, but this can mean freeing us to do the things we are good at, and enjoy.

In most cases, they can become our collaborators, helping to crowdsource innovations and accelerate progress through robot social networks.

 

Preparing For The Future Of Connected Living By 2030

Anticipating Change

Many businesses in APJ are already preparing for these shifts, with business leaders expressing these perceptions :

  • 80% (82% in Malaysia) will restructure the way they spend their time by automating more tasks
  • 70% (83% in Malaysia) welcome people partnering with machines/robots to surpass our human limitations
  • More than half of businesses anticipate Networked Reality becoming commonplace
    – 63% (67% in Malaysia) say they welcome day-to-day immersion in virtual and augmented realities
    – 62% (63% in Malaysia) say they welcome people being fitted with brain computer interfaces

Navigating Challenges

These technological shifts are seismic in nature, leaving people and organisations grappling with change. Organisations that want to harness these emerging technologies will need to collect, process and make use of the data, while addressing public concerns about data privacy.

APJ business leaders are already anticipating some of these challenges :

  • 78% (88% in Malaysia) will be more concerned about their own privacy by 2030 than they are today
  • 74% (83% in Malaysia) consider data privacy to be a top societal-scale challenge that must be solved
  • 49% (56% in Malaysia) would welcome self-aware machines
  • 49% (43% in Malaysia) call for regulation and clarity on how AI is used
  • 84% (85% in Malaysia) believe that digital transformation should be more widespread throughout their organisation

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel Nervana NNP-I1000 PCIe + M.2 Cards Revealed!

The new Intel Nervana NNP-I1000 neural network processor comes in PCIe and M.2 card options designed for AI inference acceleration.

Here is EVERYTHING you need to know about the Intel Nervana NNP-I1000 PCIe and M.2 card options!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

Recommended : Intel Nervana AI Accelerators : Everything You Need To Know!

 

Intel Nervana NNP-I1000

The Intel Nervana NNP-I1000 is optimised for multi-modal inferencing of near-real-time, high-volume compute.

Each Nervana NNP-I1000 features 12 Inference Compute Engines (ICE), which are paired with two Intel CPU cores, a large on-die 75 MB SRAM cache and an on-die Network-on-Chip (NoC).

It offers mixed-precision support, with a special focus on low-precision applications for near-real-time performance.

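To see why that low-precision focus matters, here is a generic symmetric int8 quantisation sketch in Python – a common textbook scheme, not necessarily the exact one the NNP-I uses. FP32 tensors are mapped onto 8-bit integers with a per-tensor scale, quartering memory traffic at the cost of a small rounding error.

    import numpy as np

    def quantize_int8(x):
        # Symmetric per-tensor quantisation : map [-max, +max] to [-127, 127].
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    activations = np.random.default_rng(1).normal(0, 1, 1000).astype(np.float32)
    q, scale = quantize_int8(activations)

    # 4x smaller than FP32, with only a small reconstruction error.
    print(q.nbytes, "bytes vs", activations.nbytes)
    print("max error :", np.abs(dequantize(q, scale) - activations).max())
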
Like the NNP-T, the NNP-I comes with a full software stack that is built with open components, including direct integration with deep learning frameworks.

 

Intel Nervana NNP-I1000 Models

The Nervana NNP-I1000 comes in an M.2 form factor or a PCI Express card, to accommodate exponentially larger and more complex models, or to run dozens of models and networks in parallel.

Specifications | Intel Nervana NNP-I1100 | Intel Nervana NNP-I1300
Form Factor | M.2 Card | PCI Express Card
Compute | 1 x Intel Nervana NNP-I1000 | 2 x Intel Nervana NNP-I1000
SRAM | 75 MB | 2 x 75 MB
Int8 Performance | Up to 50 TOPS | Up to 170 TOPS
TDP | 12 W | 75 W

 

Intel Nervana NNP-I1000 PCIe Card

This is what the Intel Nervana NNP-I1000 (also known as the NNP-I1300) PCIe card looks like :

 

Intel Nervana NNP-I1000 M.2 Card

This is what the Intel Nervana NNP-I1000 (also known as the NNP-I1100) M.2 card looks like :

 

Recommended Reading

Go Back To > Business + Enterprise | Home

 

Support Tech ARP!

If you like our work, you can help support our work by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Intel Nervana NNP-T1000 PCIe + Mezzanine Cards Revealed!

The new Intel Nervana NNP-T1000 neural network processor comes in PCIe and Mezzanine card options designed for AI training acceleration.

Here is EVERYTHING you need to know about the Intel Nervana NNP-T1000 PCIe and Mezzanine card options!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

Recommended : Intel Nervana AI Accelerators : Everything You Need To Know!

 

Intel Nervana NNP-T1000

The Intel Nervana NNP-T1000 is not only capable of training even the most complex deep learning models, it is also highly scalable – offering near-linear scaling and efficiency.

By combining compute, memory and networking capabilities in a single ASIC, it allows for maximum efficiency with flexible and simple scaling.

Each Nervana NNP-T1000 is powered by up to 24 Tensor Processing Clusters (TPCs), and comes with 16 bi-directional Inter-Chip Links (ICL).

Its TPC supports 32-bit floating point (FP32) and brain floating point (bfloat16) formats, allowing for multiple deep learning primitives with maximum processing efficiency.

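The appeal of bfloat16 is that it keeps FP32's full 8-bit exponent – and therefore its dynamic range – while cutting the mantissa from 23 bits to 7. A bfloat16 value is essentially the upper 16 bits of the corresponding FP32 bit pattern, as this Python sketch shows (plain truncation is used here for simplicity; real hardware typically rounds).

    import numpy as np

    def to_bfloat16(values):
        # bfloat16 = sign (1) + exponent (8, same as FP32) + mantissa (7),
        # i.e. the upper 16 bits of the FP32 bit pattern.
        bits = np.asarray(values, dtype=np.float32).view(np.uint32)
        return (bits & np.uint32(0xFFFF0000)).view(np.float32)

    samples = np.array([3.14159265, 1e30, 1e-30], dtype=np.float32)
    print(to_bfloat16(samples))
    # Very large and very small magnitudes stay in range (same exponent
    # span as FP32); only the low-order significant digits are lost.
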
Its high-speed ICL communication fabric allows for near-linear scaling, directly connecting multiple NNP-T cards within servers, between servers and even inside and across racks.

  • High compute utilisation using Tensor Processing Clusters (TPC) with bfloat16 numeric format
  • Both on-die SRAM and on-package High-Bandwidth Memory (HBM) keep data local, reducing movement
  • Its Inter-Chip Links (ICL) glueless fabric architecture and fully-programmable router achieves near-linear scaling across multiple cards, systems and PODs
  • Available in PCIe and OCP Open Accelerator Module (OAM) form factors
  • Offers a programmable Tensor-based instruction set architecture (ISA)
  • Supports common open-source deep learning frameworks like TensorFlow, PaddlePaddle and PyTorch

 

Intel Nervana NNP-T1000 Models

The Intel Nervana NNP-T1000 is currently available in two form factors – a dual-slot PCI Express card, and a OAM Mezzanine Card, with these specifications :

Specifications        | Intel Nervana NNP-T1300                 | Intel Nervana NNP-T1400
Form Factor           | Dual-slot PCIe Card                     | OAM Mezzanine Card
Compliance            | PCIe CEM                                | OAM 1.0
Compute Cores         | 22 TPCs                                 | 24 TPCs
Frequency             | 950 MHz                                 | 1,100 MHz
SRAM                  | 55 MB on-chip, with ECC                 | 60 MB on-chip, with ECC
Memory                | 32 GB HBM2, with ECC                    | 32 GB HBM2, with ECC
Memory Speed          | 2.4 Gbps per pin (300 MB/s per pin)     | 2.4 Gbps per pin (300 MB/s per pin)
Inter-Chip Link (ICL) | 16 x 112 Gbps (448 GB/s bi-directional) | 16 x 112 Gbps (448 GB/s bi-directional)
ICL Topology          | Ring                                    | Ring, Hybrid Cube Mesh, Fully Connected
Multi-Chassis Scaling | Yes                                     | Yes
Multi-Rack Scaling    | Yes                                     | Yes
I/O to Host CPU       | PCIe Gen3 / Gen4 x16                    | PCIe Gen3 / Gen4 x16
Thermal Solution      | Passive, Integrated                     | Passive Cooling
TDP                   | 300 W                                   | 375 W
Dimensions            | 265.32 mm x 111.15 mm                   | 165 mm x 102 mm

 

Intel Nervana NNP-T1000 PCIe Card

This is what the Intel Nervana NNP-T1000 (also known as the NNP-T1300) PCIe card looks like :

 

Intel Nervana NNP-T1000 OAM Mezzanine Card

This is what the Intel Nervana NNP-T1000 (also known as NNP-T1400) Mezzanine card looks like :

 



Intel Nervana AI Accelerators : Everything You Need To Know!

Intel just introduced their Nervana AI accelerators – the Nervana NNP-T1000 for training, and Nervana NNP-I1000 for inference!

Here is EVERYTHING you need to know about these two new Intel Nervana AI accelerators!

 

Intel Nervana Neural Network Processors

Intel Nervana neural network processors, NNPs for short, are designed to accelerate two key deep learning tasks – training and inference.

To target these two different tasks, Intel created two AI accelerator families – Nervana NNP-T that’s optimised for training, and Nervana NNP-I that’s optimised for inference.

They are both paired with a full software stack, developed with open components and deep learning framework integration.

 

Nervana NNP-T For Training

The Intel Nervana NNP-T1000 is not only capable of training even the most complex deep learning models, it is also highly scalable – offering near-linear scaling and efficiency.

By combining compute, memory and networking capabilities in a single ASIC, it allows for maximum efficiency with flexible and simple scaling.

Recommended : Intel NNP-T1000 PCIe + Mezzanine Cards Revealed!

Each Nervana NNP-T1000 is powered by up to 24 Tensor Processing Clusters (TPCs), and comes with 16 bi-directional Inter-Chip Links (ICL).

Its TPCs support 32-bit floating point (FP32) and brain floating point (bfloat16) formats, allowing them to run multiple deep learning primitives with maximum processing efficiency.

Its high-speed ICL communication fabric allows for near-linear scaling, directly connecting multiple NNP-T cards within servers, between servers and even inside and across racks.

  • High compute utilisation using Tensor Processing Clusters (TPC) with bfloat16 numeric format
  • Both on-die SRAM and on-package High-Bandwidth Memory (HBM) keep data local, reducing movement
  • Its Inter-Chip Links (ICL) glueless fabric architecture and fully-programmable router achieve near-linear scaling across multiple cards, systems and PODs
  • Available in PCIe and OCP Open Accelerator Module (OAM) form factors
  • Offers a programmable Tensor-based instruction set architecture (ISA)
  • Supports common open-source deep learning frameworks like TensorFlow, PaddlePaddle and PyTorch

 

Intel Nervana NNP-T Accelerator Models

The Intel Nervana NNP-T is currently available in two form factors – a dual-slot PCI Express card, and an OAM Mezzanine Card – with these specifications :

Specifications        | Intel Nervana NNP-T1300                 | Intel Nervana NNP-T1400
Form Factor           | Dual-slot PCIe Card                     | OAM Mezzanine Card
Compliance            | PCIe CEM                                | OAM 1.0
Compute Cores         | 22 TPCs                                 | 24 TPCs
Frequency             | 950 MHz                                 | 1,100 MHz
SRAM                  | 55 MB on-chip, with ECC                 | 60 MB on-chip, with ECC
Memory                | 32 GB HBM2, with ECC                    | 32 GB HBM2, with ECC
Memory Speed          | 2.4 Gbps per pin (300 MB/s per pin)     | 2.4 Gbps per pin (300 MB/s per pin)
Inter-Chip Link (ICL) | 16 x 112 Gbps (448 GB/s bi-directional) | 16 x 112 Gbps (448 GB/s bi-directional)
ICL Topology          | Ring                                    | Ring, Hybrid Cube Mesh, Fully Connected
Multi-Chassis Scaling | Yes                                     | Yes
Multi-Rack Scaling    | Yes                                     | Yes
I/O to Host CPU       | PCIe Gen3 / Gen4 x16                    | PCIe Gen3 / Gen4 x16
Thermal Solution      | Passive, Integrated                     | Passive Cooling
TDP                   | 300 W                                   | 375 W
Dimensions            | 265.32 mm x 111.15 mm                   | 165 mm x 102 mm

 

Nervana NNP-I For Inference

The Intel Nervana NNP-I1000, on the other hand, is optimised for near-real-time, high-volume, multi-modal inferencing.

Each Nervana NNP-I1000 features 12 Inference Compute Engines (ICE), which are paired with two Intel CPU cores, a large on-die 75 MB SRAM cache and an on-die Network-on-Chip (NoC).

Recommended : Intel NNP-I1000 PCIe + M.2 Cards Revealed!


It offers mixed-precision support, with a special focus on low-precision applications for near-real-time performance.
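
Intel has not released NNP-I sample code either, but the low-precision idea is easy to demonstrate with stock PyTorch's dynamic quantization – a rough sketch of the concept, not the NNP-I toolchain :

```python
import torch
import torch.nn as nn

# A toy FP32 model, standing in for a production inference model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Dynamic quantization converts the Linear weights to int8 and quantizes
# activations on the fly - the same low-precision trade-off the NNP-I
# exploits, here on a plain CPU backend.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    fp32_out = model(x)
    int8_out = quantized(x)

# The outputs differ slightly: low precision gives up a little accuracy
# in exchange for much higher throughput per watt.
print((fp32_out - int8_out).abs().max())
```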

Like the NNP-T, the NNP-I comes with a full software stack that is built with open components, including direct integration with deep learning frameworks.

Intel Nervana NNP-I Accelerator Models

The NNP-I1000 comes in a 12 W M.2 form factor, or a 75 W PCI Express card – to accommodate larger and more complex models, or to run dozens of models and networks in parallel.

Specifications   | Intel Nervana NNP-I1100     | Intel Nervana NNP-I1300
Form Factor      | M.2 Card                    | PCI Express Card
Compute          | 1 x Intel Nervana NNP-I1000 | 2 x Intel Nervana NNP-I1000
SRAM             | 75 MB                       | 2 x 75 MB
Int8 Performance | Up to 50 TOPS               | Up to 170 TOPS
TDP              | 12 W                        | 75 W

 



NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

On 7 November 2019, NVIDIA introduced the Jetson Xavier NX – the world’s smallest AI supercomputer designed for robotics and embedded computing applications at the edge!

Here is EVERYTHING you need to know about the new NVIDIA Jetson Xavier NX!

 

NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

At just 70 x 45 mm, the new NVIDIA Jetson Xavier NX is smaller than a credit card. Yet it delivers server-class AI performance at up to 21 TOPS, while consuming as little as 10 watts of power.

Short for Nano Xavier, the NX is a low-power version of the Xavier SoC that came out on top in the MLPerf Inference benchmarks.

Recommended : NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

With its small size and low power draw, it opens up the possibility of adding AI edge computing capabilities to small commercial robots, drones, industrial IoT systems, network video recorders and portable medical devices.

The Jetson Xavier NX can be configured to deliver up to 14 TOPS at 10 W, or 21 TOPS at 15 W. It is powerful enough to run multiple neural networks in parallel, and process data from multiple high-resolution sensors simultaneously.
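
On shipping Jetson boards, such preset power profiles are selected with NVIDIA's nvpmodel utility. Here is a minimal sketch; note that the mode IDs below are assumptions – they differ between boards and JetPack releases, so check /etc/nvpmodel.conf on the actual device :

```python
import subprocess

# Jetson boards expose preset power profiles through the nvpmodel utility.
# The mode IDs below are illustrative assumptions - verify them against
# /etc/nvpmodel.conf on your device before use.
MODE_15W = "0"   # assumed : 15 W profile (up to 21 TOPS)
MODE_10W = "1"   # assumed : 10 W profile (up to 14 TOPS)

def set_power_mode(mode: str) -> None:
    subprocess.run(["sudo", "nvpmodel", "-m", mode], check=True)
    # Query the active mode to confirm the switch took effect.
    subprocess.run(["sudo", "nvpmodel", "-q"], check=True)

set_power_mode(MODE_15W)
```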

The NVIDIA Jetson Xavier NX runs on the same CUDA-X AI software architecture as all other Jetson processors, and is supported by the NVIDIA JetPack software development kit.

It is pin-compatible with the Jetson Nano, offering up to 15X higher performance than the Jetson TX2 in a smaller form factor.

It will not be available for a few more months, but developers can begin development today using the Jetson AGX Xavier Developer Kit, with a software patch to emulate the Jetson Xavier NX.

 

NVIDIA Jetson Xavier NX Specifications

Specifications | NVIDIA Jetson Xavier NX
CPU            | NVIDIA Carmel – 6 x Arm 64-bit cores, 6 MB L2 + 4 MB L3 caches
GPU            | NVIDIA Volta – 384 CUDA cores, 48 Tensor cores, 2 NVDLA cores
AI Performance | 21 TOPS at 15 watts / 14 TOPS at 10 watts
Memory Support | 128-bit LPDDR4x-3200 – up to 8 GB, 51.2 GB/s
Video Support  | Encoding : up to 2 x 4K30 streams / Decoding : up to 2 x 4K60 streams
Camera Support | Up to 6 CSI cameras (32 via virtual channels), up to 12 lanes (3x4 or 6x2) MIPI CSI-2
Connectivity   | Gigabit Ethernet
OS Support     | Ubuntu-based Linux
Module Size    | 70 x 45 mm (same form factor as the Jetson Nano)

 

NVIDIA Jetson Xavier NX Price + Availability

The NVIDIA Jetson Xavier NX will be available in March 2020 from NVIDIA’s distribution channels, priced at US$399.

 



NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

The MLPerf Inference 0.5 benchmarks are officially released today, with NVIDIA declaring that they aced them for both datacenter and edge computing workloads.

Find out how well NVIDIA did, and why it matters!

 

The MLPerf Inference Benchmarks

MLPerf Inference 0.5 is the industry’s first independent suite of five AI inference benchmarks.

Applied across a range of form factors and four inference scenarios, the new MLPerf Inference Benchmarks test the performance of established AI applications like image classification, object detection and translation.

 

NVIDIA Wins MLPerf Inference Benchmarks For Datacenter + Edge

Thanks to the programmability of its computing platforms to cater to diverse AI workloads, NVIDIA was the only company to submit results for all five MLPerf Inference Benchmarks.

According to NVIDIA, their Turing GPUs topped all five benchmarks for both datacenter scenarios (server and offline) among commercially-available processors.

Meanwhile, their Jetson Xavier scored highest among commercially-available edge and mobile SoCs under both edge-focused scenarios – single stream and multi-stream.
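
Official MLPerf results are produced with the MLPerf LoadGen harness, but the single-stream idea – one query at a time, scored on a high-percentile latency rather than the average – can be illustrated with a simple timing loop. A rough sketch, with `infer` standing in for any model :

```python
import time
import statistics

def measure_single_stream(infer, sample, runs=1000, warmup=50):
    """Crude stand-in for MLPerf's single-stream scenario: issue one
    query at a time and record per-query latency."""
    for _ in range(warmup):
        infer(sample)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(sample)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    # MLPerf single-stream scores a high-percentile latency rather than
    # the mean, so one slow outlier cannot be averaged away.
    return {
        "mean_ms": 1000 * statistics.mean(latencies),
        "p90_ms": 1000 * latencies[int(0.9 * len(latencies))],
    }

# Usage with any callable model, e.g. :
# print(measure_single_stream(lambda x: model(x), sample_tensor))
```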

The new NVIDIA Jetson Xavier NX that was announced today is a low-power version of the Xavier SoC that won the MLPerf Inference 0.5 benchmarks.

All of NVIDIA’s MLPerf Inference Benchmark results were achieved using NVIDIA TensorRT 6 deep learning inference software.

 



Samsung – IBM AI IoT Cloud Platform For 5G Mobile Solutions!

At the Samsung Developer Conference 2019, Samsung and IBM announced a joint platform that leverages Samsung Galaxy devices and IBM cloud technologies to introduce new 5G, AI-powered mobile solutions!

Here is what you need to know about this new Samsung-IBM AI IoT cloud platform, and the 5G AI-powered mobile solutions it’s powering for governments and enterprises.

 

Samsung – IBM AI IoT Cloud Platform For 5G Mobile Solutions!

Built using IBM Cloud technologies and Samsung Galaxy mobile devices, the new platform will help improve the work environment for employees in high-stress or high-risk occupations.

This will help reduce the risks to public employees who work in dangerous and high-stress situations – critical, given that nearly 3 million deaths occur each year due to occupational accidents.

This new, unnamed Samsung-IBM platform will help governments and enterprises track their employees’ vitals, including heart rate and physical activity. This will allow them to determine if an employee is in distress and requires help.

 

The Samsung – IBM AI IoT Cloud Platform In Use

5G mobile solutions based on the new Samsung-IBM AI IoT platform are being piloted by multiple police forces, to monitor the health of their officers in real time and provide situational awareness insights to first responders and their managers.

The platform can track, in real time, the safety and wellness indicators of first responders equipped with Samsung Galaxy Watches and Galaxy smartphones with 5G connectivity.

It can instantly alert emergency managers if there is a significant change in the safety parameters, which may indicate the first responder is in danger of a heart attack, heat exhaustion or other life-threatening events.

This allows them to anticipate potential dangers, and quickly send assistance. This should greatly reduce the risk of death and injuries to their employees.
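
Neither Samsung nor IBM has published the platform's API, so the following is a purely hypothetical sketch of the kind of rule such a system might evaluate on each incoming reading – every name and threshold here is invented for illustration :

```python
from dataclasses import dataclass

# Hypothetical reading from a Galaxy Watch - neither Samsung nor IBM
# has published the platform's actual API or data schema.
@dataclass
class VitalsReading:
    officer_id: str
    heart_rate_bpm: int
    body_temp_c: float

def is_in_distress(r: VitalsReading) -> bool:
    """Toy rule: flag readings outside crude safety bands. A real system
    would baseline each wearer and model trends, not fixed thresholds."""
    return (r.heart_rate_bpm > 160 or r.heart_rate_bpm < 40
            or r.body_temp_c > 39.5)

def on_reading(r: VitalsReading) -> None:
    if is_in_distress(r):
        # Stand-in for paging the emergency manager over the 5G link.
        print(f"ALERT: officer {r.officer_id} may need assistance")

on_reading(VitalsReading("unit-7", heart_rate_bpm=172, body_temp_c=38.9))
```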

 



Key NVIDIA EGX Announcements @ MWC Los Angeles 2019!

At MWC Los Angeles 2019, NVIDIA announced major partnerships on their EGX edge computing platform with Microsoft, Ericsson and Red Hat to accelerate AI, IoT and 5G at the Edge.

Catch the official NVIDIA EGX briefing on these new updates, and find out what it means for edge computing!

 

The Official NVIDIA EGX Briefing @ MWC Los Angeles 2019

Before the official announcement, NVIDIA gave us an exclusive briefing on their EGX edge computing platform. We are sharing it with you, so you can hear it directly from Justin Boitano, General Manager of NVIDIA’s Enterprise & Edge Computing division.

Here is a summary of what NVIDIA unveiled at MWC Los Angeles 2019 :

NVIDIA EGX Early Adopters

NVIDIA actually announced EGX at Computex Taipei in June 2019 – a combination of the NVIDIA CUDA-X software and NVIDIA-certified GPU servers and devices.

Now, they are announcing that Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East have adopted the EGX platform, and so have the cities of San Francisco and Las Vegas.

  • Samsung Electronics — The Korean electronics giant is using AI at the edge for highly complex semiconductor design and manufacturing processes.
  • BMW — The German automaker is using intelligent video analytics in its South Carolina manufacturing facility to automate inspection. With EGX gathering data from multiple cameras and other sensors in inspection lines, BMW is helping ensure only the highest quality automobiles leave the factory floor.
  • NTT East — The Japanese telecom services giant is using EGX in its data centers to develop new AI-powered services in remote areas through its broadband access network. Using the EGX platform, NTT East will provide remote populations the computing power and connectivity required to build and deploy a wide range of AI applications at the edge.
  • Procter & Gamble — The world’s leading consumer goods company is working with NVIDIA to develop AI-enabled applications on top of the EGX platform for the inspection of products and packaging to help ensure they meet the highest safety and quality standards. P&G is using NVIDIA EGX to analyze thousands of hours of footage from inspection lines and immediately flag imperfections.
  • Las Vegas — The city is using EGX to capture vehicle and pedestrian data to ensure safer streets and expand economic opportunity. Las Vegas plans to use the data to autonomously manage signal timing and other operational capabilities.
  • San Francisco — The city’s Union Square Business Improvement District is using EGX to capture real-time pedestrian counts for local retailers, providing them a powerful business intelligence tool for engaging with their customers more effectively.

NVIDIA EGX Case Study : Walmart

Walmart is a pioneer user of EGX, deploying it in its Levittown, New York, Intelligent Retail Lab — a fully-operational grocery store where it’s exploring the ways AI can further improve in-store shopping experiences.

Using EGX’s advanced AI and edge capabilities, Walmart is able to compute in real time more than 1.6 terabytes of data generated each second, and can use AI to :

  • automatically alert associates to restock shelves,
  • open up new checkout lanes,
  • retrieve shopping carts, and
  • ensure product freshness in meat and produce departments.

NVIDIA EGX Partnership With Microsoft

NVIDIA announced a collaboration with Microsoft to enable closer integration between Microsoft Azure and the NVIDIA EGX platform, to advance edge-to-cloud AI computing capabilities for their clients.

The NVIDIA Metropolis video analytics application framework, which runs on EGX, has been optimized to work with Microsoft’s Azure IoT Edge, Azure Machine Learning solutions and a new form factor of the Azure Data Box Edge appliance powered by NVIDIA T4 GPUs.

In addition, NVIDIA-certified off-the-shelf servers — optimised to run Azure IoT Edge and ML services — are now available from more than a dozen leading OEMs, including Dell, Hewlett Packard Enterprise and Lenovo.

NVIDIA EGX Partnership With Ericsson

NVIDIA also announced that they are collaborating with Ericsson on developing virtualised RAN technologies.

Their ultimate goal is to commercialise those virtualised RAN technologies to deliver 5G networks with flexibility and shorter time-to-market for new services like augmented reality, virtual reality and gaming.

NVIDIA EGX Partnership With Red Hat

Finally, NVIDIA announced a collaboration with Red Hat to deliver software-defined 5G wireless infrastructure running on Red Hat OpenShift to the telecom industry.

Their customers will be able to use NVIDIA EGX and Red Hat OpenShift to deploy NVIDIA GPUs to accelerate AI, data science and machine learning at the edge.

The critical element enabling 5G providers to move to cloud-native infrastructure is NVIDIA Aerial. This software developer kit, also announced today, allows providers to build and deliver high-performance, software-defined 5G wireless RAN by delivering two essential advancements.

They are a low-latency data path directly from Mellanox network interface cards to GPU memory, and a 5G physical layer signal-processing engine that keeps all data within the GPU’s high-performance memory.


 



NVIDIA EGX Presentation Slides From MWC Los Angeles 2019!

Here is the full set of NVIDIA EGX presentation slides from MWC Los Angeles 2019 for your perusal :

 



The Alibaba Hanguang 800 (含光 800) AI NPU Explained!

At the Apsara Computing Conference 2019, Alibaba Group unveiled details of their first AI inference NPU – the Hanguang 800 (含光 800).

Here is EVERYTHING you need to know about the Alibaba Hanguang 800 AI inference NPU!

Updated @ 2019-09-27 : Added more details, including a performance comparison against its main competitors.

Originally posted @ 2019-09-25

 

What Is The Alibaba Hanguang 800?

The Alibaba Hanguang 800 is a neural processing unit (NPU), specifically designed to accelerate machine learning and AI inference tasks.

 

What Does Hanguang Mean?

The name 含光 (Hanguang) literally means “contains light”.

While the name may suggest that it uses photonics, that light-based technology is still at least a decade from commercialisation.

 

What Are The Hanguang 800 Specifications?

Not much is known about the Hanguang 800, other than that it has 17 billion transistors, and is fabricated on the 12 nm process technology.

Also, it is designed for inferencing only, unlike the HUAWEI Ascend 910 AI chip which can handle both training and inference.

Recommended : 3rd Gen X-Dragon Architecture by Alibaba Cloud Explained!

 

Who Designed The Hanguang 800?

The Hanguang 800 was developed by Alibaba’s research unit, T-Head, over a period of 7 months, followed by a 3-month tape-out.

T-Head, whose Chinese name is Pingtouge (平头哥) – honey badger in English – is responsible for designing chips for cloud and edge computing under Alibaba Cloud / Aliyun.

Earlier this year, T-Head revealed a high-performance IoT processor called XuanTie 910.

Based on the open-source RISC-V instruction set, the 16-core XuanTie 910 is targeted at heavy-duty IoT applications like edge servers, networking gateways, and self-driving automobiles.

 

How Fast Is Hanguang 800?

Alibaba claims that the Hanguang 800 “largely” outpaces the industry average performance, with image processing efficiency about 12X better than GPUs :

  • Single chip performance : 78,563 images per second (IPS)
  • Computational efficiency : 500 IPS per watt (Resnet-50 Inference Test)
Specifications              | Hanguang 800 | Habana Goya | Cambricon MLU270 | NVIDIA T4    | NVIDIA P4
Fab Process                 | 12 nm        | 16 nm       | 16 nm            | 12 nm        | 16 nm
Transistors                 | 17 billion   | NA          | NA               | 13.6 billion | 7.2 billion
Performance (ResNet-50)     | 78,563 IPS   | 15,433 IPS  | 10,000 IPS       | 5,402 IPS    | 1,721 IPS
Peak Efficiency (ResNet-50) | 500 IPS/W    | 150 IPS/W   | 143 IPS/W        | 78 IPS/W     | 52 IPS/W
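
A few lines of arithmetic put these numbers in context – dividing peak throughput by peak efficiency implies a peak power draw of roughly 157 W, and the efficiency lead over NVIDIA's cards works out to between 6X and 10X :

```python
# Sanity-checking Alibaba's numbers from the table above.
chips = {
    "Habana Goya":      (15_433, 150),  # (ResNet-50 IPS, IPS per watt)
    "Cambricon MLU270": (10_000, 143),
    "NVIDIA T4":        (5_402, 78),
    "NVIDIA P4":        (1_721, 52),
}
hg_ips, hg_ipw = 78_563, 500

# Peak throughput divided by efficiency implies the peak power draw.
print(f"Implied Hanguang 800 power draw: {hg_ips / hg_ipw:.0f} W")  # ~157 W

for name, (ips, ipw) in chips.items():
    print(f"vs {name}: {hg_ips / ips:.1f}x throughput, "
          f"{hg_ipw / ipw:.1f}x efficiency")
```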

Recommended : 2nd Gen EPYC – Everything You Need To Know Summarised!

 

Where Will Hanguang 800 Be Used?

The Hanguang 800 chip will be used exclusively by Alibaba to power their own business operations, especially in product search and automatic translation, personalised recommendations and advertising.

According to Alibaba, merchants upload a billion product images to Taobao every day. It used to take their previous platform an hour to categorise those pictures, and then tailor search and personalise recommendations for millions of Taobao customers.

With Hanguang 800, they claim that the Taobao platform now takes just 5 minutes to complete the task – a 12X reduction in time!

Alibaba Cloud will also be using it in their smart city projects. They are already using it in Hangzhou, where they previously used 40 GPUs to process video feeds with a latency of 300 ms.

After migrating to four Hanguang 800 NPUs, they were able to process the same video feeds with half the latency – just 150 ms.

 

Can We Buy Or Rent The Hanguang 800?

No, Alibaba will not be selling the Hanguang 800 NPU. Instead, they are offering it as a new AI cloud computing service.

Developers can now make a request for a Hanguang 800 cloud compute quota, which Alibaba Cloud claims is 100% more cost-effective than traditional GPUs.

 

Are There No Other Alternatives For Alibaba?

In our opinion, this is Alibaba’s way of preparing for an escalation of the US-Chinese trade war that has already savaged HUAWEI.

While Alibaba certainly has a few AI inference accelerator alternatives – from AMD and NVIDIA, for example – it makes sense for them to spend the money and time developing their own AI inference chip.

In the long term, the Chinese government wants to build a domestic capability to design and fabricate their own computer chips for national security reasons.

Recommended : The HUAWEI Trump Ban – Everything You Need To Know!

 



Kambyan ManUsIA + AleX Laser Cutting Drone Technology!

Kambyan Network recently invited us to a demonstration of their AleX laser cutting drone, which is designed to harvest oil palm fruits.

They also invited David Cirulli from the Embry-Riddle Aeronautical University, and Associate Professor Sagaya Amalathas from Taylor’s University, to talk about the ManUsIA digital agriculture technology and the future jobs available to young teens today.

 

Kambyan ManUsIA Digital Agriculture

Kambyan Network has been working with David Cirulli of the Embry-Riddle Aeronautical University in Singapore to develop what they call the ManUsIA digital agriculture technology.

Manusia is actually a Malay word for human, and it is an apt moniker because according to David Cirulli, ManUsIA stands for Man Using Intelligent Applications.

ManUsIA is a digital agriculture platform that Kambyan is developing as a SPaaS (Solution Platform as a Service) offering to improve yield and reduce manpower in agriculture.

It combines drones with artificial intelligence and machine learning capabilities on the cloud, using surveillance data and weather information to maximise yield and reduce manpower requirements for dirty, difficult and dangerous jobs.

ManUsIA will start with drones that are remotely controlled through mobile devices, and Kambyan eventually hopes to integrate intelligent drones that work autonomously.

 

Future Jobs For Teens Today

Kambyan also invited Associate Professor Dr. Sagaya Amalathas, a Programme Director at Taylor’s University, to talk about future jobs that teens today should consider.

She points out that the future will be highly dependent on new digital skills in the areas of Big Data Analytics and Artificial Intelligence, as well as Blockchain technology, and the Internet of Things.

She also shared some really useful information on which careers will remain stable in these fast-changing times, which jobs will be lost, and what new opportunities will arise.

 

Kambyan invited her because their training arm, Adroit College, offers a Drone Operator & Robotics course.

The Professional Certificate in Robotic Process Automation – Field Operations (RPA-FO) course combines a 5-week intensive workshop with an apprenticeship and internship program at Kambyan, allowing the student to graduate with a Professional Certificate in 11 months.

 

Kambyan AleX Laser Cutting Drone Demonstration

The star of the event was the Kambyan AleX laser cutting drone – the Airborne Laser Cutter Mark 1.

Designed to be a laser harvesting drone for the oil palm industry, it weighs 3 kilograms and is approximately 70 cm in diameter.

Powered by a 150 watt pulsed laser in the operational model, it is capable of cutting through 6 inches of plant material.

Piloted remotely by a drone operator in the current iteration, it will be used to trim the fronds of the oil palm trees and cut through the stem of oil palm fruit bunches to harvest them.

Using drones will not only reduce manpower requirements, it will also allow plantations to let their oil palm trees grow much taller, reducing the need to cut them down so often.

This will increase profit over the long term, while reducing the oil palm industry’s impact on the environment… in particular, its contribution to the slash-and-burn activity that results in terrible haze in Southeast Asia.

In the demo, they used a less powerful laser for safety reasons. But as this video shows, that itself is a danger!


Fortunately, the operational drone uses a much more powerful laser that can cut from a safer distance. This prevents the drone from getting hit by falling oil palm fruits or flying debris.

 



Microsoft + IDC : APAC Higher Education Can Double Innovation With AI!

Microsoft Asia and IDC Asia Pacific just released findings of a study which suggests that higher education institutions in APAC can double their rate of innovation with AI (artificial intelligence)!

 

APAC Higher Education Can Double Innovation With AI!

The Microsoft-IDC study – Future Ready Skills : Assessing APAC Education Sector’s Use of AI – found that AI (artificial intelligence) will help double the rate of innovation for higher education institutions.

This involves using AI to better manage student performance and enhance student engagement, while optimising operations to reduce the workload of faculty and administrative staff.

Based on the study, the top business drivers to adopt AI in higher education include better student engagement, higher funding, and accelerated innovation.

Institutions that have already adopted AI say that they are seeing improvements of 11% to 28% in those areas.

By 2021, Microsoft and IDC predict that institutions using AI will experience the biggest jump in funding – 3.7X, which is higher than most industry sectors in Asia Pacific.

 

AI In Higher Education Case Study

Developing a globally engaged citizenry is one of Japan’s key priorities. However, many students avoid studying abroad, as doing so can delay them from taking the classes they need to graduate.

The Faculty of Engineering at Hokkaido University, for example, has chosen to implement AI as part of its mission to encourage students to study abroad.

They developed a Microsoft Azure-based e-learning system that leverages AI and automation capabilities. This system lets students keep up with coursework back home, with course preparation streamlined from days to mere hours.

 

AI Skills Required For The Future Of Higher Education

Both education leaders and their staff are equally positive about the impact of AI on higher education jobs.

A majority (61%) of both segments believe that AI will either help them do their jobs better, or reduce repetitive tasks.

21% of education leaders and 13% of their staff also agree that AI will help create new jobs in higher education.

However, the requisite skills for an AI future are currently in short supply. The top three skills that education leaders believe will face a shortage in the next three years are :

  • IT skills and programming
  • Digital skills
  • Quantitative, analytical and statistical skills

The study also noted a disconnect between education leaders’ perception of their staff’s willingness to reskill, and reality.

26% of education leaders believe that their staff have no interest in reskilling but, in reality, only 11% of their staff said they had no interest in reskilling.

 



FREE AI Course For Jetson Nano – What You Need To Know!

NVIDIA just announced a FREE course on getting started with AI on the Jetson Nano!

Here is everything you need to know about this new Jetson Nano AI course – the first to be offered for FREE by the Deep Learning Institute!

 

The FREE AI Course For NVIDIA Jetson Nano

Looking to get started with AI, but don’t know how? The NVIDIA Deep Learning Institute has just published a new self-paced course that uses the newly released Jetson Nano Developer Kit to get up and running fast.

Best of all – this AI course for the NVIDIA Jetson Nano is FREE. This is the first Deep Learning Institute course to be offered for free.

In the course, students will learn to collect image data and use it to train, optimize, and deploy AI models for custom tasks like recognizing hand gestures, and image regression for locating a key point in an image. Specifically, you will learn how to :

  • Set up your Jetson Nano and camera
  • Collect image data for classification models
  • Annotate image data for regression models
  • Train a neural network on your data to create your own models
  • Run inference on the Jetson Nano with the models you create

Upon completion, you’ll be able to create your own deep learning classification and regression models with the Jetson Nano.

Some experience with Python is helpful but not required. You will need the NVIDIA Jetson Nano Developer Kit, of course.
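
The course is built on PyTorch, so the workflow it teaches looks roughly like the transfer-learning sketch below – our condensed approximation, not actual course material. The class count and dummy batch are placeholders for the images you would collect with the camera :

```python
import torch
import torch.nn as nn
from torchvision import models

# Take a pretrained backbone and retrain its head on your own images -
# the classification workflow the course walks through on the Nano.
NUM_CLASSES = 3  # e.g. three hand gestures you collected

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of collected images."""
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for frames captured from the CSI/USB camera.
print(train_step(torch.randn(8, 3, 224, 224),
                 torch.randint(0, NUM_CLASSES, (8,))))
```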

 

The FREE Jetson Nano AI Course Requirements

Duration : 8 hours

Prerequisites: Basic familiarity with Python (helpful, not required)

Tools, libraries, frameworks used: PyTorch, Jetson Nano

Certificate: Available

Assessment Type: Multiple-choice

Required Hardware

  • NVIDIA Jetson Nano Developer Kit
  • High-performance microSD card: 32GB minimum (NVIDIA tested and recommends this one)
  • 5V 4A power supply with 2.1mm DC barrel connector (NVIDIA tested and recommends this one)
  • 2-pin jumper: must be added to the Jetson Nano Developer Kit board to enable power from the barrel jack power supply (here’s an example)
  • Logitech C270 USB Webcam (NVIDIA tested and recommends this one). Alternate camera: Raspberry Pi Camera Module v2 (NVIDIA tested and recommends this one)
  • USB cable: Micro-B To Type-A with data enabled (NVIDIA tested and recommends this one)
  • A computer with an Internet connection and the ability to flash your microSD card

 

How To Sign Up For The FREE Jetson Nano AI Course

You can sign up for the free Jetson Nano AI course at this link – Getting Started with AI on Jetson Nano.

 


Why AI Digital Intuition Will Deliver Cyberimmunity By 2050!

In his first prediction for Earth 2050, Eugene Kaspersky believes that AI digital intuition will deliver cyberimmunity by 2050. Do YOU agree?

 

What Is Earth 2050?

Earth 2050 is a Kaspersky social media project – an open crowdsourced platform, where everyone can share their visions of the future.

So far, there are nearly 400 predictions from 70+ visionaries, including futurologist Ian Pearson, astrophysicist Martin Rees, venture capitalist Steven Hoffman, architect-engineer Carlo Ratti, writer James Kunstler and sci-fi writer David Brin.

Eugene himself dabbles in cyberdivination, and shares with us a future of cyberimmunity created by AI digital intuition!

 

Eugene Kaspersky : From Digital Intuition To Cyberimmunity!

In recent years, digital systems have moved up to a whole new level. No longer assistants making life easier for us mere mortals, they’ve become the basis of civilization — the very framework keeping the world functioning properly in 2050.

This quantum leap forward has generated new requirements for the reliability and stability of artificial intelligence. Although some cyberthreats still haven’t become extinct since the romantic era around the turn of the century, they’re now dangerous only to outliers who for some reason reject modern standards of digital immunity.

The situation in many ways resembles the fight against human diseases. Thanks to the success of vaccines, the terrible epidemics that once devastated entire cities in the twentieth century are a thing of the past.

 

However, that’s where the resemblance ends. For humans, diseases like the plague or smallpox have been replaced by new, highly resistant “post-vaccination” diseases; but for the machines, things have turned out much better.

This is largely because the initial designers of digital immunity made all the right preparations for it in advance. In doing so, what helped them in particular was borrowing the systemic approaches of living systems and humans.

One of the pillars of cyber-immunity today is digital intuition, the ability of AI systems to make the right decisions in conditions where the source data are clearly insufficient to make a rational choice.

But there’s no mysticism here: Digital intuition is merely the logical continuation of the idea of machine learning. When the number and complexity of related self-learning systems exceeds a certain threshold, the quality of decision-making rises to a whole new level — a level that’s completely elusive to rational understanding.

An “intuitive solution” results from the superimposition of the experience of a huge number of machine-learning models, much like the result of the calculations of a quantum computer.
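
In today's terms, that "superimposition of experience" is essentially ensembling. A toy sketch shows the effect Kaspersky is gesturing at: many barely-better-than-chance models, superimposed, become a confident one :

```python
import random

# A playful rendering of "digital intuition": no single model is
# trustworthy, but the superimposition of many noisy opinions is.
random.seed(42)

def weak_model(threat: bool) -> bool:
    """Each model detects a threat only slightly better than chance."""
    return random.random() < (0.55 if threat else 0.45)

def ensemble_verdict(threat: bool, n_models: int = 1001) -> bool:
    """Superimpose the opinions of many weak models via majority vote."""
    votes = sum(weak_model(threat) for _ in range(n_models))
    return votes > n_models / 2

trials = 500
correct = sum(ensemble_verdict(True) for _ in range(trials))
print(f"Ensemble accuracy: {correct / trials:.1%}")  # far above any single model's 55%
```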

So, as you can see, it has been digital intuition – with its ability to instantly and correctly respond to unknown challenges – that has helped build the digital security standards of this new era.

 



FIVE Dell AI Experience Zones Launched Across APJ!

In partnership with Intel, Dell Technologies announced the launch of five Dell AI Experience Zones across the APJ region!

Here is a quick primer on the new Dell AI Experience Zones, and what they mean for organisations in the APJ region!

 

The APJ Region – Ripe For Artificial Intelligence

According to the Dell Technologies Digital Transformation Index, Artificial Intelligence (AI) will be amongst the top spending priorities for business leaders in APJ.

Half of those surveyed plan to invest in AI in the next one to three years, as part of their digital transformation strategy. However, 95% of companies face a lack of in-house expertise in AI.

This is where the five new Dell AI Experience Zones come in…

 

The Dell AI Experience Zones

The new AI Experience Zones are designed to offer both customers and partners a comprehensive look at the latest AI technologies and solutions.

Built into the existing Dell Technologies Customer Solution Centres, they will showcase how the Dell EMC High-Performance Computing (HPC) and AI ecosystem can help them address business challenges and seize opportunities.

All five AI Experience Zones are equipped with technology demonstrations built around the latest Dell EMC PowerEdge servers. Powered by the latest Intel Xeon Scalable processors, they are paired with advanced, open-source AI software like OpenVINO, as well as Dell EMC networking and storage technologies.

Customers and partners who choose to leverage the new AI Experience Zones will receive help in kickstarting their AI initiatives, from design and AI expert engagements, to masterclass training, installation and maintenance.

“The timely adoption of AI will create new opportunities that will deliver concrete business advantages across all industries and business functions,” says Chris Kelly, vice president, Infrastructure Solutions Group, Dell Technologies, APJ.

“Companies looking to thrive in a data-driven era need to understand that investments in AI are no longer optional – they are business critical. Whilst complex in nature, it is imperative that companies quickly start moving from theoretical AI strategies to practical deployments to stay ahead of the curve.”

 

Dell AI Experience Zones In APJ

The five new AI Experience Zones that Dell Technologies and Intel announced are located within the Dell Technologies Customer Solution Centres in these cities :

  • Bangalore
  • Seoul
  • Singapore
  • Sydney
  • Tokyo

 



The Human-Machine Partnership by Erik Brynjolfsson + Rana el Kaliouby

At Dell Technologies World 2019, we were lucky enough to snag a seat at the talk on human-machine partnership by MIT Professor Erik Brynjolfsson, and MIT alumna and Affectiva CEO, Rana el Kaliouby.

We managed to record the incredibly insightful session for everyone who could not make it for this exclusive guru session. This is a video you must not miss!

 

The DTW 2019 Guru Sessions

One of the best reasons to attend Dell Technologies World 2019 is the guru sessions. If you are lucky enough to reserve a seat, you will have the opportunity to listen to some of the world’s most brilliant thinkers and doers.

 

The Human-Machine Partnership

The talk on human-machine partnership by Professor Brynjolfsson and Ms. Rana was the first of several guru sessions at Dell Technologies World 2019.

Entitled “How Emerging Technologies & Human Machine Partnerships Will Transform the Economy“, it focused on how technology changed human society, and what the burgeoning efforts in artificial intelligence will mean for humanity.

Here are the key points from their guru session on the human-machine partnership :

Erik Brynjolfsson (00:05 to 22:05) on the Human-Machine Partnership

  • You cannot replace old technologies with new technologies, without rethinking the organisation or institution.
  • We are now undergoing a triple revolution
    – a rebalancing of mind and machine through Big Data and Artificial Intelligence
    – a shift from products to (digital) platforms
    – a shift from the core to crowd-based decision making
  • Shifting to data-driven decision-making based on Big Data results in higher productivity and greater profitability.
  • Since 2015, computers have been able to recognise objects better than humans, thanks to rapid advances in machine learning.
  • Machine-based speech recognition has also been as accurate as humans since 2017.
  • While new AI capabilities are opening up new possibilities in many fields, they are also drastically reducing or eliminating the need for humans.
  • Unlike platforms of the past, the new digital platforms leverage “two-sided networks”. In many cases, one side of the network is used to subsidise the other, or make it free-to-use.
  • Shifting to crowd-based decision-making introduces diversity in the ways of thinking, gaining new perspectives and breakthroughs in problem-solving.
  • Digital innovations have greatly expanded the economy, but it doesn’t mean that everyone will benefit. In fact, there has been a great decoupling between the productivity and median income of the American worker in the past few decades.

Rana el Kaliouby (22:08 to 45:05) on the Human-Machine Partnership

  • Human communication is mostly conveyed indirectly – 93% is non-verbal. Half of that is facial expressions and gestures; the other half is vocal intonation.
  • Affectiva has the world’s largest emotion repository, with 5 billion frames of 8 million faces from 87 countries.
  • Facial expressions are largely universal, but Affectiva needs diversity in its data to avoid bias in its models. For example, there are gender differences that vary by culture.
  • They use computer vision, machine learning and deep learning to create an Emotional AI model that learns from all those facial expressions to accurately determine a person’s emotions.
  • Emotional artificial intelligence has many real-world or potential uses
    – detecting dangerous driving, allowing for proactive measures to be taken
    – personalising the ride in a future robot-taxi or autonomous car
    – the creation of more engaging and effective social robots in retail and hospitality industries
    – help autistic children understand how facial expressions correspond to emotions, and learn social cues.

 

Erik Brynjolfsson + Rana el Kaliouby

Professor Erik Brynjolfsson wears many hats. He is currently :

  • Professor at the MIT Sloan School of Management,
  • Director of the MIT Initiative on the Digital Economy,
  • Director of the MIT Center for Digital Business, and
  • Research Associate at the National Bureau of Economic Research

Rana el Kaliouby was formerly a computer scientist at MIT, helping to form their Autism & Communication Technology Initiative. She currently serves as CEO of Affectiva, a spin-off from MIT’s Media Lab that focuses on emotion recognition technology.

 



Frontier Supercomputer From AMD + Cray Is World’s Fastest!

AMD and Cray just unveiled the Frontier supercomputer, which will deliver exascale performance! Here is a primer on the world’s fastest supercomputer!

 

The Frontier Supercomputer – Designed By Cray, Powered By AMD

AMD announced that it is joining Cray, the U.S. Department of Energy and Oak Ridge National Laboratory to develop the Frontier supercomputer. It will be the fastest in the world, delivering exascale performance.

Developed at a cost of over US$600 million, the Frontier supercomputer will deliver over 1.5 exaflops of processing power when it comes online in the year 2021!

AMD Contributions To The Frontier Supercomputer

AMD is not just a provider of hardware – the CPUs and GPUs – for the Frontier supercomputer. Here is what AMD will contribute to the project :

  • Experience in High Performance Computing (HPC) and Artificial Intelligence (AI)
  • Custom AMD EPYC CPU
  • Purpose-built Radeon Instinct GPU
  • High Bandwidth Memory (HBM)
  • Tightly integrated 4:1 GPU to CPU ratio
  • Custom, high speed coherent Infinity Fabric connection
  • Enhanced, open ROCm programming environment for AMD CPUs and GPUs support

 

Frontier Supercomputer And The Future Of Exascale Computing

With the development of the Frontier supercomputer, AMD and Cray will usher in a new era of exascale computing. It will lay the foundation for advanced, high-performance artificial intelligence (AI), analytics and simulation.

The use of this supercomputer by the U.S. Department of Energy will further push the limits of scientific discovery, for the U.S. and the world.



The Microsoft-IDC Report On AI Growth Potential For Malaysia

Microsoft and IDC Asia Pacific just unveiled the results of their latest study of the AI growth potential for Malaysia. Here is a video of their briefing, and a summary of their key findings!

 

The Microsoft-IDC Report On AI Growth Potential For Malaysia

The Microsoft-IDC report on AI growth potential for Malaysia is based on their 2018 survey of 100 business leaders and 100 workers in Malaysia.

Presenting the key findings were K Raman, Managing Director of Microsoft Malaysia, and Jun-Fwu Chin, Research Director for IDC Asia Pacific Datacenter Group.

Increased Innovation + Productivity

Titled “Future Ready Business: Assessing Asia Pacific’s Growth Potential Through AI“, it revealed that Artificial Intelligence (AI) will almost double the rate of innovation, and boost employee productivity by 60%, by 2021.

Low Uptake Of AI So Far

Even though 70% of the business leaders surveyed believe that AI is instrumental for their organisation’s competitiveness, only 26% of organisations in Malaysia have begun their AI initiatives.


The Top 5 Reasons For Adopting AI

For those companies who have already started their AI initiatives, these are their top 5 reasons :

  • Better customer engagements (31%)
  • Higher competitiveness (31%)
  • Accelerated innovation (12%)
  • Improved efficiency (12%)
  • More productive employees (8%)

Initial Results Of AI Initiatives

For those companies, their AI initiatives have resulted in tangible improvements of between 17% and 34% in those 5 areas. They forecast a further boost of 60% to 130% over a three-year horizon.

Malaysia Not Prepared

The study also evaluated the six dimensions critical to developing Malaysia’s AI growth potential, and found them wanting. In particular, Malaysia is weak in data and investments.

Top Three Challenges

Business leaders who are already adopting AI cited these three top challenges in realising their companies’ AI growth potential :

  • Lack of thought leadership and commitment to invest in AI
  • Lack of skills, resources and continuous learning programs
  • Lack of advanced analytics or infrastructure and tools to develop actionable insights

Leaders + Workers Are Positive About AI

The study also found that 67% of business leaders and 64% of workers in Malaysia are positive about AI’s impact on the future of jobs.

In addition, the study claims that workers are MORE optimistic than business leaders about AI creating jobs, rather than replacing them!

 

Editor’s Note : We find the high favourability by workers to be highly questionable, and have requested more information about the type of workers surveyed by IDC.

It is possible that the workers they surveyed are high-level executives who see AI as a useful tool that will enhance their jobs, rather than the job killers that many low-level executives and blue-collar workers are worried about.

 



The 2019 Dell EMC Global Data Protection Index Summarised!

The 2019 Dell EMC Global Data Protection Index is out! Here is a summary of its key findings!

 

The 2019 Dell EMC Global Data Protection Index

The 2019 Dell EMC Global Data Protection Index is the third survey conducted by Dell EMC in collaboration with Vanson Bourne.

The survey involved 2,200 IT decision makers from public and private organisations (of 250+ employees) across 18 countries and 11 industries. It was designed to reveal the state of data protection in the Asia Pacific and Japan region.

 

What Did The 2019 Data Protection Index Reveal?

The 2019 Dell EMC Global Data Protection Index revealed a large increase in the amount of data managed – from 1.68 petabytes in 2016 to a staggering 8.13 petabytes in 2018.

They also saw a corresponding increase in awareness about the value of data, with 90% of the respondents aware about the value of the data they manage. However, only 35% are monetising their data.

The Index also noted that despite an impressive jump in the number of data protection leaders (from 1% to 13%) and “adopters” (from 8% to 53%) since 2016, most of the survey respondents still face challenges in implementing the right data protection measures.

  • Organisations in Asia Pacific & Japan managed 8.13 PB of data in 2018 – an explosive growth of 384% compared to the 1.68 PB managed in 2016
  • 90% of businesses see the potential value of data but only 35% are monetising it
  • 94% face data protection challenges, and 43% struggle to find suitable data protection solutions for newer technologies like artificial intelligence and machine learning
  • More than a third (34%) of respondents are very confident that their data protection infrastructure is compliant with regional regulations, but only 18% believe their data protection solutions will meet all future challenges

 

The State Of Data Protection In APJ

Data disruptions and data loss happen more frequently in APJ organisations than the global average. Some 80% of the APJ respondents reported experiencing some type of disruption over the last 12 months.

This is higher than the global average of 76%. Even worse – 32% were unable to recover their data using existing data protection solutions.

Although system downtime is a problem, the loss of data is particularly expensive. On average, 20 hours of downtime cost businesses US$ 494,869. The average data loss of 2.04 terabytes, on the other hand, costs nearly twice as much at US$ 939,703.
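
Dividing those averages out makes the unit costs easier to compare :

```python
# Unit costs implied by the survey's APJ averages.
downtime_cost, downtime_hours = 494_869, 20
data_loss_cost, data_loss_tb = 939_703, 2.04

print(f"Downtime  : US$ {downtime_cost / downtime_hours:,.0f} per hour")
print(f"Data loss : US$ {data_loss_cost / data_loss_tb:,.0f} per terabyte")
# Downtime  : US$ 24,743 per hour
# Data loss : US$ 460,639 per terabyte
```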

 

Challenges To Data Protection In APJ

The vast majority of respondents (some 94%) report that they encounter at least one barrier to data protection. The top three challenges in APJ were determined to be :

  1. The inability to keep track of and protect all data because of growth of DevOps and cloud development – 46% agree
  2. The complexity of configuring and operating data protection software/hardware – 45.6% agree
  3. The lack of data protection solutions for emerging technologies – 43.4% agree

They also struggled to find adequate data protection solutions for newer technologies :

  • Artificial intelligence and machine learning data – 54% agree
  • Cloud-native applications – 49% agree
  • Internet of Things – 40% agree

 

Cloud Is Changing The Data Protection Landscape


According to the 2019 Dell EMC Global Data Protection Index, organisations have increased their use of public cloud services – up from 27% in 2016 to 41% in 2018.

Nearly all of those organisations (99%) using public cloud, are leveraging it as part of their data protection strategy. The top use case – backup or snapshot services to protect data and workloads.

More than 60% of the respondents also consider the scalability of data protection solutions important, in anticipation of the inevitable boom of cloud workloads.

 

Regulation Is Not A Key Concern

Compliance with data privacy regulations like the EU’s General Data Protection Regulation (GDPR) is not a key concern for most of these organisations. Only 36% listed it as a top data protection challenge.

 



These 2019 Imagine Cup Asia Winners Will Change The World

Asia has chosen its representative to the 2019 Imagine Cup World Championships – Team Caeli from India! They also honoured Team RailinNova from China and Team AidUSC from the Philippines. Congratulations!

 

What Is The Imagine Cup?

Held and sponsored by Microsoft since 2003, the Imagine Cup is the world’s premier student technology competition.

Teams of students from across the globe work together with mentors and industry leaders to bring their biggest and boldest ideas to life.

The five Imagine Cup judges evaluated the merits of the 12 top Asian teams in a three-step process :

  • a 3-minute product presentation
  • 5 minutes of hands-on time
  • a Q&A session

To avoid bias and group-think, the judges evaluated the teams independently, and even used an application to anonymously tabulate the results.

 

The 2019 Imagine Cup Asia Winners

The 2019 Imagine Cup Asia Champion : Team Caeli from India

Team Caeli won over the hearts of the judges with their automated anti-pollution and drug-delivery mask, specifically designed for patients suffering from asthma and other chronic respiratory conditions.

Their face mask has breakthrough features that will significantly improve the quality of life for patients with respiratory issues living in polluted areas.

Team Caeli will receive USD15,000 and head to the 2019 Imagine Cup World Championship, which will be held in Seattle in May.

If they win the World Championship, they will win USD100,000 in cash, a USD50,000 Azure grant, and a mentoring session with Microsoft CEO Satya Nadella!

1st Runner-Up : Team RailinNova from China

Team RailinNova developed the Rail Component Inspection Robot, which determines and identifies rail defects through multi-sensor monitoring. The solution strives to help railway companies to solve any issues more efficiently, and more economically.

For their excellence, Team RailinNova will receive a USD5,000 cash prize.

2nd Runner-Up : Team AidUSC from the Philippines

Team AidUSC came up with Aqua Check, a water contamination mobile application that enables detection of contamination by taking a photo of a water sample through a microscope. It won them third place, and a USD1,000 cash prize.

 

Special Mention : The Other 2019 Imagine Cup Asia Teams

Having watched all 12 teams in action, we must point out that all 12 Asian teams were truly exceptional. They impressed us not only with their depth of knowledge, but also with their poise and ability to think out-of-the-box in the conception and execution of their projects.

We also admire how their projects were created with social impact in mind. This gives us great hope in the new generation of entrepreneurs, who are seeking to improve society as a whole. We are all winners with young entrepreneurs of their caliber.

 



How 2019 Samsung Smart TVs Will Change How We Interact With Our TV

The 2019 Samsung Smart TVs, announced at CES 2019, promise to change how we interact with our TVs.

Let’s take a look at the 3 key developments in the 2019 Samsung Smart TVs that will change the way you think about content and connectivity.

1. More Content With Apple Partnership

In March 2019, Samsung’s landmark partnership with Apple will come into effect. It will allow iOS devices like the iPhone and iPad to stream directly to the 2019 Samsung Smart TVs using AirPlay 2, without the need for an Apple TV or additional connections.

In addition, Samsung customers in over 100 countries will have exclusive access to the new Apple iTunes Movies and TV Shows app on Samsung Smart TVs, when it is launched.

2. New Bixby For 2019 Samsung Smart TVs

The 2019 Samsung Smart TVs will come with a New Bixby AI assistant, that will learn about your preferences from each interaction. This will allow the new Bixby to make informed suggestions about what to watch.

The improved Bixby assistant will cover a wider range, and can hear voices from a greater distance. As such, there is no longer a need to press the speaker button on your remote control to talk to Bixby.

3. Seamless Connectivity At Home

The 2019 Samsung Smart TVs will also act as connectivity hubs for your home. You will be able to control connected devices around your home with the SmartThings dashboard.

You can also create automated tasks to trigger certain actions when certain conditions are met. For example, you can set your air-conditioner in a room to automatically turn itself off whenever windows are kept open for a period of time.
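
Real SmartThings automations are built in the app or through the SmartThings Rules API, but the logic of that example is simple to sketch – every name below is invented for illustration :

```python
import time

# Purely illustrative - this is not the SmartThings API, just the shape
# of the window/air-conditioner rule described above.
class WindowAcRule:
    """Turn the air-conditioner off once a window has been open too long."""
    def __init__(self, grace_seconds: int = 600):
        self.grace_seconds = grace_seconds
        self.opened_at = None

    def on_window_event(self, is_open: bool, now: float) -> None:
        # Remember when the window opened; forget it when it closes.
        self.opened_at = now if is_open else None

    def should_turn_off_ac(self, now: float) -> bool:
        return (self.opened_at is not None
                and now - self.opened_at >= self.grace_seconds)

rule = WindowAcRule()
rule.on_window_event(is_open=True, now=time.time())
print(rule.should_turn_off_ac(now=time.time() + 700))  # True - cut the AC
```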

The 2019 Samsung Smart TVs will also let you remotely control and access your PCs, laptops, smartphones and tablets, using Remote Access. They can also interact seamlessly with Amazon Alexa and Google Assistant.
