Tag Archives: Robot

Did China Create This Amazing Female Dancing Robot?

Did China create this amazing female dancing robot for a dance show in Shanghai?

Watch the viral video for yourself, and find out what the FACTS really are!

 

Did China Create This Amazing Dancing Robot?

Here is the video that countless people have been sharing on social media and even on YouTube. It is often paired with a short explanation in (broken) English.

Can u believe that this female dancer is robot ? The dance was held in Shanghai n it lasted only 5 minutes but the queue was 4hrs n cost 499 Yuan!

It has surpassed Japan in complexity, and saw its perfect facial expressions. Send this video to everyone to watch, let us enjoy together…

 

Amazing Female Dancing Robot : The Truth

The agility and grace of this female dancing robot, and its facial expressions, are incredible.

The less gullible would point out that it is something you would only expect from a real human being, and they would be right.

Fact #1 : That Is A Video Of A WDSF Competition

That is actually a video clip from the 2018 World Dance Sport Federation Championship in Lithuania.

The initials WDSF (World Dance Sport Federation) can be seen, together with the recording date, at the upper left corner of the video.

Fact #2 : The Dance Only Lasted A Minute

The dance in the video lasted just over a minute, even though the accompanying fake message claimed it lasted for 5 minutes.

Fact #3 : Those Are Lithuanian Dance Champions

Both dancers are very human. In fact, they are Lithuanian dance champions Ieva Zukauskaite (female), and Evaldas Sodeika (male).

Fact #4 : 499 Yuan Was The Entry Price To Shanghai Disneyland

The fake story claimed that the ticket to watch this performance costs 499 Yuan – about US$73 / £55 / RM303. That would be ridiculously exorbitant for a short dance.

In truth, that was “copied” from the two earlier hoaxes (here and here) about robot dancers at the Shanghai Disneyland.

Even that was not accurate, because 499 Yuan was the cost of an entry ticket to the Shanghai Disneyland when it first opened in February 2016.

Since 6 June 2018, it has been priced at 399 Yuan (off-peak), 575 Yuan (peak) and 665 Yuan (peak holiday).

 

Why Would Someone Create This Fake Story?

With China’s aggressive foreign stance in recent years, it is not uncommon to see such fake stories being created and shared.

Some believe it’s part of a concerted attempt to burnish China’s image overseas.

Others believe the many fake stories are being created to drown out the negative coverage of China’s controversial Belt and Road Initiative, and their aggressive moves in the South China Sea.

Whatever the reasons may be, it is our duty as global citizens to stop the proliferation of such fake stories.

Please share this fact check with your friends, so they know the truth!

 


 

Support Tech ARP!

If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!


Fact Check : Amazing Robot Dancers At Shanghai Disneyland!

Did China create these amazing robot dancers at the Shanghai Disneyland, beating even the Japanese?

Watch the viral clip for yourself, and find out what the FACTS really are!

 

Claim : China Created Classic Robot Dancers At Shanghai Disneyland!

Here is the video that countless people have been sharing on social media and even on YouTube. It is often paired with a short explanation in (broken) English.

This classic dance is only created in China and broadcast in Shanghai Disneyland. They are not female dance artists, but all robots made in China.

The performance time is only about 5 minutes, but the ticket queue time takes 4 hours , The ticket price is 499 yuan.

It has surpassed Japan in complexity, and saw its perfect facial expressions. Send this video to everyone to watch, let us enjoy together…

 

Classic Robot Dancers At Shanghai Disneyland : The Truth

The agility and grace of these classic robot dancers at Shanghai Disneyland are amazing.

The less gullible would point out that it is something you would only expect from professional human dancers, and they would be right.

Fact #1 : That Is A Video Of A CBDF Competition

That is actually a video clip from one of the many competitions organised and broadcast by the CBDF (Chinese Ballroom Dance Federation).

Although we cannot ascertain who the dancers are, you can see the CBDF logo in the background, at 4:06 and 4:41.

No robot made today can match their agility and grace.

Fact #2 : 499 Yuan Was The Entry Price To Shanghai Disneyland

The fake story claimed that the ticket to watch this performance costs 499 Yuan – about US$73 / £55 / RM303. That would be ridiculously exorbitant for a single show.

In truth, that was the cost of an entry ticket to the Shanghai Disneyland when it first opened in February 2016.

Since 6 June 2018, it has been priced at 399 Yuan (off-peak), 575 Yuan (peak) and 665 Yuan (peak holiday).

Fact #3 : Disneyland Does Not Charge Extra To Watch Certain Shows

Anyone who has been to Disneyland knows that the entry fee is pricey. That’s because it covers all rides and shows inside Disneyland.

You only need to pay for food and drinks, and arcade games, or for optional Disney Fastpasses (Fastpass, FastPass+, MaxPass) that let you bypass long queues at popular rides or shows.

Otherwise, all rides and shows are free to enjoy once you enter Disneyland.

 

Why Would Someone Create This Fake Story?

With China’s aggressive foreign stance in recent years, it is not uncommon to see such fake stories being created and shared.

Some believe it’s part of a concerted attempt to burnish China’s image overseas.

Others believe the many fake stories are being created to drown out the negative coverage of China’s controversial Belt and Road Initiative, and their aggressive moves in the South China Sea.

Whatever the reasons may be, it is our duty as global citizens to stop the proliferation of such fake stories.

Please share this debunking with your friends, so they know the truth!

 


 



Fact Check : Chinese Robot Dancer @ Shanghai Disneyland!

Does Shanghai Disneyland have an amazing Chinese robot dancer whose technical ability surpasses even the Japanese?

Watch the viral video clip for yourself, and find out what the FACTS really are!

 

Claim : Chinese Robot Dancer @ Shanghai Disneyland!

Here is the video that countless people have been sharing on social media and their microblogs. It is paired with a short explanation in both English and Chinese.

The Classical dancer in this video is not a real living person but a newly constructed robot!

The video, a product of Shanghai Disney, is only a five minutes show but it would take more than four hours queuing up to purchase a ticket which costs 499 yuan per person .

The robot is entirely Chinese made and its technical level has surpassed those of Japan. This video is indeed worthy of watching.

[Translated from the Chinese version:] When watching the video above, please note that the classical dance performer is not the actress you would assume, but a newly unveiled robot!

This video comes from the Shanghai Disney 4D theatre. The show lasts only 5 minutes, but the queue takes more than 4 hours, and the ticket costs 499 yuan.

This is a robot developed by our own country, and its technical level has surpassed Japan’s. Forward this to your WeChat friends now, so everyone can enjoy it together!

Chinese Robot Dancer @ Shanghai Disneyland : The Truth

The agility of the “Shanghai Disneyland robot dancer” in this video, which lasts over four minutes, is amazing.

The less gullible would point out that it is something one would only expect from a professional human dancer, and they would be right.

Fact #1 : That Is A Video Of The 4th CCTV Dance Competition

That is actually a video clip from the 4th CCTV dance competition in 2007.

The so-called robot dancer is actually a professional dancer called Qin Xi, who performed the “Dream of a Spring Girl” dance for that segment of the dance competition.

That’s why her dance is so amazing – no robot made today can match her grace.

Fact #2 : 499 Yuan Was The Entry Price To Shanghai Disneyland

The fake story claimed that the ticket to watch this performance costs 499 Yuan – about US$73 / £55 / RM303. That would be ridiculously exorbitant for a single show.

In truth, that was the cost of an entry ticket to the Shanghai Disneyland when it first opened in February 2016.

Since 6 June 2018, it has been priced at 399 Yuan (off-peak), 575 Yuan (peak) and 665 Yuan (peak holiday).

Fact #3 : Disneyland Does Not Charge Extra To Watch Certain Shows

Anyone who has been to Disneyland knows that the entry fee is pricey. That’s because it covers all rides and shows inside Disneyland.

You only need to pay for food and drinks, and arcade games, like The Frontierland Shootin’ Arcade, or for optional Disney Fastpasses (Fastpass, FastPass+, MaxPass) that let you bypass long queues at popular rides or shows.

Otherwise, all rides and shows are free to enjoy once you enter Disneyland.

 

Why Would Someone Create This Fake Story?

With China’s aggressive foreign stance in recent years, it is not uncommon to see such fake stories being created and shared.

Some believe it’s part of a concerted attempt to burnish China’s image overseas.

Others believe the many fake stories are being created to drown out the negative coverage of China’s controversial Belt and Road Initiative, and their aggressive moves in the South China Sea.

Whatever the reasons may be, it is our duty as global citizens to stop the proliferation of such fake stories.

Please share this debunking with your friends, so they know the truth!

 


 



Iron Man Dance Hero Review : Cute + Loud!

The Iron Man Dance Hero looked so darn cute when we first saw it, we decided to check it out in detail.

Here is our short and sweet review of the cute and loud Iron Man Dance Hero!

 

Iron Man Dance Hero

The Iron Man Dance Hero is a simple toy, with a simple premise – an Asian-looking Tony Stark in a cute Iron Man suit dancing to music.

The truth though is that it is a little more than just a dancing robot. Take a closer look at the Iron Man Dance Hero in our hands-on video!

The Iron Man Dance Hero basically has three key features, officially listed as :

  • Cool music
  • Fabulous dance
  • Cool lighting

It also has a retractable face mask (which reveals an Asian-looking Tony Stark!), as well as posable arms :

It requires three AA batteries. The box says 1.5 V alkaline batteries are recommended, but we had no problem using 1.2 V NiMH rechargeable batteries.

 

Iron Man Dance Hero In Action!

The Iron Man Dance Hero has a simple three-way switch at the back of its head.

  • Left : Lights, music and dancing off
  • Middle : Only lights turned on
  • Right : Lights, music and dancing turned on!

When turned off, it functions like a figurine or toy. Turning on the lights lets it function like a night light.

And of course, it becomes an entertainment device when you turn everything on! But look at how cute it dances!

The lights in the eyes will change, and so will the music. But the lights in its palms and its chest remain uniformly white.

There is no volume control, which could be a problem because the Iron Man Dance Hero is very loud! But kids seem to love it!

Even if you tire of its dance and music after some time, it is quite useful as a night light, especially with both arms pointing upwards.

 

Iron Man Dance Hero Price + Availability

The Iron Man Dance Hero we tested measures 19 cm in both height and width, and is about 10 cm thick. Just to be clear, it does not come with any batteries in the box.

Its price varies wildly, depending on whether you purchase it online or in retail (where we have seen prices of RM 45-60 / US$11-15 / £8-11 / S$15-20 / A$16-21 being quoted).


But here are some online purchase options you can consider :

 


 



NVIDIA DRIVE AGX Orin for Autonomous Vehicles Revealed!

NVIDIA just introduced the DRIVE AGX Orin – a new software-defined platform for autonomous vehicles and robots!

Here is a quick primer on NVIDIA DRIVE AGX Orin, including its key capabilities and specifications!

 

NVIDIA DRIVE AGX Orin for Autonomous Vehicles + Robots

At GTC China 2019, NVIDIA unveiled the DRIVE AGX Orin – the culmination of a four-year development process.

The NVIDIA DRIVE AGX Orin is a new system-on-a-chip (SoC) designed to drive a software-defined platform for autonomous machines.

The Orin is designed to handle large numbers of applications and deep neural networks simultaneously, while achieving systematic safety standards like ISO 26262 ASIL-D.

Built as a software-defined platform, the NVIDIA DRIVE AGX Orin can be used to develop architecturally-compatible platforms that are scalable from Level 2 to Level 5 (full self-driving) vehicles.

Since both Orin and Xavier are programmable through open CUDA and TensorRT APIs and libraries, developers can work on both, leveraging their investments across both product lines.

Recommended : DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!

 

NVIDIA DRIVE AGX Orin Performance

With 17 billion transistors, the Orin combines Arm Hercules CPU cores with NVIDIA GPU cores, as well as new deep learning and computer vision accelerators.

According to NVIDIA, the DRIVE AGX Orin delivers an aggregate performance of 200 trillion operations per second (200 TOPS) – almost 7X faster than the previous-generation NVIDIA DRIVE Xavier SoC.
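As a quick sanity check on the “almost 7X” claim — assuming the DRIVE Xavier’s published rating of 30 TOPS — the arithmetic works out:

```python
# Orin vs Xavier: verify NVIDIA's "almost 7X" speedup claim.
# Xavier's published 30 TOPS rating is assumed here.
orin_tops = 200
xavier_tops = 30

speedup = orin_tops / xavier_tops
print(f"Speedup: {speedup:.1f}x")  # → Speedup: 6.7x
```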

Recommended : NVIDIA DRIVE Deep Neural Networks : Access Granted!

 


 



Dell Forecasts The Future of Connected Living In 2030!

Dell Technologies just shared with us the key findings from their research that explore the future of connected living by the year 2030!

Find out how emerging technologies will transform our lives by the year 2030!

 

Dell On The Future of Connected Living In 2030!

Dell Technologies conducted their research in partnership with the Institute for the Future (IFTF) and Vanson Bourne, surveying 1,100 business leaders across ten countries in Asia Pacific and Japan.

Let’s take a look at their key findings, and find out why they believe the future is brimming with opportunity thanks to emerging technologies.

 

Technological Shifts Transforming The Future By 2030

IFTF and a forum of global experts forecast that emerging technologies like edge computing, 5G, AI, Extended Reality (XR) and IoT will create these five major shifts in society :

1. Networked Reality

Over the next decade, the line between the virtual and the real will vanish. Cyberspace will become an overlay on top of our existing reality as our digital environment extends beyond televisions, smartphones and other displays.

This transformation will be driven by the deployment of 5G networks that enable high bandwidth, low-latency connections for streaming, interactive services, and multi-user media content.

2. Connected Mobility and Networked Matter

The vehicles of tomorrow will essentially be mobile computers, with the transportation system resembling packet-switched networks that power the Internet.

We will trust them to take us where we need to go in the physical world as we interact in the virtual spaces available to us wherever we are.

3. From Digital Cities to Sentient Cities

More than half of the world’s population live in urban areas. This will increase to 68% over the next three decades, according to the United Nations.

This level of growth presents both huge challenges and great opportunities for businesses, governments and citizens.

Cities will quite literally come to life through their own networked infrastructure of smart objects, self-reporting systems and AI-powered analytics.

4. Agents and Algorithms

Our 2030 future will see everyone supported by a highly personalised “operating system for living” that frees up our time by anticipating our needs and proactively supporting our day-to-day activities.

Instead of us interacting with different apps as we do today, such a context-aware Life Operating System (Life OS) will understand what we need and liaise with various web services, other bots and networked objects to get the job done.

5. Robots with Social Lives

Within 10 years, we will have personal robots that will become our partners in life – enhancing our skills and extending our abilities.

In some cases, they will replace us, but this can mean freeing us to do the things we are good at, and enjoy.

In most cases, they can become our collaborators, helping to crowdsource innovations and accelerate progress through robot social networks.

 

Preparing For The Future Of Connected Living By 2030

Anticipating Change

Many businesses in APJ are already preparing for these shifts, with business leaders expressing these perceptions :

  • 80% (82% in Malaysia) will restructure the way they spend their time by automating more tasks
  • 70% (83% in Malaysia) welcome people partnering with machines/robots to surpass our human limitations
  • More than half of businesses anticipate Networked Reality becoming commonplace
    – 63% (67% in Malaysia) say they welcome day-to-day immersion in virtual and augmented realities
    – 62% (63% in Malaysia) say they welcome people being fitted with brain computer interfaces

Navigating Challenges

These technological shifts are seismic in nature, leaving people and organisations grappling with change. Organisations that want to harness these emerging technologies will need to collect, process and make use of the data, while addressing public concerns about data privacy.

APJ business leaders are already anticipating some of these challenges :

  • 78% (88% in Malaysia) will be more concerned about their own privacy by 2030 than they are today
  • 74% (83% in Malaysia) consider data privacy to be a top societal-scale challenge that must be solved
  • 49% (56% in Malaysia) would welcome self-aware machines
  • 49% (43% in Malaysia) call for regulation and clarity on how AI is used
  • 84% (85% in Malaysia) believe that digital transformation should be more widespread throughout their organisation

 


 



NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

On 7 November 2019, NVIDIA introduced the Jetson Xavier NX – the world’s smallest AI supercomputer designed for robotics and embedded computing applications at the edge!

Here is EVERYTHING you need to know about the new NVIDIA Jetson Xavier NX!

 

NVIDIA Jetson Xavier NX : World’s Smallest AI Supercomputer

At just 70 x 45 mm, the new NVIDIA Jetson Xavier NX is smaller than a credit card. Yet it delivers server-class AI performance at up to 21 TOPS, while consuming as little as 10 watts of power.

Short for Nano Xavier, the NX is a low-power version of the Xavier SoC that came out tops in the MLPerf Inference benchmarks.

Recommended : NVIDIA Wins MLPerf Inference Benchmarks For DC + Edge!

With its small size and low power draw, it opens up the possibility of adding edge AI computing capabilities to small commercial robots, drones, industrial IoT systems, network video recorders and portable medical devices.

The Jetson Xavier NX can be configured to deliver up to 14 TOPS at 10 W, or 21 TOPS at 15 W. It is powerful enough to run multiple neural networks in parallel, and process data from multiple high-resolution sensors simultaneously.
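Interestingly, both power modes work out to the same performance per watt, which suggests the two modes trade peak throughput against power headroom rather than efficiency:

```python
# Jetson Xavier NX power modes: (TOPS, watts) per NVIDIA's figures.
modes = {"10 W mode": (14, 10), "15 W mode": (21, 15)}

for name, (tops, watts) in modes.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")  # both → 1.4 TOPS/W
```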

The NVIDIA Jetson Xavier NX runs on the same CUDA-X AI software architecture as all other Jetson processors, and is supported by the NVIDIA JetPack software development kit.

It is pin-compatible with the Jetson Nano, offering up to 15X higher performance than the Jetson TX2 in a smaller form factor.

It will not be available for a few more months, but developers can begin development today using the Jetson AGX Xavier Developer Kit, with a software patch that emulates the Jetson Xavier NX.

 

NVIDIA Jetson Xavier NX Specifications

  • CPU : NVIDIA Carmel – 6 x Arm 64-bit cores, 6 MB L2 + 4 MB L3 caches
  • GPU : NVIDIA Volta – 384 CUDA cores, 48 Tensor cores, 2 NVDLA cores
  • AI Performance : 21 TOPS at 15 watts, or 14 TOPS at 10 watts
  • Memory Support : 128-bit LPDDR4x-3200 – up to 8 GB, 51.2 GB/s
  • Video Support : encoding up to 2 x 4K30 streams; decoding up to 2 x 4K60 streams
  • Camera Support : up to six CSI cameras (32 via virtual channels), up to 12 lanes (3x4 or 6x2) MIPI CSI-2
  • Connectivity : Gigabit Ethernet
  • OS Support : Ubuntu-based Linux
  • Module Size : 70 x 45 mm (Nano form factor)

 

NVIDIA Jetson Xavier NX Price + Availability

The NVIDIA Jetson Xavier NX will be available in March 2020 from NVIDIA’s distribution channels, priced at US$399.

 


 



Jackrabbot : The Robot That Learns From Human Behaviour

Noah Kravitz writes about Jackrabbot, a robot that learns by watching human behaviour. It is a great look at how neural task programming (NTP) may create intelligent robots that learn from human behaviour.

 

Robot See, Robot Do: Bots Learn by Watching Human Behavior

Robots following coded instructions to complete a task? Old school.

Robots learning to do things by watching how humans do it? That’s the future.

Stanford’s Animesh Garg and Marynel Vázquez shared their research in a talk on “Generalizable Autonomy for Robotic Mobility and Manipulation” at the GPU Technology Conference last week.

In lay terms, generalizable autonomy is the idea that a robot can observe human behavior, and learn to imitate it in a way that’s applicable to a variety of tasks and situations.

What kinds of situations? Learning to cook by watching YouTube videos, for one. And figuring out how to cross a crowded room for another.

Cooking 101

Garg, a postdoctoral researcher at the Stanford Vision and Learning Lab (CVGL), likes to cook. He also likes robots. But what he’s not so keen on is a future full of robots who can only cook one recipe each.

While the present is increasingly full of robots that excel at single tasks, Garg is working toward what he calls “the dream of general-purpose robots.”

The path to the dream may lie in neural task programming (NTP), a new approach to meta-learning. NTP leverages hierarchy and learns to program with a modular robot API to perform unseen tasks working from only a single test example.

For instance, a robot chef would take a cooking video as input, and use a hierarchical neural program to break the video data down into what Garg calls a structured representation of the task based on visual cues as well as temporal sequence.

Instead of learning a single recipe that’s only good for making spaghetti with meatballs, the robot understands all of the subroutines, or components, that make up the task. From there, the budding mechanical chef can apply skills like boiling water, frying meatballs and simmering sauce to other situations.

Solving for task domains instead of task instances is at the heart of what Garg calls meta-learning. NTP has already seen promising results, with its structured, hierarchical approach leaving flat programming in the dust on unseen tasks, while performing equally well on seen tasks. Full technical details are available on the project’s GitHub.
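The hierarchy idea can be sketched in a few lines. This is a toy illustration only, not the actual NTP model: the task names and recipe hierarchy below are invented, and a hard-coded dictionary stands in for the neural network that would infer the decomposition from a demonstration video.

```python
# Toy sketch of hierarchical task decomposition, in the spirit of
# neural task programming: a task is recursively expanded into
# subroutines until only primitive robot-API calls remain.

PRIMITIVES = {"boil_water", "simmer_sauce", "fry_meatballs", "plate"}

# Hypothetical task hierarchy (invented for this example).
HIERARCHY = {
    "make_spaghetti_with_meatballs": ["prepare_pasta", "prepare_meatballs", "plate"],
    "prepare_pasta": ["boil_water", "simmer_sauce"],
    "prepare_meatballs": ["fry_meatballs"],
}

def execute(task, robot_api):
    """Recursively expand a task into primitive API calls."""
    if task in PRIMITIVES:
        robot_api(task)          # dispatch the primitive to the robot
        return
    for subtask in HIERARCHY[task]:
        execute(subtask, robot_api)

calls = []
execute("make_spaghetti_with_meatballs", calls.append)
print(calls)
# → ['boil_water', 'simmer_sauce', 'fry_meatballs', 'plate']
```

The payoff is reuse: a subroutine like `boil_water`, once learned, can appear in any number of other recipes — solving the task domain rather than one task instance.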

Feeling Crowded? Follow the Robot

We’ve all been there. You’re trying to make your way through a crowded room, and suddenly find yourself face-to-face with a stranger coming from the opposite direction.

You move right to get around them, but they move the same way, blocking your path. Instinctively, you both move the other way. Blocked again!

One of you cracks a “Shall we dance?” joke to break the tension, and you finally maneuver past one another to continue on.

Understanding how and why people move the way we do when walking through a crowded space can be tricky. Teaching a robot to understand these rules is daunting. Enter Vázquez and Jackrabbot, CVGL’s social navigation robot.

Jackrabbot first hit the sidewalks in 2015, making small deliveries and travelling at pedestrian speeds below five miles per hour. As Vázquez explained, teaching Jackrabbot — named after the jackrabbits that also frequent the Stanford campus — is a vehicle for tackling the complex problem of predicting human motion in crowds.

Teaching an autonomous vehicle to move through unstructured spaces — for example, the real world — is a multifaceted problem. “Safety is the first priority,” Vázquez said. From there, the challenge quickly moves into predicting and responding to the movements of lots of people at once.

To tackle safety, they turned to deep learning, developing a generative adversarial network (GAN) that compares real-time data from JackRabbot’s camera with images generated by the GAN on the fly.

These images represent what the robot should be seeing if an area is safe to pass through, like a hallway with no closed doors, stray furniture or people standing in the way. If reality matches the ideal, JackRabbot keeps moving. Otherwise, it hits the brakes.
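The comparison step can be sketched as follows. This is a minimal illustration, not CVGL’s actual system: in the real robot the “expected safe view” comes from the GAN, and the comparison runs on camera images rather than the tiny hand-made frames used here.

```python
def is_safe(observed, expected_safe, threshold=0.1):
    """Compare a camera frame against the generated 'safe view'.

    Both frames are flat lists of pixel intensities in [0, 1].
    Returns True when the mean absolute difference stays below the
    threshold, i.e. reality matches what a clear path should look like.
    """
    diff = sum(abs(o - e) for o, e in zip(observed, expected_safe))
    return diff / len(observed) < threshold

# A 16-pixel "empty hallway" reference vs. a frame with an obstacle.
clear = [0.0] * 16
blocked = [0.0] * 12 + [1.0] * 4   # four bright obstacle pixels

print(is_safe(clear, clear))       # → True  (keep moving)
print(is_safe(blocked, clear))     # → False (hit the brakes)
```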


From there, the team turned to multi-target tracking, aka “Tracking the Untrackable.” Moving gracefully through a crowd goes beyond immediate assessment of “Is my path clear?” to tracking multiple people moving in different directions, and predicting where they’re headed next.

Here the team built a recurrent neural network using the long short-term memory (LSTM) approach to account for multiple cues — appearance, velocity, interaction and similarity — measured over time.

A published research paper delves into the technical nitty-gritty, but in essence, CVGL devised a novel approach that learns the common sense behaviors that people observe in crowded spaces, and then uses that understanding to predict “human trajectories” where each person is likely to go next.
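To make trajectory prediction concrete, here is a deliberately simple stand-in: where the CVGL model uses an LSTM over multiple cues, this sketch merely extrapolates each person’s last observed step.

```python
def predict_next(track, horizon=3):
    """Extrapolate a person's path from their last observed step.

    track: list of (x, y) positions over time. A stand-in for the
    learned LSTM predictor, which would also weigh appearance and
    social-interaction cues, not just motion.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0              # last step's displacement
    return [(x1 + k * vx, y1 + k * vy) for k in range(1, horizon + 1)]

# Someone walking steadily to the right, one unit per time step.
person = [(0, 0), (1, 0), (2, 0)]
print(predict_next(person))  # → [(3, 0), (4, 0), (5, 0)]
```

A constant-velocity model like this fails exactly where the learned model shines: when people swerve to avoid each other, which is why the real system folds in social cues over time.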

So the next time you find yourself headed for one of those awkward “Shall we dance?” moments in a room full of strangers, just remember to track multiple cues over time and project the motion trajectories of every human in sight.

Or take the easy way and find a JackRabbot to walk behind. Or better yet, the newly announced JackRabbot 2.0 (with dual NVIDIA GPUs onboard). It’ll know what to do.


 


HUMAN+ : The Cyborgs Are Coming To The ArtScience Museum!

SINGAPORE – Delve into a future world where the lines between fiction and reality are blurred in ArtScience Museum’s latest exhibition HUMAN+ : The Future of Our Species, opening on 20 May.

Advances in genetic engineering, biotechnology and nanotechnology that not long ago seemed purely science fiction are now real. Cyborgs, superhumans and clones are alive amongst us today.

 

HUMAN+ : The Future of Our Species

What does it mean to be human now? What will it feel like to be a human a hundred years from now? Should we continue to embrace modifications to our minds, bodies and daily lives, or are there boundaries we shouldn’t overstep?

These are the issues at the heart of HUMAN+ : The Future of Our Species. Showcasing the work of 40 international artists, scientists, technologists and designers, the show explores possible future paths for our species. It includes major names from the fields of robotics, biotechnology, synthetic biology and artificial intelligence, including the world’s first living cyborg, Neil Harbisson; Australia’s leading performance artist, Stelarc; and Oron Catts and Ionat Zurr, who grow sculptures from living tissue.

A collaboration between ArtScience Museum, Science Gallery at Trinity College Dublin, and The Centre de Cultura Contemporània de Barcelona (CCCB), this cutting-edge exhibition asks what it means to be human in a world of artificial intelligence, life-like robots and genetic modification. It probes the social, ethical and environmental questions raised by using technology to modify ourselves.

Will virtual reality be the new reality? What would happen if a robot knew what we wanted before we knew ourselves? In the future, who will have ownership of our genetic information?

From spectacular demonstrations of the latest robotic technologies, to challenging contemporary artworks, intriguing design prototypes, and exciting innovations from Singapore, HUMAN+ imagines many possible futures.

 

Four Themed Galleries

Spanning four themed galleries, HUMAN+ presents a wide range of artwork and scientific research that shows how our perception of humanity is being transformed by science and technology.

Augmented Abilities

The first section of the show presents physical and biological ways in which we have augmented our minds and bodies. From prosthetics that augment bodily functions to medical interventions that change how we think, this part of the show explores what it means to be a cyborg today.

A key highlight is work by Neil Harbisson, the world’s first human to be officially recognised as a cyborg. Born without the ability to see colour, Harbisson, who will be in Singapore for the opening of the show, wears a prosthetic antenna called “eyeborg” that allows him to hear colour. This antenna has been implanted in his skull since 2003.

Also included are works by star performance artist, Stelarc, plus captivating images and fascinating prototypes by Aimee Mullins, Chris Woebken and many others.

Encountering Others

The second section of the show explores the changing nature of social relationships, due to advances in technology.

It includes provocative artwork by Addie Wagenknecht that explores how motherhood might evolve in a world of robotics. Her artwork depicts a robot arm that gently rocks a bassinet whenever a baby cries.

Also included are cutting-edge artworks by Louis-Philippe Demers from Singapore, Cao Fei, Yves Gellie, S.W.A.M.P and many others.

Authoring Environments

This section analyses how we are transforming the very environment we live in due to far-reaching advances in science and technology.

It includes The Human Pollination Project by Laura Allcorn, a pollination tool kit designed to be worn as a fashion accessory. It raises questions about the social and environmental implications of the collapse of bee populations, which are responsible for pollinating the plants that grow into the food we eat.

Also included are intriguing speculative artworks and design proposals by Anthony Dunne and Fiona Raby, Liam Young, The Centre for PostNatural History, Robert Zhao and many others.

Life at the Edges


This section of the exhibition explores the limits of human life and longevity. What does it mean to create life, or extend a person’s lifespan?

It includes a compelling and challenging work by designer Agatha Haines, who has created five sculptures of human babies, each with a surgically implanted body modification.

Also included are living artworks designed in a laboratory by Oron Catts and Ionat Zurr, and works which explore the end of life by Julijonas Urbonas, and James Auger and Jimmy Loizeau.

 

Support Tech ARP!

If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!

ASUS Announces Zenbo Availability In Taiwan

On 21st December 2016, ASUS announced at a press event in Taipei that the Mandarin edition of Zenbo — the first ASUS robot designed to provide assistance, entertainment, and companionship to families — would be available for pre-order in Taiwan from 1st January 2017.

“For decades, humans have dreamed of owning such a companion: one that is smart, dear to our hearts, and always at our disposal,” said ASUS Chairman Jonney Shih, describing his vision of enabling robotic computing for every household.

Chairman Shih also announced that developers and partners from various major industries in Taiwan — including those in the education, transportation, e-commerce, entertainment, and cleaning sectors — have partnered with ASUS to extend and enrich the Zenbo experience through dedicated apps and capabilities.

A special collaboration with the National Police Agency of Taiwan resulted in the development of an app for Zenbo that enables families to contact their local police department in an emergency and speak with an officer using Zenbo’s built-in video calling feature. Kuo-En Chen, Director-General of the National Police Agency, Ministry of the Interior, Republic of China (Taiwan), joined Chairman Shih onstage for a live demo of the application.

Achin Bhowmik, Intel Corporation’s Vice President of the New Technology Group and General Manager of the Perceptual Computing Group, also joined Chairman Shih onstage. “For nearly thirty years, Intel and ASUS have been collaborating to bring some of the most leading innovations in PCs and devices to market,” said Dr. Bhowmik. “We are excited to continue that collaboration on Zenbo, and we look forward to continue working with ASUS on enabling intelligent and interactive devices with Intel RealSense technology.”

 

ASUS Zenbo — Your Smart Little Companion


ASUS Zenbo is a friendly and capable home robot designed to provide assistance, entertainment, and companionship to families, addressing the needs of each family member in this era of ubiquitous computing.

Feel At Ease Even If You’re Away From Home

Zenbo is designed for everyone, but he has specific functionality that helps senior family members safeguard their health and well-being, as well as enjoy a connected digital life. Zenbo monitors the home for emergency situations — such as falls — and immediately responds to them by notifying specified family members on their smartphones, no matter where they are. When they receive an emergency notification, family members can remotely control Zenbo to move nearby and use his built-in camera to visually check on their loved ones via an intuitive smartphone app.

Interactive Education For Kids

Zenbo is a fun and educational playmate for kids, entertaining them with interactive stories and learning games that foster their creativity and logical thinking skills. With his high-quality, built-in stereo sound system, Zenbo can play children’s favorite songs and even dance along to the music, making for a fun playtime activity. Zenbo includes a built-in library of stories that he tells in a variety of entertaining voices, while displaying accompanying images on his screen and controlling the room lighting to add a new level of interactivity and fun to story time.

Great Family Bonding Time

Zenbo can also help out in the kitchen, reading recipes out loud and functioning as a voice-controlled timer, so home chefs can stay focused on cooking. With his built-in camera and ability to move around the house, Zenbo is a great family photographer who helps preserve special and everyday moments alike.

ASUS uses an Intel processor and Intel RealSense technology, enabling Zenbo’s interaction with people for a more natural and intuitive experience. Zenbo supports a growing number of custom apps that expand his capabilities and bring new features to users. The free Zenbo Developer Program provides members with access to the Zenbo SDK and a library of information they need to bring their creative ideas to life. Additionally, Zenbo Partners can work with ASUS to help build a rich, robotic ecosystem that will enhance Zenbo and enrich users’ lives.

