Emotionally intelligent AI
Soul Machines is creating AI-powered avatars with emotional intelligence. We speak to chief business officer Greg Cross about how brands can use its lifelike chatbots.

From Terminator and I, Robot, to Humans and Black Mirror, sci-fi has long imagined how intelligent, lifelike robots might make an impact on the world we live in. And we’re in equal parts fascinated by and fearful of this blurring line between man and machine.

But one company in New Zealand looks to be bringing these dystopian dreams to life, developing disarmingly realistic avatars with emotional intelligence. Dubbed “digital humans,” the chatbots and virtual assistants created by Soul Machines are capable of recognising users’ emotions, as well as expressing their own.
 
As customer-facing interactions become increasingly automated, companies are starting to explore commercial applications for affective computing: technologies that can interpret – and in turn influence – human emotions.
 
So far, Soul Machines’ digital humans are helping make disability services in Australia more accessible, and beginning to communicate information to banking customers. The company has even developed an AI baby to explore whether the way we raise children can inform the way we train AI.
 
To find out how brands can benefit from emotionally intelligent avatars, we speak to Soul Machines’ chief business officer, Greg Cross.

Why does technology need to be emotionally intelligent?

We expect AI to make our lives easier, but current technologies barely scratch the surface. To reach a point where AI can make a significant impact on everyday life, we need to make computing more human, so that machines can connect with us on an emotional level. 
 
By adding a layer of humanised computing, we are creating machines that can better understand our everyday needs, and learn how to address them. 
 
How does it work?
 
Put simply, emotionally intelligent technology allows machines to read human emotions and use that information to respond in the same natural way that humans interact with one another. 
 
Our digital humans are powered by our human computing system (HCS). This is essentially the digital brain that brings them to life and gives them the ability to connect and engage with humans on an emotional level. 
 
To do this, we use biologically inspired models of the brain that are modulated by virtual neurotransmitters and hormones like dopamine, serotonin and oxytocin. Through webcams and microphones, they can recognise emotions by analysing facial and vocal expressions in real time, and use that information to respond appropriately. 
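
Soul Machines has not published the internals of its HCS, but the perceive-and-respond loop Cross describes can be sketched roughly as below. Everything in this sketch – the stub classifiers, the emotion labels and the simple confidence-based fusion – is a hypothetical illustration of the idea, not the company’s actual code.

```python
from dataclasses import dataclass

# Hypothetical emotion labels; a real system would use a far richer model.
EMOTIONS = ("happy", "sad", "angry", "neutral")

@dataclass
class EmotionEstimate:
    label: str
    confidence: float

def classify_face(frame: bytes) -> EmotionEstimate:
    """Stub standing in for a facial-expression classifier fed by the webcam."""
    return EmotionEstimate("neutral", 0.6)

def classify_voice(audio: bytes) -> EmotionEstimate:
    """Stub standing in for a vocal-tone classifier fed by the microphone."""
    return EmotionEstimate("happy", 0.7)

def fuse(face: EmotionEstimate, voice: EmotionEstimate) -> str:
    """Naive fusion: trust whichever channel reports higher confidence."""
    return face.label if face.confidence >= voice.confidence else voice.label

# Illustrative mapping from the user's perceived emotion to the avatar's response style.
RESPONSE_STYLE = {
    "happy": "mirror the positive tone",
    "sad": "slow down and soften the delivery",
    "angry": "acknowledge the frustration and de-escalate",
    "neutral": "stay informative and friendly",
}

def respond(frame: bytes, audio: bytes) -> str:
    user_emotion = fuse(classify_face(frame), classify_voice(audio))
    return RESPONSE_STYLE[user_emotion]

if __name__ == "__main__":
    print(respond(b"<webcam frame>", b"<mic audio>"))
```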

How does the technology ascertain our emotions? And change its reactions in response? 

If a person’s emotional state changes, Soul Machines’ digital humans can recognise this shift and give an appropriate emotional response, both verbally and non-verbally. They are also able to learn through experience: the more they interact with a person, the more they learn about their personality and emotions, and they can adjust the direction of the conversation in response.
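
The learning-through-experience behaviour could be pictured as a running profile of a user’s detected emotions that steers the next conversational move. The class and rules below are a hypothetical sketch of that idea, not Soul Machines’ implementation.

```python
from collections import Counter, deque

class UserEmotionProfile:
    """Hypothetical sketch: accumulate a user's recently detected emotions
    and use the running mood to steer the conversation."""

    def __init__(self, window: int = 20):
        self.history = deque(maxlen=window)  # most recent emotion readings

    def observe(self, emotion: str) -> None:
        self.history.append(emotion)

    def dominant_mood(self) -> str:
        if not self.history:
            return "neutral"
        return Counter(self.history).most_common(1)[0][0]

    def next_move(self, latest: str) -> str:
        """Change tack when the latest reading departs from the running mood."""
        self.observe(latest)
        mood = self.dominant_mood()
        if latest == "angry" and mood != "angry":
            return "pause the script and address the frustration directly"
        if mood == "sad":
            return "keep the pace gentle and check in more often"
        return "continue the planned flow of the conversation"

profile = UserEmotionProfile()
for reading in ["happy", "happy", "neutral", "angry"]:
    print(reading, "->", profile.next_move(reading))
```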
 
How do the avatars’ own expressions affect the way humans behave and interact with them?
 
By adding a layer of personality, our technology can deliver a character that matches each digital human’s particular role. This is important as we think about the different ways people will interact with them.
 
For example, a virtual customer agent for a brand can incorporate a range of emotional responses, expressions and behaviours that show care for the customer. This can help alleviate any concerns or aggravation a customer may feel towards a product or service.

How will putting a human face on technology benefit brands?
 
We see digital humans as a tool that will raise the standard of customer service. Brands can use avatars as the first line of defence whenever a customer has an issue with a product or service. They’ll be able to handle and close out cases that require only basic information, giving service reps more time to deal with urgent issues. It also gives consumers access to information more quickly, since a digital human is always available to talk.
 
For example, Rachel was prototyped by IBM to help banking customers. By evaluating their emotions, she could understand their needs and share basic information like account balances and overdraft fees.
 
In which contexts do you think the technology will be most powerful?
 
Leaders across all industries constantly think about how they can create new experiences and better business models, so we think we’ll see digital humans making an impact in many different contexts.
 
Educators can use digital humans to reach new levels of engagement and enhance their teaching methods, while the entertainment industry can replicate a celebrity and commercialise them for different uses.
 
Are there any challenges associated with bringing technology and emotion together?
 
We already use virtual assistants like Alexa and Siri. Adding humanised computing and merging emotions with technology is simply the next step. 
 
Technically, one of the biggest hurdles is getting emotionally driven technology to accurately mirror the wide range of emotions humans feel every day. While emotions like happiness, sadness and anger are clear enough to depict, more nuanced emotions are harder to replicate.