
Why Isn't Emotional AI Dead?

Author: Pierre Gerardi

 

In June 2022, Microsoft announced that it would stop selling emotion recognition software. Building such software carries substantial risks, which is why Microsoft decided to stop investing in it. The challenge is not only the privacy concerns the technology raises but also the lack of scientific consensus on a definition of 'emotions' and the inability to generalize the link between facial expression and emotional state across use cases, regions, and demographics.

However, we believe that, even with these challenges, emotion recognition systems still have valuable use cases. In this blog post, we will cover some of these use cases and show how to address the challenges they raise so organizations can unlock their value.

What is Emotional AI?

Even if we don't always want them to be, emotions are one of the driving forces behind what we do and how we do it. The emotions we experience at a particular moment often have a huge impact on the decisions we make. For example, when we're excited, we might make quick decisions without considering the implications. Emotions affect not only the actions we take but also their outcome: anger can lead to impatience, sloppy execution, and thus a poor result.

Because of this strong influence on our behavior, emotions have become the subject of numerous studies conducted by companies. The better a business understands and predicts human emotions, the better it can forecast how a person will react to certain events. Emotional AI can help businesses understand and react to human emotion.

Emotional AI is software that can learn emotional patterns from human behavior. This capability allows computers to interact better with humans and their emotional state. When it works well, emotion recognition software can infer emotions from a person's gestures, tone of voice, facial expressions, or other indicators of emotional state. Emotional AI focuses not only on voice and physical features but also on written text, which carries just as much information about a person's emotional state and is therefore an equally interesting input.

Technologies


Emotional AI has applications in a variety of technologies, including text, audio, and video.

In text-based applications, emotional AI can analyze words to detect emotional tone and sentiment. This can be used in chatbots and virtual assistants to provide more personalized and empathetic responses to user inquiries.
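To make this concrete, here is a minimal sketch of text-based emotion detection using a pretrained classifier from the Hugging Face Hub. The model name below is one publicly available example, not a specific recommendation, and any comparable model could be substituted:

    # Minimal sketch of text-based emotion detection. Assumes the `transformers`
    # package is installed; the model name is one publicly available example.
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="j-hartmann/emotion-english-distilroberta-base",
        top_k=None,  # return scores for every emotion label, not just the top one
    )

    texts = ["I can't wait for the demo tomorrow!", "This outage is unacceptable."]
    for text, scores in zip(texts, classifier(texts)):
        best = max(scores, key=lambda s: s["score"])  # highest-scoring label
        print(f"{text!r} -> {best['label']} ({best['score']:.2f})")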

In audio-based applications, emotional AI can analyze the tone and pitch of a person's voice to detect their emotional state. This can be used in voice assistants and call center software to provide more personalized and empathetic support to users.
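As an illustration, the sketch below uses the librosa library to extract simple prosodic features (pitch and loudness) from a speech clip; a real system would feed such features, or the raw audio, to a trained emotion classifier. The file name is hypothetical:

    # Sketch: extract simple prosodic features (pitch, loudness) with librosa.
    import numpy as np
    import librosa

    y, sr = librosa.load("call_snippet.wav", sr=16000)  # hypothetical audio file

    # Fundamental-frequency (pitch) track via the pYIN algorithm; unvoiced
    # frames come back as NaN, so we keep only voiced ones.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    pitch = f0[voiced]

    rms = librosa.feature.rms(y=y)[0]  # short-time energy, a rough intensity proxy

    print({
        "pitch_mean_hz": float(np.nanmean(pitch)),
        "pitch_std_hz": float(np.nanstd(pitch)),  # wide swings can signal arousal
        "energy_mean": float(rms.mean()),
    })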

In video-based applications, emotional AI can analyze facial expressions and body language to detect a person's emotional state. This can be used, for example, in video conferencing software to improve communication and collaboration.
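For illustration, the following sketch samples frames from a video and scores facial emotions with the open-source fer package (one of several options; DeepFace is a similar alternative). The file name is hypothetical:

    # Sketch: per-frame facial emotion scores with the open-source `fer` package
    # (pip install fer); DeepFace is a similar alternative.
    import cv2
    from fer import FER

    detector = FER()  # bundles a face detector and an expression classifier
    video = cv2.VideoCapture("meeting_clip.mp4")  # hypothetical video file

    frame_idx = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
            for face in detector.detect_emotions(frame):
                emotions = face["emotions"]  # e.g. {"happy": 0.8, "angry": 0.02, ...}
                top = max(emotions, key=emotions.get)
                print(f"frame {frame_idx}: {top} ({emotions[top]:.2f})")
        frame_idx += 1
    video.release()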

The risks and how to avoid them

While emotional AI has the potential to greatly improve the way we interact with machines and create more natural and intuitive communication experiences, there are also risks associated with its use. Here are a few of the risks to consider:

Privacy 

The use of emotional AI often involves collecting and analyzing large amounts of sensitive or personal information. This raises concerns about privacy and the potential misuse of this data.

To address privacy concerns, it is crucial to put in place proper safeguards and controls to handle personal data responsibly. This includes taking steps like:

  • Getting explicit consent from users before collecting and using their data.
  • Ensuring data security through measures like encryption, secure servers, and access controls (a pseudonymization sketch follows this list).
  • Only collecting the minimum amount of data needed for a specific purpose.
  • Being transparent with users about how their data is being used and giving them the option to access, modify, or delete it.
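As an illustrative example of the data security and minimization points, the sketch below pseudonymizes user identifiers with a keyed hash before an emotion record is stored, so raw identities never sit next to sensitive data. All names and values are hypothetical:

    # Sketch: pseudonymize user identifiers with a keyed hash before storing
    # emotion records. In production the salt would come from a secrets manager,
    # not a default environment-variable fallback.
    import hashlib
    import hmac
    import os

    SALT = os.environ.get("PSEUDONYM_SALT", "demo-only-salt").encode()

    def pseudonymize(user_id: str) -> str:
        """Stable, non-reversible identifier: supports joins without raw identity."""
        return hmac.new(SALT, user_id.encode(), hashlib.sha256).hexdigest()

    record = {
        "user": pseudonymize("alice@example.com"),  # raw email is never stored
        "detected_emotion": "frustrated",
        "consent_given": True,  # persist only records where consent was captured
    }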

Emotions are subjective

One of the challenges of developing emotional AI systems is accurately labeling the emotional content of data. This is because emotions are subjective and can vary widely from person to person. What one person may interpret as happy or angry may be perceived differently by another person. 

This can make it difficult for labelers to accurately identify the emotions in a given example, as they are not the ones experiencing the emotions themselves. That, in turn, can lead to labeling errors, which introduce bias into the model. Two ways to mitigate this challenge:

  • Use multiple labelers: By having several labelers annotate the same data, you can reduce the impact of individual subjectivity and increase the overall accuracy of the labels (see the sketch after this list).
  • Use self-report data: By collecting self-report data, where individuals report their own emotions, it is possible to obtain a more accurate representation of the emotional content of the data.
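To illustrate the first mitigation, the sketch below combines labels from three hypothetical annotators by majority vote and measures their pairwise agreement with Cohen's kappa (using scikit-learn):

    # Sketch: aggregate emotion labels from several annotators by majority vote
    # and check pairwise agreement with Cohen's kappa.
    from collections import Counter
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical labels from three annotators for the same five samples.
    annotator_a = ["joy", "anger", "neutral", "sadness", "joy"]
    annotator_b = ["joy", "anger", "joy",     "sadness", "neutral"]
    annotator_c = ["joy", "fear",  "neutral", "sadness", "joy"]

    # Majority vote per sample; ties would need a tie-breaking rule in practice.
    consensus = [
        Counter(votes).most_common(1)[0][0]
        for votes in zip(annotator_a, annotator_b, annotator_c)
    ]
    print("consensus:", consensus)

    # Values near 1 mean strong agreement; values near 0 mean chance-level.
    print("kappa(A, B):", cohen_kappa_score(annotator_a, annotator_b))
    print("kappa(A, C):", cohen_kappa_score(annotator_a, annotator_c))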

Bias

Bias in emotional AI systems refers to the tendency of a model to make inaccurate or unfair predictions or decisions because it was trained on biased data. For example, suppose a model is trained mostly on data from European individuals and is then asked to classify the emotion of an Asian individual from their facial expression. The model may have difficulty interpreting the emotion accurately because it has not been exposed to a sufficient variety of Asian facial expressions and cultural cues, leading to unfair or inaccurate predictions for that group.

Research has shown that facial expressions can vary widely from culture to culture, and even from person to person, making it difficult to reliably map a gesture to a specific emotion. The meaning of a furrowed brow, for example, depends on context and cultural background: in some cultures it may be read as a sign of concentration or determination, while in others it may be interpreted as anger or frustration.

The quality of an emotional AI model depends on the data it is trained on: if that data is biased, the model will be biased too. This risk applies to all AI systems, not just those focused on emotion. It is therefore important to ensure that the training data is diverse, representative, and accurate, in order to promote fair outcomes.
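One practical check is a disaggregated evaluation: measuring the model's accuracy separately per demographic group and flagging large gaps. The sketch below shows this on illustrative data:

    # Sketch: disaggregated evaluation - compare a model's accuracy per group
    # to surface the kind of bias described above. Data here is illustrative.
    import pandas as pd

    # Hypothetical predictions from an already-trained emotion model.
    results = pd.DataFrame({
        "group":     ["EU", "EU", "EU", "Asia", "Asia", "Asia"],
        "true":      ["joy", "anger", "joy", "joy", "anger", "neutral"],
        "predicted": ["joy", "anger", "joy", "anger", "anger", "anger"],
    })

    per_group_accuracy = (
        results.assign(correct=results["true"] == results["predicted"])
               .groupby("group")["correct"]
               .mean()
    )
    print(per_group_accuracy)
    # A large gap between groups is a red flag that the training data
    # under-represents some populations and needs rebalancing.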

Conclusion

Proactive risk management is crucial for maximizing the value that Emotional AI systems can bring to businesses. By anticipating and addressing potential risks early on, businesses can ensure that these systems are developed in a way that is both responsible and ethical. This can involve implementing data privacy protocols, conducting bias assessments, and regularly reviewing and updating the system to address any emerging risks. By taking these precautions, organizations can not only advance the field of Emotional AI, but also minimize negative impacts and build trust in the technology. This, in turn, allows Emotional AI systems to create maximum value for organizations by delivering reliable and trustworthy results.

 

About The Author

Pierre Gerardi

Pierre is a Machine Learning Engineer at Radix. He first became interested in Artificial Intelligence during his Business Engineering-Data Analytics studies at Ghent University. Pursuing this interest, he took on an additional Master's in Artificial Intelligence at KU Leuven. As he couldn't get enough, he joined Radix as a Machine Learning Engineer to work with AI on a daily basis. With his hands-on mindset, Pierre wants to help clients solve their complex problems and implement AI-based solutions.
