In 2023, emotional AI — technology that can sense and interact with human emotions — will become a prominent application of machine learning. For example, Hume AI, founded by former Google researcher Alan Cowen, is developing tools to measure emotions from verbal, facial, and vocal expressions. The Swedish company Smart Eyes recently acquired Affectiva, the MIT Media Lab spinoff that developed the SoundNet neural network, an algorithm that classifies emotions such as anger from audio samples in less than 1.2 seconds. Even the Zoom video platform is introducing Zoom IQ, a feature that will give users real-time analysis of emotions and engagement during virtual meetings.
In 2023, tech companies will release advanced chatbots that closely mimic human emotions to build more empathetic connections with users across banking, education, and healthcare. Microsoft’s chatbot Xiaoice is already successful in China, with the average user reportedly conversing with “her” more than 60 times a month. It also reportedly passed a Turing test, with users failing to recognize it as a bot for 10 minutes. Analysis by Juniper Research Consultancy projects that chatbot interactions in healthcare will rise by almost 167 percent from 2018, reaching 2.8 billion annual interactions by 2023. This will free up medical staff time and save health care systems around the world roughly $3.7 billion.
In 2023, emotional AI will also become commonplace in schools. In Hong Kong, some secondary schools already use artificial intelligence software developed by Find Solutions AI that measures the movement of small muscles on a student’s face and identifies a range of negative and positive emotions. Teachers use the system to track students’ emotional changes as well as their motivation and focus, enabling them to intervene early if a student is losing interest.
The problem is that most emotional AI is based on flawed science. Emotion-recognition algorithms, even when trained on large and diverse datasets, reduce facial expressions and tone of voice to emotions without accounting for a person’s social and cultural context or the situation at hand. For example, while an algorithm can recognize and report that a person is crying, it cannot reliably infer the reason and meaning behind the tears. Likewise, a scowling face does not necessarily mean a person is angry, yet that is the conclusion an algorithm will likely reach. Why? We all adapt our expressions to our social and cultural norms, so our expressions are not always a true reflection of our inner states. People often do “emotion work” to disguise their true feelings, and how they express emotion is more likely a learned response than a spontaneous one. For example, women often modify their expressions more than men, especially for emotions that carry negative connotations for them, such as anger, because they are expected to.
AI technologies that make assumptions about emotional states are therefore likely to exacerbate gender and racial inequalities in our society. For example, a 2019 UNESCO report highlighted the gendered harms of AI technology, with “feminine” voice-assistant systems designed around stereotypes of emotional passivity and servitude.
Facial-recognition AI can also perpetuate racial inequality. An analysis of 400 NBA games using two popular emotion-recognition programs, Face++ and Microsoft’s Face API, found that both assigned black players more negative emotions on average, even when they were smiling. These results reinforce other research showing that black men feel compelled to display more positive emotions in the workplace because they are stereotyped as aggressive and threatening.
Emotional AI technologies will become more pervasive in 2023, but if left unchallenged and unexamined, they will reinforce systemic racial and gender biases, replicate and deepen existing inequalities, and further disadvantage those who are already marginalized.