
Most people are in the pink, said the computer …

Facial recognition keeps progressing and it is now possible to detect emotions in a crowd. Image credit: Findface

Take Facial Recognition, mix it with Affective Computing, sprinkle in a bit of Artificial Intelligence, add plenty of sensors, and what you get is a new paradigm of computer-to-human interaction: a paradigm that is already starting to reshape many areas and to generate a few concerns, although it will probably show its full impact only by the end of this decade.

Talking with machines is kind of awkward, quite a different experience from talking to a fellow human. Yet this difference is useful because it marks a clear boundary. At a syntactic level it can be bothersome, because I have to adapt my language to the kind of language the machine can understand (how many times did you feel frustrated by the stupidity of a machine that needed you to state the obvious over and over again…). At a semantic level, however, it may be convenient not to have to worry about how the machine may judge you; its lack of emotional response is often a plus.

The evolution of chatbots is a first sign that our interactions with machines are changing. Chatbots are software robots that can hold a conversation with us, usually over a telecommunications network, although they can also be embedded in an anthropomorphic robot greeting you at a hotel check-in. They operate in a very specific application domain where their knowledge space can sustain a conversation, and more and more businesses are adopting them (they can be seen as the evolution of IVR, Interactive Voice Response, systems). Voice interaction matters in humanising the interaction because voice is our usual way of interacting with one another, and we are quick to catch any inflection of tone and to detect empathy. You can tell from listening to a person whether she is in the pink or feeling blue, and your voice can convey excitement, boredom, anger… This is starting to percolate into chatbots, both in the way they talk and in the way they listen. AI and affective computing can now infer your mood from your tone of voice and adjust the interaction accordingly, making for a much better interaction experience.
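To make the idea concrete, here is a minimal, hypothetical sketch of how a chatbot might adjust its replies once an upstream affect classifier has labeled the caller's tone of voice. The mood labels, the templates, and the function names are illustrative assumptions, not any real product's API.

```python
# Hypothetical sketch: mood-aware response shaping in a chatbot.
# Assumes an upstream voice-affect classifier already produced a
# mood label; labels and wording below are illustrative only.

RESPONSE_STYLES = {
    "frustrated": "I'm sorry this has been difficult. Let me sort it out quickly: {msg}",
    "cheerful":   "Great! {msg}",
    "neutral":    "{msg}",
}

def shape_reply(message: str, detected_mood: str) -> str:
    """Wrap a chatbot reply in a tone matching the caller's detected mood."""
    # Unknown moods fall back to the neutral style.
    template = RESPONSE_STYLES.get(detected_mood, RESPONSE_STYLES["neutral"])
    return template.format(msg=message)

print(shape_reply("Your order ships tomorrow.", "frustrated"))
```

The same message is delivered in all cases; only the framing changes, which is exactly the kind of lightweight adaptation a mood signal makes possible.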

Voice is important, but when interacting face to face, visual cues are even more telling of emotions and mood, and software is getting better and better at detecting them. The whole area of facial recognition is evolving rapidly in that direction. Interestingly, several APIs are available, with more on the way, that allow developers to build applications detecting mood by looking at faces: a single face in a videoconference, or multiple faces in a crowd, like people listening to a speaker in an auditorium. This “mood” (or sentiment) information can be provided in real time to the speaker, who can change tone to get people more involved in the talk or to steer them towards its objectives.
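The crowd case boils down to aggregating many per-face readings into one room-level signal. The sketch below assumes a facial-analysis API returns a dictionary of emotion scores per detected face (the score format and emotion labels are assumptions, not any specific vendor's interface) and condenses them into a dominant crowd mood.

```python
# Hypothetical sketch: turning per-face emotion scores (as a
# facial-analysis API might return them) into a single crowd "mood"
# that could be shown to a speaker in real time.

from collections import Counter

def crowd_mood(face_scores: list) -> str:
    """Return the dominant emotion across all detected faces."""
    tally = Counter()
    for scores in face_scores:
        # Each face contributes its single strongest emotion.
        dominant = max(scores, key=scores.get)
        tally[dominant] += 1
    return tally.most_common(1)[0][0] if tally else "no faces"

# Illustrative scores for three audience members.
audience = [
    {"happy": 0.7, "bored": 0.2, "surprised": 0.1},
    {"happy": 0.1, "bored": 0.8, "surprised": 0.1},
    {"happy": 0.6, "bored": 0.3, "surprised": 0.1},
]
print(crowd_mood(audience))  # → happy
```

A real system would refresh this summary every few seconds from a live video feed; the aggregation step itself would look much like this.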

Clearly, mood detection can be embedded in a computer interface to adapt the interaction. Several research efforts are under way in this field, and we can expect that, by the end of this decade, all visual- and voice-based human-computer interactions will make use of mood detection via face, gait and voice analysis.

In a way this is a bit scary. Already today cyberspace knows more about me than… I do! It is not just because I forget parts of my life; it is because cyberspace also knows me through what other people tell about me, like publishing photos that include my face…

In the coming years software will be able to detect subtle changes in my expression and spot mood variations that might go unnoticed even by loved ones. Moreover, by looking at me in many situations and comparing subtle changes, some software might even be able to spot health issues…

We are building a new world, and most likely we do not understand its implications…

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.