Sometimes it is hard to tell whether someone is sharing a happy or a sad story unless you pick up cues from how they express it. Worry not: a wearable device may soon do that job for you.
Students at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a wearable that could help you better navigate the tone of any conversation.
It can gauge the mood of a human exchange based on speech patterns, tone of voice, and physiological signals such as heart rate, blood pressure, skin temperature, and hand movements.
Mohammad Ghassemi, a CSAIL graduate student, says, “The way we set up our experiment was to explore if we had information on half of a conversation, could we reconstruct how they felt as they were telling these stories.”
He did the experiment with fellow CSAIL graduate student Tuka Alhanai.
First, they recorded the vitals and audio of stories told by 31 students at their university, who afterward told the researchers whether the overall tone of their story was happy or sad. A third party, a lab technician, then divided each story into five-second segments and labeled each segment happy, sad, or neutral.
The stories then served as training data for two different neural networks: one that classifies the overall mood of a story, and another that classifies the mood of each 5-second segment.
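The two-level idea can be illustrated with a minimal sketch. Everything here is hypothetical: the feature names, thresholds, and aggregation rule are invented for illustration, whereas the actual CSAIL system uses neural networks trained on real audio and physiological data.

```python
# Hypothetical per-segment features: (pitch_variance, pause_fraction,
# heart_rate). Values and thresholds are invented for illustration.

def classify_segment(features):
    """Label one 5-second segment as happy, sad, or neutral."""
    pitch_variance, pause_fraction, heart_rate = features
    # Monotone speech with long pauses as a (hypothetical) sadness cue.
    if pitch_variance < 0.2 and pause_fraction > 0.3:
        return "sad"
    if pitch_variance > 0.5:
        return "happy"
    return "neutral"

def classify_story(segments):
    """Aggregate per-segment labels into one overall story mood."""
    labels = [classify_segment(s) for s in segments]
    happy, sad = labels.count("happy"), labels.count("sad")
    if happy == sad:
        return "neutral"
    return "happy" if happy > sad else "sad"

story = [
    (0.6, 0.1, 72),  # animated speech
    (0.1, 0.4, 68),  # monotone with long pauses
    (0.7, 0.1, 75),  # animated speech
]
print([classify_segment(s) for s in story])  # ['happy', 'sad', 'happy']
print(classify_story(story))                 # happy
```

In the real system both levels are learned models rather than hand-written rules, but the division of labor is the same: one classifier operates on short segments, another on the whole story.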
The researchers found that their artificial intelligence device could detect communication patterns that signal a story’s mood. In the experiment, for example, sad stories were marked by speaker cues such as monotone delivery, long pauses, fidgeting, and putting the hands on the face.
But Ghassemi admits the AI is far from ready to help people with social anxieties, as there is no guarantee the system can reliably identify a story’s mood.
“There’s a lot of variation in the way we tell stories,” he says. “A sad story can have happy moments, or a happy story can be sad up until the very end. This variation can make the emotive content of stories hard to classify.”
The results showed that the AI captured the overall mood of the given narratives with 83% accuracy, while its classification of the short segments was only 17.9% better than random guessing.
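The "better than random" comparison amounts to measuring accuracy against a chance baseline. A small sketch with made-up predictions (not the study's data; with three labels, a random guesser is right about a third of the time):

```python
# Illustrative accuracy bookkeeping: overall accuracy and the lift
# over a random three-way baseline. Labels and predictions are made up.

def accuracy(predictions, truth):
    correct = sum(p == t for p, t in zip(predictions, truth))
    return correct / len(truth)

truth       = ["happy", "sad", "happy", "neutral", "sad", "happy"]
predictions = ["happy", "sad", "happy", "sad",     "sad", "happy"]

acc = accuracy(predictions, truth)
chance = 1 / 3  # three equally likely labels
print(f"accuracy: {acc:.1%}, lift over chance: {acc - chance:+.1%}")
```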
Ghassemi attributes this to the difficulty of analyzing very short segments of conversation without reference to the whole, combined with the relatively small data set used in the initial research (stories from 31 students).
The pair are looking to advance their system by integrating more data, so the AI can improve its mood-reading skills, or “emotional granularity.” Eventually, it might detect boredom, fear, or shifts in a speaker’s emotion.
The device is still far from public release, but the researchers hope it can someday help people with social anxiety disorder or Asperger’s, and even those who are simply introverted or socially awkward.
Source: MIT News