In 1975, the mood ring was created by two New York inventors, Josh Reynolds and Maris Ambats. They bonded liquid crystals to quartz stones and set them into rings. The liquid crystal changed color with the temperature of the wearer's finger. Most rings came with a chart matching each color to the wearer's supposed mood.
Enter Mohammad Mahdi Ghassemi and his colleague Tuka Alhanai, two MIT students. They've developed a better way to detect the changing moods of a speaker. Their system relies on an artificially intelligent algorithm that registers not just what is being said but how it is said, as well as the speaker's vital signs. The algorithm then discerns whether the conversation is happy, neutral or sad.
Scientists have been down this path before, but these two are pushing the boundaries. They've trained a computer to consider a wide range of factors in making judgments about emotions.
In a new paper about their work, they write, "As far as we know, this is the first experimental set-up to include individuals engaged in natural dialogue with the particular combination of signals we collected and processed."
They are hoping that the new technology may someday be used by people who have problems reading the many aural and visual cues that signal emotion in human interaction. For example, people with autism may benefit from having co-workers or relatives who wear a device that makes continuous emotional assessments. Such assessments could be transmitted to the person with autism wearing a similar device.
It's possible that such technology might also be used in classrooms and focus groups. Just imagine: you could expose consumers to packaging, logos, videos, virtually every form of communication, and get a truly honest assessment. No more guessing.
In one experiment, they fed the algorithm snippets of dialogue, both happy and unhappy, as well as word definitions. Then 10 volunteers were asked to tell a story, happy or sad, whatever came to mind. Researchers then asked questions to approximate a conversation. While telling the stories, participants wore a Samsung Simband, a computer on a wristband with sensors that capture a wide variety of physiological data, including the skin's temperature and electrical resistance (a measure of stress). The device also had an accelerometer and gyroscope, which made it possible to record the speaker's movements. A sad story, for example, typically elicited increased cardiovascular activity, fidgeting, changes in posture and fingers touching the face.
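To make that setup a little more concrete, here is a minimal sketch of how raw wristband-style signals could be reduced to one feature vector per five-second window, the same interval the researchers report on. The specific signals, sample rate, and summary statistics below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def window_features(signals, sample_rate_hz=20, window_s=5):
    """Reduce raw wearable signals to one feature vector per 5-second window.

    `signals` is a dict of equal-length 1-D arrays, e.g. heart rate,
    skin temperature, and accelerometer magnitude. The choice of signals
    and statistics is an assumption for illustration, not the published
    feature set.
    """
    window = sample_rate_hz * window_s
    n_windows = min(len(v) for v in signals.values()) // window
    features = []
    for i in range(n_windows):
        row = []
        for name in sorted(signals):
            chunk = signals[name][i * window:(i + 1) * window]
            row.extend([chunk.mean(), chunk.std()])  # level and variability
        features.append(row)
    return np.array(features)  # shape: (n_windows, 2 * number of signals)

# Synthetic data standing in for one minute of Simband-like streams
rng = np.random.default_rng(0)
raw = {
    "heart_rate": 70 + rng.normal(0, 3, 1200),
    "skin_temp": 33 + rng.normal(0, 0.2, 1200),
    "accel_mag": np.abs(rng.normal(0, 1, 1200)),
}
X_physio = window_features(raw)
print(X_physio.shape)  # (12, 6): twelve 5-second windows, 6 features each
```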
The Simband sent the data to the algorithm, which also monitored what was being said. It knew that the word "hate" wasn't associated with happy discussions. It could also detect the difference between a sincere and a sarcastic "Thanks a lot!"
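As a rough illustration of the idea of weighing word choice alongside physiological cues, the sketch below concatenates simple bag-of-words text features with per-window physiological features and trains an ordinary three-class logistic-regression classifier. The model choice, feature set, and tiny made-up data are assumptions for illustration only; this is not the MIT team's system.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic training set: a transcript snippet plus two physiological
# summary statistics per 5-second window (all values invented for illustration).
texts = [
    "I hate how that meeting went",
    "what a wonderful surprise that was",
    "we reviewed the quarterly numbers",
    "thanks a lot, that really ruined my day",
    "I loved seeing everyone again",
    "the schedule stays the same next week",
]
physio = np.array([
    [88.0, 6.1],   # assumed mean heart rate, heart-rate variability
    [71.0, 2.0],
    [72.0, 2.2],
    [90.0, 7.0],
    [70.0, 1.8],
    [73.0, 2.1],
])
labels = ["negative", "positive", "neutral", "negative", "positive", "neutral"]

# Bag-of-words text features concatenated with the physiological features.
vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(texts).toarray()
X = np.hstack([X_text, physio])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new 5-second window the same way.
new_text = vectorizer.transform(["thanks a lot for nothing"]).toarray()
new_window = np.hstack([new_text, [[89.0, 6.5]]])
print(clf.predict(new_window))  # likely "negative" given the toy data
```

In this toy version, the elevated heart-rate features are what let the classifier treat a sarcastic "thanks a lot" differently from a genuinely grateful one; word counts alone would not make that distinction.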
Ultimately, the scientists found that their algorithm could detect whether a conversation was happy or sad with 83% accuracy, and could provide an assessment (positive, negative or neutral) every five seconds at a rate 14 percentage points better than chance.
The conclusion of all this? Mr. Ghassemi says, "results show that it's possible to classify the emotional tone of a conversation in real time."