The device’s tone-tracking service raises questions about user privacy, but it’s also part of a growing industry that employs AI and voice-recognition to analyze the emotional affect in a human voice.
In many ways, the $99.99 Halo is a standard wearable, tracking a user’s health with the help of an accompanying smartphone app.
What sets it apart is a small microphone on the band that can record snippets of your voice, which are analyzed with machine learning that takes pitch, intensity, tempo and rhythm into account. Those bits of speech are timestamped and tagged with labels like "content" or "hesitant," and scored for "positivity" and "energy level."
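Amazon hasn't disclosed how Halo's model actually works, but the pipeline the company describes, extracting acoustic features and mapping them to tone labels, can be sketched in rough form. The sketch below is purely illustrative: the feature list (pitch, intensity, tempo/rhythm) and label names come from Amazon's description, while the estimation methods, thresholds and scoring heuristics are invented assumptions, not Amazon's model.

```python
# Illustrative sketch of a tone-analysis pipeline. The features and label
# names echo Amazon's description of Halo; everything else (the crude
# pitch/tempo estimators, thresholds, scoring) is a made-up placeholder.
import numpy as np

def extract_features(samples: np.ndarray, sample_rate: int) -> dict:
    """Compute rough acoustic features from a mono speech snippet."""
    # Intensity: root-mean-square energy of the waveform.
    rms = float(np.sqrt(np.mean(samples ** 2)))

    # Pitch: crude autocorrelation-based fundamental-frequency estimate.
    frame = samples[:sample_rate]  # analyze up to the first second
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sample_rate // 400, sample_rate // 50  # 50-400 Hz speech range
    pitch_hz = sample_rate / (lo + int(np.argmax(corr[lo:hi])))

    # Tempo/rhythm proxy: count energy peaks per second in 10 ms windows.
    window = sample_rate // 100
    n = len(samples) // window
    energy = (samples[:n * window].reshape(n, window) ** 2).mean(axis=1)
    peaks = np.sum((energy[1:-1] > energy[:-2]) & (energy[1:-1] > energy[2:]))
    rate = peaks / (len(samples) / sample_rate)

    return {"pitch": pitch_hz, "intensity": rms, "rate": rate}

def label_tone(features: dict) -> dict:
    """Map features to toy labels mimicking Halo's tagging and scoring."""
    energy = min(1.0, features["intensity"] * 10 + features["rate"] / 10)
    positivity = min(1.0, features["pitch"] / 400)  # placeholder heuristic
    label = "content" if positivity > 0.5 else "hesitant"
    return {"label": label, "positivity": positivity, "energy": energy}
```

A production system would use a trained classifier over far richer features rather than hand-set thresholds; the point here is only the shape of the flow, from raw audio to a small set of derived tone labels.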
Users have to opt in to the tone feature, and Amazon emphasized that speech samples are recorded and processed locally on the phone, not sent to the cloud, and are automatically deleted after processing so that no one can listen to them.
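That "process locally, then delete" design can be summarized in a few lines. This is a minimal sketch of the pattern Amazon describes, not its implementation: the `record_fn` and `analyze_fn` callbacks are hypothetical stand-ins for the device's capture and tone-model code.

```python
# Sketch of ephemeral on-device processing: raw audio never leaves the
# phone and is deleted once the derived tone labels are computed.
# record_fn and analyze_fn are hypothetical callbacks, not a real API.
import os
import tempfile

def analyze_snippet_locally(record_fn, analyze_fn) -> dict:
    """Record to a local temp file, analyze it, then delete the audio."""
    fd, path = tempfile.mkstemp(suffix=".wav")
    os.close(fd)
    try:
        record_fn(path)            # capture a short voice snippet to disk
        result = analyze_fn(path)  # run the tone model on the local file
    finally:
        os.remove(path)            # raw audio is removed after processing
    return result                  # only the derived labels are retained
```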
Still, what's known as "sentiment analysis" is increasingly being used by businesses in voice communication, often to help sales agents interact with customers — a need that has grown as agents carry out their work remotely during the pandemic. But even some of those in the field caution that machines lag behind humans when it comes to interpreting emotions.