Voice Assistants & Privacy Concerns?
Amazon has an ongoing PR problem around Alexa. Some of it is merited: a software glitch once sent a recording of a private conversation to a third party. Some of it is not. Today Bloomberg ran an article called "Amazon Workers Are Listening to What You Tell Alexa," which describes the company's practice of employing "thousands of workers" to listen in on people's private conversations. It makes for a catchy, click-baity headline that taps into latent fears of Big Brother (Big Company?) using technology to observe and take action against individuals.
But voice is an evolving interface, and it is undergoing constant, iterative work to perfect. What the Bloomberg article describes is standard practice in ongoing AI research. Researchers cannot simply feed volumes of data into computers and expect insight. They need spot checks, performed by a human being, to determine whether the data is accurate and relevant.
Using human reviewers is a way of validating ongoing tweaks to the underlying algorithms that make voice interfaces work.
Consider also that error rates for natural language processing (NLP) run in the 10-20% range, depending on the content and context of the voice input. Using human reviewers is (currently) the best and most accessible means of improving those odds.
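Error rates like these are typically scored as word error rate (WER): the word-level edit distance between what the recognizer produced and what a human reviewer confirms was actually said. Below is a minimal sketch of that scoring step; the transcript strings are invented for illustration, not drawn from any real Alexa data.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of words in the reference."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words, computed by dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: the human-verified transcript vs. what the system heard.
reference = "turn on the kitchen lights"
hypothesis = "turn on the chicken lights"
print(word_error_rate(reference, hypothesis))  # 1 substitution in 5 words -> 0.2
```

A 0.2 WER on a batch of clips is exactly the 10-20% band mentioned above; the human-verified reference transcripts are what make the score, and the resulting model corrections, possible.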
Besides, both Amazon and Google include disclosures about these practices in the Alexa and Google Assistant user agreements, and consumers have the choice to opt out.