Collection of voice data for profit raises privacy fears
A customer-service center uses artificial intelligence to identify a caller's agitation; an insurance company scans voice data to flag illness and raise rates; a five-star restaurant denies a reservation over personal details revealed by the tone on the other end of the line.
Far from science fiction, such scenarios have leapt into the realm of possibility, said Joseph Turow, a professor at the University of Pennsylvania's Annenberg School for Communication and author of "The Voice Catchers: How Marketers Listen In to Exploit Your Emotions, Your Privacy and Your Wallet."
The rise of voice-assisted products in homes and workplaces has driven a wave of private sector innovation, honing the intake of fast food drive-thru orders, replacing handheld tech typically used by warehouse employees, and refining smart home devices that adapt to a user's vocal tics, according to privacy experts and advocates who spoke with ABC News.
But voice data collection also fuels targeted marketing based on personal information gleaned from recordings and risks data breaches that could place one's voice in the hands of cyber criminals aiming to imitate it, they added.
"This has become a real issue as more and more people are using voice-activated devices like Alexa and Siri," Marc Rotenberg, founder and executive director for the nonprofit Center for AI and Digital Policy, told ABC News. "There's a ticking time bomb with the collection of voice recordings."
"These companies gather voice recordings to improve a service," Rotenberg added. "But their retention of these voice recordings is a real concern for privacy."
While voice assistants recently arrived in consumers' pockets and living rooms, the technology goes back more than a half-century.
In the early 1960s, IBM unveiled the Shoebox, a calculator that could do basic arithmetic in response to voice commands. Roughly a decade later, the "Harpy" speech recognition system developed at Carnegie Mellon could recognize more than 1,000 spoken words.
Ultimately, the technology reached an inflection point more than a decade ago, when Apple released Siri as a feature of the iPhone, putting voice commands in the pockets of tens of millions of users. Three years later, Amazon came out with Alexa, a voice assistant that could play songs or look up facts in response to a simple utterance. Soon after, Google launched Google Assistant, a voice-recognition feature available on its Android and Google Home devices.
Meanwhile, the technology has grown well beyond a consumer curiosity, as businesses have sought it out to improve operations and marketers have explored secondary uses of voice data. In all, the worldwide voice recognition market surpassed $3.5 billion in 2021 and is expected to reach $10 billion by 2028, according to research firm Global Market Insights.
In many cases, businesses deploy voice technology because it bolsters productivity or improves the experience a customer encounters, Kristin Bryan, an attorney at Squire Patton Boggs who has worked on litigation involving the collection of voice data, told ABC News.
"Companies are increasingly finding novel ways to use voice technology to reduce human error and streamline operations," she said.
For instance, a growing number of warehouses in the nation's vast e-commerce network have replaced handheld tablets with wearable technology that allows employees to record their work through voice commands, freeing up both hands for lifting and sorting products, said Roberto Michel, a senior editor for Modern Materials Handling, a trade publication covering the manufacturing industry.
A survey conducted by the trade outlet last year found that 39% of warehouse companies use voice-assisted technology, up from the 21% that reported adopting the devices a year earlier, Michel said.
The technology "speeds up the order-selection process versus fumbling with a handheld," Michel said.
However, even ostensibly innocuous uses of voice-assisted technology can trigger privacy concerns, said Turow, of the University of Pennsylvania.
Last week, Amazon-owned grocery chain Whole Foods agreed to pay almost $300,000 to workers in a settlement over allegations that a voice-assisted product used to track worker productivity at a Chicago warehouse had recorded employees' voices without their consent.
Critics fear that voice-assisted products glean more revealing data than many users realize, allowing companies to profit from utterances made at home or work through carefully honed advertising or the sale of intimate information.
A consumer's voice could reveal a wealth of information about him or her, including height, weight, ethnicity, personality traits and possible health issues, said Turow, who spoke to scientists about audio sleuthing for his book on the collection of voice data.
In 2019, Amazon announced the development of "a deep learning model to detect when customers are frustrated" with its voice assistant. "Alexa can now try to adjust, just like you or I would do," the company said.
"With Frustration Detection, Alexa will recognize positive, negative, and neutral tone for a request. Alexa is not designed to detect distinct emotions like happiness, sadness, anger or surprise," Amazon Spokesperson Lauren Raemhild told ABC News.
"Customers have several options to manage their Alexa voice recordings, including the option not to save their recordings at all," she added.
Last June, TikTok updated its privacy policy, expanding the data it collects to include voice recordings.
Companies that collect voice data could use information to sell products directly to consumers, or pass the data along to advertisers, Turow said.
"As we move into a world where people use voice over typing in their everyday lives, marketers want to know: What can I get out of the voice of this person?" he said.
Rotenberg, of the Center for AI and Digital Policy, warned that the collection of audio data could also result in nefarious actors accessing one's voice, allowing them to commit fraud or other crimes through impersonation.
A thief deploying the tactic, known as deepfake audio, tricked a Hong Kong-based bank into sending $35 million to a criminal the bank thought was a corporate attorney, Forbes reported last October.
In a statement, Raemhild said the company takes extensive measures to ensure the security of its data.
"Amazon has hundreds of employees dedicated to designing secure products, innovating on security, and finding and fixing vulnerabilities in Amazon services and devices," Raemhild said. "We employ numerous tactics and features that help keep our devices and customer data secure, for example, rigorous security reviews during development, encryption to protect data, and regular software security updates."
Apple and Google did not immediately respond to a request for comment about potential theft and abuse of voice data.
Despite the growing use of voice-assisted technology, laws governing the collection of audio data remain limited, according to attorneys and advocates who spoke with ABC News.
The U.S. lacks a federal law governing such data, leaving regulation primarily at the state level. So far, four states have enacted laws pertaining to the collection of voice data: California, Texas, Washington and Illinois, said Bryan, of Squire Patton Boggs.
The strongest of those laws, Illinois' Biometric Information Privacy Act, requires companies to obtain written consent from individuals before collecting such data and prohibits firms from selling or otherwise profiting from the information. Companies that violate the law face potential financial penalties.
Efforts to limit voice data collection must move quickly, before voice-assisted products become even more widely adopted, Turow said.
"Once this has congealed, we can't do much about it," he said.