Vinita Bhatia

Apple Siri settlement: A wake-up call for data privacy

A $95 million settlement raises urgent questions about privacy laws, ethical advertising, and the future of consumer trust.


A dozen years ago, when Nishad Ramachandran, a digital and AI consultant at Useristics Inc, brought an Amazon Alexa into his home, he quickly realised the implications of having a device that actively listens. "If Alexa did not listen and respond to any of our commands, we would deem it a failure. So, for the success of its product and to provide high levels of customer experience, it had to listen in," he remarked. But he also questioned, "What is it doing with all the voice data it is surveying?"

That question is now at the forefront again, following Apple’s $95 million settlement over allegations that its Siri voice assistant recorded private conversations without user consent and shared them for targeted advertising. This case exposes not just Apple’s missteps but also the larger industry-wide practices that blur the lines between enhancing user experience and violating privacy.

The lawsuit against Apple, dating back to 2019, alleged that Siri's 'Hey Siri' feature was unintentionally activated, leading to private conversations being recorded. More alarmingly, these recordings were reportedly shared with advertisers, resulting in targeted ads based on private discussions. Two plaintiffs claimed they were served ads for Air Jordans, Olive Garden, and a surgical treatment after discussing these topics at home.

Apple has denied any wrongdoing, stating it has never sold data collected by Siri or used it to create marketing profiles. The settlement, however, has shaken the tech giant's 'privacy-first' brand positioning.

Fumiko Lopez, the lead plaintiff, and others involved in the class-action suit could receive up to $20 per Siri-enabled device owned between 2014 and 2024, pending approval from US District Judge Jeffrey White in February 2025.

Apple is not alone in this controversy. In 2023, Amazon paid $25 million to settle allegations by the DOJ and FTC that Alexa violated children's privacy laws by retaining voice and geolocation data for its own purposes. Google is also facing legal action for allegedly using conversations as training data for AI workflows.

These cases underscore how tech giants exploit ‘active listening’ to gather data for AI development and targeted advertising. Cory Doctorow’s concept of ‘enshittification’—where platforms prioritise users, then exploit them for advertisers, before turning on advertisers—finds unsettling resonance here. As Doctorow writes, “Then, they die.”

Ethics versus innovation: Where do we draw the line?

The Apple Siri case raises a critical question for advertisers: how far is too far? Prashanth Joshua, head of business growth and strategy at 1verse1, notes that active listening offers unprecedented insight into consumer behaviour, but at a cost. "Without clear consent and transparency, this technology risks further eroding trust between consumers and companies," he said.

Nishad Ramachandran, digital and AI consultant at Useristics Inc.

Active listening combines voice data with behavioural data to deliver hyper-targeted ads. However, this raises significant ethical concerns.

For instance, Bengaluru-based Dipti Shetye expressed her worry over the ease with which people unknowingly consent to such invasive practices. "I can only hope the laws ensure that processing of my personal data is done in a transparent and secure manner," she said.

Global privacy regulations like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have introduced stringent rules, requiring explicit consent for data collection. Yet, loopholes remain. Most users rarely read the fine print in terms and conditions, and companies exploit this apathy to include invasive clauses.

In India, while data protection laws mandate consumer consent for targeted advertising, the practice of clicking ‘Accept All Cookies’ remains pervasive. The lack of user awareness compounds the issue, leaving individuals vulnerable to data misuse.

Joshua suggests innovative ways to enhance user consent, such as gamifying the process. “Turning consent into an interactive experience—like a short quiz—can make users actively engage with how their data will be used,” he proposed.
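A minimal sketch of what Joshua's quiz-based consent could look like in practice (all names and quiz content here are hypothetical illustrations, not any company's actual implementation): consent for a data-use purpose is recorded only if the user answers a short comprehension question correctly, proving they engaged with how their data will be used.

```python
# Hypothetical quiz-style consent gate: consent is granted only when the
# user demonstrably understands what the data-use purpose entails.
QUIZ = {
    "voice_analytics": {
        "question": "If you enable voice analytics, what may happen?",
        "options": [
            "Nothing is ever recorded",
            "Voice snippets may be analysed to improve the assistant",
            "Your conversations are published",
        ],
        "correct": 1,  # index of the accurate answer
    },
}

def grant_consent(purpose: str, answer: int, consents: dict) -> bool:
    """Record consent for `purpose` only if the quiz answer is correct."""
    item = QUIZ.get(purpose)
    granted = item is not None and answer == item["correct"]
    consents[purpose] = granted
    return granted
```

The design choice worth noting: a wrong answer records an explicit refusal rather than silently retrying, so the default state is always opt-out.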

Implications for adland

For advertising agencies, the Apple Siri case serves as both a warning and an opportunity. It highlights the need to prioritise ethics and transparency in data collection. Nishad Ramachandran argues, “Tech companies will continue to push at the boundaries of what is legal when it comes to privacy. Governments and courts will always be many steps behind.”

Prashanth Joshua, head of business growth and strategy at 1verse1.
The settlement—amounting to just 0.102% of Apple’s $93.4 billion profit last year—also underscores the inadequacy of penalties. Such fines are unlikely to deter violations. This raises the question: should penalties for data breaches be stricter?

An industry professional remarked that despite Apple's settlement, its brand equity has likely suffered, even if many of its 1.38 billion iOS users remain indifferent or unsure about their next steps. He noted, "Apple could well get away by claiming that it never used the data from its devices to serve ads. However, it is hard to believe that, after accessing and analysing user requests to improve Siri's understanding of dialects and other user nuances, the tech major did not serve those preferences on a platter to advertisers."


While Apple has taken steps to address the controversy, such as disabling human grading of Siri recordings, the broader debate around privacy persists. Sam Harris’s warnings about AI’s potential to develop beyond human comprehension serve as a stark reminder of the risks associated with unchecked technological innovation.

For advertisers, the lesson is clear: trust is fragile, and transparency is non-negotiable. Brands must lead the way in educating consumers, offering granular opt-in controls, and ensuring data practices align with ethical standards. In an era where privacy breaches can erode consumer confidence, accountability is no longer optional.
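The 'granular opt-in controls' the paragraph above calls for can be pictured as a per-purpose consent registry: every purpose defaults to opt-out until the user explicitly opts in, and every change is timestamped so it can be audited. This is a generic sketch of that principle, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRegistry:
    """Per-purpose opt-in store: purposes default to 'no' until the user
    explicitly opts in, and each change is timestamped for audit."""
    grants: dict = field(default_factory=dict)

    def opt_in(self, purpose: str) -> None:
        self.grants[purpose] = (True, datetime.now(timezone.utc))

    def opt_out(self, purpose: str) -> None:
        self.grants[purpose] = (False, datetime.now(timezone.utc))

    def allowed(self, purpose: str) -> bool:
        # Missing entry means the user never opted in: treat as refusal.
        return self.grants.get(purpose, (False, None))[0]
```

The key property is the default: `allowed()` returns `False` for any purpose the user has not actively enabled, which is the opposite of the 'Accept All Cookies' pattern the article criticises.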

As we navigate the digital age, the Apple Siri case is a wake-up call—not just for Big Tech but for the entire advertising ecosystem. It challenges us to rethink how we balance innovation with ethics and consumer rights. The stakes couldn’t be higher.

Source:
Campaign India
