AdGuard’s Digest: Avast’s double-dealing, Meta’s ‘smokescreen’ privacy fee, and push alerts risks
Avast abused its privacy-protecting software to sell people’s data
In a true tale of wolves in sheep’s clothing, antivirus software maker Avast was found to have been selling its users’ browsing history to more than 100 third parties for years, without notice or consent. The US federal regulator, the FTC, charged that Avast “unfairly collected consumers’ browsing information through the company’s browser extensions and antivirus software, [and] stored it indefinitely,” only to sell it through its subsidiary, Jumpshot.
Users, the FTC said, were blissfully unaware of the arrangement and thought the only thing Avast was doing on their computers was blocking third-party tracking. The practice went on from at least 2014 until 2020, when Avast shut down Jumpshot. While Jumpshot was still in operation, Avast would provide it with re-identifiable browsing data, which could include every website a person visited, precise timestamps, device and browser types, and approximate geolocation. Some of the products that Jumpshot offered to its advertising clients were specifically designed to cross-correlate Avast data with other data, including from brokers, to track individual users. Avast must now pay $16.6 million for its flurry of violations and is prohibited from selling web browsing data for advertising purposes.
It’s not uncommon for big tech companies to harvest user data without so much as a heads-up or a simple notice, but it’s less common for the companies whose entire premise is to improve your privacy to be caught red-handed doing the very same thing. Unfortunately, even self-proclaimed “privacy champions” aren’t infallible when it comes to protecting your personal information. So, if you’re going to entrust your privacy to someone else, it’s worth the time and effort to do your research.
Meta’s privacy fee is ‘smokescreen’ to hide illegal data collection, EU groups say
Meta’s plan to charge users in the EU €9.99 per month for a single ad-free Facebook or Instagram account (and this is only on desktop) has drawn outrage from a European consumer rights consortium. The consortium, which brings together 45 European consumer organizations from 32 countries, has dubbed Meta’s offer to users “smoke and mirrors” designed to preserve “what is, at its core, the same old hoovering up of all kinds of sensitive information about people’s lives which it then monetises through its invasive advertising model.”
We have argued before that what Meta portrayed as a fair choice is in fact a fake one, since it makes privacy protection a paid feature reserved only for those who are willing to foot an outsized bill — about €35 for using both Instagram and Facebook on mobile and desktop. The consortium announced that eight of its members filed formal complaints with their national data protection watchdogs, alleging violations of the EU’s data protection regulation, the GDPR. These legal challenges come on top of the complaints filed by the 19 other consortium members back in November.
It’s encouraging to see European consumer groups banding together to challenge the false dilemma that Meta users in the EU now face. The more visibility this Big Tech malarkey gets, the better. However, it remains to be seen whether European regulators will take a keen interest in consumer concerns.
Push notifications — police’s new favorite way to trace you
We’ve recently written about popular iOS apps exploiting the push notification feature to covertly gather user data. It turns out that’s not the only privacy risk of push notifications that users should be aware of. According to a recent report by the Washington Post, law enforcement agencies have been increasingly seeking access to push notification metadata in order to track down suspects in various investigations.
Push notifications — alerts sent by apps that flash on your phone’s home screen — are stored on servers managed by Big Tech companies like Google, Apple, and Facebook. Upon request, these companies can supply the police with associated metadata, including timestamps, network details, and so forth. Unlike the content of messages sent via end-to-end encrypted services such as Meta-owned WhatsApp, this metadata is not encrypted. Yet it can disclose significant details about a user’s interactions and behavior, including their geolocation. The Washington Post’s probe uncovered over 130 search warrants for push notification metadata in the US alone. The crimes listed in these warrants range from terrorism and sanctions evasion to COVID relief fraud, maritime piracy, gun law violations, and drug offenses — in other words, a hodgepodge of offenses of varying degrees of seriousness.
It’s becoming increasingly clear that the metadata that ends up on Big Tech’s servers, tied to individual users, can offer profound insights into those users’ whereabouts and behavior patterns. It’s a privacy loophole that third parties are exploiting more and more, as increasingly stringent privacy regulations close off traditional avenues of data acquisition. This worrying trend won’t go away anytime soon.
Pornhub is panned for failing to protect “models’” privacy
Pornhub’s parent company, Aylo, has been accused of “significant problems” with privacy that led to “social stigmatization, psychological damage, financial loss, and even attempted suicide” of victims whose images were uploaded to the site without consent.
The findings are the result of a three-year investigation into Pornhub by Canada’s privacy commissioner. In his report, the commissioner took issue with Pornhub’s practice of accepting at face value uploaders’ claims that they had obtained consent from “models” to appear in pornographic material. This hands-off approach has proven to be fertile ground for revenge porn, the commissioner said. To remedy the situation, he urged Pornhub to immediately remove all content from its site for which meaningful consent had not been obtained. For its part, Pornhub’s parent company reportedly vehemently disagreed with the conclusions and refused to implement any of the recommendations, in particular that it obtain explicit consent from “talent” before greenlighting the upload of each video.
Despite efforts to regulate the porn industry, it remains a gray area. By default, users’ images or likenesses should not be used in ways they have not explicitly agreed to. The misuse of personal information by the porn industry serves as a stark example of how mishandling data — whether images or other personally identifiable information — can have serious consequences, especially in the era of AI-generated deepfakes that we now live in.