New year — old faces: Meta pays up, Apple protects, TikTok spies, and Twitter may become more evil. AdGuard’s digest
Whether you’ve already got back to the grind or are still vacationing, we want to keep you in the loop. So, here we are, with our first news digest of the year. It features all the usual suspects: Meta, Apple, TikTok, and Twitter. May this year bring us more positive security and privacy news than bad (though who are we kidding).
Meta to pay a record $725 million to settle Cambridge Analytica lawsuit
Meta has agreed to settle a long-running lawsuit stemming from its 2018 Cambridge Analytica fiasco for $725 million. The settlement, which has yet to be approved by the judge, is the largest of its kind, the lawyers for the plaintiffs — millions of Americans whose personal data was harvested by the firm on Facebook — noted.
Cambridge Analytica harvested the data of up to 87 million people to build detailed psychological profiles that could later be used for political advertising. Lawyers for Facebook argued that by choosing to have a presence on the social network, users had effectively forfeited their right to privacy. The judge, however, called this view “wrong.” Once the settlement is approved, each US Facebook user will be able to claim a share of it.
Meta had already forked out $5 billion in fines to the US regulator over the Cambridge Analytica case. CEO Mark Zuckerberg is, meanwhile, facing a separate lawsuit alleging he was personally responsible for mishandling users’ data.
Since the scandal broke, Meta has painted itself as a reformed citizen that now cares about people’s privacy. However, it has since been mired in several scandals, the most recent of which saw its employees hijack user accounts in exchange for bribes. History has shown that past fines have not deterred Meta from violating user trust, and we’re afraid it’s unlikely things will turn out differently this time.
Apple axes controversial plan to scan all photos on users’ phones
In a move that drew applause from privacy advocates, Apple has abandoned its much-criticized plan for a feature that would’ve seen it scanning photos users stored in iCloud for child sexual abuse material (CSAM). Apple first announced the CSAM detection feature last fall, but put it on hold after it was met with intense public backlash. Privacy activists argued, and not without reason, that by scanning all photos on people’s devices Apple would undermine their privacy.
“We’re very excited, and we’re counting this as a huge victory for our advocacy on behalf of user security, privacy, and human rights,” the Center for Democracy and Technology (CDT), one of the groups that opposed the CSAM feature, stated in response to Apple’s move.
Indeed, the fact that Apple decided not to proceed with creating what could have been a system of mass surveillance on users’ phones is a good thing. In a statement to Wired, Apple said that it believes children should be protected from abuse, but not at the expense of their own privacy. “Children can be protected without companies combing through personal data,” according to the company’s spokesperson. And we couldn’t agree more.
More good news from Apple: end-to-end encryption comes to iCloud backups
Apple has been generous with Christmas gifts. The company has announced that it will also be expanding the number of “data categories” within iCloud protected by end-to-end encryption. That includes photos, voice memos, notes, reminders, and, perhaps most notably, iCloud backups. The improved security is part of Apple’s Advanced Data Protection setting, which has become available to US users and will be rolled out to the rest of the world in “early 2023.” The setting is optional, meaning you’ll have to turn it on yourself.
In addition to that, Apple will also start supporting the use of hardware security keys for two-factor authentication. Another security feature announced by Apple will ensure people who face heightened security risks are protected from sophisticated attacks. The new iMessage Contact Key Verification feature, if enabled, will alert the user if “an exceptionally advanced adversary” were “ever to succeed breaching cloud servers and inserting their own device to eavesdrop on these encrypted communications.”
Apple has previously drawn criticism for leaving iCloud backups unencrypted, which meant it could access a copy of your data and share it with law enforcement, as it did in at least one high-profile criminal case. The new feature closes this security loophole, and we can only welcome that.
Twitter reportedly wants you to either share more data or pay up
But it’s not all sunshine and rainbows in the world of Big Tech. Twitter reportedly wants to force ordinary users to share their location and phone numbers for targeted advertising, with no way to opt out, Platformer reported. Well, there will be such a way, but it’s not exactly free… According to Platformer, Twitter will only allow users who have paid for a Twitter Blue subscription to refuse to share their data.
The irony is that the main idea behind this plan is reportedly to compensate for the possible loss of ad money due to the launch of the revamped Twitter Blue subscription, which was originally introduced to boost the company’s revenue. Twitter CEO Elon Musk previously said that Twitter Blue subscribers would see half as many ads, and since advertising accounts for more than 90 percent of Twitter’s total revenue, this could deplete the company’s coffers. According to Platformer, the change will first be rolled out to a limited number of US users to gauge their reaction.
Twitter’s reported new plan, if true, is alarming. It could deal a serious blow to privacy by leaving users no choice but to share their data with advertisers. Although Musk has tried to distance himself from the worst practices of Big Tech, his Twitter does not look much different from the rest of the pack as of now, and might actually end up being worse.
TikTok admits its employees tracked journalists
TikTok has added fuel to existing privacy concerns after it admitted that its employees accessed user data of US journalists, including their IP addresses, in a bid to track their movements. Four employees of TikTok’s parent company, ByteDance, who schemed to track at least two US reporters, were fired, TikTok confirmed. The goal of the spying mission (which ultimately failed) was to see if the journalists were in the same locations as ByteDance workers suspected of leaking insider information.
TikTok apologized for the incident, calling it “an egregious misuse of their [employees’] authority to obtain access to user data.” While TikTok blamed the incident on the “misconduct of a few individuals,” it apparently became the final straw for the US House of Representatives, which banned lawmakers and staff from using the app on any government-issued devices over “security issues.”
Like any other social network that makes most of its money from advertising, TikTok collects a large amount of user data, and therefore poses an inherent risk to privacy and security. The data we share on social media can be leaked, shared with third parties, or fall into the hands of unscrupulous employees. These are the risks we have to take into account when using social media platforms like Facebook, Instagram, Twitter, TikTok, and others.
To help you protect your privacy and security, we released an updated version of our digital hygiene cheat sheet. Check it out!