AdGuard’s digest: Google pays Apple, X wants to replace your bank, YouTube’s ad blocking detection faces a privacy challenge
In this edition of AdGuard’s Digest: report reveals how much Google pays Apple, a controversial ‘anti-encryption’ bill becomes law, X wants to be your bank, YouTube faces a surprise challenge, OpenAI is preoccupied with doomsday thoughts.
Google paid Apple $18 bn to remain iPhone’s default search engine
Apple and Google are bitter rivals, competing on many battlegrounds, ranging from mobile operating systems and app stores to advertising. However, they are also close partners. It has never been much of a secret that Google pays Apple huge sums of money to be the default search engine in Safari across all Apple’s devices. The only question was: how much? And now The New York Times has answered it.
The Times reported, citing people with knowledge of the deal, that Google paid Apple a whopping $18 billion in 2021. To put this into perspective, in 2022 Apple generated “just” around $4.7 billion with its own global advertising business. According to the report, Google also actively worked to nip Apple’s aspirations to build its own search engine that could compete with Google Search. After Apple’s built-in Spotlight search feature started to perform exceptionally well, Google concocted a plan to introduce a similar feature in Chrome. Google is also planning to use the new anti-competition laws in the EU to its advantage. The laws should force Apple to open its closed ecosystem to other companies, including Google. Google believes that once that happens, the number of European iPhone users who use Chrome will triple, according to internal documents cited by the Times.
The report reveals that while Apple and Google compete fiercely in many domains, they also cooperate to maintain their dominance and influence. These behind-the-scenes deals lead to centralization and stifle innovation and diversity in the tech industry. Moreover, Google’s desire to exploit the EU regulation to make the browser market even more centralized is worrying, as it could undermine user choice, privacy, and security.
UK’s ‘anti-encryption’ bill becomes law
A controversial piece of legislation, which can potentially undermine online privacy and security, has become law in the UK. The so-called ‘Online Safety Bill’ received royal assent on Thursday, and will now be enacted in several stages.
One of the law’s stated goals is to stop the spread of child sexual abuse material (CSAM) whether it is communicated “publicly or privately,” the latter meaning in private messages. Earlier this year, heads of the most popular end-to-end encrypted messengers, including WhatsApp, Viber, and Signal, argued that this clause effectively means that they will have to scan private chats, which is impossible to implement without undermining end-to-end encryption (E2EE). They threatened to leave the country if this part of the law was not removed. Later (we wrote about it here), the UK regulator said that it won’t use the law to force service providers to scan messages because there is currently no safe and secure way to do so.
However, this does not mean that the British government won’t enforce the so-called “spy clause” in the future. As for us, we are not at all sure that the UK will keep its promise. In any case, now that the ‘Online Safety Bill’ has become law, it will be interesting to see how the situation unfolds. Specifically, whether service providers will stand their ground or buckle under the threat of fines of up to 10 percent of their annual global turnover. For her part, Signal president Meredith Whittaker has already stated that Signal would rather leave the country than be “forced to build a backdoor.”
Musk wants you to ditch your bank in favor of X
Would you trust a social network with your finances? If you ask us, the answer is a resounding “no.” First of all, banks may not be without their shortcomings, but they have perfected the craft of safeguarding our money over the years through trial and error. On the other hand, social networks have proven not to be as trustworthy, just by the way they handle (or rather mishandle) our personal data. And X (formerly Twitter) is no exception. Regardless, the platform’s current owner, Elon Musk, wants the app to become the center of people’s financial lives.
According to The Verge, Musk told a recent staff meeting that he wants X to handle payments, adding: “When I say payments, I actually mean someone’s entire financial life.” Musk further clarified, sounding increasingly ominous: “If it involves money. It’ll be on our platform… like you won’t need a bank account.” Musk’s vision of turning the microblogging service into a bank replacement is consistent with his idea of X as an “everything app,” similar to China’s WeChat. But the idea, however grand, has been met with skepticism.
Musk has said he wants the financial features to launch by the end of 2024. Whatever form they take, we strongly advise against trusting X (or Facebook, or Snapchat, or YouTube…) with your finances, and especially against transferring your entire financial life to a platform with a poor privacy and security record.
YouTube, in war with ad blockers, faces an unexpected privacy challenge
YouTube has always been hostile to ad blockers, but lately it has stepped up its war against them. The Google-owned platform now displays annoying pop-ups to some users with ad blocking tools and blocks their video playback if they don’t disable them. YouTube claims that it has the right to do so, citing a violation of its terms of service, but a privacy advocate is now challenging this claim.
YouTube’s terms of service do not explicitly prohibit ad-blocking extensions, but they do say that users should not “interfere with any part of the service” — which presumably includes ads. Privacy advocate Alexander Hanff has filed a complaint with the Irish regulator, arguing that YouTube’s ad blocker detection scripts spy on users without their consent. It’s a murky legal issue, so it will be interesting to see what the regulator says in response to the complaint. For his part, Hanff has mentioned that the European Commission told him as far back as 2016 that the use of such detection scripts should require consent.
OpenAI will investigate chemical, biological, and nuclear threats posed by AI
OpenAI is plagued by a growing number of legal challenges, including copyright infringement and privacy claims. However, the issue that seems to have the company’s leadership in its thrall is the threat of extinction posed by AI.
The company has set up a special team, called Preparedness, to assess “catastrophic risks” posed by frontier AI. The team will evaluate various scenarios, such as AI deceiving humans, cybersecurity breaches, chemical, biological, radiological, and nuclear attacks, and “autonomous replication and adaptation” — or, in other words, AI going rogue.
The team’s mission is commendable, but it may also distract from more immediate and realistic risks posed by AI, such as privacy, authorship, and ethics. We have written extensively about these issues here and here.