AdGuard Digest: Ad blocking on the rise, OpenAI’s new tool, TikTok’s old-new woes and misleading chatbots
Ad-blocking nation: More than half of Americans now block ads, report says
Ad blockers have become an integral part of digital life for about 52 percent of Americans, according to a new survey conducted by research firm Censuswide. This represents a significant increase from 2022, when it was reported that only about 34 percent of Americans used ad-blocking software.
Among advertising, programming, and security professionals, people who know the ad industry inside and out, the numbers are even more impressive: between two-thirds and three-quarters of them use ad blockers. Users told researchers that they rely on ad blockers primarily to protect their privacy (about 20 percent), while about 18 percent said their main concern is blocking ads, and about 9 percent said ad blockers help pages load faster. The survey was conducted on behalf of Ghostery, itself a maker of ad-blocking software.
The results of the survey suggest that ad blocking has gone mainstream as more people realize it makes browsing the Web easier and more convenient. That a majority now see ad-blocking software not only as a way to get rid of annoying, invasive ads but also as a privacy tool suggests that people have become more aware of the privacy risks of targeted advertising, namely the extensive collection of personal information it relies on. And that is a good sign.
A recipe for disaster? OpenAI unveils tool that can help create voice deepfakes
OpenAI, the maker of ChatGPT, has unveiled a preview of its new product called Voice Engine. The tool is a text-to-speech AI model said to be capable of creating a synthetic copy of a voice: all it needs is a 15-second snippet of recorded audio to generate the clone.
While the tool could come in handy for those who need reading assistance, for example, there have been concerns about its potential for abuse. Technology that can imitate any voice from such a small sample could open the door to a new wave of phone scams and even security breaches. And while there are already tools on the market that can mimic a real voice, OpenAI boasts that its product stands out thanks to higher-quality output speech.
OpenAI says it won’t be releasing the new tool to the public just yet, citing the need to put mitigations in place so it won't be abused by bad actors. However, it’s almost impossible to ensure that such a powerful tool won’t be exploited by cybercriminals once it’s out in the wild. So while it’s fascinating to see further advances in AI-powered speech generation, the consequences of moving too fast without proper guardrails may be too dire.
TikTok under microscope for suspected privacy and security violations
The US Federal Trade Commission (FTC) is looking into TikTok’s alleged failure to comply with the country’s security and privacy laws. US officials are reportedly investigating whether the world’s most popular platform for short-form video content violated the Children’s Online Privacy Protection Rule (COPPA), which mandates that companies obtain parental consent before harvesting data from anyone under the age of 13.
The regulator is also investigating whether TikTok broke the law by engaging in “unfair or deceptive” business practices. This charge is related to TikTok reportedly allowing individuals based in China to access US user data.
TikTok has been on the verge of being forced out of the US for some time. The bill that would force TikTok to either be sold or be banned in the country has already passed the lower house of the US Congress and is now being debated in the Senate. And while senators have indicated that they will not rush to pass the bill, US President Joe Biden has already said that he will sign it if it reaches his desk.
While the concerns about TikTok are valid, it’s important to apply the same scrutiny to other social networks that collect user data. The focus should not be solely on TikTok’s Chinese roots due to its parent company, ByteDance. Instead, we need to put all social media networks under the same magnifying glass.
Chatbot chaos: NYC government-backed chatbot spouts misinformation
Don’t rely too much on your AI-powered friends — that’s what we’ve been preaching since AI-powered assistants became a staple of daily life in recent years. But companies, and now governments, have been embracing AI at a dizzying pace, and some are now facing the music. One such example is the New York City government.
Launched last October as an extension of a city portal, the chatbot has been found to be spreading misleading information about legal matters. According to an investigation by The Markup, the chatbot claimed that bosses could take a cut of their workers’ tips and advised that landlords have the right to discriminate against prospective tenants based on their source of income.
The chatbot is powered by Microsoft’s Azure services. In response to the criticism, the New York City mayor’s administration pointed to a disclaimer advising people to double-check the chatbot’s answers before trusting its advice. But who reads the fine print?
The New York City chatbot debacle serves as a cautionary tale. While automation can streamline processes, blind reliance on flawed algorithms risks tarnishing reputations and doing more harm than good. Striking a balance between technology and human oversight is critical for the new AI era, but it’s proving elusive for now.