Genesis meets its end, OpenAI on the ropes, Google copies Apple, and more leaky apps. AdGuard’s digest
In this edition of AdGuard’s digest: ChatGPT faces its first defamation lawsuit and more regulatory woes, a dark web marketplace selling stolen credentials implodes, alcohol recovery platforms leak data to advertisers, and Google gives users more control over their in-app data.
Italy sets conditions for ChatGPT’s return, and they are all about privacy
Italy has given OpenAI, the US-based startup behind ChatGPT, a list of demands it must meet if it wants its massively popular chatbot to be unbanned in the country. ChatGPT has been inaccessible in Italy since March 31, after the local regulator accused it of violating the EU’s GDPR data protection law.
Now, the Italian regulator has come up with a list of things that OpenAI needs to do to bring ChatGPT into compliance. The demands include that OpenAI clarify a legal basis for processing users’ data to train its AI algorithms; give users and non-users a way to correct any false information the chatbot has spread about them, or, if that’s “technically unfeasible,” have that data deleted. In addition, both users and non-users should have the right to opt out of having their data processed. Finally, the Italian regulator demanded that OpenAI implement an age verification system that would prevent users under the age of 13 from accessing the chatbot. It’s not just Italy that’s taking OpenAI to task — a European privacy watchdog has set up a special ChatGPT task force to help EU member states align their positions on the issue.
It will be interesting to see how (and whether) OpenAI clears the GDPR hurdle, as AI-powered tools, including ChatGPT, are not privacy-friendly by design. The models behind ChatGPT and other AI tools, such as DALL·E, were trained on vast amounts of data scraped from across the web without user consent. While it’s possible to opt your data out of ChatGPT’s training set and delete your account, getting an AI to “unlearn” something is a difficult process. Besides, there’s no surefire way to know that your data has actually been removed for good. Either way, if Italy and OpenAI can work out their differences, it could serve as an example for other countries where the GDPR applies.
Hey AI bot, see you in court! Whistleblower mayor claims ChatGPT defamed him
For now, however, ChatGPT is on the ropes, facing more legal challenges every day. It’s no secret that large language models can “hallucinate”, i.e. spread misinformation or even “invent” facts. That’s apparently what happened in the case of an Australian mayor who may file the first ever defamation lawsuit against OpenAI. The official says that ChatGPT falsely claimed that he had served time for bribery, thus damaging his reputation. In fact, the mayor was the one who blew the whistle on the bribery scheme and was never charged with a crime. The mayor’s legal team gave OpenAI 28 days to correct the mistake or face legal action. The mayor could potentially seek more than $200,000 in damages if he follows through on his threat to sue OpenAI.
OpenAI has generally absolved itself of responsibility for the chatbot’s output by warning that it “sometimes writes plausible-sounding but incorrect or nonsensical answers.” It’s hard to say whether the mayor will follow through on his threat to sue OpenAI, but if he does, we’d certainly be grabbing our popcorn and watching, because it could set a precedent for the future.
Whatever the outcome, one thing is certain: as generative AI becomes more skilled at crafting believable answers and more people use it in the workplace, the question of who should be held accountable for its errors and how to stop the spread of falsehoods is a legitimate one.
Alcohol recovery startups spill user data to advertisers
There’s hardly anyone you trust more than your doctor, and there’s hardly a more vulnerable time than when you’re battling an addiction. But the 100,000 patients who had their personal data leaked by online alcohol recovery startups Monument and Tempest may now think twice before entrusting their data to online healthcare platforms.
In a disclosure first reported by TechCrunch, Monument, which acquired Tempest in 2022, revealed that it may have exposed a vast trove of patients’ personal and health data to advertisers, including Facebook, Google, Microsoft, and Pinterest. The data was funneled to the ad giants through tracking pixels embedded in Monument’s site since 2020 and in Tempest’s site since 2017; the company says it fully removed third-party trackers only in February this year. Tracking pixels are snippets of code that website owners place on their pages to track user actions; they also help advertisers measure the performance of their ads and target them. The information that could have been shared includes patients’ names, dates of birth, email addresses, phone numbers, home addresses, and insurance ID numbers, as well as such sensitive information as photos, appointment details, selected services, and survey responses.
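To illustrate the mechanism, here is a minimal, hypothetical sketch in Python (the domain, event name, and data fields are made-up assumptions, not Monument’s actual setup): a tracking pixel is typically a tiny image whose URL carries user details as query parameters, so the mere act of the browser loading the image ships those details to the ad network’s server.

```python
from urllib.parse import urlencode

def build_pixel_url(base: str, event: str, user_data: dict) -> str:
    """Build a tracking-pixel URL that smuggles user data in its query string."""
    params = {"event": event, **user_data}
    return f"{base}?{urlencode(params)}"

# Hypothetical example: every field below ends up in the ad network's server logs
pixel = build_pixel_url(
    "https://ads.example.com/pixel.gif",
    "appointment_booked",
    {"email": "patient@example.com", "service": "alcohol-recovery"},
)
print(pixel)
```

The page would then embed this URL in a 1×1 image tag; no user interaction is needed for the data to leave the site.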
Needless to say, the patients did not consent to their private data and treatment regimens being shared with ad tech companies. This incident is unfortunately far from isolated: it follows the case of two mental health platforms that admitted to doing much the same thing last month. These data practices are regrettable and all too common. If you absolutely must entrust your data to online healthcare providers, choose reputable ones. Even that, however, may not be a guarantee.
‘Cookie monsters’ bust dark web site that peddled stolen credentials
Genesis marketplace, which used to be a go-to place for stolen credentials and digital fingerprints (for more on what a digital fingerprint is, read our article), has gone out of business thanks to a joint effort by the FBI and its peers from law enforcement agencies around the world.
The notorious invitation-only dark web marketplace was taken down in “Operation Cookie Monster”. The operation led to the arrest of 120 people, including suspected users of the site, and resulted in 200 searches worldwide. Officials made a point of targeting users, not just administrators of the site: “Genesis falsely promised a new age of anonymity and impunity, but in the end only provided a new way to identify, locate, and arrest online criminals.” Since its genesis (pun intended) in 2018 and before its ignominious end, the marketplace offered access to approximately 80 million stolen account access credentials, such as usernames and passwords. Device fingerprints, also offered on the site, allowed criminals to bypass anti-fraud protections.
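As a rough illustration of why fingerprints are valuable to fraudsters, here is a hedged Python sketch (the attributes and hashing scheme are illustrative assumptions, not how Genesis or any real anti-fraud system works): a handful of device attributes, hashed together, yield a stable identifier, so a criminal who replays a victim’s attributes can look like the victim’s own trusted device.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a set of device attributes into a short, stable identifier."""
    # Canonical ordering so the same attributes always hash the same way
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical victim profile of the kind Genesis sold
victim = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "timezone": "Europe/Rome",
    "screen": "1920x1080",
    "language": "it-IT",
}
print(fingerprint(victim))
```

Because the identifier is fully determined by the attributes, anyone holding the stolen attribute set can reproduce it on demand.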
The seizure of Genesis is welcome news, but it’s unlikely that we’ve seen its last iteration. And its demise is by no means the end of online identity theft. The problem is that many people give up their sensitive information voluntarily, either by sharing it on social media or by handing it over to unscrupulous third parties. Oversharing is contagious, and it is in our best interest to go against this trend.
Google makes it easier to delete your app account and data
Google wants app developers on its Play Store to make it easy for users to delete their accounts and associated data. Users should be able to do this both within the app and on the web. Google says that by implementing this policy, it’s giving users more control over their in-app data.
With the rule in place, users won’t have to reinstall an app just to ask for their account to be deleted; they will be able to do it via a web link. When a user requests account deletion, developers will also have to delete all data associated with that account, unless they have “legitimate reasons” to keep it, and they will have to spell out what those reasons, such as fraud prevention, are. Developers have until December 7 this year to provide more information about their data deletion practices, and users may see the changes next year. The rule Google is introducing is similar to one the App Store implemented in 2021: Apple demanded that apps that enable account creation also allow users to delete their accounts, but only from within the app.
It’s always a positive thing when big tech companies like Google hand users back some power over their personal data; our only regret is that it didn’t happen sooner. It’s also worth noting, however, that unless there’s a way to verify that the data has actually been deleted, unscrupulous app vendors may still quietly keep it.