The rise of creepy AI surveillance, ChatGPT’s first leak, and more ads. AdGuard’s digest
In this edition of AdGuard’s news digest: ChatGPT exposes personal data, Instagram adds even more ads, a CIA-funded firm manages patient records, France is close to allowing AI-based surveillance, another US state gets its own privacy law, while police are increasingly turning to facial recognition tech.
ChatGPT leaks chats and payment info in first major data breach
ChatGPT, a mega-popular AI-powered chatbot, has suffered its first ever personal data breach. The breach revealed some users’ chat titles and the initial messages of their conversations with the chatbot to other users, who could see them in their own chat history. Moreover, some payment-related information of about 1.2% of active ChatGPT Plus users may have also been exposed. That information could have included users’ full name, email address, the last four digits of their credit card number, and the card’s expiry date.
OpenAI, the company behind ChatGPT, blamed the leak on a bug in an open-source library it uses for ChatGPT. In a statement, OpenAI said that the bug has since been patched, and claimed there was “no ongoing risk to users’ data.”
ChatGPT’s first privacy breach happened rather quickly, but it shouldn’t have come as much of a surprise. Privacy is something that developers of AI tools have put on the back burner by training their models on data, including copyrighted content, scraped from all over the internet. In fact, you should not expect much privacy when using ChatGPT: OpenAI’s terms of service openly state that it may use your prompts and images to improve its services, whatever that may mean.
Instagram is giving you something no one asked for: more ads
Ever since it started showing ads in 2013, Instagram has seemingly been on a mission to shove as many ads into your feed as possible. This trend will continue in 2023, as Instagram has just introduced two new types of ads: ads that appear when you search for something and ads that remind you of products you have viewed before.
‘Search ads’ will jump into your feed after you tap into a post from the search results. This ad format is still in the early stages of rollout, and is set to become globally available in the next few months. We cannot wait. ‘Reminder ads’ promise to be less intrusive, as you need to opt into them to see notifications about the start of an advertised event.
It’s no secret that we’re not the biggest fans of ads, especially when it comes to Instagram, a platform that is already oversaturated with ads and ‘sponsored’ posts. Will the new ad placements be the straw that breaks the camel’s back? Unlikely, but they could hasten the migration of users suffering from ‘ad fatigue’ to other, cleaner platforms such as BeReal.
UK hospitals reportedly told to share data with US spytech firm
Palantir, a data-mining company with links to the CIA, has reportedly been contracted to collect and process “confidential patient information” from British public hospitals, OpenDemocracy reported. According to an internal document cited by the publication, hundreds of hospitals were told to start uploading patient records to a database run by Palantir. The database uses Palantir’s software called Foundry, which integrates data from different sources and helps analyze it.
Palantir has worked for the NHS before: in 2020, its software underpinned the rollout of the Covid-19 vaccine. Since then, the US firm has secured tens of millions of dollars in contracts with the UK government and is eyeing a new $580 million contract to build an NHS database.
The UK government assured that Palantir would not have access to information that could expose people’s identities, because it would be ‘masked’ by pseudonymisation. This is a method where the information that can be used to identify individuals is kept apart from the rest of the data. However, this method is not foolproof: the ‘hidden’ information can be merged with the known data again, which creates a serious risk to privacy and security. And the fact that this data is being handled by Palantir, a company known for its work with the police and the FBI, makes it even more troubling.
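A short sketch illustrates why pseudonymisation is weaker than true anonymisation (the field names and records here are hypothetical, purely for illustration): direct identifiers are swapped for random tokens and moved into a separate lookup table, but anyone holding that table can trivially reverse the process.

```python
import uuid

# Hypothetical patient records: direct identifiers plus medical data
records = [
    {"name": "Alice Smith", "nhs_number": "943-476-5919", "diagnosis": "asthma"},
    {"name": "Bob Jones", "nhs_number": "401-023-2137", "diagnosis": "diabetes"},
]

lookup = {}          # token -> identity mapping, kept apart from the shared data
pseudonymised = []   # what actually gets shared for analysis

for record in records:
    token = uuid.uuid4().hex
    lookup[token] = {"name": record["name"], "nhs_number": record["nhs_number"]}
    pseudonymised.append({"patient_id": token, "diagnosis": record["diagnosis"]})

# The shared dataset no longer contains names or NHS numbers...
assert all("name" not in r and "nhs_number" not in r for r in pseudonymised)

# ...but whoever holds the lookup table can re-identify every patient,
# which is the "merging back" risk described above.
reidentified = {**pseudonymised[0], **lookup[pseudonymised[0]["patient_id"]]}
print(reidentified["name"])  # prints "Alice Smith"
```

The privacy guarantee thus depends entirely on who controls the lookup table, not on the masking itself.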
Liberté, égalité, surveillance: France is legalizing an AI-powered monitoring system
France has inched closer to legalizing the use of AI-powered surveillance, and is set to become the first EU country to do so. Both chambers of the French parliament have overwhelmingly approved the bill that greenlights the use of AI software to monitor people during next year’s Olympic and Paralympic Games in Paris.
The measure is supposed to be temporary, but as the saying goes, nothing is more permanent than a temporary government program. Supporters of the bill say that the mix of cameras and AI is what is required to stop stampedes and attacks. The French government claims that the surveillance would not involve facial recognition. However, critics say that it would necessarily involve the gathering and processing of other biometric data, including appearance, posture, gestures, and gait, which would be enough to identify a person. Civil liberties organizations argue that the proposed system could lead to an “all-out assault on the rights to privacy, protest and freedom of assembly.”
Indeed, there’s a risk that the law may set France on a slippery slope towards a full-blown surveillance state and further away from being Le Pays des Droits de l’Homme (The Country of Human Rights), its fabled nickname. Moreover, it could also set a dangerous precedent for the rest of the EU.
US police increasingly rely on facial recognition tech
If you need an example of where this slippery slope can lead, look no further than the US, where police ran nearly a million searches on a database compiled by Clearview AI, a US facial recognition company. This was revealed by Clearview CEO Hoan Ton-That himself. He said that the company has now scraped 30 billion images from the internet and put them into a database that can be searched by police. The images were harvested without the users’ knowledge or consent.
Last year, Clearview AI was banned from selling its software to private US companies and individuals, but not to law enforcement agencies. Some US states and cities took matters into their own hands and either banned or severely restricted the use of facial recognition tech by police. Others have not: Miami police, for instance, told the BBC that they use facial recognition for “every” type of crime.
The way Clearview collects biometric data without people’s awareness or consent is a violation of privacy that puts millions of people at risk of having their personal data misused. As a private company, Clearview operates under a veil of secrecy, so one must also wonder about the accuracy of its algorithm: facial recognition AI is known to be racially biased and has already led to false arrests. AI has many positive uses, but enabling a surveillance state is not one of them, and preventing such uses will require more regulation.
Another US state now has a comprehensive data privacy law
Not to end our digest on a bad note, here’s some good news: Iowa has just become the sixth US state and the first in the Midwest to enact a comprehensive privacy law. The law gives state residents more leverage over their personal information, including the right to opt out of the sale of their personal data and its use for targeted advertising.
The law aims to fill the gap left by the lack of US federal privacy legislation. It will come into force in 2025 and apply to companies that handle the personal data of at least 100,000 Iowans in a year, or that make more than 50% of their revenue from the sale of personal data while also processing the data of at least 25,000 consumers in the state. In enacting this law, Iowa follows in the footsteps of California, Colorado, Connecticut, Utah, and Virginia, which have all passed similar legislation.
While the new law is a big step forward in terms of privacy protection in the state, the mere fact that Iowa is only the sixth state out of 50 to have such a law reflects the poor state of privacy protections in the US as a whole. A federal privacy law is long overdue, and hopefully it will arrive sooner rather than later.