How Tesla workers turned car camera footage into memes — and why it matters
‘Privacy from Day One’: this is one of the first lines you see in Tesla’s customer privacy notice. In it, the electric car maker says that “your privacy is and will always be enormously important to us.” Despite promising to respect your privacy from the moment you buy the car, Tesla has apparently failed to keep its word. In a recent report, Reuters revealed that private videos and photos from customers’ car cameras were shared, and sometimes memefied, by its employees in group chats.
The practice ran rampant from 2019 until at least 2022, Reuters reported, citing interviews with nine former employees who decided to spill the beans. The images, ranging from disturbing to deeply intimate, came from multiple cameras built into Tesla’s models.
Cameras only, please
Over the years, Tesla has come to rely more and more on cameras to support the car’s self-driving features while retiring other sensors. First, Tesla removed radars from its vehicles (though it may bring them back), and last year it moved on to getting rid of ultrasonic sensors in favor of the camera-only ‘Tesla Vision’ system.
The system relies on eight cameras strategically placed around the car: three in the front, two forward-looking and two rear-looking on the sides, and one in the back. These cameras feed an AI system that processes and interprets images in real time, allowing the car to see and understand its surroundings. Some Teslas also have a driver-facing camera mounted above the rearview mirror. This ‘cabin camera’ monitors the driver and beeps if you fail to keep your eyes on the road and your hands on the wheel.
Tesla assures its customers that their privacy is protected and that it does not access or share any images or videos from the cabin camera, or any other camera, unless you agree to such sharing. And even if you opt in to data-sharing, Tesla says it only uses the data for three specific purposes: to communicate with you, to provide you with services such as over-the-air updates, and to improve its products and services. Needless to say, there is no mention of using customers’ data for fun.
Since cameras are the primary source of information for Tesla’s Autopilot system (by contrast, Ford’s Active Driving Assist is fed data from both cameras and a radar sensor), it is critical that they can accurately detect objects. This is where the work of human data labellers comes into play, and where the problems start.
Making memes out of mishaps
Artificial intelligence may seem like magic, but it still relies heavily on human help. To train its AI systems to spot different objects on the road, such as cars, traffic signs, pedestrians, and lane markings, Tesla employs hundreds of human labellers. Tesla is not alone in this practice; other carmakers also enlist the services of both in-house and external data labellers to annotate videos and images, which are then used to teach self-driving algorithms.
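To make the job concrete, here is a minimal sketch of what a single annotated frame might look like once a labeller has drawn boxes around objects. The class names, the frame ID, and the (x, y, width, height) pixel box format are illustrative assumptions, not Tesla’s actual annotation schema.

```python
# Hypothetical sketch of a labelled video frame: a human annotator marks
# each object with a class name and a bounding box (x, y, width, height).
# All names and values here are made up for illustration.

def label_frame(frame_id, boxes):
    """Bundle an annotator's bounding boxes with the frame they describe."""
    return {"frame_id": frame_id, "labels": boxes}

annotation = label_frame(
    "clip_0042_frame_117",
    [
        {"class": "car", "box": (412, 230, 96, 54)},
        {"class": "pedestrian", "box": (120, 215, 28, 70)},
        {"class": "traffic_sign", "box": (580, 90, 30, 30)},
    ],
)

# Count how many objects of each class the annotator marked in this frame.
counts = {}
for label in annotation["labels"]:
    counts[label["class"]] = counts.get(label["class"], 0) + 1

print(counts)  # {'car': 1, 'pedestrian': 1, 'traffic_sign': 1}
```

Thousands of frames annotated this way become the training examples an object-detection model learns from, which is why labellers end up viewing so much raw camera footage in the first place.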
Initially, Tesla outsourced the data-labeling chores to a company called Sama (formerly Samasource). Headquartered in San Francisco with centers in Kenya and Uganda, it counts General Motors and Ford among its clients. Over time, however, Tesla apparently became dissatisfied with the quality of the labeling it was getting from outside contractors, and set up an in-house data labeling service that grew to 1,000 employees.
A former data labeller at Tesla told Reuters that they would sometimes see “scandalous stuff”, including “scenes of intimacy.” Another ex-worker said that he was privy to “a lot of stuff that like, I wouldn’t want anybody to see about my life.”
Either to spice up their dull routines or to show off, some workers apparently could not resist sharing the “cool” images they came across with coworkers in group chats. According to some former employees, this behavior was only occasionally frowned upon and was generally seen as a sign of being fun and engaging. “People who got promoted to lead positions shared a lot of these funny items and gained notoriety for being funny,” one former labeller told Reuters. Sometimes, the workers would reportedly embellish the screenshots in Photoshop. In other words, your typical office vibe, except that these were someone’s private moments used for in-house entertainment.
The types of content shared in group and one-on-one chats ranged from the innocuous, such as pictures of random dogs or unique cars (including, reportedly, Elon Musk’s very own submersible vehicle), to the more disturbing, such as people falling, people being forced into cars, and cars ramming into bikes.
‘Anonymized data’… or not quite?
Tesla maintains that camera recordings remain anonymous and are not linked to you or your vehicle. However, according to Reuters, data labellers used a program that allowed them to view the locations of recordings on Google Maps. This could lead to re-identification: if a recording was made in your driveway, for example, it is not hard to guess whose car it came from. Not so comforting anymore, huh?
Tesla, which disbanded its PR department in 2020, has been tight-lipped about the allegations. Tesla owners, concerned about the possible misuse of their personal data, have been more vocal. One of them, San Francisco resident and Model Y owner Henry Yeh, sued Tesla in California for capturing “highly-invasive videos and images of cars’ owners” and sharing them for the “tasteless and tortious entertainment of Tesla employees, and perhaps those outside the company.”
Yeh wants his lawsuit to represent all the affected customers. He accuses Tesla of being “immoral, unethical, unscrupulous, or substantially injurious to consumers.” He also says that Tesla broke its contract with the customers by not anonymizing the footage, as the system used by the employees could reportedly reveal the locations of the recordings.
How Tesla’s memegate could be more than just an accident
When asked by Reuters for their thoughts on what had happened, some former Tesla employees said the way customers’ private information was handled was wrong. “It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” one ex-employee was quoted as saying.
But two other ex-employees said it was not a big deal, suggesting, according to Reuters, that “people long ago had given up any reasonable expectation of keeping personal data private.”
The latter statement echoes the attitude to privacy expressed by none other than Tesla CEO Elon Musk. In an interview with podcaster Joe Rogan, Musk questioned the importance of privacy in the modern world, soon to be dominated by AI. “I think there’s not that much that’s kept private by people that is actually relevant. That other people would actually care about. When you think other people care about it, but they don’t really care about it. And, certainly, governments don’t,” Musk told Rogan in 2018.
Why Musk and co. are wrong
What Musk seems to be saying is that no one cares about your private data, that it’s not something you should lose sleep over. But his own company’s practices betray a different truth: sometimes, people do care. In Tesla’s case, employees were using private videos and pictures to break up their routine. You wouldn’t want that to happen to your data in any case, but what if someone had a darker motive? This kind of data could be used for blackmail, or to case a home for a robbery. And if you’re a public figure, such as a celebrity or a politician, your private data can be your downfall if it falls into the wrong hands.
The employees may not have had malicious intentions (much like the third-party data labellers hired by vacuum maker iRobot, who leaked owners’ intimate images to Facebook and Discord), but the victims of data misuse can still suffer greatly. Tellingly, the work environment in Tesla’s labeling department, as described by Reuters, is eerily similar to the work ethic (or lack thereof) at the NSA, as described by Edward Snowden.
The privacy risks you can’t dodge, but can diminish
Tesla’s case is not unique. As long as companies have to hire humans to sift through raw data to improve AI algorithms, cases like Tesla’s are bound to happen. Humans are the weakest link when it comes to safeguarding data: there is always a risk that an employee will use it for their own benefit or leak it. The latter happened to Google, for example, when its partners leaked more than 1,000 audio recordings of customer conversations with the Google Assistant to a news site, which was then able to identify some of the people in the voice clips.
We’re entering the age of AI, so it’s a safe bet that unless this problem is solved soon, there will be more and more room for things to go south. When it comes to the automotive industry, most modern cars are already filled to the brim with sensors and cameras that rely on AI. The global market for automotive cameras is expected to reach $69 billion by 2030, while the market for automotive sensors is on track to reach $36.53 billion in five years. Cars need these cameras and sensors to drive themselves, park, and keep their occupants safe, and it is impossible to imagine the future of the auto industry without them.
This presents us with a dilemma: embrace technological advancement along with the privacy risks that come with it, or buy a car with fewer cameras. The latter option would probably be safer in terms of privacy, but also less convenient. And even if we do embrace the fruits of technological progress, the more or less self-driving cars, there are still ways to minimize the risks to privacy.
At the very least, you can study manufacturers’ privacy policies before buying a car and take advantage of the existing privacy options that they offer. For example, Tesla lets you opt out of sharing data through your car’s touchscreen (one can only hope that your opt-out will actually stop the collection of data, but that’s another issue). And to learn what Tesla already knows about you, you can request a copy of the information associated with your Tesla account — informed means armed.