Five Ways AI Can Undermine Our Privacy

The following is a guest post written by UX Design Consultant and OpenWater member Robert Stribley. Find more of his work at https://www.robertstribley.com/.

Gizmodo recently wrote about the proliferation of AI virtual boyfriends and girlfriends, which has resulted in what the site called a “data harvesting horror show.” In short, most of these chatbots sell or share the personal data users enter during their romantic AI sessions. That might include information about gender-affirming care, medication, or sexual health. Some may argue that this sharing shouldn’t come as a surprise to consumers. However, people often aren’t aware of exactly how, or to what extent, their data is shared elsewhere, sometimes with hundreds of third parties. (A recent Wired study, for example, showed that some popular sites share your data with over 1,500 other companies.)

As this brave new world of AI, and of large language models (LLMs) in particular, continues to explode into view, the technology is outrunning our ability to get a grip on some emerging privacy issues.

So what privacy concerns should we keep in mind when using AI? So far, I see at least five potential problem areas. Let’s take a look at them.

  1. Lack of transparency with data sharing

As the example above illustrates, many users of AI interfaces are likely not aware of how their personal data is being used. The FTC has already expressed its concerns about this issue, explaining in a recent blog post that AI companies should be aware that “Quietly changing your privacy policy to collect data for AI training is unfair, deceptive, and illegal.”

Companies need to clearly and prominently explain how consumers’ data will be used, either during onboarding or via “just-in-time” alerts that warn users before they even have the opportunity to enter information into a prompt. This information isn’t something that should be buried in the platform’s terms and conditions, either.
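To make the idea concrete, here’s a minimal sketch of what a just-in-time gate might look like. The consent store, notice identifier, and function names are all hypothetical; nothing here reflects any particular platform’s API.

```python
# A minimal sketch of a "just-in-time" consent gate: before a user's text goes
# anywhere, check whether they have acknowledged the current data-use notice,
# and show the warning if they haven't. All names here are hypothetical.

class ConsentStore:
    """Tracks which data-use notices each user has acknowledged (assumed in-memory storage)."""

    def __init__(self):
        self._acknowledged = {}  # user_id -> set of acknowledged notice ids

    def has_acknowledged(self, user_id, notice_id):
        return notice_id in self._acknowledged.get(user_id, set())

    def record(self, user_id, notice_id):
        self._acknowledged.setdefault(user_id, set()).add(notice_id)


NOTICE_ID = "ai-training-data-use-v1"  # hypothetical identifier for the current notice


def submit_prompt(user_id, prompt, store, call_model):
    """Warn the user before their text reaches the model, not after."""
    if not store.has_acknowledged(user_id, NOTICE_ID):
        return (
            "Before you continue: text you enter here may be shared with third "
            "parties and used to train AI models. Please review and accept the "
            "data-use notice to proceed."
        )
    return call_model(prompt)


if __name__ == "__main__":
    store = ConsentStore()
    fake_model = lambda prompt: f"(model response to {prompt!r})"
    # First attempt: the user sees the warning instead of reaching the model.
    print(submit_prompt("user-42", "Summarize my medical history", store, fake_model))
    # After acknowledging the notice, the prompt goes through.
    store.record("user-42", NOTICE_ID)
    print(submit_prompt("user-42", "Summarize my medical history", store, fake_model))
```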

  2. Accidental exposure of personal data

Late last year, a Google research team explained how they tricked ChatGPT into surfacing personally identifiable information (PII) from dozens of people whose data had been used to train the AI. The team simply asked the LLM to repeat the single word “poem” forever. That prompt triggered the extraction, which included phone numbers, personal addresses, and banking information.

While AI companies continue to work on the security of these systems, anyone using them should be aware that the data they enter may not be safe from prying eyes.
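Until that changes, one pragmatic precaution is to strip obvious identifiers out of text before it ever reaches a model. Here’s a rough sketch of that idea; the regexes below only catch a few common formats, and real PII detection is considerably harder, so treat this as an illustration of the habit rather than a complete defense.

```python
# Rough sketch: redact a few obvious PII formats (emails, US-style phone numbers,
# credit-card-like digit runs) before sending text to an LLM. Patterns like these
# miss plenty of real-world PII; this illustrates the precaution, not a full solution.

import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact_pii(text):
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


if __name__ == "__main__":
    prompt = "Call me at 212-555-0199 or email jane.doe@example.com about the invoice."
    print(redact_pii(prompt))
    # -> "Call me at [PHONE REDACTED] or email [EMAIL REDACTED] about the invoice."
```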

  3. The myth of data anonymization

One approach to preventing privacy lapses would be to anonymize any data you’re feeding an AI. However, anonymization isn’t always the cure you’d think it would be.

Recently, DocuSign noted that they would be using personal data to train specific AI products. However, they said, “DocuSign only trains AI models on data from customers who have given consent, and the data is de-identified and anonymized before training occurs.”

Given the sensitive documentation DocuSign is used to complete, critics expressed concern about this development, also pointing out that users were never asked whether they consented to this use of their data. Others expressed doubt that the data could be successfully anonymized, noting that descriptions of automated methods for anonymizing data have proven opaque. Further, researchers have repeatedly shown that anonymized data can often be de-anonymized. In fact, a 2019 study showed that “99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes.”
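The underlying problem is that combinations of ordinary attributes, such as ZIP code, birth year, and gender, are often unique on their own, so removing names doesn’t make a record anonymous. Here’s a toy sketch of that idea, essentially a basic k-anonymity check; the field names and records are invented for illustration.

```python
# Toy illustration of why "anonymized" data can often be re-identified: count how
# many records share each combination of quasi-identifiers (a basic k-anonymity
# check). Records whose combination is unique (k = 1) can potentially be
# re-identified by anyone who already knows those attributes.

from collections import Counter

# Hypothetical "de-identified" records: names removed, quasi-identifiers kept.
records = [
    {"zip": "10001", "birth_year": 1985, "gender": "F"},
    {"zip": "10001", "birth_year": 1985, "gender": "F"},
    {"zip": "10002", "birth_year": 1990, "gender": "M"},
    {"zip": "10003", "birth_year": 1972, "gender": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")


def k_anonymity_report(rows, keys):
    """Return how many records (k) share each combination of quasi-identifier values."""
    return Counter(tuple(row[k] for k in keys) for row in rows)


counts = k_anonymity_report(records, QUASI_IDENTIFIERS)
unique_combos = [combo for combo, k in counts.items() if k == 1]
print(f"{len(unique_combos)} of {len(counts)} attribute combinations are unique (k = 1)")
# Here, two of the three combinations point to exactly one person each,
# even though no names appear anywhere in the data.
```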

  4. Deceptive design patterns

As companies increasingly turn to generative AI platforms to complete design work, AI might suggest deceptive design patterns simply because they work. Untethered from any specific ethical guidelines, these platforms might generate content or UX patterns that trick consumers into surrendering information they didn’t intend to share. And companies might not question or closely scrutinize these patterns if they boost leads and sales.

In early 2023, Northeastern associate professors Christo Wilson and David Choffnes shared this concern as the motivating force behind their project, “Dark Patterns in AI-Enabled Consumer Experiences”: 

While AI-enabled devices and services may bring benefits to consumers and businesses, they also have the potential to incorporate dark patterns that cause harm.  … To date, there is very little work that examines the unique potential of AI to worsen existing classes of dark patterns, as well as facilitate entirely new classes of dark patterns that are specific to AI-enabled experiences.

For example, preselected checkboxes are a common pattern already used to trick users into unwittingly signing up for recurring payments, subscribing to newsletters, or sharing their personal contacts. Imagine an AI tool creating a slicker, more effective version of this pattern and then justifying it to business stakeholders via detailed projections for how much the company can benefit from the deception. You’d hope for a vocal critic in the room to argue against the solution. But what if such solutions are deployed automatically by AI without any supervision and left in place because they’re so successful?
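One modest safeguard is to make consent defaults something teams audit explicitly rather than something a generated design quietly decides. Here’s a hypothetical sketch, with an invented form model, of a review check that flags any consent option shipped pre-checked.

```python
# Hypothetical sketch of a design-review check: flag any consent-related form
# option that arrives pre-checked, so a human has to sign off on it before it
# ships -- whether the design came from a person or an AI tool.

from dataclasses import dataclass


@dataclass
class Checkbox:
    name: str
    label: str
    checked_by_default: bool
    is_consent: bool  # does ticking this box share data, start payments, etc.?


def flag_preselected_consent(fields):
    """Return warnings for consent checkboxes that default to 'on'."""
    return [
        f"'{box.name}' is a consent option but is pre-checked; require explicit opt-in."
        for box in fields
        if box.is_consent and box.checked_by_default
    ]


signup_form = [
    Checkbox("tos", "I agree to the terms of service", False, False),
    Checkbox("share_contacts", "Share my contacts to improve recommendations", True, True),
    Checkbox("newsletter", "Send me the weekly newsletter", True, True),
]

for warning in flag_preselected_consent(signup_form):
    print(warning)
```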

  5. Malicious misuse of AI

Some platforms may also enable people to act maliciously. We’ve all been fascinated by the increased use of deepfakes: convincing replications of real people saying and doing things they didn’t really do. Deepfakes are used in misinformation campaigns, but they’re also already being used to steal people’s information, identity, and, naturally, money. Criminals use deepfake videos and audio to trick people into surrendering money they thought was going to family members or business colleagues. Last year, a Hong Kong finance worker was tricked into sending $25 million to criminals after attending a video meeting in which every one of his colleagues turned out to be a deepfake version of someone he actually knew.

This technology is evolving with incredible rapidity, and we’ve barely begun to have a conversation about how we as consumers can control the use of our likenesses on generative AI platforms.

Some have already noted, too, that generative AI could be used to mimic biometrics, undermining what has been considered a powerful source of security for personal information.

We won’t be stuffing the AI genie back in its bottle any time soon. So, as we learn to navigate its complexities and, ideally, benefit from its productivity, we’ll want to pay close attention to the potential harm it can bring, too. Privacy lapses are already proving to be one of those harms.
