Dating apps and ChatGPT: User safety in the age of AI
Modern dating is tough. Between stressing over how to create the most engaging profile, worrying about whether you’re being ghosted and feeling overwhelmed by yet another source of notifications, trying to find the one can be exhausting. And now daters have to worry not just about the perfect opening line, but about whether the person on the receiving end exists at all. Now entering the dating space: Artificial Intelligence.
To be clear, dating apps have always been built on AI. Tinder, Bumble, Hinge and other apps use AI to suggest matches based on compatibility, while Tinder offers tips for choosing your “best” profile pictures and Bumble suggests conversation topics. But now, users are turning to AI to help them in their love lives. This can range from the fairly harmless, like generating catchy icebreakers or writing a bio, to morally grayer territory, such as editing photos and using AI to write actual messages mid-conversation.
“I'm sure many users have edited the photos that they share,” says futurist and technology expert Sinead Bovell. “But AI just takes that to another level. And then you have some users that are even going as far as creating bots that can speak on their behalf, which has a lot of red flags and ethical issues.”
This is just the latest byproduct of ChatGPT’s rising popularity since its inception in November 2022. What started with people turning to the chatbot to help them finesse their resumes, develop a script to use when asking for a raise or even write a term paper has naturally expanded to include messaging on dating apps. “It's just become so democratized and so accessible for everyone,” Bovell says.
But while equal access is usually a positive, in the case of dating apps, the consequences can be dire.
“On the one hand, catfishing just got an enormous upgrade,” Bovell explains. “Not only can you create AI-generated imagery, you can have AI-generated audio, AI-generated video and AI-generated text in the style and voice of somebody. So it becomes much more challenging to [recognize that] somebody isn't real or to be suspicious of someone.”
This can lead to physical safety concerns, since users could make plans to meet up with someone who misrepresented themselves online, as well as scams. According to a 2020 study from cybersecurity blog TechShielder, Canada has the third-highest number of catfishing reports globally, with over 1,000 Canadians reportedly swindled out of a collective $9 million in 2020 alone.
AI can also contribute to the proliferation of racist biases among dating app users. Because these systems are trained on data sets that may contain racist imagery or information, they may, for example, edit users’ photos so they appear lighter-skinned or have more Eurocentric features.
App creators say they have taken note and are taking steps to protect their users against AI-enabled deception. As far back as 2016, Bumble implemented profile verification via selfie, and Tinder did the same in 2020. In 2022, Hinge followed suit with its own verification process, adding a “Verified” badge to any account that submitted a selfie video. And in early 2024, Match Group Inc., which owns Tinder and Hinge, announced the wider rollout of its expanded ID verification program after testing it in Australia and New Zealand and releasing it in Japan in 2019. With this new verification step, users submit a photo of a valid driver’s license or passport along with a self-recorded video, which is then checked by a third-party vendor. Approved profiles receive a blue check mark. Match Group has also launched an educational campaign warning users about romance scams, and in 2018 it established the Match Group Advisory Council, a group of advocates and safety experts who meet to assess and review product safety.
While these steps are helpful, it’s important to note that across all of these platforms, verification remains an additional (and optional) step, leaving the onus on users to decide whether to move forward with unverified matches. What’s more, relying on AI detectors isn’t a foolproof way to weed out inauthentic accounts and users, Bovell notes: some AI-generated accounts are easy to spot, but as the technology advances and learns to emulate creative, human-sounding dialogue, fakes become much harder to discern.
“At the end of the day, we don't have consistent AI tools that can 100% verify that something is AI-generated,” Bovell says, noting that this is largely because the technology is so new. Eventually, “there's going to be bigger structural changes to the internet and to digital platforms that make pathways to accountability a lot easier.”
For now, though, she advises dating apps to commit to staying at the forefront of industry safety standards and protocols.
There’s only one way to look, she says, and that’s forward: “Being proactive is really the only path to take.”