- Valentine’s Day study finds AI romance chatbots prioritize data collection over user privacy.
- Lack of security measures and unchecked data sharing raise concerns about harassment.
- Researchers advise protecting your heart and your data alike.
Just in time for Valentine’s Day, researchers delivered a sobering take on the privacy perils lurking within the rising realm of AI-powered romantic chatbots.
A report released this week surveying 11 such apps found that all of them fell into its lowest ranking for protecting user data.
How secure are AI love assistants?
While marketed as tools for companionship and mental health, the study by the nonprofit Mozilla Foundation found flaws such as weak security controls and rampant sharing of personal data with third parties. Some apps also showed signs of enabling harassment, discrimination, or illegal content.
Navigating AI romance with caution
Eva AI was the only chatbot that attempted stricter content moderation; most of the others failed to disclose their security practices or allowed easily guessed passwords.
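The report does not describe any app's actual validation logic, but to see what "allowed easily guessed passwords" means in practice, here is a minimal sketch of the kind of server-side check the researchers found missing. The `validate_password` helper, its thresholds, and its common-password list are illustrative assumptions, not code from any app reviewed.

```python
import re

# Illustrative sketch only: a minimal password policy of the kind the
# Mozilla report found missing. Rules and thresholds are assumptions.
COMMON_PASSWORDS = {"password", "123456", "11111111", "qwerty", "iloveyou"}

def validate_password(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("appears on a common-password list")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"\d", password):
        problems.append("no digit")
    return problems

# A one-character password fails outright; a longer mixed one passes.
print(validate_password("1"))
print(validate_password("C0rrect-horse-battery"))
```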
With personal information on sexual health and more being collected, researchers say users who share intimate details with AI girlfriends do so at their own risk.
As the report concluded, the sheer convenience and customization these apps provide camouflage serious vulnerabilities and ethical lapses in an unregulated space.
So those determined to test AI matchmaking should approach with eyes wide open: use strong passwords and grant apps only limited access to personal data.
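For the strong-password part of that advice, a password manager is the usual answer, but Python's standard `secrets` module can also generate one. The sketch below is a generic illustration, not tied to any app in the report.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```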