Safeguard the experience with block and report features
Dating has gone digital – customers are swiping right for online dating platforms that make them feel safe throughout the entire experience.
Of course, things can go wrong, especially when lies slip in. According to a survey conducted by Match in the United States, 43% of respondents confessed to lying on first dates. Most commonly, they lied about the number of previous relationships they'd had (17.6%), their finances (11.8%), where they live (9.9%) and their age (8.3%).
That means it’s up to the companies behind the platforms to maintain trust and protect customer privacy. Those who take this to heart will have users falling ‘head over heels’ for their brand.
Online dating existed long before mobile apps arrived on the scene, but it was Tinder’s 2012 launch that led to explosive growth in the market. There are now over 1,500 different dating apps and websites from which to choose, with more launching seemingly every day. According to recent estimates from Business of Apps, usage of dating apps surged from 185 million users in 2015 to 270 million in 2020, and revenue grew to $3.1 billion in 2020 – nearly double the amount from just five years prior. And there’s no sign of things slowing down.
All dating app experiences begin with strict user guidelines on what is, and isn’t, appropriate or acceptable while using the technology. These rules of engagement are constantly updated, and can be enhanced through partnerships with companies that have a rich understanding of customer experience (CX), technology, and trust and safety. Dynamic partners can help these companies develop and implement in-app technology like chatbots and AI-driven content moderation tools to help keep users safe.
These features are intrinsic to an app’s security and its perception in the marketplace. Here’s a look at some of the best practices in the industry.
Start with profile verification
One of the best ways to keep dating app users safe is to prevent those with ill-intent from entering online communities in the first place.
Dating app brands are becoming increasingly vigilant on this front, vetting all profiles and verifying their veracity before granting users access to the community. This gatekeeping approach helps deny entry to scammers and others who set out to exploit the apps and their users.
However, even with a robust profile verification system in place, there’s still a need for security features that can address the noncompliant users who slip through the cracks. That’s why it’s important to empower app users with reactive security features, like report functions. When a user flags a suspicious or malicious profile, human content moderators will conduct an investigation into the profile in an effort to keep the community safe.
For those who progress beyond browsing profiles onto messaging, block and report features remain essential throughout the entire dating journey. There are two key tenets here.
- Users should be able to flag when a message they’ve received goes against community guidelines, thereby prompting further investigation by the company behind the app. At the conclusion of the investigation, the support team should provide an update to the user who made the initial report, helping to instill confidence and trust in the platform.
- Users should be able to bar another user from messaging them, denying them the ability to send unwanted content. Consent is key, and that applies to messaging too.
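The two tenets above can be sketched in code. This is a purely illustrative model, not any real platform's implementation: the `SafetyService` class, its method names, and the report lifecycle are all hypothetical, assuming a simple in-memory store where a blocked sender's messages are never delivered and flagged messages are queued for human moderator review.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A user's flag on a message, queued for human moderation."""
    reporter_id: str
    reported_id: str
    message: str
    status: str = "pending"  # "pending" until a moderator resolves it

class SafetyService:
    def __init__(self) -> None:
        # Maps each user to the set of users they have blocked.
        self.blocked: dict[str, set[str]] = {}
        self.reports: list[Report] = []

    def block(self, user_id: str, target_id: str) -> None:
        """Bar target_id from messaging user_id."""
        self.blocked.setdefault(user_id, set()).add(target_id)

    def can_message(self, sender_id: str, recipient_id: str) -> bool:
        """A blocked sender's messages should never reach the recipient."""
        return sender_id not in self.blocked.get(recipient_id, set())

    def report_message(self, reporter_id: str, reported_id: str, message: str) -> Report:
        """Flag a message for investigation by human content moderators."""
        report = Report(reporter_id, reported_id, message)
        self.reports.append(report)
        return report

    def resolve_report(self, report: Report) -> None:
        """Moderator closes the investigation; the reporter should then
        receive an update on the outcome, per the tenet above."""
        report.status = "resolved"
```

In a production system these structures would live in a database behind a moderation queue, with the resolution step triggering a notification back to the reporting user.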
In addition to these self-reporting functions, other features are being introduced that make use of artificial intelligence (AI) to prevent noncompliant behavior and to mitigate its effects.
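As a rough illustration of how automated pre-screening can complement self-reporting: the sketch below uses simple rule-based pattern matching, whereas real platforms rely on trained ML classifiers. The pattern list and function name are assumptions for the example; the point is only the flow, in which risky messages are held for human review rather than delivered immediately.

```python
import re

# Hypothetical risk signals; a production moderation system would use a
# learned model rather than a hand-written pattern list.
RISK_PATTERNS = [
    re.compile(r"\b(wire|transfer)\s+money\b", re.IGNORECASE),
    re.compile(r"https?://\S+"),  # unsolicited links are a common scam signal
]

def screen_message(text: str) -> str:
    """Return 'deliver' for clean messages, 'hold_for_review' for risky ones."""
    if any(pattern.search(text) for pattern in RISK_PATTERNS):
        return "hold_for_review"
    return "deliver"
```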
Meetwo invites users to show what makes them unique. Appearance is important, the site highlights, but personality matters more.
“Our algorithm only shows single women or men who meet your requirements”, reads the description of the international dating app, which is available for iOS and Android.