If you’re a customer experience expert – or perhaps simply an avid traveler – you may have stumbled upon the latest numbers highlighting fake reviews on TripAdvisor. Frankly, it doesn’t look great. After analyzing almost 250,000 reviews from the 10 top-ranked hotels in 10 popular tourist destinations, the review and advocacy organization Which? found that one in seven showed “blatant hallmarks” of fake reviews. If you consider that online reviews are believed to influence up to $28B USD in annual UK booking transactions alone, that one in seven takes on a whole new meaning.
Granted, many of us have probably had an inkling that fake feedback was on the rise. Perhaps we noticed an oddly mechanical turn of phrase in that Yelp review, or an unnerving pattern in the precise features that keep popping up. And of course, if it’s long been possible to buy Facebook likes and Instagram followers, it’s only natural that an entire industry of review farms has risen to meet the demand for positive reviews.
Online reviews are believed to influence up to $28B USD in annual UK booking transactions alone.
But beyond being annoying, how does that surge in unreliable, qualitative data impact customer trust and loyalty? And more importantly: how can brands be vigilant in understanding their data in order to maintain credibility and spot real issues as they arise?
How fake reviews break more than consumer trust
One of the most obvious impacts of fake reviews, of course, is the erosion of customer trust. And that’s no small hit, especially for customer experience professionals. Trust, in many ways, has become the ultimate currency – not to mention the most direct pathway to customer loyalty. In fact, a recent study by PR firm Edelman shows that 80% of global respondents named brand trust as either a deal-breaker or a deciding factor in their purchasing decisions. So if customers think you’re feeding them fake information to bolster your sales, you’re likely to see the ripple effects on your conversion rates and bottom line.
But beyond the decrease in consumer trust, fake reviews also impact your company’s ability to make strategic and informed business decisions. Recent numbers show that 65% of marketers worldwide see improved data analysis capabilities as their top priority. And as most brands move towards a data-driven approach to product development and customer experience, they rely heavily on the accuracy of their qualitative data to improve every touchpoint, minimize friction, and identify opportunities to improve and bolster revenue streams. You can see, then, how data that fails to discern and tag legitimate pain points will once again risk chipping away at your customers’ trust. But how can brands make sure they’re not letting fake, unreliable data sway or cloud their decision-making? By training their keen eye – and sophisticated AI – to recognize the signs.
80% of global respondents named brand trust as either a deal-breaker, or a deciding factor in their purchasing decision.
How AI can filter out fake reviews
Though deciphering fake reviews can prove much more complex than you might think, there are nonetheless a few tell-tale signs that can help you raise a red flag.
“Research has shown that people who are posting fake reviews haven’t actually bought the product,” says Keatext CTO, Charles-Olivier Simard. “So their way of describing the product is different, a bit more elusive, a bit more generic.”
But beyond being imprecise – an experiment by Cornell University showed humans were able to spot fake reviews with less than 50% accuracy – relying on human detection makes it nearly impossible to scale. That’s where text analytics comes in. Through powerful AI and continuous training, text analytics algorithms are capable of deciphering and tagging patterns that might not be embedded in the meta-data used by traditional algorithms.
Text analytics algorithms are capable of deciphering and tagging patterns that might not be embedded in the meta-data used by traditional algorithms.
For example, reviews generated through promotional channels or incentives will sometimes be identified through text in the post itself. The post might, for example, include a disclaimer stating “this review was gathered through a promotional initiative.” While that linguistic pattern would not necessarily be identified or flagged by review platforms, text analytics technology like Keatext can easily be trained to pick up on those unique word combinations. In fact, that sort of analysis is what text analytics does best.
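A minimal sketch of that kind of pattern matching, using a few hypothetical disclaimer phrases (the phrase list and function names here are illustrative, not Keatext’s actual patterns; a trained model would learn these signals rather than hard-code them):

```python
import re

# Hypothetical disclaimer phrases that often mark incentivized reviews.
# A real text analytics system would learn such patterns from labeled data.
PROMO_PATTERNS = [
    r"gathered through a promotional initiative",
    r"received (this|the) product (for free|at a discount)",
    r"in exchange for (my|an honest) review",
]
PROMO_RE = re.compile("|".join(PROMO_PATTERNS), re.IGNORECASE)

def flag_promotional(review_text: str) -> bool:
    """Return True if the review contains an incentive disclaimer."""
    return bool(PROMO_RE.search(review_text))

flag_promotional("Lovely hotel. This review was gathered through a promotional initiative.")  # True
flag_promotional("Lovely hotel, great breakfast.")  # False
```

Even this crude keyword approach illustrates the principle: the signal lives in the text itself, not in the metadata that review platforms typically inspect.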
Another thing to look out for? Duplication. If the same text is found across multiple sources, it’s usually worth flagging for further validation. Often, you’ll find the same review posted on different versions of the same website or across multiple review channels; signalling that, at the very least, the business is likely to have duplicated an existing review to bolster its ratings. But of course, to be able to pick up on those cross-channel patterns and trends, you first have to centralize and de-silo your data. As things stand, 30% of organizations cite data silos and fragmentation as one of the biggest challenges to implementing a data-driven customer experience – and fake review farms have quickly learned to use this to their advantage.
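Once review data is centralized, cross-channel duplicate detection can be as simple as hashing normalized text and checking which fingerprints appear on more than one channel. A sketch under those assumptions (the data shape and function names are illustrative):

```python
import hashlib
from collections import defaultdict

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits don't hide duplicates.
    return " ".join(text.lower().split())

def find_cross_channel_duplicates(reviews):
    """reviews: list of (channel, text) pairs. Returns hashes seen on 2+ channels."""
    seen = defaultdict(set)  # content fingerprint -> set of channels
    for channel, text in reviews:
        digest = hashlib.sha256(normalize(text).encode()).hexdigest()
        seen[digest].add(channel)
    return [h for h, channels in seen.items() if len(channels) > 1]

reviews = [
    ("tripadvisor", "Amazing stay, friendly staff!"),
    ("google", "Amazing stay,  friendly staff!"),  # same text, extra whitespace
    ("yelp", "Rooms were small and noisy."),
]
dupes = find_cross_channel_duplicates(reviews)  # one duplicated review
```

Exact-match hashing is only the first line of defense; production systems would add fuzzy similarity measures to catch lightly reworded copies. The key point stands regardless: none of this is possible while the data sits in per-channel silos.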
“If you focus on one source,” says Simard, “this won’t seem like an issue – but you’re dealing with a dangerous lack of visibility across other platforms where people are talking about you. That’s why we work across multiple sources to allow for monitoring and visibility across all of those channels; and that can mean up to 40 or 45 channels for some of our current clients. With the rise of fake reviews, it becomes a must-have to look at your reviews across multiple properties to see if you can notice any of those patterns emerge.”
With the rise of fake reviews, it becomes a must-have to look at your reviews across multiple properties to see if you can notice any of those patterns emerge.
According to Simard, once all of your feedback and review data is centralized, insights become a simple matter of how you filter and compare your analytics.
“We like to keep fake reviews as part of the customer data sets so they can review and flag them if ever our AI made a bad call – which then allows us to readjust the backend. But on our dashboard, users have the ability to slice and dice their data not only based on themes, number of stars, or location, but also based on user score. If a review is flagged as a duplicate, the user score will take a serious hit, which means you can then use that metric to filter out unreliable or fake reviews.”
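That filtering step can be pictured as a simple score threshold. The scoring scheme below is a hypothetical illustration of the idea Simard describes – a flagged duplicate takes a heavy penalty and drops below the reliability cutoff – not Keatext’s actual formula:

```python
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    stars: int
    user_score: float       # 0.0-1.0 reliability score (illustrative)
    is_duplicate: bool = False

def effective_score(r: Review) -> float:
    # Hypothetical penalty: a flagged duplicate loses 80% of its score.
    return r.user_score * (0.2 if r.is_duplicate else 1.0)

def reliable(reviews, threshold=0.5):
    """Keep only reviews whose effective score clears the threshold."""
    return [r for r in reviews if effective_score(r) >= threshold]

reviews = [
    Review("Loved it, will return", 5, 0.9),
    Review("Loved it, will return", 5, 0.9, is_duplicate=True),  # penalized copy
]
trusted = reliable(reviews)  # only the non-duplicate survives
```

Keeping the flagged reviews in the data set, as the quote notes, is what lets users audit the AI’s calls and correct them rather than silently discarding data.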
Fighting the good fight
Brands are best served by investing in the right tools and technology to have full context and visibility over their own feedback analysis and qualitative data.
While quirky experiments like London’s top-rated, non-existent restaurant serve as a light-hearted reminder to stay cautious and critical online, the issue of fake and unreliable customer experience data has very real impacts on the services we provide and the products we build. And of course, as Simard points out, the more complex our solutions to counter it, the more creative the fraudsters will become in their quest to fool sophisticated algorithms. That’s why, beyond relying on platforms like TripAdvisor to single-handedly tackle the problem, brands are best served by investing in the right tools and technology to have full context and visibility over their own feedback analysis and qualitative data. With that internal capacity, companies can not only decipher fake reviews and have them taken down by the respective review platforms, but they’re equipped to focus on what truly matters: improving customer experiences by responding more nimbly to legitimate issues and criticism.