Amazon has deployed AI tools to combat fake reviews, removing more than 200 million suspected fake reviews to uphold the integrity of its online marketplace.
This comes as fake feedback has proliferated on the platform, threatening customer trust. Introduced in 1995, reviews are an integral part of the company's strategy for influencing customer purchase decisions.
According to Amazon, 125 million customers contributed almost 1.5 billion reviews and ratings in its stores last year alone. That works out to about 45 reviews and ratings every second, "making reviews at Amazon an incredible resource for customers."
However, the same year also saw an influx of fake reviews. With the help of AI-powered tools, Amazon removed more than 200 million of them in one of its many battles against fraudulent activity on the platform.
According to Financial Express, the fake reviews distorted consumer perceptions of certain products, prompting the company to act to restore confidence and trust. Amazon used AI algorithms to evaluate a wide range of data points to assess a review's validity.
To improve accuracy when identifying fake reviews, Amazon employs a combination of advanced technology and proprietary data. This approach goes beyond surface-level indicators of abuse, delving into deeper relationships between potential bad actors in the online review ecosystem, according to Financial Express.
"The difference between an authentic and fake review is not always clear for someone outside of Amazon to spot," said Amazon's Josh Meek. "For example, a product might accumulate reviews quickly because a seller invested in advertising or is offering a great product at the right price. Or a customer may think a review is fake because it includes poor grammar."
Fishing out the fakes
Machine learning algorithms examine several variables, such as the historical background of reviews, consumer complaints about suspicious activity, strange behavioral patterns, and whether vendors paid for marketing.
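Amazon has not published its detection model, but the signal-weighing idea described above can be illustrated with a toy risk scorer. All field names, weights, and thresholds below are hypothetical, chosen only to show how several weak signals might combine into one score:

```python
from dataclasses import dataclass


@dataclass
class ReviewSignals:
    """Illustrative signals a platform might weigh (names are hypothetical)."""
    reviewer_account_age_days: int  # historical background of the reviewer
    abuse_reports: int              # consumer complaints about suspicious activity
    reviews_last_24h: int           # burst posting is a strange behavioral pattern
    seller_ran_ads: bool            # paid marketing can explain a legitimate spike


def risk_score(s: ReviewSignals) -> float:
    """Combine signals into a 0..1 risk score; weights are made up for illustration."""
    score = 0.0
    if s.reviewer_account_age_days < 30:
        score += 0.3                       # brand-new accounts are riskier
    score += min(s.abuse_reports, 5) * 0.1 # each complaint adds risk, capped
    if s.reviews_last_24h > 10:
        score += 0.3                       # a sudden burst of reviews is suspicious
    if s.seller_ran_ads:
        score -= 0.2                       # an ad campaign is a benign explanation
    return max(0.0, min(1.0, score))       # clamp to the 0..1 range
```

For example, a five-day-old account with three abuse reports posting twenty reviews in a day would score near the top of the range, while an established, quiet account would score zero. A real system would learn these weights from labeled data rather than hard-code them.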
According to Techco, Amazon launched an AI tool that summarizes customer reviews of a product and displays the summary on the product page to improve the customer experience. However, summarizing reviews risked amplifying fake ones, underscoring the need for a separate system to detect them.
In October, Amazon reportedly teamed up with Booking.com, TripAdvisor, and Expedia in a bid to fight fake reviews. TripAdvisor had identified about 1.3 million fake reviews on its platform in 2022.
Now, by using language models, Amazon expects to improve its ability to spot irregularities in review data that could indicate a review is fake. One example is reviewers claiming to have received gift cards or merchandise in exchange for their feedback.
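The gift-card example above can be sketched with a simple pattern check. This is not Amazon's method; a production system would use a language model rather than a fixed phrase list, and the patterns below are hypothetical:

```python
import re

# Hypothetical phrases suggesting a reviewer was compensated for feedback;
# a real system would use a language model, not a fixed list like this.
INCENTIVE_PATTERNS = [
    r"\bgift card\b",
    r"\bfree (product|merchandise|sample)\b",
    r"\bin exchange for\b",
]


def mentions_incentive(review_text: str) -> bool:
    """Return True if the review appears to admit compensation (toy heuristic)."""
    text = review_text.lower()
    return any(re.search(pattern, text) for pattern in INCENTIVE_PATTERNS)
```

A review like "I received a gift card for writing this" would be flagged, while ordinary praise would pass. The advantage of a language model over such keyword rules is that it can catch paraphrases the list misses.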
Fakeness almost everywhere
Since the launch of ChatGPT in November 2022, generative AI has grown rapidly, with competitors scrambling to launch their own versions of the tool.
With the growing number of generative AI tools able to produce text, audio, and video, a wave of authors has emerged leveraging the technology.
In February, the company also set new rules limiting the number of books authors can self-publish on its site in a single day. This came days after it removed several AI-generated books that were falsely listed as authored by Jane Friedman.