The Federal Trade Commission (FTC) has moved to ban fake, AI-generated reviews and testimonials.
The rule, issued last week, targets a sweeping array of dishonest online review practices, from purchasing positive reviews, to paying for fake followers to feign a strong social media presence, to publishing reviews and testimonials attributed to “someone who does not exist.”
This latter category includes “AI-generated fake reviews,” as well as product assessments that otherwise misrepresent whether the reviewer actually tried the product or brand in question.
The ban also forbids publishing reviews that fail to disclose conflicts of interest, such as when a brand secretly owns the website where reviews of its products appear, or instances of insider reviews “written by company insiders that fail to clearly and conspicuously disclose the giver’s material connection to the business.”
The FTC argues that the ban is designed to protect consumers and their hard-earned cash from companies that abuse the illusory, SEO-driven world of online retailing.
“Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” FTC Chair Lina Khan said in a statement. “By strengthening the FTC’s toolkit to fight deceptive advertising, the final rule will protect Americans from getting cheated, put businesses that unlawfully game the system on notice, and promote markets that are fair, honest, and competitive.”
Fake reviews of individual products, like the one-to-five-star ratings on digital marketplaces such as Amazon, have exploded in the AI era, making it that much harder for online shoppers to tell whether a product is really up to snuff or is instead relying on phony, AI-spun assessments to fake legitimacy.
Last year, a Futurism investigation revealed that the venerable magazine Sports Illustrated had been publishing product reviews bylined by nonexistent AI-generated writers with made-up bios feigning experience in specific product categories. Our follow-up investigation found that the contractor responsible for those fake reviews, AdVon Commerce, had left a staggering trail of similar articles throughout the media landscape, publishing fake reviews everywhere from the Los Angeles Times to the Miami Herald.
Insiders at the company, who were tasked with using an in-house AI tool dubbed MEL to churn out the review-style buying guides, told us that they’d never actually tested any of the products featured in the content.
And that wasn’t all. Our reporting was also the first to reveal that AdVon was operating a second, closely tied company called Seller Rocket, which allowed sellers of consumer goods to pay for coverage in AdVon-generated buying guides. In other words, AdVon was quietly double-dipping, and it didn’t disclose that relationship to readers, adding yet another layer of sleaze.
All of this material, from the fake authors in the bylines to the use of AI to generate the reviews to the undisclosed pay-to-play nature of the posts, would seem to fall squarely within the FTC’s ban, which allows the federal agency to “seek civil penalties against known violators.”
The e-commerce landscape is a junk-packed mess, and from botched AI-generated Amazon product listings to fake reviews published by legitimate news organizations, AI has made it all the muddier. It’s encouraging to see the FTC take steps to clean up some of the slop, and hopefully clear the way for the human-powered internet, as well as reviewers with real, consumer-aiding expertise, to thrive in the process.
“Folks shopping for products or services should be able to rely on customer reviews to find companies that provide the best service,” President Joe Biden wrote last week in a post on X, formerly known as Twitter. “That’s why the FTC has proposed to stop marketers that use fake reviews from undercutting honest businesses.”