DoNotPay, a company that claimed to offer the “world's first robot lawyer,” has agreed to a $193,000 settlement with the Federal Trade Commission, the agency announced Tuesday. The move is part of Operation AI Comply, a new FTC enforcement action to crack down on companies that use AI services to deceive or defraud customers.
According to the FTC's complaint, DoNotPay said it would “replace the $200 billion legal industry with artificial intelligence” and that its “robot lawyers” could replace the expertise and performance of a human attorney when drafting legal documents. However, the FTC says the company made this claim without backing it up with testing. In fact, the complaint states:
None of the Service's technologies have been trained on a comprehensive and current corpus of federal and state laws, regulations and court decisions, or the application of those laws to factual situations. DoNotPay employees have not tested the quality and accuracy of the legal documents and advice generated by most of the Service's legal-related features. DoNotPay has not employed or retained attorneys, let alone attorneys with the appropriate legal expertise, to test the quality and accuracy of the Service's legal-related features.
The complaint also alleges that DoNotPay told consumers they could use its AI service to sue for personal injury without hiring a human lawyer, and that the service could check small business websites for legal violations based solely on a consumer's email address. DoNotPay claimed this would save those businesses $125,000 in legal fees, but the FTC says the service was not effective.
According to the FTC, DoNotPay has agreed to pay $193,000 to settle the allegations and to inform consumers who subscribed between 2021 and 2023 about the limitations of its legal-related offerings. DoNotPay is also prohibited from claiming it can replace a professional service without evidence to support that claim.
The FTC also announced actions against other companies that have used AI services to deceive customers. These include Rytr, an AI “writing assistant” that, according to the FTC, provided its subscribers with tools to create AI-generated fake reviews. The move against Rytr came just over a month after the FTC announced a final rule banning companies from creating or selling fake reviews, including AI-generated ones. Once the rule takes effect, the FTC can seek a maximum of $51,744 from companies per violation.
The FTC has also filed a lawsuit against Ascend Ecom, which it alleges defrauded consumers of at least $25 million. Ascend promised customers that its AI-powered tools would enable them to open online stores on e-commerce platforms like Amazon that would earn them five-figure monthly incomes.
“Using AI tools to trick, deceive, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC's enforcement actions make clear that there is no exception to existing laws for AI. By cracking down on unfair or deceptive practices in these markets, the FTC is ensuring that honest companies and innovators get a fair chance and consumers are protected.”