A coalition of more than 30 advocacy groups focused on consumer protection and children’s rights on Thursday urged the Federal Trade Commission (FTC) to stop Meta from using individuals’ conversations with its AI chatbots for advertising and content delivery.

On October 1, Meta announced that starting December 16 it will target ads and customize content based on users’ engagement with its chatbots. The tech giant is offering no opt-in consent tool.

In a letter sent to FTC Chair Andrew Ferguson and other commissioners, the coalition of nonprofits, including the Electronic Privacy Information Center (EPIC) and the Center for Digital Democracy, called on the agency to block Meta’s plans by treating the program as illegal under Section 5 of the FTC Act, which bars unfair and deceptive practices.

“The FTC has a sordid history of letting Meta off the hook, and this is where it’s gotten us: industrial-scale privacy abuses brought to you by a chatbot that pretends to be your friend,” John Davisson, EPIC’s director of litigation, said in a prepared statement. “Meta’s appalling chatbot scheme should be a wake-up call to the Commission. It’s time to get serious about reining in Meta.”

A spokesperson for Meta declined to comment.

The FTC has recently pursued several enforcement actions against companies it says have violated the Children’s Online Privacy Protection Act (COPPA) rule. In September, the agency announced an inquiry into how tech companies protect the privacy and safety of children engaging with chatbots. Ferguson has repeatedly said the FTC intends to strictly enforce COPPA, and in January the commission adopted a new, tougher version of the rule.