CARU Issues Strict Guidelines on AI-Generated Children’s Advertisements and Data Collection

by: Afam Okeke

On May 1, 2024, the Children’s Advertising Review Unit (“CARU”) of BBB National Programs issued a Compliance Warning to clarify that its Advertising and Privacy Guidelines apply to the use of artificial intelligence (“AI”). The warning also put advertisers on notice that CARU will strictly enforce its guidelines in connection with the use of AI. As a reminder, CARU’s guidelines apply to advertising directed to children under the age of 13 and to online services that target children under 13. CARU’s compliance warning references themes that have become common in discussions of the legal concerns surrounding AI; however, because children are more vulnerable due to their limited knowledge, experience, sophistication, and maturity, the compliance nuances can be harder to address. Our quick takeaways are as follows:

Advertising

Brands using AI in advertising should be mindful of an age-old principle when it comes to children’s advertising – do not blur the lines between real and fake.

  • AI-Generated Deepfakes: Advertisements that contain deepfakes (simulations of real people via video, image, or audio) can mislead children into believing that these simulations are authentic, which could be deceptive. Advertisers must not use AI to falsely imply that a celebrity or a fictitious person endorses a product.

  • Misleading Product Depictions or Performance: Advertisers should not mislead children about a product’s inclusion of AI technology, or about that technology’s benefits or features, whether in product depictions or in performance claims.

  • Exploiting A Child’s Imagination: AI tools should not be used to create a semblance of a fantasy that could improperly exploit a child’s imagination, create unattainable performance expectations, or exploit a child’s difficulty in distinguishing between real and fake.

  • Personal Relationship with Brands: Advertisers should not use AI technology to mislead children into believing they are engaging with a real person or have a personal relationship with a brand, character or influencer.

  • Bias, Inappropriate Behavior or Negative Stereotypes: Knowing that bias within AI exists, and understanding the special obligation that advertisers have to children, advertisers must ensure that AI-generated images or other materials do not portray or promote harmful negative social stereotypes, prejudice, or discrimination.

Privacy

If an operator collects personal information from children and uses it in or with AI systems (even third-party AI systems), those operators need to be mindful that compliance with privacy principles can be more complicated than usual.  For example:

  • Need to Obtain Verifiable Parental Consent and Honor Deletion Requests: Operators that collect personal information from children and use it for machine learning processing need to obtain verifiable parental consent to do so. But if an operator is unable to delete a child’s information from an AI system upon parental request, then that information should not be collected by the AI system in the first place.

  • AI-Connected Toys and Online Services: CARU is putting operators of AI-connected toys and related online services on notice that they must secure verifiable parental consent and properly disclose their collection practices in their privacy policy, prior to any collection, use, or sharing of a child’s personal information.

As AI continues to blur the line between real and fake, advertisers who use AI as part of their advertising need to be cognizant of how they market their products or services to children.  Moreover, marketers must take appropriate steps to ensure that the collection of information by an AI tool that interfaces with children comports with privacy laws and principles applicable to children.

Originally published by InfoLawGroup LLP.