
UK authorities scrutinize potential risks to children’s privacy posed by My AI chatbot on Snapchat

Snapchat’s AI chatbot, My AI, has faced controversies and regulatory scrutiny, including a recent warning from the UK’s Information Commissioner’s Office (ICO) regarding potential risks to children’s privacy.

Concerns regarding children’s privacy

The ICO has raised concerns about Snap’s alleged failure to assess the privacy risks the chatbot poses to children before its official launch. While the warning is not a finding that Snap has breached data protection rules, the ICO could block the chatbot in the UK if Snap does not address the concerns.

Focus on protecting children aged 13 to 17

The ICO specifically emphasizes the potential risks faced by children aged 13 to 17, who are covered by the Children’s Code (Age Appropriate Design Code), which came into force in 2021.

The introduction of My AI chatbot

Snapchat introduced its generative AI-powered chatbot, My AI, in February 2023. Initially exclusive to Snapchat+ subscribers, the chatbot served as a virtual friend, offering advice and answering questions. Following the testing period, it became available to all Snapchat users, including teenagers.

Concerns and inappropriate advice

However, concerns were soon raised about the chatbot’s behaviour. Despite Snap’s claims of strict moderation and safeguarding features, there were reports of the chatbot giving inappropriate advice, including one instance in which it suggested ways for a 15-year-old user to hide the smell of alcohol.

Snap’s response

In response to the ICO’s warning, Snap says it is committed to protecting user privacy and is reviewing the ICO’s decision. The company intends to work closely with the regulator to ensure its risk assessment procedures meet the ICO’s requirements.
