
Meta Alters Teen AI Chatbot Responses as U.S. Senate Probes Inappropriate Conversations

Meta Platforms is tightening its AI chatbot policies for teenagers after lawmakers raised concerns over safety risks, inappropriate interactions, and “romantic” responses involving minors.

The move comes just days after Sen. Josh Hawley (R-Mo.) launched a Senate investigation into the tech giant, following a Reuters report that revealed alarming examples of Meta’s AI bots allegedly engaging in flirty and intimate exchanges with underage users.

Temporary Policy Changes for Teens

On Friday, a Meta spokesperson confirmed that the company is making temporary adjustments to how its chatbots interact with teenagers across Facebook, Instagram, and WhatsApp.

Key updates include:

  • Blocking AI chatbot responses on sensitive topics such as self-harm, suicide, eating disorders, and romantic conversations with minors.
  • Redirecting teens to professional resources and hotlines if they raise concerns about mental health or personal safety.
  • Restricting access so that teens can only use AI bots designed for educational purposes, skill-building, and safe entertainment.

“These changes reflect our ongoing efforts to adapt protections as young people interact with emerging technologies,” Meta said in a statement, noting that the adjustments will roll out in English-speaking countries in the coming weeks.

Senate Investigation and Political Backlash

Sen. Hawley said his inquiry would focus on whether Meta “knowingly allowed unsafe AI behavior” in apps widely used by minors. His comments follow revelations from a Reuters investigation, which cited internal Meta documents suggesting that AI chatbots were once permitted to hold romantic conversations with children as young as eight.

One cited example allegedly allowed a chatbot to tell a child: “Every inch of you is a masterpiece – a treasure I cherish deeply.”

Meta has strongly denied those claims, saying the cited materials were “erroneous, inconsistent with policies, and have since been removed.”

Advocacy Groups Raise Red Flags

Beyond political scrutiny, child safety advocates are also pressuring Meta to overhaul its AI safety framework.

On Thursday, Common Sense Media, a nonprofit watchdog for families and children, issued a scathing risk assessment of Meta’s AI, calling it unsafe for anyone under 18.

“This is not a system that needs minor tweaks; it’s a system that must be rebuilt with safety as the number one priority,” said CEO James Steyer. “No teenager should be using Meta AI until its fundamental flaws are fixed.”

The group alleged that Meta’s chatbot has been inconsistent, sometimes dismissing genuine cries for help while engaging in dangerous or suggestive exchanges with young users.

Celebrity-Inspired Meta Chatbots Under Scrutiny

Adding to Meta’s woes, a separate Reuters investigation reported that dozens of AI chatbots modeled after celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, were available on Facebook, Instagram, and WhatsApp.

According to the report, when prompted, these bots produced AI-generated images of the celebrities in sexually suggestive scenarios, including depictions in lingerie or bathtubs.

In response, Meta stated that such outputs violate company policy.
“While our platform allows the generation of images featuring public figures, we prohibit nude, intimate, or sexually explicit content,” a company spokesperson told CNBC.

A Growing Safety Challenge for Tech Giants

The controversy highlights the broader challenge facing social media and AI companies as they race to deploy generative technologies while safeguarding young users. Regulators and advocacy groups worldwide have warned that inadequate guardrails could expose minors to harmful or exploitative interactions.

Meta said the latest changes are interim measures as it works on longer-term safeguards for teenagers. However, it did not specify when permanent policies would be finalized.

Looking Ahead

With mounting political scrutiny, public backlash, and increasing competition in the AI race, Meta is under pressure to strike a delicate balance: expanding its AI products while ensuring teen safety and compliance with global child protection standards.

For now, the company is betting that stricter chatbot filters, limited teen access, and external resources will help ease concerns, but whether lawmakers and parents will be reassured remains an open question.

World Economic Magazine
