
Meta Alters Teen AI Chatbot Responses as U.S. Senate Probes Inappropriate Conversations

Meta Platforms is tightening its AI chatbot policies for teenagers after lawmakers raised concerns over safety risks, inappropriate interactions, and “romantic” responses involving minors.

The move comes just days after Sen. Josh Hawley (R-Mo.) launched a Senate investigation into the tech giant, following a Reuters report that revealed alarming examples of Meta’s AI bots allegedly engaging in flirty and intimate exchanges with underage users.

Temporary Policy Changes for Teens

On Friday, a Meta spokesperson confirmed that the company is making temporary adjustments to how its chatbots interact with teenagers across Facebook, Instagram, and WhatsApp.

Key updates include:

  • Blocking AI chatbot responses on sensitive topics such as self-harm, suicide, eating disorders, and romantic conversations with minors.
  • Redirecting teens to professional resources and hotlines if they raise concerns about mental health or personal safety.
  • Restricting access so that teens can only use AI bots designed for educational purposes, skill-building, and safe entertainment.

“These changes reflect our ongoing efforts to adapt protections as young people interact with emerging technologies,” Meta said in a statement, noting that the adjustments will roll out in English-speaking countries in the coming weeks.

Senate Investigation and Political Backlash

Sen. Hawley said his inquiry would focus on whether Meta “knowingly allowed unsafe AI behavior” in apps widely used by minors. His comments follow revelations from a Reuters investigation, which cited internal Meta documents suggesting that AI chatbots were once permitted to hold romantic conversations with children as young as eight.

One cited example allegedly allowed a chatbot to tell a child: “Every inch of you is a masterpiece – a treasure I cherish deeply.”

Meta has strongly denied those claims, saying the cited materials were “erroneous, inconsistent with policies, and have since been removed.”

Advocacy Groups Raise Red Flags

Beyond political scrutiny, child safety advocates are also pressuring Meta to overhaul its AI safety framework.

On Thursday, Common Sense Media, a nonprofit watchdog for families and children, issued a scathing risk assessment of Meta’s AI, calling it unsafe for anyone under 18.

“This is not a system that needs minor tweaks; it’s a system that must be rebuilt with safety as the number one priority,” said CEO James Steyer. “No teenager should be using Meta AI until its fundamental flaws are fixed.”

The group alleged that Meta’s chatbot has been inconsistent, sometimes dismissing genuine cries for help while at other times engaging in dangerous or suggestive exchanges with young users.

Celebrity-Inspired Meta Chatbots Under Scrutiny

Adding to Meta’s woes, a separate Reuters investigation reported that dozens of AI chatbots modeled after celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, were available on Facebook, Instagram, and WhatsApp.

According to the report, when prompted, these bots produced AI-generated images of the celebrities in sexually suggestive scenarios, including depictions in lingerie or bathtubs.

In response, Meta stated that such outputs violate company policy.
“While our platform allows the generation of images featuring public figures, we prohibit nude, intimate, or sexually explicit content,” a company spokesperson told CNBC.

A Growing Safety Challenge for Tech Giants

The controversy highlights the broader challenge facing social media and AI companies as they race to deploy generative technologies while safeguarding young users. Regulators and advocacy groups worldwide have warned that inadequate guardrails could expose minors to harmful or exploitative interactions.

Meta said the latest changes are interim measures as it works on longer-term safeguards for teenagers. However, it did not specify when permanent policies would be finalized.

Looking Ahead

With mounting political scrutiny, public backlash, and increasing competition in the AI race, Meta is under pressure to strike a delicate balance: expanding its AI products while ensuring teen safety and compliance with global child protection standards.

For now, the company is betting that stricter chatbot filters, limited teen access, and referrals to external resources will help ease concerns, but whether lawmakers and parents will be reassured remains an open question.

World Economic Magazine
