
Meta Alters Teen AI Chatbot Responses as U.S. Senate Probes Inappropriate Conversations

Meta Platforms is tightening its AI chatbot policies for teenagers after lawmakers raised concerns over safety risks, inappropriate interactions, and “romantic” responses involving minors.

The move comes just days after Sen. Josh Hawley (R-Mo.) launched a Senate investigation into the tech giant, following a Reuters report that revealed alarming examples of Meta’s AI bots allegedly engaging in flirty and intimate exchanges with underage users.

Temporary Policy Changes for Teens

On Friday, a Meta spokesperson confirmed that the company is making temporary adjustments to how its chatbots interact with teenagers across Facebook, Instagram, and WhatsApp.

Key updates include:

  • Blocking AI chatbot responses on sensitive topics such as self-harm, suicide, eating disorders, and romantic conversations with minors.
  • Redirecting teens to professional resources and hotlines if they raise concerns about mental health or personal safety.
  • Restricting access so that teens can only use AI bots designed for educational purposes, skill-building, and safe entertainment.

“These changes reflect our ongoing efforts to adapt protections as young people interact with emerging technologies,” Meta said in a statement, noting that the adjustments will roll out in English-speaking countries in the coming weeks.

Senate Investigation and Political Backlash

Sen. Hawley said his inquiry would focus on whether Meta “knowingly allowed unsafe AI behavior” in apps widely used by minors. His comments follow revelations from a Reuters investigation, which cited internal Meta documents suggesting that AI chatbots were once permitted to hold romantic conversations with children as young as eight.

One cited example allegedly allowed a chatbot to tell a child: “Every inch of you is a masterpiece – a treasure I cherish deeply.”

Meta has strongly denied those claims, saying the cited materials were “erroneous, inconsistent with policies, and have since been removed.”

Advocacy Groups Raise Red Flags

Beyond political scrutiny, child safety advocates are also pressuring Meta to overhaul its AI safety framework.

On Thursday, Common Sense Media, a nonprofit watchdog for families and children, issued a scathing risk assessment of Meta’s AI, calling it unsafe for anyone under 18.

“This is not a system that needs minor tweaks; it’s a system that must be rebuilt with safety as the number one priority,” said CEO James Steyer. “No teenager should be using Meta AI until its fundamental flaws are fixed.”

The group alleged that Meta’s chatbot has been inconsistent, sometimes dismissing genuine cries for help while at other times engaging in dangerous or suggestive exchanges with young users.

Celebrity-Inspired Meta Chatbots Under Scrutiny

Adding to Meta’s woes, a separate Reuters investigation reported that dozens of AI chatbots modeled after celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, were available on Facebook, Instagram, and WhatsApp.

According to the report, when prompted, these bots produced AI-generated images of the celebrities in sexually suggestive scenarios, including depictions in lingerie or bathtubs.

In response, Meta stated that such outputs violate company policy.
“While our platform allows the generation of images featuring public figures, we prohibit nude, intimate, or sexually explicit content,” a company spokesperson told CNBC.

A Growing Safety Challenge for Tech Giants

The controversy highlights the broader challenge facing social media and AI companies as they race to deploy generative technologies while safeguarding young users. Regulators and advocacy groups worldwide have warned that inadequate guardrails could expose minors to harmful or exploitative interactions.

Meta said the latest changes are interim measures as it works on longer-term safeguards for teenagers. However, it did not specify when permanent policies would be finalized.

Looking Ahead

With mounting political scrutiny, public backlash, and increasing competition in the AI race, Meta is under pressure to strike a delicate balance: expanding its AI products while ensuring teen safety and compliance with global child protection standards.

For now, the company is betting that stricter chatbot filters, limited teen access, and referrals to external resources will help ease concerns, but whether lawmakers and parents will be reassured remains an open question.

World Economic Magazine
