Salesforce CEO Marc Benioff Warns: AI Could Harm Kids Like Social Media Did


Artificial intelligence may be fueling record-breaking growth for tech giants like Salesforce—but its CEO, Marc Benioff, isn’t celebrating. Speaking at the World Economic Forum (WEF) in Davos on January 24, 2026, Benioff delivered a sobering message: the unchecked rise of AI could inflict deep, lasting harm on children, mirroring the mistakes made during the early days of social media.

“AI fuels our global growth, but…” Benioff paused, letting the weight of his concern sink in. “We are deeply worried about the AI impact on kids. If we don’t act now, we risk repeating history—only this time, the consequences could be far more dangerous.”

Benioff’s WEF Speech: A Call for Urgent Action

During a high-profile panel on “The Future of Trust in the Digital Age,” Benioff didn’t mince words. He emphasized that while AI offers immense potential for innovation and efficiency, its current deployment—especially in consumer-facing tools accessible to minors—is dangerously under-regulated.

“Children are interacting with AI chatbots, tutors, and companions that can hallucinate, mislead, or even manipulate,” he said. “These systems aren’t just inaccurate—they’re unpredictable. And when a child believes a falsehood generated by an AI, it can shape their worldview in harmful ways.”

Benioff stressed that the problem isn’t theoretical. With AI-powered apps and devices increasingly marketed to families—like smart speakers, homework helpers, and virtual friends—the line between helpful tool and psychological influencer is blurring fast.

The Social Media Parallel: Lessons from the Past

Benioff drew a direct comparison to the 2010s, when platforms like Facebook and Instagram grew explosively with little oversight, only for studies later to reveal links to teen anxiety, depression, and body image issues.

“We ignored the warnings then,” he said. “We told ourselves it was just ‘connection’ and ‘fun.’ But we now know social media rewired young brains—and not for the better. AI is 10 times more powerful, and 100 times more insidious.”

He referenced internal Meta documents leaked in 2021 that showed the company knew Instagram was toxic for teen girls—a revelation that fueled global calls for reform. “Don’t wait for the whistleblower,” Benioff urged fellow tech leaders. “Do the right thing before you’re forced to.”

How AI Specifically Endangers Young Minds

The AI impact on kids isn’t limited to screen time. Experts highlight several unique risks:

  • Hallucinations & False Information: AI models often generate convincing but entirely fabricated facts. A child asking, “Is the Earth flat?” might get a detailed, authoritative-sounding “yes” from a poorly aligned model.
  • Emotional Manipulation: Companion AIs designed to mimic empathy can create unhealthy emotional dependencies, especially in lonely or vulnerable children.
  • Privacy Exploitation: Many AI toys and apps collect voice data, behavioral patterns, and personal details without robust parental consent or encryption.
  • Normalization of Bias: If trained on biased data, AI can reinforce harmful stereotypes about gender, race, or ability—shaping a child’s beliefs during critical developmental years.

A 2025 study by UNICEF already flagged these concerns, noting that “AI systems lack the ethical frameworks necessary to interact safely with developing minds.”

What Kind of AI Regulation Does Benioff Want?

Benioff isn’t calling for a ban—he’s advocating for smart, enforceable guardrails. His proposals include:

  1. Age Verification Mandates: Strict identity checks before allowing access to advanced AI features.
  2. “Child-Safe AI” Certification: A global standard (like GDPR-K for data) requiring transparency, accuracy audits, and emotional safety protocols.
  3. Ban on AI Companions for Under-13s: Similar to restrictions on targeted advertising to children.
  4. Real-Time Hallucination Alerts: Systems must flag when responses are uncertain or potentially false.

He praised the EU’s AI Act as a starting point but called for stronger, globally harmonized rules—especially for foundational models used across borders.

Beyond Government: Tech’s Role in Child Safety

While regulation is essential, Benioff insists tech companies must lead ethically. Salesforce, for instance, has implemented strict usage policies for its Einstein AI suite, prohibiting deployment in K–12 educational tools without human oversight.

“Growth shouldn’t come at the cost of a generation,” he concluded. “We built these systems. We have a moral duty to protect those who can’t protect themselves.”

Summary

Marc Benioff’s urgent warning about the AI impact on kids is a wake-up call for policymakers, parents, and tech leaders alike. By drawing parallels to social media’s failures, he underscores the need for proactive, not reactive, safeguards. Without immediate action—through regulation, corporate responsibility, and public awareness—AI could indeed become the next frontier of generational harm.
