Stranger Danger in the Age of AI

Image showing silhouette of a girl working on a laptop with a hand reaching out from the screen ominously.

Artificial intelligence is not just another technology in children’s lives but is reshaping how they interact, learn, and even form emotional bonds. At the 2026 World Economic Forum in Davos, Switzerland, University of Virginia School of Data Science Assistant Professor of Practice Renée Cummings joined global experts to explore these dynamics and their implications for child development and safety.

Cummings, a criminologist and criminal psychologist whose work centers on children, risk, and protection, has long studied the ways in which societal systems intersect with young people’s well-being. Her recent panel presentations — “The Future of Childhood: How AI is Shaping the Next Generation of Children” and “Monetizing Attachment: Is the Race for Artificial Intimacy Changing How We Relate to One Another?” — placed a spotlight on a critical question: what does “stranger danger” mean when the stranger is an invisible algorithm?

Here, Cummings explains how AI alters familiar safeguards, the risks it presents, and how society might adapt to protect children in this new landscape.


Why should we rethink “stranger danger” in the age of AI?

When we talk about “stranger danger,” we typically refer to unfamiliar adults in physical spaces, people we teach children to avoid. “But as AI-infused systems become nearly ubiquitous,” Cummings explained, “the stranger is no longer outside in the park; it is inside the device that lives in a child’s pocket or bedroom.”

Children are among the earliest, most frequent users of digital and AI technologies. These systems curate attention, model interactions, and influence emotional experiences, often without transparency or oversight. “You can’t see an algorithm, you can’t touch an algorithm, you can’t feel an algorithm,” she said, “but that algorithm can behave in ways that are inappropriate or harmful for children’s development.”

This shift alters the landscape of risk: rather than a physical stranger at a gate, danger may arise from algorithmic persuasion, personalized recommendations, and systems designed to maximize engagement at all costs.


What makes AI’s influence on children especially concerning?

Cummings believes AI’s risks are both developmental and psychological. “This isn’t just about screen time,” she said. “It’s about how sustained interactions with these systems can shape a child’s thinking patterns, attention, social expectations, and emotional responses.”

Emerging research suggests that the young brain, still in formation, responds differently to sustained digital engagement than to traditional modes of learning and play. Continuous feedback loops, instant validation, and systems engineered for retention may influence attention and social development in ways we do not yet fully understand.

Moreover, some emerging products blur the line between tool and companion. Cummings points to systems that simulate relationships or recreate deceased loved ones as a stark example of what she calls “artificial intimacy.” For children, the distinction between authentic human connection and digital simulation can be especially opaque.


Can current legal systems address these harms?

One common response to technology harms has been litigation. “At the moment,” Cummings said, “we are still figuring out how to prove intent or responsibility in the context of AI systems. Traditional legal frameworks were not designed for harms that are psychological, algorithmic, or developmental.”

She hopes that legal actions will lead to broader standards for data protection and AI governance. Cases that begin as individual claims could help establish precedents for accountability, system design obligations, and redress mechanisms when harms occur.

But Cummings stresses that law alone is not enough. “We need governance structures, design practices, and societal norms that anticipate risk before harm occurs.”


What does a public health approach to AI and children look like?

To address these challenges, Cummings advocates for a public health model that prioritizes education, prevention, and systemic accountability. “AI in children’s lives is not going away,” she noted, “so we must equip families, educators, and communities with the understanding and tools to manage risk.”

This approach includes:

  • Education about technology’s impacts — helping caregivers and young people recognize how systems operate, what data they collect, and how engagement patterns form.
  • Harm reduction strategies — developing norms and tools that help children navigate digital spaces safely and healthily.
  • Design accountability — encouraging or requiring technology creators to consider child development and well-being as core design principles, not afterthoughts.
  • Governance frameworks — policies that ensure contestability, redress, and oversight of systems that affect children’s lives.

“If we are honest about the power of these technologies,” Cummings said, “we must also be honest about the responsibility we bear as designers, caregivers, and citizens.”


What role do technology creators play in safeguarding children?

Cummings argues that responsibility extends beyond families to the very entities building and deploying AI. “We need both a duty to warn and a duty of care,” she said, “so that users are informed of limitations and risks, and creators are accountable for how their systems behave.”

For children especially, this means design choices that prioritize safety and development over engagement metrics and monetization. It also demands transparency about data practices and decision logic so neither parents nor policymakers are left guessing about how systems operate.


As AI becomes woven into classrooms, homes, and handheld devices, the definition of protection is shifting. The stranger may no longer be at the playground gate, but in the code, the interface, and the algorithmic logic shaping a child’s world.

Cummings believes this moment demands clarity and courage.

“Inasmuch as we're engaging in this technology and all its positives,” she said, “we cannot lose sight of what else is happening with this technology.”
