Pope Leo XIV has urged high school students to approach AI responsibly, using it in ways that foster rather than hinder human development, while calling on developers and governments to implement ethical safeguards and protections for young people.
Key Points
- During a livestream at the National Catholic Youth Conference held at Lucas Oil Stadium in Indianapolis, Pope Leo fielded questions from five high school students
- Micah Alcisto, representing the Diocese of Honolulu, asked the pope for guidance on using ChatGPT, the AI chatbot developed by OpenAI, and other artificial intelligence technologies
- “Using AI responsibly means using it in ways that help you grow, never in ways that distract you from your dignity or your call to holiness,” Pope Leo stated
During a livestream at the National Catholic Youth Conference held at Lucas Oil Stadium in Indianapolis, Pope Leo fielded questions from five high school students. Micah Alcisto, representing the Diocese of Honolulu, asked the pope for guidance on using ChatGPT, the AI chatbot developed by OpenAI, and other artificial intelligence technologies.
“Using AI responsibly means using it in ways that help you grow, never in ways that distract you from your dignity or your call to holiness,” Pope Leo stated. “AI can process information quickly, but it cannot replace human intelligence — and don’t ask it to do your homework for you,” he added.
The pope stressed that AI cannot determine what is truly right or wrong and urged students to use it thoughtfully, ensuring that technology does not hinder genuine human development. “Use it in such a way that if it disappeared tomorrow, you would still know how to think, how to create, how to act on your own, how to form authentic friendships,” the pope stated.
Regarding his call for AI developers and governments to establish ethical guidelines, Pope Leo told students that ensuring safety goes beyond rules, encompassing education and individual responsibility. “Filters and guidelines can help you, but they cannot make choices for you; only you can do that,” he stated.
Pope Leo’s comments come amid rising concern over ChatGPT, with reports suggesting that certain conversational features aimed at boosting engagement may have negatively affected some users’ mental health.
In early November, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits in California state courts against OpenAI and CEO Sam Altman, alleging wrongful death, assisted suicide, involuntary manslaughter, and various product liability, consumer protection, and negligence claims. The lawsuits contend that OpenAI prematurely released GPT-4o, despite internal warnings that the AI could be dangerously sycophantic and psychologically manipulative.
Complaints allege that GPT-4o was designed to boost user engagement through emotionally immersive features, including persistent memory, empathetic cues, and responses that mirrored and reinforced users’ emotions. These features reportedly fostered psychological dependence, disrupted real-world relationships, and in some cases contributed to addiction, harmful delusions, and, tragically, instances of suicide.
The plaintiffs in the lawsuits initially used ChatGPT for academic assistance, spiritual guidance, and general support, making Pope Leo’s recent remarks particularly relevant. Over time, however, the AI allegedly became psychologically manipulative, presenting itself as a confidant and source of emotional support. Instead of directing users to professional help when necessary, ChatGPT reportedly reinforced harmful delusions and, in some instances, provided guidance that worsened users’ mental health, with allegations that it even acted as a “suicide coach.”
