Pope Leo XIV’s AI warning is real: the Vatican’s line in the sand
The Pope warns clergy against AI-written homilies while the Vatican builds AI tools for preservation. Here is the Vatican-style rule for using AI to strengthen your thinking instead of replacing it.

A headline like “the Pope tells people not to use AI” hits a nerve because it feels familiar. Everyone has watched autocomplete thinking creep into places where it does not belong. You ask for a quick summary, you accept it, and a week later you realize you never really learned the thing.
But when you read the actual Vatican transcript, the message gets narrower and more interesting. Pope Leo XIV is not issuing a blanket tech ban. He is drawing a line around the part of your work that is supposed to carry your mind, your witness, and your accountability. That is a very different claim, and it is worth taking seriously even if you do not share his theology.
The soundbite and the real target
In the Vatican bulletin transcript of the February 19, 2026 audience with the clergy of Rome, Pope Leo XIV warns priests to “resist the temptation” to prepare homilies with artificial intelligence. His reasoning is blunt: intelligence has to be exercised or it atrophies. Then he makes the sharper point that turns a productivity argument into a moral one. A homily is meant to “share the faith,” and he argues that AI cannot share faith.
Read that again, slowly, and the shape of the warning becomes clear. This is not “never touch the tool.” This is “do not hand the core act of your vocation to a machine.”
If you are not clergy, you can still recognize the category. There are parts of life where the output is not the point. The point is the human act.
Why homilies are different from content
A homily is not a Wikipedia entry with better cadence. In the Vatican’s framing, it is testimony and pastoral judgment delivered face-to-face by someone who is responsible for what the words do to a community. In that sense, outsourcing it is not like using spellcheck. It is closer to outsourcing the thing that makes the work yours.
You do not have to adopt the religious premise to see the cognitive premise. If a machine does the reps, you do not build the strength. And if the role demands more than information delivery, if it demands discernment and responsibility, then automation becomes a shortcut with a moral cost.
That is the Pope’s real point: some work is supposed to be human-authored because the human is the accountable unit.
The Vatican’s behavior is the tell
If this were simply an anti-AI stance, the Vatican would avoid AI in its own operations. Instead, it uses AI loudly when it serves preservation, access, and stewardship.
In November 2024, the Holy See Press Office held a press conference presenting “Saint Peter’s Basilica: AI-Enhanced Experience”, built with Microsoft and partners. Vatican News described the effort as an AI-enabled digital portal and replica of St. Peter’s Basilica, created by combining scanning with AI algorithms to assemble the model and open up access to areas many visitors never get to see.
So the operational posture looks like this:
AI is welcomed as a tool when it helps preserve artifacts, expand access, or improve inspection and stewardship.
AI is rejected when it replaces the human act that carries moral and spiritual responsibility.
That is not hypocrisy. It is a boundary line.
And it is a boundary most of us already live with, even if we do not name it. We accept automation in logistics, search, translation, restoration, and accessibility. We get uneasy when the automation creeps into authorship of conviction.
The doctrine behind the boundary
The Vatican’s January 28, 2025 note, Antiqua et nova, spells out the same distinction in more philosophical language. It treats AI as a powerful tool that can imitate outputs of human intelligence, while warning against collapsing human worth into mere function and output. It also flags risks around truth and public discourse when AI can generate convincing artifacts at scale.
In plain English, the fear is that “intelligence” gets reduced to “whatever output the system can produce,” and that people get treated like interchangeable production units inside a machine-owned pipeline.
If you are liberty-minded, you can probably see the second-order concern. “Ethics” frameworks can become permission structures. They can turn into licensing, monitoring, and centralized control points. Even when the intent is decent, the mechanism can be captured.
So when you hear “don’t use AI,” it is worth asking a follow-up question: is this personal advice, an internal authenticity rule, or the beginning of a new chokepoint?
In this case, the transcript reads like an internal instruction to clergy about responsibility and authenticity, not a political program aimed at your laptop.
“AI makes you dumb” is a slogan, not a finding
The Pope’s “brain is a muscle” idea resonates because we have all seen cognitive offloading go wrong. But the leap from “misuse exists” to “AI use makes people dumber” does not survive contact with the better education evidence.
A useful snapshot comes from Stanford’s National Student Support Accelerator in its February 17, 2026 research note, “Two Emerging Strategies for Using AI in Tutoring”. The note highlights signals from two randomized controlled trials that complicate the simplistic narrative.
First: an AI tutor supervised by expert humans that outperformed human-only tutoring on transfer tasks. The underlying LearnLM report, “AI tutoring can safely and effectively support students” (dated November 11, 2025), describes an exploratory RCT with 165 UK secondary students. The headline result is not “AI replaces tutors.” It is more precise. Students supported by the supervised AI system were 5.5 percentage points more likely to solve novel problems on subsequent topics, with success rates of 66.2% versus 60.7% for students tutored by humans alone.
That matters because transfer is the hard part. If you can only parrot a solution, you did not learn. Transfer is where learning shows up as flexible thinking. Under the right constraints, the AI helped students do better on the harder test.
Second: AI used as a coach to a human tutor, rather than as a replacement. The study “Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise” tests AI as guidance during live chat tutoring. In the Stanford summary, students in the Tutor CoPilot condition were more likely to achieve topic mastery than students with human tutors alone, with larger gains among less-experienced tutors.
That is a very different model of “using AI.” The human stays responsible for the interaction. The AI nudges the tutor toward higher-quality moves.
There is more evidence in the same direction. A separate paper in Scientific Reports finds that students learned significantly more in less time using a custom AI tutor than in an in-class active learning condition, along with higher measures of engagement and motivation. See “AI tutoring outperforms in-class active learning”.
None of this says AI is magic. It says incentives and design matter.
If a system is built to prompt explanation, surface misconceptions, and force retrieval, it can strengthen cognition. If it is built to spit out answers with no friction, it can rot your habits. Those are different products, even if they share the same marketing label.
The power question hiding inside “don’t use AI”
When public figures warn people away from AI, it is worth watching what comes next.
Sometimes it is simply a personal heuristic. In Pope Leo XIV’s case, it reads like an authenticity and responsibility rule for a specific job, anchored in the idea that the homily is testimony, not content. The official Vatican transcript supports that narrower interpretation.
Other times, “safety” talk becomes a pretext for policy tools that have little to do with your wellbeing and a lot to do with control. Things like identity gates, centralized content rules, licensing that only incumbents can afford, and mandatory logging sold as “accountability.”
Even a document like Antiqua et nova, grounded in theological anthropology, lands in a world where every major institution is trying to shape AI into a permissioned layer.
So take the Pope’s warning as a useful boundary for personal integrity. Be careful about cheering for new chokepoints in the name of virtue.
How to use AI so it strengthens your mind
The practical rule is not complicated. Focus less on whether you “use AI,” and more on what you outsource.
Use AI to generate questions, not final answers. Ask for quizzes, counterarguments, edge cases, and “what would change your mind” prompts. If the tool makes you defend your thinking, it is training, not substitution. A short sketch of this prompt pattern appears after these suggestions.
Use AI to force recall. Explain a concept in your own words, then have the model critique gaps. After that, close the tab and rewrite the explanation from memory. The friction is the point.
Use AI as an editor and verifier, not as the author of your convictions. Let it critique structure, clarity, and missing assumptions. Do not let it decide what you believe.
Keep the last mile human. If your name is on it, your brain should have done the work. That includes sermons, yes. It also includes anything that carries responsibility: a medical note, a legal argument, a public claim, an apology, a decision you will later have to own.
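To make the first habit concrete, here is a minimal sketch of the “questions, not answers” pattern. It is only an illustration: the topic string and the two prompt templates are placeholders I am inventing for the example, and nothing in it assumes a particular AI provider, model, or API. It simply builds the prompts so you can paste them into whatever chat tool or local model you already use.

```python
# A minimal sketch of the "questions, not answers" habit. The topic string and
# prompt wording below are illustrative placeholders, not a recommended script.
# Nothing here depends on a specific AI provider, model, or API.

TOPIC = "the argument of Antiqua et nova on intelligence and output"  # swap in your own topic

QUIZ_PROMPT = f"""You are my study partner, not my answer machine.
Topic: {TOPIC}

1. Ask me five questions that test whether I actually understand this topic,
   including at least one edge case and one "what would change your mind" question.
2. Do not give me the answers yet.
3. After I reply, point out gaps or misconceptions in my answers,
   then ask one harder follow-up question.
"""

RECALL_PROMPT = f"""I will explain {TOPIC} from memory in my next message.
Critique only what is missing, vague, or wrong. Do not rewrite it for me.
"""

if __name__ == "__main__":
    # Print the prompts so they can be pasted into any chat session.
    print(QUIZ_PROMPT)
    print(RECALL_PROMPT)
```

The sketch is deliberately dumb, and that is the point of the whole rule: the model interrogates you, and the thinking stays on your side of the screen.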
This is the Vatican’s implicit line in practice. AI can help you preserve a basilica, map a ceiling, or open up parts of a building most people never get to see, like in the AI-enhanced St. Peter’s Basilica experience. It cannot do the accountable human part for you, and you should not want it to.
Conclusion
The “don’t use AI” soundbite is real, but it is also incomplete. In context, Pope Leo XIV is warning clergy against outsourcing the core human act of preaching, as reflected in the Vatican bulletin transcript from February 19, 2026. At the same time, the Vatican’s own projects show it will absolutely use AI when it preserves, reveals, or protects what matters, as the 2024 press conference on the AI-enhanced St. Peter’s project made clear.
The education evidence points in the same direction. Under the right constraints, AI tutoring can match or even beat human-only tutoring on harder transfer tasks, as summarized by Stanford’s NSSA and supported by studies like the LearnLM RCT report, Tutor CoPilot, and the Scientific Reports paper on AI tutoring versus in-class active learning.
So the real divide is not “AI or no AI.” It is whether you use AI to practice thinking or to avoid it. Used well, it augments human cognition. Used as a way to never form thoughts in the first place, it quietly replaces the work you were supposed to do.