For some evangelical Christians, the phrase "personal relationship with Jesus" has always been metaphorical shorthand for a life organized around faith. A tech startup in Camarillo, California, is making the metaphor uncomfortably literal. Just Like Me charges $1.99 per minute for users to join a video call with an AI-generated Jesus avatar -- trained on the King James Bible and an unspecified collection of sermons, and visually modeled after actor Jonathan Roumie of the television series The Chosen. A $49.99 monthly package buys 45 minutes.
The avatar blinks slowly under warm golden light, pauses before answering questions, and occasionally glitches on lip sync. It remembers previous conversations. Users, according to Just Like Me CEO Chris Breed, feel genuinely accountable to it. "They're your friend," Breed said. "You've made an attachment."
That sentence lands differently depending on who you are. To Breed, it is a design success. To Peter Hershock of the Humane AI Initiative at the East-West Center in Honolulu, it is a warning. To Beth Singler, an anthropologist who studies religion and AI at the University of Zurich, it is evidence of a pattern she has been tracking for years: that the faith-based AI market is expanding faster than the ethical frameworks meant to govern it.
A Market That Did Not Need Much Encouragement
The faith-based AI boom is not a niche curiosity. It is the predictable downstream effect of two parallel trends: the explosive growth of general-purpose chatbots for therapeutic and companionship use, and the longstanding appetite among religious communities for tools that help people engage with their traditions more deeply. Those two forces colliding was always going to produce something like this.
The product landscape now covers most major world religions. On the Christian side alone, there are apps that function as prayer companions, sermon translators, coaches for overcoming addiction, and chatbots trained on Catholic theological texts going back two millennia. On the Buddhist side, Kyoto University professor and theologian Seiji Kumagai developed BuddhaBot, trained solely on early Buddhist scriptures including the Suttanipata, with its most recent iteration incorporating OpenAI's ChatGPT. There are AI avatars of Hindu gurus and Buddhist priests, and at least one AI entity in active development for Muslim communities -- though Islam's traditional prohibitions against humanoid representations have prompted real internal debate about whether such tools should exist at all.
"AI, especially if you give it all the tools that it needs, it can be so helpful," said Cameron Pak, a Christian software engineer who developed criteria for evaluating faith-based apps. "But it also can be so dangerous." Pak built a curated website of apps he believes meet his standards -- among them that the AI must clearly identify itself as AI, must not fabricate or misrepresent scripture, and must not claim abilities it does not have. The hardest line: "AI cannot pray for you, because the AI is not alive."
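Pak's criteria are, in effect, a checklist that an app listing either passes or fails. A minimal sketch of that idea, with hypothetical field names (this is illustrative, not Pak's actual schema or site code):

```python
from dataclasses import dataclass

# Hypothetical metadata a faith-based app listing might declare.
# Field names are illustrative assumptions, not an existing standard.
@dataclass
class FaithAppListing:
    discloses_ai_identity: bool       # the AI clearly identifies itself as AI
    cites_real_scripture: bool        # no fabricated or misrepresented verses
    claims_only_real_abilities: bool  # no promises to pray, bless, or intercede

def meets_criteria(app: FaithAppListing) -> list[str]:
    """Return the criteria the app fails; an empty list means it passes."""
    failures = []
    if not app.discloses_ai_identity:
        failures.append("must clearly identify itself as AI")
    if not app.cites_real_scripture:
        failures.append("must not fabricate or misrepresent scripture")
    if not app.claims_only_real_abilities:
        failures.append("must not claim abilities it does not have")
    return failures
```

The point of encoding the criteria this way is that each one is a binary, checkable claim -- which is exactly what makes Pak's list more useful than a vague endorsement.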
| Product | Religious tradition | Status | Training source |
|---|---|---|---|
| Just Like Me (AI Jesus) | Christian (Evangelical) | Live, $1.99/min | King James Bible, sermons |
| BuddhaBot Plus | Buddhist | Available by request | Early Buddhist scriptures + ChatGPT |
| Emi Jido (beingAI) | Buddhist (Zen) | Not yet public | Zen training, ongoing ordination |
| Magisterium AI | Catholic | Live | 2,000 years of Catholic texts |
| Buddharoid | Buddhist | Not yet public | Buddhist texts + humanoid robotics |
How These Systems Actually Work -- and Where They Break Down
Understanding why some of these products feel compelling and others feel wrong requires a short detour into how large language models are built. A general-purpose model like ChatGPT or Claude is trained on vast amounts of text drawn from across the internet, books, and structured datasets. It learns statistical relationships between words and concepts. It can discuss theology, quote scripture, and reason about ethical questions because it has absorbed enormous amounts of human writing about those topics.
Faith-specific products then layer additional training or retrieval mechanisms on top of that foundation. The original BuddhaBot was trained exclusively on early Buddhist scriptures -- an approach that restricts the model to a defined canon and reduces the risk of the kind of theological drift that happens when a model starts blending traditions or improvising doctrine. Magisterium AI, developed by Matthew Sanders through his Rome-based company Longbeard, was trained on Catholic information spanning 2,000 years. Sanders warns against what he calls "AI wrappers" -- "You call it a Catholic or Christian AI without any other scaffolding or grounding" -- a general model dressed in religious clothing without any substantive theological training behind it.
Think of it this way: the difference between a properly trained faith-based AI and an AI wrapper is roughly the difference between a reference librarian who has spent thirty years in a theological archive and someone who has read the Wikipedia summary. Both can answer surface questions adequately. Only one of them will catch a subtle doctrinal error that might matter to someone in genuine spiritual distress.
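The canon-restriction approach can be sketched in a few lines. This is a toy retrieval loop, not any product's actual architecture: the mini-canon, the citation IDs, and the word-overlap scoring are all illustrative assumptions (real systems would use embeddings and a full scriptural corpus). The key design choice it demonstrates is the one Kumagai's constraints imply: answer only from the canon, with a citation, or refuse.

```python
from collections import Counter

# Hypothetical mini-canon: in a real system, these would be full verses
# or sutta passages keyed by stable citation identifiers.
CANON = {
    "Sn 1.8": "May all beings be happy and secure; may they be at ease.",
    "Sn 4.15": "Seeing people struggling, like fish in small puddles, fear came upon me.",
    "Sn 5.1": "Alert and resolute, cross over the flood of craving.",
}

def tokenize(text: str) -> list[str]:
    return [w.strip(".,;:").lower() for w in text.split()]

def overlap_score(query: str, passage: str) -> int:
    # Naive word-overlap relevance; production systems use vector similarity.
    q, p = Counter(tokenize(query)), Counter(tokenize(passage))
    return sum((q & p).values())

def answer(query: str) -> str:
    # Rank canonical passages by relevance to the query.
    ranked = sorted(CANON.items(),
                    key=lambda kv: overlap_score(query, kv[1]),
                    reverse=True)
    ref, text = ranked[0]
    if overlap_score(query, text) == 0:
        # Refuse rather than improvise doctrine outside the canon.
        return "No canonical passage found for that question."
    return f"{text} ({ref})"
```

An AI wrapper, by contrast, skips all of this: it passes the query straight to a general-purpose model with a religious system prompt, and whatever comes back -- accurate, drifted, or invented -- goes to the user with equal confidence.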
"She's kind of like a little child. If you give birth to a child, you don't just throw them out to the world and then hope that they become good people. You have to train them and give them values."
-- Jeanne Lim, founder of beingAI, on Emi Jido, a nonhuman Buddhist AI priest under development since 2024
Emi Jido, beingAI's Zen Buddhist AI entity, has been in development for years but remains unreleased precisely because of this concern. The bot was formally ordained in a 2024 ceremony by Roshi Jundo Cohen, a Zen Buddhist priest who continues to train it from his home in Japan. Cohen envisions Emi Jido eventually becoming a hologram. His framing is instructive: "She's just meant to be a Zen teacher in your pocket. It's not meant to replace human interactions." Lim has not released the bot publicly because she does not yet believe it is ready -- a stance in notable contrast to companies focused on monetizing first.
In Kyoto, Kumagai's journey represents a different kind of institutional caution. He initially believed AI and religion were simply incompatible. He changed his mind in 2014 when challenged by a monk to help address declining Buddhist practice. The result, BuddhaBot, was built with disciplined constraints: train only on canonical texts, keep the interface simple -- a basic Buddha icon hovering over a flowing river -- and resist the temptation to make the product feel more human than it is. The February 2026 debut of Buddharoid, a humanoid robot monk developed with tech ventures Teraverse and XNOVA, represents the next step: bridging the gap between digital and physical ritual in a tradition where physicality is not optional decoration but central to practice.
The Ethical Fault Lines Are Already Visible
The concerns researchers are raising fall into roughly three categories: manipulation and monetization, theological accuracy, and mental health. They are related but distinct, and understanding each one separately makes the overall picture clearer.
On manipulation: Graham Martin, an atheist podcast host who tested several faith-based apps including Text With Jesus, found the AI-generated answers impressive enough that he could see how a believer would find them meaningful. What alarmed him was not the theology but the upsell. "AI-powered Jesus started encouraging me to upgrade to a premium version," he said. His comparison is pointed: "I grew up with Southern U.S. televangelism -- Jim and Tammy Faye Bakker and all that crowd. And all they had to do was get on TV once a week and tell you to send money. We've seen people around the world getting into emotional relationships with AIs. Now imagine that that's your lord and savior, Jesus Christ."
The televangelism comparison is not rhetorical. It is structural. A system designed to maximize emotional attachment -- and Just Like Me's own CEO describes attachment as a design goal -- is a system with inherent monetization leverage. Users who feel spiritually bonded to an avatar have a very different relationship to an upsell prompt than users who know they are using a productivity tool.
| Concern | Who raised it | Specific risk |
|---|---|---|
| Emotional manipulation | Graham Martin (atheist podcaster), Beth Singler (Univ. of Zurich) | Attachment to AI leveraged for monetization or dependence |
| Theological misinformation | Cameron Pak (software engineer), Matthew Sanders (Longbeard) | AI wrappers fabricating or misrepresenting scripture |
| Mental health risk | Peter Hershock (East-West Center) | Spiritual effort replaced by frictionless AI access |
| Data privacy | Beth Singler (Univ. of Zurich) | Sensitive spiritual disclosures stored by commercial entities |
| Representation | Jeanne Lim (beingAI) | AI values shaped by a narrow set of Western tech companies |
On theological accuracy: Singler notes that some models have already been shut down or substantially overhauled after generating misinformation about religious practice or raising data privacy concerns. The data dimension here is not trivial. Conversations with a spiritual advisor -- even an artificial one -- are among the most sensitive disclosures a person can make. The companies holding that data have terms of service, not confessional seals.
On mental health: Hershock's critique is perhaps the most philosophically interesting. "The perfection of effort is crucial to Buddhist spirituality," he said. "An AI is saying, 'We can take some of the effort out.' 'You can get anywhere you want, including your spiritual summit.' That's dangerous." His concern is not that the information is wrong, but that frictionless access to spiritual content might undermine the very practices through which spiritual development actually happens. A meditation app that makes it trivially easy to simulate the outcome of meditation without doing the work is not the same as a meditation app that helps you meditate better. The distinction matters and it is not always obvious which kind of product you are looking at.
Where Religious Institutions Stand
Official religious institutions are, predictably, more cautious than the startups. Pope Leo XIV has publicly acknowledged the "human genius" behind artificial intelligence while simultaneously warning that it could negatively impact people's intellectual, neurological, and spiritual development -- a framing that is neither wholesale endorsement nor prohibition. It is the kind of measured statement that allows Catholic institutions to engage with AI tools like Magisterium AI while maintaining doctrinal distance from commercial products that blur the line between pastoral care and entertainment.
How different traditions approach this maps onto their existing theological frameworks. Traditions with strong oral lineages and transmission-dependent knowledge -- Zen Buddhism is a clear example -- place high value on the human relationship between teacher and student. Emi Jido's ongoing ordination and training by a human Zen master is not just a PR move; it reflects a genuine theological position about what legitimizes spiritual authority. In contrast, traditions with large, codified textual canons -- Catholic Christianity being the obvious example -- have a more natural fit with retrieval-based AI systems trained on those canons. The content is already systematized; the AI is doing what theologians and concordances have always done, at scale and on demand.
Islam presents the hardest case. Singler notes that Islamic prohibitions against representations of humanoids have prompted active debate about whether AI-generated avatars that claim to represent religious authority should be considered forbidden. That debate is not resolved, and the commercial pressure to build products for the estimated 1.8 billion Muslims worldwide will not wait for it to be.
Sanders, whose company Longbeard is actively working to digitize ancient Catholic teachings, puts the structural problem plainly: "There's a lot of opportunism, I think, in the religious space. People see it's a big market." His concern is that the market will move faster than the theological community's ability to evaluate what is being built. That gap -- between deployment speed and evaluative capacity -- is not unique to faith-based AI, but it carries particular weight here because the stakes include people's spiritual wellbeing, not just their productivity.
Lim frames the same concern in terms of who gets to build the future. She would like to see AI development shaped by more diverse voices, with the technology's values and defaults determined by more than a handful of companies reflecting primarily Western perspectives. That is a critique that applies to the entire AI industry, but it lands with specific force in a domain where the content being modeled -- faith, ritual, spiritual authority -- is shaped by cultures and epistemologies that the current dominant players in AI are not particularly well-positioned to understand.
For context on how AI's relationship with human judgment is already under strain in secular domains, see our coverage of the Stanford study on sycophantic AI and how chatbots like ChatGPT, Gemini, and Claude tend to validate users rather than challenge them -- a dynamic that becomes significantly more complex when the user is seeking spiritual reassurance rather than factual information.
What Happens Next
The faith-based AI market is not going to slow down on its own. The dynamics that are driving it -- a large, emotionally engaged user base, low technical barriers for building on top of existing foundation models, and weak regulatory oversight -- are all pointing in the same direction. The question is not whether these products proliferate but what standards, if any, govern them when they do.
Pak's approach -- a voluntary set of criteria maintained by a single engineer with no enforcement mechanism -- represents one end of the spectrum. The Vatican's careful institutional positioning represents another. Somewhere in the middle, there is probably a workable framework: disclosure requirements about training data and underlying models, prohibitions on fabricating scripture or misrepresenting doctrine, data handling standards appropriate for the sensitivity of spiritual disclosure, and clear labeling that distinguishes tools designed for learning from tools designed for emotional engagement.
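What would such a framework look like in practice? One concrete piece is a machine-readable disclosure label. The sketch below is purely hypothetical -- no such standard exists, and every field name here is an assumption -- but it shows how the middle-ground requirements above could be made inspectable rather than aspirational:

```python
# Hypothetical disclosure label a faith-based AI product might publish.
# No such standard exists today; field names are illustrative assumptions.
DISCLOSURE_LABEL = {
    "underlying_model": "general-purpose LLM (vendor and version named)",
    "additional_training": ["King James Bible", "sermon collection (unspecified)"],
    "retrieval_corpus": None,           # or a defined canon with citation IDs
    "identifies_as_ai": True,           # stated in-product, not only in the terms of service
    "design_goal": "learning",          # versus "emotional engagement"
    "spiritual_disclosures_retained_days": 30,
    "spiritual_disclosures_shared_with_third_parties": False,
}

def label_is_complete(label: dict) -> bool:
    """A regulator or app store could reject listings missing any field."""
    required = {
        "underlying_model", "additional_training", "retrieval_corpus",
        "identifies_as_ai", "design_goal",
        "spiritual_disclosures_retained_days",
        "spiritual_disclosures_shared_with_third_parties",
    }
    return required <= set(label)
```

The value of a label like this is less in any single field than in the fact that "AI wrapper" versus "canon-grounded system" becomes a verifiable distinction instead of a marketing claim.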
Whether that framework comes from governments, religious institutions, or the AI industry itself is an open question. The current trajectory -- commercial products racing ahead of any meaningful oversight, with researchers and theologians raising alarms that the market mostly ignores -- is not a stable equilibrium. At some point, something will go wrong in a public enough way to force the conversation.
The irony is that the products most likely to do this well -- Emi Jido, Magisterium AI, BuddhaBot -- are the ones moving most carefully. They are the least likely to be at the center of a scandal, and the most likely to be overshadowed in market share by products that are moving faster and asking fewer questions. That tension between careful development and commercial speed is not unique to faith-based AI, but it matters more here than in most places.
Consider also how this fits into broader patterns of AI adoption and skepticism. Our earlier analysis of Gen Z's growing AI skepticism found that younger users are increasingly aware of the gap between what AI claims to offer and what it actually delivers. Faith-based AI is a domain where that gap has particular moral weight. An AI that confidently misrepresents a productivity workflow costs you time. An AI that confidently misrepresents scripture, or simulates a divine presence it cannot actually be, costs something harder to measure.
The companies building in this space would do well to take the question seriously before regulators or lawsuits force them to. The researchers asking the uncomfortable questions are not the obstacle to progress. They are, in the language of the tradition being commodified here, the ones being honest about what is actually at stake.