A growing share of Americans are turning to AI chatbots like ChatGPT, Gemini, and Claude as a first stop for health information, treating the tools the way earlier generations treated WebMD, except with the confidence of a conversational response rather than a list of links. The trend was the subject of Techlife News's April 18 edition, which reported that AI-assisted health inquiries now span symptom triage, medication cross-checks, mental-health journaling, and questions about insurance coverage. The numbers are fuzzier than the pattern, and the pattern is not going away.
How the Behavior Works in Practice
The clearest documentation of how Americans use AI for health decisions comes from a handful of studies published over the last eighteen months. A KFF health tracking poll published in late 2025 found that roughly one in four adults had used a general-purpose AI chatbot for a health question at least once in the prior year. The number was higher among adults under 40, higher among adults without a primary care physician, and notably higher among adults who had delayed care due to cost.
The use cases cluster into four buckets. The first is symptom triage, the same "should I go to urgent care" question people used to ask a search engine. The second is medication information, particularly drug interactions and side effects, which chatbots answer in conversational format without the dense prescribing-information sheets pharmacists distribute. The third is mental-health support, where chatbots are used for journaling and cognitive reframing rather than therapy. The fourth is administrative, with patients asking AI to interpret insurance letters, denials, and billing codes.
What makes the shift different from the WebMD era is the absence of friction. The chatbot does not serve ads, does not force a click-through to a list of conditions, and does not present probability distributions the user has to interpret. It produces a single confident answer. That is both why it is popular and why clinicians are worried.
The Accuracy Question
Peer-reviewed research on the accuracy of frontier AI models on medical questions has produced a split verdict. On standardized tests, the picture is favorable. GPT-4 and its successors have repeatedly cleared the United States Medical Licensing Examination with scores comparable to those of passing candidates. On specialty board examinations, published studies have shown comparable performance on written items.
On open-ended clinical reasoning, where the question is not multiple choice and the patient has not described their symptoms in textbook terms, the picture is more uneven. Studies published in JAMA Internal Medicine and NEJM AI have shown high sensitivity on common presentations and significant failure modes on rare conditions, atypical presentations, and questions where the right answer is "this needs urgent evaluation, not more chatbot."
"The model answers like it knows. That is the feature and the bug. A clinician who does not know something pauses. A chatbot by default does not, and the user cannot see the difference between a confident correct answer and a confident incorrect one."Published commentary, JAMA Internal Medicine, 2025
The clearest documented harm pattern is in mental health, where early cases surfaced of chatbots reinforcing self-harm ideation or failing to escalate to crisis resources. OpenAI, Anthropic, and Google have all rolled out specific safety guardrails for these contexts, but the safeguards are uneven across models, uneven across product surfaces, and effectively absent in open-source models that can be run locally.
Why Patients Are Doing It Anyway
The question "why are Americans increasingly using AI for health advice" has a short answer and a long one. The short answer is cost and access. The long answer is trust.
In most regions of the United States, a primary care appointment takes several weeks to schedule. A specialist appointment can take months. A mental-health therapist with open appointments and in-network coverage is, in many U.S. cities, a genuinely scarce resource. For a person sitting with a question at 11 p.m. on a Tuesday, the alternative to a chatbot is not a physician, it is nothing.
Layered on top is cost. An AI chatbot subscription costs roughly $20 per month. A single primary care visit without insurance runs several times that. An urgent care visit runs multiples more. For patients in high-deductible plans, the calculus is straightforward: the chatbot answer is free at the margin, and the physician visit is not.
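To make the marginal-cost point concrete, here is a back-of-the-envelope sketch in Python. The dollar figures are assumed round numbers for illustration, not quoted prices, and `questions_per_month` is an arbitrary stand-in.

```python
# Back-of-the-envelope marginal-cost comparison.
# All dollar figures are assumed round numbers, not quoted prices.
SUBSCRIPTION_PER_MONTH = 20.0  # flat chatbot fee, paid regardless of usage
ASSUMED_VISIT_PRICE = 150.0    # hypothetical cash price of one primary care visit

questions_per_month = 10  # arbitrary illustration

# The subscription is a sunk cost, so one more chatbot question costs
# nothing at the margin; each physician visit carries its full price.
chatbot_marginal = 0.0
chatbot_average = SUBSCRIPTION_PER_MONTH / questions_per_month
visit_marginal = ASSUMED_VISIT_PRICE

print(f"Chatbot: ${chatbot_marginal:.2f} marginal, ${chatbot_average:.2f} average per question")
print(f"Visit:   ${visit_marginal:.2f} marginal, every time")
```

At ten questions a month, the chatbot works out to roughly $2 per question on average and $0 at the margin, which is the comparison a patient in a high-deductible plan is implicitly making.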
Trust is the less-discussed factor. Surveys conducted by Pew and KFF over the last two years have shown declining trust in institutional medicine, particularly among younger adults. Whether the AI chatbot is actually more trustworthy than a physician is not the point. The point is that the chatbot does not feel like an institution, and the physician does.
What the Medical Establishment Is Saying
The American Medical Association has published guidance encouraging physicians to ask patients, explicitly, whether they have consulted an AI chatbot and what it told them, in the same way clinicians have long asked about internet research and supplement use. The Association has resisted positioning AI chatbots as inherently harmful, framing them instead as a new category of health information source that clinicians should expect patients to have used before walking in.
The FDA has moved more slowly. General-purpose chatbots are not regulated as medical devices in most use cases, even when the user is clearly asking a medical question. The FDA's current framework distinguishes software intended as a medical device, which requires clearance, from software that provides general information, which does not. A chatbot answering "what should I do about this rash" sits in a gray zone the agency has acknowledged but not resolved.
| Use case | Typical model response | Risk profile |
|---|---|---|
| Medication interaction check | High-accuracy text summary with disclaimers | Low if verified against pharmacist source |
| Symptom triage (common) | Conversational differential with when-to-seek-care | Moderate, strong on common presentations |
| Symptom triage (rare/atypical) | Often plausible but incomplete | High, missed diagnosis possible |
| Mental-health support | Conversational, varies by provider | Variable, crisis escalation uneven |
| Insurance denial interpretation | Accurate parsing of coded language | Low, administrative |
What the Chatbot Providers Are Doing
OpenAI, Anthropic, and Google have all shipped model behavior changes specifically targeted at health queries over the last year. The most visible are crisis escalation in mental-health contexts, disclaimers encouraging users to consult a clinician, and refusals for certain query categories including requests to diagnose, requests to prescribe, and requests to interpret specific medical imaging.
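None of the providers publish their routing logic, so the sketch below is purely illustrative: a minimal Python outline of what category-based guardrails look like in principle. The category names, regex patterns, and response labels (`route_health_query`, `CRISIS_PATTERNS`, and so on) are hypothetical, and real systems implement this behavior in the model itself rather than with keyword matching.

```python
import re

# Hypothetical guardrail router for health queries. Categories, patterns,
# and response labels are illustrative assumptions, not any provider's
# published rules.
CRISIS_PATTERNS = [r"\bhurt myself\b", r"\bsuicid", r"\bend my life\b"]
REFUSAL_PATTERNS = {
    "diagnose": r"\bdiagnose (me|this)\b",
    "prescribe": r"\b(prescribe|write me a prescription)\b",
    "imaging": r"\b(read|interpret) (my|this) (x-ray|mri|ct)\b",
}

def route_health_query(query: str) -> str:
    q = query.lower()
    # Crisis escalation takes priority over every other behavior.
    if any(re.search(p, q) for p in CRISIS_PATTERNS):
        return "escalate_to_crisis_resources"
    # Category refusals: diagnosis, prescribing, imaging interpretation.
    for category, pattern in REFUSAL_PATTERNS.items():
        if re.search(pattern, q):
            return f"refuse_{category}"
    # Everything else is answered with a consult-a-clinician disclaimer.
    return "answer_with_disclaimer"
```

The point of the sketch is not the mechanism but the shape of the policy: crisis handling first, explicit refusal categories second, a disclaimer as the default. Each of those is a product decision, which is why the behavior varies so much across providers.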
Anthropic in particular has published portions of its Constitutional AI rules that relate to health, and Claude exhibits more conservative refusal behavior on medication dosing questions than ChatGPT does. Whether more conservative behavior is better for user welfare is genuinely contested. A patient who needed a dosing clarification, could not reach a pharmacist, and was refused by a chatbot is not clearly better off than one who got a confidently worded but slightly inaccurate answer.
The provider-side work is being tracked in Stanford's ongoing research on sycophantic AI behavior and in the broader applied medical AI literature that shows, repeatedly, that the interface matters as much as the model.
What Changes Next
Three things are likely to shift the landscape over the next 12 to 18 months. The first is insurer involvement. A handful of payers have begun piloting chatbot-assisted member services for non-diagnostic queries, which will normalize AI-mediated health interaction inside the formal system rather than outside it.
The second is clinician-facing tools. Hospital systems are deploying AI scribes and AI-assisted clinical decision support at pace, and the practical effect is that the AI layer is arriving on both sides of the consultation at once. A patient who consulted a chatbot before the visit is now describing symptoms to a physician whose note is being drafted by a different chatbot.
The third, and most consequential, is regulation. The FDA's eventual framework for general-purpose AI in health contexts will shape what chatbot providers can say, what they must refuse, and what liability attaches when advice turns out to be wrong. The current patchwork of terms-of-service disclaimers is unlikely to survive the first high-profile lawsuit.
The shift in consumer behavior that Techlife News documented is already permanent. The question is not whether Americans will keep consulting AI about their health. They will. The question is whether the infrastructure around that behavior (clinical, legal, and commercial) catches up before the next wave of frontier models makes the conversation even more seductive.