
AI in healthcare: How it's changing patient search behaviour

How healthcare institutions can prepare their content for the new patient

Patients now expect 24/7 quick and relevant answers to their medical questions. But instead of trawling through Google search results, they're increasingly turning to ChatGPT or Perplexity with their health queries: "What should I do about heart palpitations?" But are they then reaching your healthcare institution, or an unreliable source of information?


Don't lose your patient to a digital quack

Do you recognise the scenario? Your patient feels unwell and has to wait three weeks for a GP appointment. So they consult Dr Google. Previously, this would generate a long list of search results that would lead them to the right (medical) source. 

Not anymore. More and more people are asking their health questions directly to AI assistants like ChatGPT, Google Gemini, or Perplexity. They receive the answer as a single, clear text, sometimes with, but often without, source attribution or verification.

This shift has major consequences for healthcare institutions. Where you previously had some control over what information your patients found via search engine optimisation, that control now shifts to the AI models that generate answers. And this brings both opportunities and risks.

Evolution of medical search behaviour

More and more patients search online for medical information before visiting the doctor, and AI tools like ChatGPT are taking an increasingly prominent role in answering health questions. According to Statistics Netherlands (CBS), 74% of Dutch people aged 12 and over searched online for health and lifestyle information in the first half of 2022.

How does AI change patient search behaviour?

The difference between traditional searching and AI-driven searching is fundamental. With Google, you get a list of links to various websites. With AI tools and search engines, you get one direct answer, compiled from multiple and often anonymous sources. Based on that first answer, patients can engage in dialogue with the search bot, asking questions in natural language, as if talking to a doctor. 

Example of a patient question: "I'm 35 years old, have type 2 diabetes, and have been feeling dizzy after standing up for the past few days. Should I be concerned and what can I do about it myself?"

Based on a question your patient poses in natural language, an AI tool generates a comprehensive answer that mentions various possible causes, gives self-help tips, and advises when a doctor's visit is necessary. 

The AI search engine or chatbot compiles this information from various sources: the tool draws from the offline data it was trained on, uses its own self-built information index, or falls back on traditional search engines like Bing, Google, and Yahoo. 


Hallucinating with deadly consequences

AI searches for information across multiple sources. The problem? It's not always clear which sources these are or how reliable they are. Moreover, AI chatbots sometimes 'hallucinate': they invent facts or circumstances and present them to users with great certainty as truth.

This hallucination happens for several reasons:

  • AI language models are trained on data, and these can be incorrect or outdated

  • The tools misinterpret the data or take them out of context

  • AI tools simply can't find the right knowledge and then predict an answer based on what seems logical 

  • Above all, though, the problem is inherent: AI uses a neural network and statistics to generate answers, but it has no built-in mechanism to distinguish truth from falsehood.

For instance, a study showed that 69% of cited scientific sources were completely fabricated, yet seemed credible due to realistic-sounding titles and author names. 

We increasingly see examples of AI tools giving dangerous medical advice, from wrong dosages to discouraging necessary treatments. A Stanford Medicine study showed that in 22% of cases, AI chatbots provided medically incorrect information that could be harmful to patients.

Concrete examples of erroneous AI advice:

  • A chatbot recommends aspirin for stomach complaints (whilst this can actually be harmful)

  • An AI search engine provides incorrect dosage information for children

  • The tool discourages COVID vaccinations for certain risk groups 

  • A self-diagnosis of a skin condition overlooks an untreated cancer


How do you remain relevant and trustworthy as a healthcare organisation?

Want to fulfil your mission as a healthcare organisation and answer your patients' questions correctly? Then you must ensure that your reliable, correct, and verified content makes the difference.

Just as with Search Engine Optimisation, where you make your content findable by using words that your patients search for, you must now also align that content with AI search engines. 

Start by structuring your information so that it can be adopted as a ready-made answer by an AI tool or search engine, and by writing your texts according to the E-E-A-T principle that Google has applied for years and that is now also becoming crucial for AI: ensure your texts demonstrate Experience, Expertise, Authoritativeness, and Trustworthiness.

For example, ensure your content clearly shows who the author is and link through to LinkedIn for proof of their qualifications. Reference official guidelines and scientific studies. Share testimonials and case studies that show how effective an approach was... But whatever you do: be transparent about the limitations of online medical advice. 
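To make these E-E-A-T signals machine-readable, you can also express them as structured data on the page. The sketch below, in Python, builds schema.org metadata for a hypothetical medical article; the author, reviewer, dates, and URLs are placeholder assumptions rather than a prescribed setup, so adapt the fields to your own CMS and editorial process.

```python
import json

# Minimal sketch: schema.org metadata for a medical article, with hypothetical
# names, dates, and URLs. The goal is to make E-E-A-T signals (author, medical
# reviewer, review date, cited guidelines) explicit and machine-readable.
page_metadata = {
    "@context": "https://schema.org",
    "@type": "MedicalWebPage",
    "headline": "Dizziness when standing up: causes and when to see a doctor",
    "author": {
        "@type": "Person",
        "name": "Dr. A. Jansen",  # hypothetical author
        "jobTitle": "General practitioner",
        "sameAs": "https://www.linkedin.com/in/example",  # proof of qualifications
    },
    "reviewedBy": {
        "@type": "Person",
        "name": "Dr. B. de Vries",  # hypothetical medical reviewer
        "jobTitle": "Internist",
    },
    "datePublished": "2024-03-01",
    "lastReviewed": "2025-01-15",
    "citation": [
        "https://richtlijnen.nhg.org/",  # example reference to official guidelines
    ],
}

# The resulting JSON-LD would be embedded in the page as a script tag.
print(json.dumps(page_metadata, indent=2, ensure_ascii=False))
```

Embedded as JSON-LD in the page, this kind of markup tells search engines, and the AI tools that draw on them, who wrote and reviewed the content, when it was last checked, and which guidelines it is based on.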

Case: Thuisarts.nl

Thuisarts.nl is a perfect example of how to apply E-E-A-T principles. All content is compiled by GPs, based on NHG standards, and regularly updated. The result? With 4 million page views per month, it's the most trusted medical information source in the Netherlands. AI tools regularly cite Thuisarts.nl as a reliable source, precisely because the E-E-A-T signals are so strong.


360° SEO in healthcare

Ensure your verified information can be found on all the channels your patients frequent, including topic-specific and patient-association websites, health forums, communities such as Reddit, and yes, social media.

Modern search engine optimisation in healthcare today goes beyond traditional SEO. You must account for local searches, social media, online reviews, and AI tools. We call this 360° SEO.

Optimise specifically for AI tools by structuring your content in question-and-answer formats, drawing clear conclusions, and adding source citations and especially links. AI tools prefer content that directly answers specific questions.
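As an illustration of that question-and-answer format, here is a minimal sketch, again in Python and with hypothetical question and answer text, of how a Q&A block could be marked up as a schema.org FAQPage so that an AI tool or search engine can lift the answer directly.

```python
import json

# Minimal sketch: a question-and-answer block marked up as a schema.org FAQPage.
# The question and answer text are hypothetical; each answer should be short,
# self-contained, and end with a clear conclusion plus a source reference.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "When should I see a doctor about dizziness after standing up?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Contact your GP if the dizziness lasts longer than a few days, "
                    "if you faint, or if you have diabetes or a heart condition. "
                    "See the official guideline for details."  # cite the underlying source
                ),
            },
        },
    ],
}

# Print the JSON-LD that would accompany the visible Q&A content on the page.
print(json.dumps(faq_markup, indent=2, ensure_ascii=False))
```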

Whitepaper: Building a human digital patient journey

Discover how a smart patient journey delivers better care experiences, relieves care teams & makes digital tools human.


Curious how AI-proof your content is?

Request an audit or schedule a session with our patient journey experts.

Your contact: Peter Smit, Business Consultant
Your contact: Clarissa Filius, SEO Strategist
Telephone: +32 3 361 40 00
Telephone: +31 88 201 3101
Email: business@iodigital.com
