Dr Jazlan Jamaluddin and Dr Megat Mohamad Amirul Amzar Megat Hashim
Vaccination is one of the most successful public health interventions in modern medicine. Yet despite decades of scientific progress, vaccine hesitancy remains a persistent challenge. Concerns about safety, long-term effects, religious permissibility, or misinformation circulating online continue to influence public attitudes toward immunisation. At the same time, the way people seek health information is rapidly changing. Increasingly, individuals turn to digital platforms rather than healthcare professionals as their first source of advice.
In this evolving information environment, artificial intelligence chatbots powered by large language models are emerging as a new intermediary between the public and medical knowledge. These systems can generate conversational responses to health questions in real time, providing explanations that feel interactive and personalised. If deployed responsibly, such tools could help strengthen vaccine communication and potentially improve uptake.
One reason chatbots may be useful is their accessibility. Traditional public health messaging often relies on static information sources such as government websites, pamphlets, or public campaigns. These resources are important, but they cannot answer individual questions directly. Chatbots, in contrast, allow users to raise specific concerns such as “Are vaccines safe?” or “Why do children need vaccines so early?” and receive immediate responses. This conversational format mirrors how patients might speak with a clinician, making complex information easier to explore.
Another potential advantage is scalability. Vaccine communication requires reaching large populations quickly, especially during outbreaks or public health emergencies. Healthcare workers cannot answer every online question, but AI systems can handle thousands of queries simultaneously. In principle, this could help reduce the influence of misinformation by making reliable explanations available whenever people search for answers.
Language accessibility also matters. In multilingual societies, official health information is often available only in a limited number of languages, which may restrict understanding among diverse communities. AI chatbots can generate responses across multiple languages and adapt explanations to different linguistic contexts. Evaluations of chatbot responses to vaccine questions have shown that these systems can produce accurate and comprehensive explanations in more than one language, sometimes even outperforming traditional guideline-based text responses in expert assessments.
Beyond language, conversational AI may also improve how information is structured and presented. Traditional health communication materials are often written for institutional or regulatory purposes rather than for everyday readers. Chatbots can present answers in structured formats such as bullet points or short explanations, which experts often find easier to follow. When done well, this format can simplify complex scientific concepts without sacrificing accuracy.
However, the promise of AI chatbots must be balanced with careful scrutiny. Generative AI systems do not retrieve facts in the same way as search engines or databases. Instead, they generate responses based on patterns learned during training. This means they can sometimes produce answers that sound plausible but are incomplete, outdated, or misleading.
Several technical studies have highlighted vulnerabilities in medical language models. Some systems can be manipulated through prompt injection or other adversarial inputs, which may alter the advice they provide. Others may fail to recognise when local guidelines or regulatory frameworks differ across countries. In vaccine communication, such contextual errors matter. Information about vaccine safety monitoring, approval pathways, or recommended schedules often depends on national policies and healthcare systems.
There are also broader concerns about the quality of AI-generated health information. Research evaluating chatbot responses to vaccine questions has found that while many answers are factually correct, the overall usefulness of those responses depends on more than accuracy alone. Experts reviewing these responses often focus on factors such as readability, cultural sensitivity, credibility of referenced sources, and the tone used to address public concerns.
For example, vaccine hesitancy is not purely a scientific issue. It often reflects social, cultural, or ethical questions. In some communities, concerns may relate to religious beliefs or historical mistrust of institutions. Simply presenting scientific facts may not be enough to address these worries. Communication experts frequently recommend acknowledging concerns respectfully before explaining the evidence. Chatbots that adopt this empathetic approach may be perceived as more trustworthy.
Local context also plays a significant role. Vaccine communication is shaped by national regulatory systems, healthcare infrastructure, and cultural expectations. Experts evaluating AI-generated responses have noted that answers are more persuasive when they reference locally relevant institutions, monitoring systems, or guidance documents. In contrast, responses that rely on foreign examples or generic information may appear less credible to readers seeking advice relevant to their own country.
Another important issue is the credibility of sources. Even when chatbot responses are accurate, they may include references that are incomplete, inaccessible, or insufficiently authoritative. Public trust in health information often depends on transparency about where that information comes from. Reliable references to peer-reviewed research, national health authorities, or recognised international organisations can strengthen confidence in the message.
The complexity of language also matters. Some AI-generated explanations contain technical terminology that may be difficult for the general public to understand. Readability analyses have shown that many chatbot responses fall within moderately difficult reading levels, meaning that additional simplification may be needed for broader audiences. Effective vaccine communication requires balancing scientific completeness with clarity and accessibility.
These limitations highlight an important point: AI chatbots should not be treated as independent public health authorities. Rather, they are tools that can support communication when integrated into broader health systems. Human oversight remains essential to ensure that information remains accurate, contextually appropriate, and aligned with national guidelines.
Responsible deployment of AI chatbots in health communication would therefore require governance and oversight. Health authorities may need mechanisms to audit chatbot outputs, verify sources, and monitor potential misinformation risks. Collaboration between clinicians, public health experts, and technologists will be necessary to design systems that prioritise safety and transparency.
Despite these challenges, the potential benefits remain significant. If designed carefully, AI chatbots could complement traditional vaccine communication strategies by expanding access to reliable information. They could help answer common questions, counter misinformation, and support individuals who are hesitant or uncertain about vaccination.
Technology alone, however, will not solve vaccine hesitancy. Trust in vaccines depends on a complex combination of scientific evidence, cultural context, institutional credibility, and effective communication. Artificial intelligence can assist with information delivery, but it cannot replace the role of healthcare professionals, community engagement, or public health leadership.
The future of vaccine communication will likely involve a hybrid approach. Conversational AI systems may provide immediate information and guidance, while clinicians and public health authorities continue to provide expert advice and oversight. In this model, AI acts as a bridge between complex medical knowledge and everyday questions from the public.
As digital technologies continue to reshape how people seek information, the challenge for public health will be ensuring that reliable, trustworthy explanations remain accessible. Artificial intelligence chatbots may become an important part of that ecosystem. But their success will depend on careful design, responsible governance, and a clear understanding that technology should support, not replace, human expertise in protecting public health.
Dr Jazlan Jamaluddin is a senior lecturer and Family Medicine specialist at the Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya. Dr Megat Mohamad Amirul Amzar Megat Hashim is a Family Medicine specialist at the Department of Primary Care Medicine, Universiti Malaya Medical Centre and the vice-president of Medical Mythbusters Malaysia.
