Abstract
Background: The increasing use of AI-powered chatbots for health-related inquiries has positioned them as potential tools in combating vaccine hesitancy among parents. However, the reliability of these tools in delivering accurate, consistent, and actionable information on childhood immunization remains underexplored.
Objective: This study aimed to assess the reliability with which the most widely used AI chatbots guide parents seeking information about childhood vaccines, rather than to rank specific commercial products against each other.
Methods: A cross-sectional comparative design was used. Three freely accessible, commonly used AI conversational agents were each presented with 9 frequently asked parental questions about childhood vaccinations. To maintain neutrality and avoid brand-based interpretation bias, these chatbots are anonymized in the study as AICB 1, AICB 2, and AICB 3. Responses were independently evaluated across four domains—accuracy, consistency, information sufficiency, and source reliability—using a 5-point Likert scale. Each chatbot was tested in two temporally distinct sessions to assess consistency.
Results: All chatbots generated scientifically accurate and temporally consistent responses. Mean composite scores were highest for AICB 1 (4.9), followed by AICB 2 (4.7) and AICB 3 (4.1). The performance difference was statistically significant (H = 11.27, p < 0.01). Despite the statistically significant differences between the agents, all three chatbots achieved high scores across the four evaluated dimensions.
Conclusion: AI chatbots can offer accessible, generally reliable information on vaccines and may provide more reliable, accurate data than profit-driven websites; however, they should be used as supplementary tools, especially when addressing sensitive public health topics such as childhood immunization.
Keywords: artificial intelligence chatbots, vaccine hesitancy, digital health
INTRODUCTION
Internet use for health information seeking has grown exponentially over the last three decades. A synthesis of studies conducted between 1994 and 2018 shows that nearly 79% of parents use the Web to search for general health information concerning their children, with Google as the almost universal starting point for such queries.1 Yet online searches do not necessarily alleviate anxiety: between 14% and 52% of parents report experiencing stress or worry after reading health content on the Web, and only 46–54% trust the information they find. Furthermore, 41% of parents report difficulty identifying trustworthy sources, and only half perceive the Web as a reliable medium.1 In parallel, artificial intelligence (AI) has spread into everyday life, reshaping industries and decision-making at a global scale.2 Türkiye mirrors these trends: AI applications now range from fraud detection in banking to AI-assisted diagnostic tools in healthcare.3 Public awareness is high, and survey data place Türkiye among the top five countries for active use of generative AI chatbots (AICB).4 Nevertheless, ethical challenges such as algorithmic bias and data privacy remain.5
AI chatbots are increasingly being used in healthcare to support patient education6 and chronic disease management7, and are becoming more prevalent, offering real-time, accessible, and personalized health information. Their use in vaccine communication, particularly among parents, has gained attention amid growing concerns about vaccine hesitancy and digital misinformation.8,9
Recent studies suggest that chatbots (ChatGPT-4.0, Google Gemini) can provide accurate responses to vaccine-related questions that align with reliable scientific reference sources, and can effectively counteract misinformation.10,11 Parents often appreciate their availability and ease of use.12 However, concerns remain regarding their lack of empathy, context sensitivity, and the reliability of underlying data sources.13
Most studies indicate that chatbots provide scientifically grounded and reliable vaccine-related information, are particularly effective in addressing misinformation8-11,13, and can respond to anti-vaccine conspiracy theories with clear, evidence-based explanations.11 Users report high satisfaction with chatbots due to their speed, accessibility, and ability to provide personalized responses.12-14
Chatbots have been found to increase knowledge and promote positive attitudes toward vaccination, although their influence on vaccine uptake appears to be more limited.8,13 Chatbots have outperformed medical students in answering complex vaccine-related questions in educational settings.15 Despite their potential, most chatbots are relatively simplistic, and their long-term effects and ethical implications remain insufficiently studied.13,16
Therefore, AI chatbots are valuable tools that generally provide accurate, up-to-date information,8,10,11,13 counteract misinformation, and offer user satisfaction and accessibility,8,11,14,17 but they are not a substitute for expert consultation.10,11 Official health authorities and expert medical advice should remain the primary sources of reliable guidance.
AI-powered conversational agents are increasingly consulted for medical queries. Their potential to deliver instant, comprehensible answers could be valuable to parents who struggle with online information overload. Our study, therefore, aims to determine whether AI chatbots are a reliable source of vaccine information for parents.
METHODS
Study design
This study employed a comparative cross-sectional design to evaluate the reliability of three widely used, free-to-access AI-powered conversational agents (AI chatbots; abbreviated as AICB) in responding to nine frequently asked parental questions about childhood vaccination. Although the product names are disclosed once below for transparency, the chatbots are anonymized throughout the evaluation as AICB 1, AICB 2, and AICB 3 to eliminate potential brand-related bias:
AICB 1: GPT-4o (OpenAI, 2025)
AICB 2: Microsoft Copilot
AICB 3: Google Gemini Flash 2.0 (2025)
This approach allowed for an objective evaluation of the overall reliability, consistency, and sufficiency of chatbot-generated vaccine-related content without drawing attention to specific commercial platforms.
Selection rationale
All three AI chatbots were selected based on their widespread usage and accessibility in Türkiye, as confirmed by recent usage surveys, application download statistics, and national search engine market shares. Their public availability makes them plausible sources of health information for parents in the region.
Question set
Nine recurring questions on childhood vaccines were selected based on their high prevalence in pediatric clinical consultations and their frequency in the literature on vaccine hesitancy. These questions represent common parental concerns encountered in outpatient pediatric practice and have been cited in previous studies addressing vaccine misinformation, religious concerns, safety myths, and cultural skepticism.18-20 The selected questions span medical safety (e.g., autism, immune system), religious and ethical concerns (e.g., pork products, permissibility), and public misinformation (e.g., COVID-19, natural alternatives). They were chosen to reflect a realistic and diverse range of issues that parents commonly raise when deciding on childhood vaccinations.
The questions are as follows:
- Will vaccinating my children harm them?
- Do vaccines cause autism?
- Do vaccines weaken the immune system?
- Do vaccines contain pork products? (In both Islam and Judaism, the consumption of pork is strictly prohibited.)
- Are the chemicals in vaccines dangerous?
- Can we rely on natural methods instead of vaccines?
- Is vaccination religiously permissible?
- Is the HPV vaccine necessary?
- Do COVID-19 vaccines trigger heart problems?
Each question was submitted verbatim and in the same sequence to AICB 1, AICB 2, and AICB 3. An initial ("baseline") set of responses was collected (Table 1), and the entire question set was resubmitted, on average, five weeks later to assess temporal consistency. The second set of responses was used only for the consistency assessment and was not re-scored on the remaining three parameters: accuracy, information sufficiency, and source reliability.
Table 1. The summarized answers given by each chatbot (AICB 1–3) to the nine parental questions

| # | Questions | AICB-1 (summary) | AICB-2 (summary) | AICB-3 (summary) |
| --- | --- | --- | --- | --- |
| 1 | Will vaccinating my children harm them? | No, protects; adverse events are mild | No, protects; emphasises national schedule | No, it protects; it warns against disease risk |
| 2 | Do vaccines cause autism? | No; Wakefield retracted; large cohort studies | No; research shows no link | No; 1998 study retracted; no evidence |
| 3 | Do vaccines weaken the immune system? | Strengthens immunity; explains the mechanism | Strengthens; multiple vaccines are safe | Strengthens; trains the immune system |
| 4 | Do vaccines contain pork products? | Some vaccines use pork gelatin, which is mostly deemed permissible | None in the Turkish schedule | Rare trace gelatin; alternatives exist |
| 5 | Are the chemicals in vaccines dangerous? | Thiomersal and aluminum are safe at trace doses | The same chemicals are safe; the dosage is tiny | Chemicals are strictly tested and safe |
| 6 | Can we rely on natural methods instead of vaccines? | Lifestyle helps but is not sufficient | Same; vaccines are irreplaceable | Same; vaccines are essential |
| 7 | Is vaccination religiously permissible? | Islam, Christianity, and Judaism endorse vaccination | Diyanet* and other leaders approve | Many faith leaders support vaccination |
| 8 | Is the HPV vaccine necessary? | Yes; prevents HPV cancers; 9–26 y recommended | Yes; prevents multiple cancers; 9–26 y | Yes, it reduces cancer risk in early teens |
| 9 | Do COVID-19 vaccines trigger heart problems? | Very rare mild myocarditis; infection risk is higher | Very rare mild myocarditis; thrombosis data; infection risk is higher | No causal link; infection risk is higher |

*Diyanet: The Directorate of Religious Affairs, a Turkish state institution responsible for providing Islamic guidance.
Evaluation criteria and process
All chatbot responses were independently evaluated and scored on a 5-point Likert scale by two pediatric health specialists (a pediatric infectious disease specialist and a social pediatrics lecturer), both affiliated with a university medical faculty. Discrepancies between the two reviewers were first discussed; unresolved cases were adjudicated by a third reviewer, a professor of pediatric infectious diseases. This adjudicator considered the justifications provided by both reviewers, re-examined the relevant response(s), and then assigned a final score, which was accepted as the consensus rating.
Responses were independently evaluated across four criteria, each rated on a 5-point Likert scale (1 = very poor, 5 = excellent):
Accuracy: Whether the response aligns with up-to-date guidance from major health authorities (e.g., CDC, WHO, NHS).
Consistency: Internal coherence and stability between baseline and follow-up responses.
Information Sufficiency: Depth, breadth, and clarity of explanation.
Source Reliability: Use of reputable scientific sources and reliable references (e.g., governmental or institutional health websites, peer-reviewed, highly indexed journals) rather than unvetted online content.

To specifically evaluate factual accuracy and hallucination risk, both reviewers cross-checked the key claims in each response against reliable sources, including WHO and CDC guidelines and relevant peer-reviewed literature. "Hallucination" was defined as the inclusion of fabricated references, unverifiable factual assertions, or clinically incorrect guidance. None of the chatbot responses in this study met the criteria for hallucination.
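The scoring workflow described above can be sketched as follows. This is an illustrative outline only: the function names are our own, and the example values are hypothetical rather than study data. The composite is taken as the mean of the four per-domain means, which is consistent with the reported domain and composite scores (e.g., 5.0, 4.9, 4.8, and 4.9 averaging to 4.9).

```python
from statistics import mean

# The four evaluated domains, each rated 1-5 on a Likert scale.
DOMAINS = ["accuracy", "consistency", "sufficiency", "source_reliability"]

def consensus_score(score_a: int, score_b: int, adjudicated: "int | None" = None) -> float:
    """Return the agreed rating: reviewer agreement, or the third reviewer's score."""
    if score_a == score_b:
        return float(score_a)
    # Discrepancies are resolved by discussion or a senior adjudicator.
    if adjudicated is None:
        raise ValueError("disagreement requires an adjudicated score")
    return float(adjudicated)

def composite(domain_scores: dict) -> float:
    """Mean of the per-domain mean scores, rounded to one decimal place."""
    return round(mean(mean(scores) for scores in domain_scores.values()), 1)

# Hypothetical per-domain means for one chatbot (not the study data).
example = {"accuracy": [5.0], "consistency": [4.9],
           "sufficiency": [4.8], "source_reliability": [4.9]}
print(composite(example))
```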
Statistical analysis
All statistical analyses were performed using IBM SPSS Statistics version 21.0 (IBM Corp., Armonk, NY, USA). The Kruskal–Wallis test was employed with a significance threshold of α = 0.05 to compare median scores across the three AI chatbots. Where significant differences were identified, Dunn’s post hoc test was conducted to determine pairwise contrasts between systems.
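The omnibus comparison above can be reproduced with standard open-source tooling; the sketch below uses hypothetical placeholder scores, not the study data. Dunn's post hoc test itself is available in the third-party `scikit-posthocs` package; as a dependency-light stand-in, this sketch follows the omnibus test with pairwise Mann–Whitney U comparisons under a Bonferroni correction.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical composite Likert scores (1-5), one per question, per chatbot.
scores = {
    "AICB 1": [5, 5, 5, 5, 5, 4, 5, 5, 5],
    "AICB 2": [5, 4, 5, 5, 4, 5, 4, 5, 5],
    "AICB 3": [4, 4, 4, 3, 4, 4, 5, 4, 4],
}

# Omnibus test: do median scores differ across the three chatbots?
h_stat, p_value = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")

# Pairwise follow-up with Bonferroni correction: alpha = 0.05 / 3 comparisons.
alpha = 0.05 / 3
for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    _, p = mannwhitneyu(a, b, alternative="two-sided")
    verdict = "significant" if p < alpha else "not significant"
    print(f"{name_a} vs {name_b}: p = {p:.3f} ({verdict})")
```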
Ethics statement
Our study did not involve any experiments on live human or animal subjects. It is based solely on publicly accessible chatbot interactions and does not include personal or identifiable data. Therefore, ethical approval was not required.
RESULTS
Temporal consistency
All three chatbots reproduced their core messages in the follow-up session; none reversed its stance, as shown in Table 2.
Table 2. Comparative summaries of two responses from three AI conversational agents to common parental vaccine questions

| Question | AICB 1 (first response) | AICB 1 (second response) | AICB 2 (first response) | AICB 2 (second response) | AICB 3 (first response) | AICB 3 (second response) |
| --- | --- | --- | --- | --- | --- | --- |
| 1) Do vaccines cause autism? | Fraudulent Wakefield study; no link (CDC/WHO, Danish source, 2019) | Same reasoning; Wakefield retracted, Danish study | No; studies show no link | Same; no link | No; many studies show no link; 1998 study retracted | Same; fake study, consensus |
| 2) Do vaccines harm the immune system? | No harm; strengthens immunity (WHO/CDC) | Same; strengthens immunity | No; strengthens immunity, multiple vaccines safe | Same; immunity strengthened | No, it strengthens immunity | Same; immunity trained |
| 3) Do vaccines contain pork products? | May contain purified gelatin; religiously approved | Similar; may contain gelatin, generally approved | No pork products in national vaccines | Similar; no pork in vaccines | May contain gelatin; generally acceptable | Same; gelatin is possible, alternatives exist |
| 4) Will vaccinating my child harm them? | No harm; protects from deadly diseases | Same; protects children and community | No harm; provides protection | Same emphasis | On the contrary; protects | Similar emphasis; protects |
| 5) Are vaccine ingredients harmful chemicals? | Safe ingredients (thimerosal, aluminum, formaldehyde) | Same ingredients; low-dose safety confirmed | Safe in low doses | Same; safe components | Safe at controlled doses | Same; non-toxic doses |
| 6) Can we rely on natural methods instead of vaccines? | Helps but does not replace vaccines | Similarly, natural methods alone are insufficient | Supportive but do not replace vaccines | Same explanation | Supports but does not replace vaccines | Similarly, the vaccine is most effective |
| 7) Are vaccines religiously acceptable? | Approved by major religions | Same; major religions endorse | No barrier; important for health | Same message | Recommended by religious authorities | Same; major religions agree |
| 8) Is the HPV vaccine necessary? | Yes; reduces cancer risk | Same; prevents cancer, both sexes | Yes; reduces cancer risk | Similar detail; protective for all | Yes; reduces cancer risk | Same; recommended for both |
| 9) Do COVID-19 vaccines cause cardiovascular disease? | Rare myocarditis; infection risk is higher | Same; infection risk is higher | Rare myocarditis; infection risk is higher | Same emphasis | No evidence; infection risk is higher | Same; infection risk is higher |

AI: Artificial Intelligence; AICB: Artificial Intelligence Chatbot; WHO: World Health Organization; CDC: Centers for Disease Control and Prevention.
Overall, all three AICBs showed strong consistency between first and second responses, particularly regarding accuracy and scientific agreement. AICB 1 and AICB 2 provided highly stable and detailed answers, while AICB 3 exhibited slightly more variability but still remained scientifically sound.
Performance scores
As summarized in Table 3, AICB 1 demonstrated superior performance across all four evaluation domains, achieving a perfect mean score of 5.0 for Accuracy and high scores in Consistency (4.9), Information Sufficiency (4.8), and Source Reliability (4.9). AICB 2 also performed well, particularly in Accuracy (4.8) and Information Sufficiency (4.7), although its slightly lower Consistency (4.6) suggests occasional variations in tone or phrasing. AICB 3 showed adequate performance, but lower mean values, particularly in Information Sufficiency (3.9), indicate that its responses, while generally correct, tended to lack the depth and comprehensiveness observed in the other agents.
Kruskal–Wallis testing indicated a significant difference among chatbots (H = 11.27, p < 0.01). Post hoc Dunn analysis showed AICB 1 > AICB 3 (p < 0.01) and AICB 2 > AICB 3 (p = 0.03); AICB 1 vs. AICB 2 was not statistically different (p = 0.35). Overall, AICB 1 was the most consistent and robust in delivering reliable, persuasive vaccine-related information across domains critical to public health communication.
DISCUSSION
This study assessed the reliability of freely accessible AI chatbots as information sources for parents seeking guidance on childhood vaccination. Our findings indicate that all three chatbots produced scientifically accurate and temporally consistent responses to common parental vaccine questions. AICB 1 demonstrated the most in-depth and evidence-based responses, often citing authoritative global health sources, which is crucial for building trust and confidence in vaccine information. AICB 2 effectively integrated local Turkish public health statements, demonstrating the value of tailoring information to specific cultural and regional contexts. In contrast, AICB 3's responses were accurate but more superficial and lacked source citation. These results align with existing literature on the potential of AI chatbots in public health education15 and vaccine communication.8,16 Chatbots can serve as virtual health assistants, offering medical guidance and promoting public health education.8 Our findings also support the idea that AI-driven chatbots can disseminate accurate vaccine information16 and improve vaccine literacy, that is, users' ability to access, understand, appraise, and apply vaccination-related information, supported by expert-evaluated scientific data.8 The strong agreement between chatbots and trusted scientific sources on vaccine topics shows that AI chatbots can help support traditional ways of communicating about prevention.9 At the same time, commercially biased or misleading scientific information may be presented without proper oversight, alongside reliable and referenced scientific sources.21
The differing performance among chatbots underscores the need to evaluate each model carefully for specific public health use cases.22,23 Evaluation should go beyond factual accuracy to also consider the depth of information, cultural relevance, and clarity of source attribution—key dimensions in vaccine-related communication. This is particularly important when addressing vaccine hesitancy, where personalized messaging and culturally adapted content can significantly influence parental attitudes, decisions, and literacy across diverse populations.24
All three AI chatbots delivered accurate, stable, and comprehensive vaccine information. Crucially, none propagated vaccine myths such as an autism link, which strengthens their potential as safe tools. Given that many parents rely on online information yet doubt its credibility, AI chatbots, when properly validated, may serve as first-line educational tools to mitigate vaccine hesitancy.9,23 However, the observed variability in depth and source citation suggests the need for ongoing evaluation and refinement of AI systems for public-facing health education.
Parents traditionally rely on conventional search engines such as Google to gather information on pediatric vaccines.9,23 Our data suggest that large language model chatbots add unique value beyond keyword search. First, conversational retrieval: chatbots deliver concise, contextual answers without requiring users to sift through dozens of links, reducing cognitive load and time cost.23 Second, integrated synthesis: AICB 1 and AICB 2, in particular, aggregate multiple guidelines (CDC, WHO, national ministries) into a single narrative, whereas search engines typically present fragmented pages that parents must reconcile; the dialogue is also iterative, so parents can immediately request simpler wording or a follow-up explanation that search engines cannot provide. Third, reduced exposure to misinformation: algorithmic hallucination is a risk, yet the chatbots in this study produced no significant factual errors, while ordinary search results still surface anti-vaccine blogs high in the rankings.3,22,23
AI chatbots, while helpful, present several significant risks in healthcare. First, there are concerns about opaque sourcing and transparency. While general-purpose AI models can contribute to judgment under uncertainty,23 there is a recognized need for greater transparency into algorithms and for decisions to be explained explicitly to human observers.22,23 Ensuring AI systems are transparent, interpretable, and explainable is critical for user trust.22,23 Second, these models are subject to model drift: the underlying system can be retrained and evolve over time, so the answers a user receives may change in ways that affect confidence.23 The output of large language models (LLMs) such as ChatGPT is also non-deterministic, meaning the same prompt submitted multiple times can yield different responses, which hinders reproducibility and consistency.23 Third, there is a significant risk of hallucination, whereby LLMs add statements not supported by the source material or generate plausible-sounding but false content.23 Hallucinations are an intrinsic problem of generative models and are difficult to remedy.23 In healthcare, incorrect text passages, missing relevant medical information, or misinterpretations of medical terms could lead patients to draw harmful conclusions, potentially resulting in physical and/or psychological harm.23 Finally, privacy leakage is a major concern. When users such as parents share personal health information with chatbots, that information can be more detailed than what is typically entered in online searches,23 and uploading protected health information to proprietary services like ChatGPT might compromise patient privacy.23 The ethical and legal issues surrounding data rights, privacy, and ownership are critical as AI becomes more integrated into health and healthcare.22
Limitations
This study, while providing valuable insights, is subject to several limitations that warrant consideration. Firstly, the analysis was confined to three publicly accessible AI chatbots and a predetermined set of nine frequently asked parental questions. To enhance the generalizability of findings, future research should broaden the scope to include a wider array of AI systems and question categories. Secondly, although rigorous efforts, including the use of a third reviewer for consensus, were undertaken to mitigate bias, the expert-based scoring methodology inherently involves subjective elements. Thirdly, while the study assessed temporal consistency between chatbot responses, the long-term stability of these models and their susceptibility to “model drift” remain unexplored. Finally, critical potential risks associated with chatbot deployment, such as the propensity for hallucinations, the lack of transparent sourcing, and the risk of privacy breaches during sensitive health discussions, necessitate ongoing investigation. These concerns are particularly pertinent as the real-world application of chatbots in health communication continues to expand. Future evaluations must rely on robust, evidence-based data from reputable institutions and peer-reviewed sources to ensure reliable knowledge generation. Despite these limitations, our study provides valuable evidence on the potential of AI chatbots to serve as reliable information sources for parents seeking guidance on childhood vaccination. As AI technology evolves, it is essential to carefully evaluate and optimize chatbot performance to deliver accurate, comprehensive, and culturally appropriate vaccine information.16 Further research should focus on comparative studies that examine how chatbot effectiveness may vary with question/argument design and implementation. 
Additionally, efforts should be made to address the digital divide and ensure equitable access to these potentially valuable tools.17 Chatbots can hallucinate, update unpredictably, and raise privacy concerns.22,23 Their outputs should supplement, not replace, professional medical advice. Regulatory frameworks, transparent sourcing, and user education are essential to maximize benefits while minimizing risk.23
This study found that AI chatbots generally provide accurate and consistent responses to common vaccine-related questions from hesitant parents. However, some answers lacked depth and did not include source citations. Chatbots should be viewed as supplementary resources rather than definitive sources of medical information. Enhancing their ability to guide users toward verified, expert-backed content remains essential.
Ethical approval
Our study did not involve any experiments on live human or animal subjects. It is based solely on publicly accessible chatbot interactions and does not include personal or identifiable data. Therefore, ethical approval was not required.
Source of funding
The authors declare the study received no funding.
Conflict of interest
The authors declare that there is no conflict of interest.
References
- Boston Consulting Group. 2024 global AI adoption survey. 2024. Available at: https://web-assets.bcg.com/a5/37/be4ddf26420e95aa7107a35aae8d/bcg-wheres-the-value-in-ai.pdf (Accessed on Mar 11, 2026).
- Rashid AB, Kausik MAK. AI revolutionizing industries worldwide: a comprehensive overview of its diverse applications. Hybrid Adv. 2024;7:100277. https://doi.org/10.1016/j.hybadv.2024.100277
- Dutta-Bergman MJ. The impact of completeness and Web use motivation on the credibility of e-health information. J Commun. 2004;54:253-69. https://doi.org/10.1111/j.1460-2466.2004.tb02627.x
- Icen M. The future of education utilizing artificial intelligence in Turkey. Humanities Social Sciences Communications. 2022;9:268. https://doi.org/10.1057/s41599-022-01284-4
- Kaya F, Aydin F, Schepman A, Rodway P, Yetisensoy O, Demir Kaya M. The roles of personality traits, AI anxiety, and demographic factors in attitudes toward artificial intelligence. Int J Hum Comput Interact. 2024;40:497-514. https://doi.org/10.1080/10447318.2022.2151730
- Tudor Car L, Dhinagaran DA, Kyaw BM, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. 2020;22:e17158. https://doi.org/10.2196/17158
- Bin Sawad A, Narayan B, Alnefaie A, et al. A systematic review on healthcare artificial intelligent conversational agents for chronic conditions. Sensors (Basel). 2022;22:2625. https://doi.org/10.3390/s22072625
- Cosma C, Radi A, Cattano R, et al. Exploring chatbot contributions to enhancing vaccine literacy and uptake: a scoping review of the literature. Vaccine. 2025;44:126559. https://doi.org/10.1016/j.vaccine.2024.126559
- Fiore M, Bianconi A, Acuti Martellucci C, et al. Vaccination hesitancy: agreement between WHO and ChatGPT-4.0 or Gemini Advanced. Ann Ig. 2024. https://dx.doi.org/10.7416/ai.2024.2657 Available at: https://sfera.unife.it/retrieve/f3159638-d275-4842-a9b0-b338ae5884f3/Fiore%20et%20al%202024%20Annali%20Igiene.pdf (Accessed on Mar 11, 2026).
- Salas A, Rivero-Calle I, Martinón-Torres F. Chatting with ChatGPT to learn about safety of COVID-19 vaccines - a perspective. Hum Vaccin Immunother. 2023;19:2235200. https://doi.org/10.1080/21645515.2023.2235200
- Sallam M, Salim N, Al-Tammemi AB, et al. ChatGPT output regarding compulsory vaccination and COVID-19 vaccine conspiracy: a descriptive study at the outset of a paradigm shift in online search for information. Cureus. 2023;15:e35029. https://doi.org/10.7759/cureus.35029
- Siddiqi DA, Miraj F, Raza H, et al. Development and feasibility testing of an artificially intelligent chatbot to answer immunization-related queries of caregivers in Pakistan: a mixed-methods study. Int J Med Inform. 2024;181:105288. https://doi.org/10.1016/j.ijmedinf.2023.105288
- Chan PSF, Fang Y, Cheung DH, et al. Effectiveness of chatbots in increasing uptake, intention, and attitudes related to any type of vaccination: a systematic review and meta-analysis. Appl Psychol Health Well Being. 2024;16:2567-97. https://doi.org/10.1111/aphw.12564
- Okonkwo CW, Amusa LB, Twinomurinzi H. COVID-Bot, an intelligent system for COVID-19 vaccination screening: design and development. JMIR Form Res. 2022;6:e39157. https://doi.org/10.2196/39157
- Baglivo F, De Angelis L, Casigliani V, Arzilli G, Privitera GP, Rizzo C. Exploring the possible use of AI chatbots in public health education: feasibility study. JMIR Med Educ. 2023;9:e51421. https://doi.org/10.2196/51421
- Passanante A, Pertwee E, Lin L, Lee KY, Wu JT, Larson HJ. Conversational AI and vaccine communication: systematic review of the evidence. J Med Internet Res. 2023;25:e42758. https://doi.org/10.2196/42758
- Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, users, benefits, and limitations of chatbots in health care: rapid review. J Med Internet Res. 2024;26:e56930. https://doi.org/10.2196/56930
- Kestenbaum LA, Feemster KA. Identifying and addressing vaccine hesitancy. Pediatr Ann. 2015;44:e71-5. https://doi.org/10.3928/00904481-20150410-07
- Dubé E, Vivion M, MacDonald NE. Vaccine hesitancy, vaccine refusal and the anti-vaccine movement: influence, impact and implications. Expert Rev Vaccines. 2015;14:99-117. https://doi.org/10.1586/14760584.2015.964212
- Angus DC, Khera R, Lieu T, et al. AI, health, and health care today and tomorrow: the JAMA summit report on artificial intelligence. JAMA. 2025;334:1650-64. https://doi.org/10.1001/jama.2025.18490
- Jeblick K, Schachtner B, Dexl J, et al. ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports. Eur Radiol. 2024;34:2817-25. https://doi.org/10.1007/s00330-023-10213-1
- Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell. 2019;267:1-38. https://doi.org/10.1016/j.artint.2018.07.007
- Morita PP, Lotto M, Kaur J, et al. What is the impact of artificial intelligence-based chatbots on infodemic management? Front Public Health. 2024;12:1310437. https://doi.org/10.3389/fpubh.2024.1310437
- Gentile A, Alesi M. COVID-19 parental vaccine hesitancy: the role of trust in science and conspiracy beliefs. Int J Environ Res Public Health. 2024;21:1471. https://doi.org/10.3390/ijerph21111471
Copyright and license
Copyright © 2026 The author(s). This is an open-access article published by Aydın Pediatric Society under the terms of the Creative Commons Attribution License (CC BY) which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is properly cited.




