Spring 2024 - Safety

Mitigating AI Risks for Consumer Health Misinformation

While agencies race to put regulations in place for the use of artificial intelligence, healthcare providers can help counter the spread of health misinformation by engaging with their patients.

While artificial intelligence (AI) has the power to positively impact the healthcare industry, it also has the potential to cause harm if left unchecked. Though AI is heralded for supporting diagnostics, decision-making and administration, detractors warn of significant risks, chief among them cybersecurity, privacy, and the ethical and legal considerations surrounding the Health Insurance Portability and Accountability Act (HIPAA). Increasingly, generative AI is also being used to create news-related stories, including stories about healthcare. This, too, poses risks because AI cannot discern factual information from misinformation. When inaccurate content based on poorly vetted sources is used to create and publish new material, it leads to consumer confusion and often breeds distrust in legitimate scientific evidence.

Whether content is legitimate or not, its spread online reverberates through echo chambers of recommendation algorithms and social media shares, creating an increasingly challenging literacy environment. In the healthcare space, for instance, patients are tasked with separating sound medical science from fluff to make sense of increasingly complex information. Add to this the compounding risk of social discord caused by competing narratives, and patients may begin to question factual messaging from legitimate public health sources. This threatens health literacy and can lead patients to limit their own access to care.1

Regulatory Environment

AI’s use in healthcare is a regulatory gray area with no overarching AI-specific regulations currently in place. Instead, oversight in the United States is largely pieced together among federal agencies such as the Department of Health and Human Services (HHS) Office for Civil Rights (OCR), the Food and Drug Administration, the Federal Trade Commission and the Department of Justice, as well as numerous state and local agencies,2 all of which are working to develop proactive strategies that protect patients and ensure the integrity of the healthcare system.

Foreign regulatory agencies also have their own rules concerning AI, such as the European Union’s draft Artificial Intelligence Act, which, when finalized, will classify AI systems by risk category, each with its own requirements, and the World Health Organization’s (WHO) 2021 guidance document on ethical and governance considerations for using AI in health.2

Even so, misinformation continues to pervade Internet searches, as evidenced by anecdotal accounts of people using ChatGPT to search for health information online, then accepting and even sharing that information on social media without understanding its validity.3 In 2023, the HHS Health Sector Cybersecurity Coordination Center warned that nefarious actors are using AI to develop malware, evade security measures and spread targeted phishing emails.

That being said, numerous entities are working diligently to counter these potential ill effects by using AI to conduct risk assessments and develop appropriate mitigation plans. Indeed, ensuring appropriate use of AI to support legitimate healthcare needs has great potential, so getting ahead of the risks remains a top priority.

AI Engineering and Large Language Models 

Large language models (LLMs), a type of generative AI trained to recognize and generate text, are used to create Internet-based content. Whether the output is articles, websites or emails, generative AI can enhance productivity, but the challenge lies in LLMs’ inability to parse out and exclude misinformation from newly generated content. That means as LLMs generate new text, unchecked information can be incorporated and used as source material again and again, lending apparent legitimacy to content regardless of its factual nature, as the sketch below illustrates.
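
To make that feedback loop concrete, consider the toy Python sketch below. The source pool, claims and sampling logic are all invented for demonstration; real training pipelines are vastly larger, but the dynamic is the same: without a vetting step, an unverified claim re-enters the source pool and accumulates apparently independent support.

```python
import random

random.seed(0)

# Toy source pool of (claim, vetted) pairs -- all invented for illustration.
source_pool = [
    ("Vaccine X reduced hospitalizations in clinical trials", True),
    ("Supplement Y cures diabetes", False),  # an unvetted claim
]

def generate_article(pool):
    """Toy 'generation': sample claims from the pool, vetted or not."""
    return random.sample(pool, k=min(2, len(pool)))

for round_num in range(1, 4):
    article = generate_article(source_pool)
    # With no fact-checking gate, the generated article is fed back into
    # the pool, so each unvetted claim it repeats gains one more
    # apparently independent "source."
    source_pool.extend(article)
    unvetted = sum(1 for _, vetted in source_pool if not vetted)
    print(f"Round {round_num}: pool size {len(source_pool)}, unvetted copies {unvetted}")
```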

Likewise, AI can be used to alter text, images, audio and video, further lending credence to inaccurate information, even falsely attributing it to legitimate journalists and repurposing copyrighted material without attribution.4

Teaching LLMs to unlearn information is complex and currently requires human intervention to identify and exclude erroneous content. Given the volume of information available, however, the practicality of this strategy is limited. Therefore, many legitimate sources use human review and verification tools to fact-check information prior to publication. The “reviewed by” and “fact checked by” statements incorporated into blogs and articles are important steps in lending legitimacy, regardless of how the content was generated.
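
A pre-publication gate of the kind those statements imply might look like the minimal sketch below. The Article record and its reviewed_by and fact_checked_by fields are hypothetical names invented for illustration, not any publisher’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Article:
    """Hypothetical content record; field names are invented for illustration."""
    title: str
    body: str
    reviewed_by: str = ""      # credentialed human reviewer
    fact_checked_by: str = ""  # fact-checking attribution

def ready_to_publish(article: Article) -> bool:
    """Gate publication on both human-review attributions being present."""
    return bool(article.reviewed_by) and bool(article.fact_checked_by)

draft = Article(title="New diabetes guidance", body="...")
print(ready_to_publish(draft))   # False: no human review recorded yet
draft.reviewed_by = "J. Smith, MD"
draft.fact_checked_by = "Editorial fact-checking team"
print(ready_to_publish(draft))   # True: both statements can now be displayed
```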

New strategies for countering the capture and reuse of erroneous data are in development, including flagging AI-generated content so readers can recognize it and further investigate its claims. Another concept is building language models on smaller data sets, both to create a more impactful AI and to give healthcare entities an opportunity to participate in its design in an evidence-based way.3
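
One lightweight form of such flagging is a reader-visible disclosure attached to generated content. The sketch below is illustrative only (the function and banner wording are invented); production systems lean on machine-readable provenance metadata, such as the C2PA content-credentials standard, rather than a banner string.

```python
def label_ai_content(html_body: str, tool_name: str) -> str:
    """Prepend a reader-visible disclosure to AI-assisted content.

    Toy sketch only: real deployments embed machine-readable provenance
    (e.g., content credentials) so the label survives copying and reuse.
    """
    banner = (
        f'<p class="ai-disclosure">Portions of this article were drafted '
        f'with {tool_name}. Verify health claims with your provider or '
        f'public health sources.</p>'
    )
    return banner + html_body

print(label_ai_content("<p>Generated draft text...</p>", "an AI writing assistant"))
```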

Addressing Misinformation

Although the prolific nature of false and misleading health information causes mental and social stress, changing one’s interpretation of health information is challenging, particularly when that information is viewed as coming from a trusted source. Social media in particular has been shown to be a conduit for spreading poor-quality information, as evidenced during the COVID-19 public health emergency, and has led to a host of problems, including vaccine hesitancy.1 A Kaiser Family Foundation review found that 24 percent of adults surveyed look to social media at least weekly for health information, and 54 percent said they believed at least one false statement about COVID-19 and vaccines to be true.5

Increasingly, misinformation incorporates more scientific language and fewer emotional statements in an attempt to lend credibility. It is therefore important for providers to proactively identify the health misinformation trends their patients will be exposed to, understand how patients seek this information, and identify the context and framing in which it is presented. For instance, not only is inaccurate information about diabetes prolific on YouTube, but these videos also tend to be more popular than those containing evidence-based information.6

The American Medical Association (AMA) formally addressed AI in 2018, touting its potential benefits for supporting clinicians. However, it also warned of the potential for AI’s misuse, the introduction of bias and the harm that disseminating incorrect information can cause patients, providers and the industry as a whole. The AMA urges that education be at the forefront of patient conversations, covering both the benefits and risks of AI-generated information.7

Direct and Lasting Impacts

It can be difficult for laypersons to parse out factual health information and to use that information to develop and sustain appropriate health-promoting behaviors. This is where healthcare providers can have a direct and lasting impact through active engagement with patients to discern how well they understand their own health conditions and available treatments. Providers, as trusted entities, can help to correct misinformation regardless of its source in an empathetic and personalized way. These conversations can happen one on one in the treatment room or can take place more broadly through online and in-person community forums that promote health literacy with sound, publicly available information.8

Providers are further tasked with staying abreast of circulating health information, both factual and false, which better enables them to meet patients where they are. It is important that providers acknowledge any accurate information patients present rather than disregarding their beliefs entirely. By asking patients open-ended questions and having reliable information readily available, providers can help patients learn to think critically and evaluate their own health assumptions.

Countering the Narratives of Misinformation

One of the best tools available for countering the narratives of misinformation is a discerning eye. Providers should encourage patients to assess the credibility of content by reviewing the “About Us” section of a publisher’s website and cross-referencing health-related information against CDC.gov and other public health department websites. Patients should also be encouraged to ask their providers questions.

American Family Physician (AFP) recommends that physicians not engage with false information, even in an effort to correct it online. Algorithms are an important part of Internet search returns, and engaging with content can influence those algorithms, increasing the content’s visibility. Instead, AFP recommends providers only “like” and “share” verified information from reliable sources.9 Likewise, misinformation can be reported directly to most social media platforms, some of which are additionally working with WHO and other authorities to screen for medical misinformation.10
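
The logic behind that advice can be seen in a toy engagement-weighted ranking. The weights and posts below are invented, and real platform algorithms are proprietary and far more complex, but the core dynamic holds: every interaction raises a post’s score, so even a corrective reply can push misinformation higher in feeds.

```python
# Toy feed ranking: a post's score is a weighted sum of all interactions,
# regardless of the commenter's intent. Weights are invented for illustration.
WEIGHTS = {"like": 1.0, "share": 2.0, "reply": 1.5}

def engagement_score(interactions: dict) -> float:
    return sum(WEIGHTS[kind] * count for kind, count in interactions.items())

false_post = {"like": 10, "share": 4, "reply": 0}
print(engagement_score(false_post))  # 18.0

# A well-meaning corrective reply still adds to the score...
false_post["reply"] += 1
print(engagement_score(false_post))  # 19.5 -- the correction increased visibility
```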

That said, social media can without question be an incredibly valuable tool for quickly gathering and disseminating health information. In fact, some believe government and health institutions should increase their social media presence, particularly during pandemics,6 to get and stay ahead of potentially harmful information that could have negative consequences for public health.

It takes time, active listening and empathy to help patients counter their fears. Providers must arm patients with written information in a language in which they are fluent, and include infographics for particularly complicated topics so they have valuable facts to consider even after they’ve left the office. They should make sure patients understand what they have heard during their office visit, and inquire how they feel about it in an effort to address any lasting concerns.9

HIPAA, Ethical and Legal Considerations

A final note about the use of AI in healthcare information concerns privacy. AI use introduces liability concerns for HIPAA-covered entities. The HHS OCR urges a thorough AI-focused threat analysis and the development and strengthening of comprehensive mitigation plans. Employees and contractors should receive appropriate cybersecurity training, and entities should maintain incident response protocols and regularly review regulatory and best-practice frameworks that affect AI in healthcare settings. Importantly, the use of AI tools must be disclosed to patients, and billing practices must be considered when using AI-assisted tools.2

Powerful but Challenging

Although AI is a powerful tool for healthcare entities, it can also introduce serious challenges, including the introduction and proliferation of misinformation. Unchecked, this information can spread, potentially causing widespread harm. While tools are in development to counter these threats, providers can help alleviate patient stress through open communication and empathetic listening so patients can learn to vet information and make sound decisions regarding their health.

References

1. Borges do Nascimento, I, Pizarro, AB, Almeida, JM, et al. Infodemics and Health Misinformation: A Systematic Review of Reviews. Bulletin of the World Health Organization, 2022 Sept. 1;100(9):544–561. Accessed at www.ncbi.nlm.nih.gov/pmc/articles/PMC9421549.

2. Debevoise and Plimpton Law Firm. Artificial Intelligence in Healthcare: Balancing Risks and Rewards, July 31, 2023. Accessed at www.debevoise.com/-/media/files/insights/publications/2023/07/31_artificial-intelligence-in-healthcare.pdf?rev=3dd4532cd50f4f8a89fa7819f344e976&hash=1752A83C797CD691F117F825DE38B631.

3. Harrer, S. Attention Is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine. eBioMedicine, 2023 April;90:104512. Accessed at www.thelancet.com/journals/ebiom/article/PIIS2352-3964(23)00077-4/fulltext.

4. Chin-Rothmann, C. Navigating the Risks of Artificial Intelligence on the Digital News Landscape. Center for Strategic and International Studies, Aug. 31, 2023. Accessed at www.csis.org/analysis/navigating-risks-artificial-intelligence-digital-news-landscape.

5. Kaiser Family Foundation. Poll: Most Americans Encounter Health Misinformation, and Most Aren’t Sure Whether It’s True or False, Aug. 22, 2023. Accessed at www.kff.org/coronavirus-covid-19/press-release/poll-most-americans-encounter-health-misinformation-and-most-arent-sure-whether-its-true-or-false.

6. Suarez-Lledo, V, and Alvarez-Galvez, J. Prevalence of Health Misinformation on Social Media: Systematic Review. Journal of Medical Internet Research, 2021 Jan. 20;23(1):e17187. Accessed at www.ncbi.nlm.nih.gov/pmc/articles/PMC7857950.

7. Robeznieks, A. Educate Patients About Misleading AI-Generated Medical Advice. American Medical Association, June 13, 2023. Accessed at www.ama-assn.org/practice-management/digital/educate-patients-about-misleading-ai-generated-medical-advice.

8. Confronting Health Misinformation. The U.S. Surgeon General’s Advisory on Building a Healthy Information Environment. Accessed at www.hhs.gov/sites/default/files/surgeon-general-misinformation-advisory.pdf.

9. Shajahan, A, and Pasquetto, I. Countering Medical Misinformation Online and in the Clinic. American Family Physician, August 2022. Accessed at www.aafp.org/pubs/afp/issues/2022/0800/editorial-countering-medical-misinformtion.html.

10. Neylan, JH, Patel, SS, and Erickson, TB. Strategies to Counter Disinformation for Healthcare Practitioners and Policymakers. World Medical and Health Policy, Nov. 24, 2021. Accessed at www.ncbi.nlm.nih.gov/pmc/articles/PMC9216217.

Amy Scanlin, MS
Amy Scanlin, MS, is a freelance writer and editor specializing in medical and fitness topics.