Artificial Intelligence and Big Data
A Crossroads of Interoperability and Capability
- By Amy Scanlin, MS
ON APRIL 11, the U.S. Food and Drug Administration (FDA) approved the first medical device that uses an artificial intelligence (AI) algorithm, further solidifying AI's expanding role in the medical community. By some estimates, the healthcare AI market is expected to grow by 40 percent per year, reaching $6.6 billion by 2021,1 and AI technologies could overtake human performance in surgeries by 2053.2 That could amount to annual savings of over $1 billion for the healthcare industry.3
While touted as a revolution and the future of medicine, AI is at the same time feared for its potential unknowns. How will it threaten healthcare as we know it? Will it take over jobs, making more mundane tasks obsolete? Will it render diagnoses with increasingly greater accuracy, perhaps even more so than those made by humans? And can good governance keep pace as innovation pushes into new technological territory?
AI has the potential to give patients a wealth of expertise beyond the walls of their doctors' offices, and it offers providers an opportunity to put more personal time back into patient care. It is gaining momentum in reading plain-language medical records, looking for supporting information that can answer any number of questions. It is being used successfully in diagnostics, such as assessing the likelihood of cancers from photos and MRIs. And it can scan for contraindications and support the development of personalized medicine. AI's potential is unending, but at what cost? Cautionary histories, such as the early days of genetics studies, remind us that although AI is expanding rapidly, ethical considerations must always be kept at the forefront.
AI in Diagnostics: “Deep Learning”
What if a radiology report could be interpreted accurately in the blink of an eye? Incredibly, AI is teaching itself to do just that via "deep learning." Deep learning goes well beyond "if-then" scenarios, although exactly how it does so is still largely a mystery. Black-box AI systems look at tens of thousands of scanned images, such as those showing melanomas, abnormal EKGs and blood clots, to learn what each does and doesn't look like, and they are learning to do this with increasing sensitivity. In fact, researchers feeding images into these systems must remove extraneous blips or annotations, such as circles and arrows pointing to anomalies, so that as the machine scans and learns the images, it doesn't associate those marks with the anomalies themselves.4
The black-box part of the equation means the machine teaches itself in a manner akin to how brains learn, strengthening its electronic synapses through repetition. Much as children learn to tell a dog from a cat and a horse from a cow, these machines develop sensitivities that help them discern. By scanning, calculating and then recalculating as new images are fed into the system, the machines generate new and improved outputs. When an output is incorrect, such as in the case of a patient who does eventually develop cancer, a correction can be fed back into the machine so it can learn again and continue to improve.
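To make that learn-correct-relearn cycle concrete, here is a minimal sketch in Python using PyTorch. Everything in it, the tiny model, the synthetic "scans" and the labels, is a hypothetical stand-in for illustration, not the actual diagnostic systems described above.

```python
# Minimal sketch of deep learning's learn-correct-relearn cycle.
# The model, "scans" and labels are synthetic stand-ins for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))  # toy classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(100, 1, 64, 64)   # stand-ins for de-annotated scans
labels = torch.randint(0, 2, (100,))   # 0 = benign, 1 = malignant

def train(epochs=5):
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()   # adjust weights, the "electronic synapses"
        optimizer.step()

train()

# A follow-up reveals case 7 was mislabeled (the patient later developed
# cancer), so the correction is fed back and the model learns again.
labels[7] = 1
train()
```

The point is the cycle itself: train, compare outputs to eventual ground truth, fold corrections back in, and train again.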
The results are impressive. In one example, researchers at Stanford fed 14,000 master images of various diagnosed skin cancers and abnormal growths into a system. In 2015, when tested on new images against validation sets, the machine, which reports its results as probabilities, was correct 72 percent of the time, beating two board-certified dermatologists who, assessing the same images, were accurate only 66 percent of the time. An expanded test against 25 board-certified dermatologists produced similar results, with the machine showing greater overall sensitivity and specificity.4 Even more impressive, an industry competition that combined the skills of AI and humans cut breast cancer detection errors by 85 percent.1
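For readers unfamiliar with those two metrics, sensitivity is the share of true cases a reader catches, and specificity is the share of non-cases it correctly clears. The short Python sketch below computes both from confusion-matrix counts; the tallies are invented for illustration, not the studies' actual data.

```python
# Sensitivity and specificity from confusion-matrix counts.
# All tallies below are hypothetical, not the studies' actual data.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of actual cancers that get flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of benign lesions correctly cleared."""
    return tn / (tn + fp)

# Hypothetical machine vs. dermatologist tallies on the same 100 images.
print(sensitivity(tp=46, fn=4), specificity(tn=42, fp=8))   # machine
print(sensitivity(tp=41, fn=9), specificity(tn=39, fp=11))  # human reader
```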
But what about AI's challenges? Beyond the serious HIPAA implications, one very big concern is how far to go ethically, particularly when weighing AI against human judgment. If we remove the human factor, would the technology's extreme sensitivity increase the risk of unnecessary biopsies, particularly where the identified lesion is less aggressive? Or, as others would argue, is any early diagnosis worth that risk? If machines have the capability to improve outcomes, should they be allowed to?
Many who study AI, however, do not fear the inevitability of machines taking over healthcare. Instead, they see the technology as augmenting it, with machines handling more of the "yes/no" diagnoses and freeing caregivers to take a more involved role and spend more time with patients. After all, patients feel better about their care when they have meaningful interactions with their providers, learning not just the "whats" but, equally important, the "whys." While machines may someday provide the "what" in an office prescreen, they can't ask questions and they can't determine the "why." By using AI as an enhancement to medicine, then, physicians may be able to spend more time looking for root causes, discussing treatment options and giving their patients better peace of mind.
AI and the Data Mine: “Deep Patient”
As we continue to feed data into these machine aggregators, the ability to scan medical records, search vast amounts of medical literature (estimated to grow by 8,000 academic articles published daily5), assess images, formulate predictions and extrapolate data from personal devices is also gaining ground. With so much information to sift through, the task of finding meaningful connections far exceeds our traditional analytical capabilities. After all, as more and more data pours in, how do we extract the meaningful information?
AI can sift through huge quantities of information rapidly. It can find linkages and trends, and it eliminates the need to discard data that might otherwise be assumed irrelevant or simply too vast to include in the equation. AI can provide a more complete picture of health and health history (even familial history) and develop a better roadmap to care. In an approach dubbed "deep patient," early studies are showing machines can connect data points humans cannot easily see. The machines aren't looking at any one thing; they are looking at everything. By combining de-identified hospital data with other inputs, without any specific limiting parameters, AI avoids zeroing in on any one factor to the exclusion of others.1 Information is combined in unique ways to build a comprehensive picture, a predictive model that helps humans make decisions.
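As a rough illustration of that idea, the Python sketch below learns a compact representation of many raw record features at once, in the spirit of the deep patient approach (which reportedly relied on unsupervised learning over de-identified records). The data, feature count and layer sizes here are arbitrary assumptions, not the actual system.

```python
# Minimal sketch of unsupervised patient-representation learning.
# All data is synthetic; sizes are arbitrary illustrative choices.
import torch
import torch.nn as nn

records = torch.rand(500, 300)  # 500 de-identified patients, 300 raw features

# A small denoising autoencoder: compress everything at once, with no
# hand-picked target variable, then reconstruct the original record.
encoder = nn.Sequential(nn.Linear(300, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 300), nn.Sigmoid())
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(20):
    optimizer.zero_grad()
    noisy = records + 0.1 * torch.randn_like(records)  # corrupt, then repair
    loss = loss_fn(decoder(encoder(noisy)), records)
    loss.backward()
    optimizer.step()

# Each 64-number vector now summarizes a whole record and can feed a
# downstream predictive model of future diagnoses.
with torch.no_grad():
    patient_vectors = encoder(records)
```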
As data becomes less and less expensive to collect and store, and as computer processing becomes faster, cheaper and more precise, there is the opportunity to gather and sort information from ever-expanding sources. Doing so can lower healthcare costs, improve outcomes, save time and potentially eliminate unnecessary tests and treatments. It can drive the economic machine that healthcare has become as it satisfies demands for improved results. It also has the potential to lower the risk and impact of medical insurance fraud.
But AI has a huge limitation: interoperability. The lack of a common language between many systems has prevented true connectivity; our medical records, devices and more, at least today, don't easily speak to one another. For all of AI's potential, it is limited by siloed systems that keep information boxed into its current configurations. Federal meaningful use incentives are encouraging inroads, but for AI, there is still a long way to go.
FDA, meanwhile, is developing a new regulatory framework to promote innovation in the AI space and to support AI-based technologies, including a system in which trusted entities are precertified as innovators and not required to make additional submissions for each successive minor improvement. This is a real regulatory challenge, particularly for machine learning: How does the agency regulate something when even its designers don't fully understand how it works?
Genetics and Genomics
As capabilities and ethical considerations abound, AI's resurgence in the field of genetics is both an opportunity and a challenge. The question is: Even though we can alter some of our 20,000-some genes, giving them new instructions to build, repair or maintain the status quo, should we? Machines can assess a patient's specific tumor, genetic mutations and the available drugs to determine the pathway forward with the greatest chance of success. That sounds good, but what is the definition of success, and who decides it? Could such subjective definitions, and the subsequent treatments with the potential to eliminate a disease, be too far-reaching for man to decide?
This question is very pertinent to the study of genomics, particularly of germline cells. It is conceivable that science could evolve to the point where genes are altered to create a "healthier" individual. That raises concerns about man's ability to control destiny and about whether a condition should be eliminated just because we have the capability. In some circles, the answer is yes, but only for the most devastating conditions, those causing immense suffering or incompatible with life. How those criteria are defined, however, is another question, as is the subjectivity of any such definition. Currently, heritable germline therapy is illegal in the U.S., and a number of other countries have signed an agreement prohibiting germline modification.6
The American Society of Human Genetics board has adopted a position stating it is inappropriate to conduct germline gene editing that culminates in human pregnancy. However, it also stated that, with appropriate oversight and consent, it is acceptable to edit the in vitro germline genomes of human embryos for the benefit of scientific study. In addition, its position states any "future clinical application of human germline genome editing should not proceed unless, at a minimum, there is a) a compelling medical rationale, b) an evidence base that supports its clinical use, c) an ethical justification and d) a transparent public process to solicit and incorporate stakeholder input."7
Siddhartha Mukherjee reminds us in his book The Gene — An Intimate History that genes are recipes, not blueprints. Even in cases in which a gene can be permanently altered, the end result cannot be reliably predicted, owing to determinants such as environmental and behavioral triggers, and even chance. The challenge becomes exponentially harder for traits governed by combinations of gene variants rather than a single gene.8
Still, studies progress, particularly with CRISPR-Cas9, a technology adapted from a naturally occurring genome-editing system in bacteria. In nature, bacteria capture snippets of DNA from invading viruses and use them to create DNA segments known as CRISPR arrays. These arrays allow the bacteria to "remember" the viruses (or closely related ones) so that if the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses' DNA. The bacteria then use Cas9 (or a similar enzyme) to cut the DNA apart, disabling the virus. In the laboratory, the system works much the same way: the eventual result is that the cell's own DNA repair machinery adds or deletes pieces of genetic material, or changes the DNA by replacing an existing segment with a customized sequence. Research using CRISPR-Cas9 in humans has only just gotten underway in the West, targeting a wide variety of diseases, including single-gene disorders such as cystic fibrosis, hemophilia and sickle cell disease. It also holds promise for the treatment and prevention of more complex conditions such as cancer, heart disease, mental illness and HIV.9
One area of agreement on genomic editing is the need for more discussion of its scientific potential, future opportunities and utility, as well as the ethical question of how far this line of study should be pursued. The National Academies of Sciences, Engineering and Medicine's (NASEM) Human Gene-Editing Initiative, while firm in its position that safety, technical and ethical issues bar wide application of germline therapy beyond the treatment of disease or disability, does encourage additional discussion on the topic. NASEM recommends strict conditions for the study of germline therapy in its 2017 report "Human Genome Editing: Science, Ethics and Governance."10
A Need for Intelligent Discussion
While there are many questions and much debate about how AI should move forward for the enhancement of medical care, there is no question that the rapid pace of progress demands an ongoing discussion that keeps up with it. How will AI be integrated into medical care? Is it possible to unintentionally build bias into decision-making algorithms? What are the legal ramifications when a prediction is wrong, and how will FDA regulate this rapidly changing technology?
More and more data is available to us every day, though much of it is fractured and, in some cases, unusable in its current state. The future capability to capture, store, translate and analyze that data into meaningful information for the improvement of patient care is at the root of AI's interoperability challenge. AI is here to stay, and we need to be intelligent about how its growth is nurtured and used.
References
1. Tirrell, M. From Coding to Cancer: How AI Is Changing Medicine. CNBC, May 11, 2017. Accessed at www.cnbc.com/2017/05/11/from-coding-to-cancer-how-ai-is-changing-medicine.html.
2. Fogel, AL, and Kvedar, JC. Artificial Intelligence Powers Digital Medicine. Nature, March 14, 2018. Accessed at www.nature.com/articles/s41746-017-0012-2.
3. Molteni, M. Healthcare Is Hemorrhaging Data. AI Is Here to Help. Wired, Dec. 30, 2017. Accessed at www.wired.com/story/health-care-is-hemorrhaging-data-ai-is-here-to-help.
4. Mukherjee, S. AI Versus MD. The New Yorker, April 3, 2017. Accessed at www.newyorker.com/magazine/2017/04/03/ai-versus-md.
5. Healthcare 2020: The e-Doctor Will See You Now. Wired. Accessed at www.wired.com/brandlab/2016/12/healthcare-2020-e-doctor-will-see-now.
6. With Stringent Oversight, Heritable Germline Editing Clinical Trials Could One Day Be Permitted for Serious Conditions; Non-Heritable Clinical Trials Should Be Limited to Treating or Preventing Disease or Disability at This Time. National Academies of Sciences, Engineering and Medicine press release, Feb. 17, 2017. Accessed at www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=24623.
7. Ormond, KE, Mortlock, DP, Scholes, DT, et al. American Society of Human Genetics Position Statement: Human Germline Genome Editing. American Journal of Human Genetics, Volume 101, Issue 2, pp. 167-176, Aug. 3, 2017. Accessed at www.cell.com/ajhg/fulltext/S0002-9297(17)30247-1.
8. Mukherjee, S. The Gene — An Intimate History. Scribner, 2016; ISBN: 978-1-4767-3550-0.
9. U.S. National Library of Medicine Genetics Home Reference. What Are Genome Editing and CRISPR-Cas9? Accessed at ghr.nlm.nih.gov/primer/genomicresearch/genomeediting.
10. National Human Genome Research Institute. Genome Editing. Accessed at www.genome.gov/27569227/whats-happening-in-genome-editing-right-now.