
Minimizing Risks and Maximizing Rewards from Machine Learning

Kathryn Marchesini, Jeff Smith, and Jordan Everson | September 7, 2022

When talking about artificial intelligence (AI) today, people are usually referring to predictive models—often driven by machine learning (ML) techniques—that “learn” from historic data and make predictions, recommendations, or classifications (outputs) that inform or drive decision making. The power of ML is in its enormous flexibility: you can build a model to predict or recommend just about anything, and we have seen it transform many sectors.
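To make that idea concrete, here is a minimal sketch of what “learning” from historic data looks like in code. The data and features below are synthetic and purely illustrative; real clinical models are far more involved.

```python
# A minimal, illustrative sketch of "learning" from historic data: a model
# is fit to past examples and then produces outputs for new cases. The
# data and features here are synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical "historic data": two numeric features per case and a
# binary outcome the model will learn to predict.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learn" from historic examples...
model = LogisticRegression().fit(X_train, y_train)

# ...then make predictions (outputs) that could inform decision making.
print("held-out accuracy:", model.score(X_test, y_test))
```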

The potential for ML and related technologies in health care is exciting. For instance, the National Academy of Medicine (NAM) described ML and other forms of AI as having the potential to represent the “payback” of using health IT, “by facilitating tasks that every clinician, patient, and family would want, but are impossible without electronic assistance.”

ONC plays an active role in making this “payback” possible. Much of the health data that fuels ML and AI applications is generated by certified health IT and is underpinned by technical standards and specifications required through the ONC Health IT Certification Program (“Certification Program”). We are excited about the enormous potential these tools could have to improve health care, but we are also aware of potential risks, challenges, and unmet needs.

At a recent Health Information Technology Advisory Committee hearing on “health equity by design,” stakeholders commented on a range of issues about the use of ML in health care. We heard that clinicians have unmet needs for information and transparency, and that until these needs are met, they are unlikely to use ML-driven tools, or they risk misapplying them to their patients. For example, panelists noted that clinicians need to know that an AI product has been evaluated in their setting of care, that the technology was trained on data that reflect their practice population, and that the product will be continuously monitored. Stakeholders also noted that clinicians want to be able to communicate back to developers of such AI products when a predictive recommendation did not work well for a patient. We also heard general concern that ML-driven technology could create or recreate the systemic inequalities that come with the lack of access to quality health insurance and quality care.
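One small piece of the evaluation panelists described—checking that a model performs well for the population it will actually serve—can be sketched simply. The subgroup labels and data below are hypothetical, for illustration only.

```python
# Hypothetical sketch of one evaluation step panelists described:
# checking a model's accuracy separately for each patient subgroup, so
# under-performing groups stand out. All labels and values are made up.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} computed over each subgroup's cases."""
    return {
        g: float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }

# Toy, made-up predictions and subgroup labels.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])
print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.5} -- group B would warrant a closer look.
```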

As ONC considers this evolving landscape alongside industry, academia, and other stakeholders, our collective challenge is clear: how can we effectively capitalize on the potential of AI, ML, and related technologies, harnessing the technology’s ability to predict outcomes, while avoiding risks related to the use of invalid, inappropriate, unfair, or unsafe predictions?

In considering how best to meet this challenge, it is helpful to think about parallels from our past. ONC understands that in the near term, as the NAM report puts it, AI “…should focus on promoting, developing, and evaluating tools that support humans rather than replace them with full automation.” When put this way, ML sounds a lot like another brand of artificial intelligence: the rules-based decision support interventions that have been around for decades, are already widely used, and “support humans rather than replace” them. We at ONC have taken to calling this type of technology predictive decision support.
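The parallel is easier to see side by side. In the hedged sketch below, both the rule’s threshold and the model’s weights are invented for illustration; the difference is simply whether a human authored the logic or a model learned it from data.

```python
# Illustrative contrast between the two kinds of decision support
# compared above. The rule's threshold and the model's weights are
# invented for illustration; neither reflects real clinical guidance.

def rules_based_alert(creatinine_mg_dl: float) -> bool:
    # Rules-based CDS: a fixed, human-authored threshold.
    return creatinine_mg_dl > 1.5

def predictive_alert(features: list, weights: list) -> bool:
    # Predictive decision support: a threshold on a learned score
    # (the weights would come from training on historic data).
    score = sum(f * w for f, w in zip(features, weights))
    return score > 0.5

print(rules_based_alert(1.8))                     # True: the rule fires
print(predictive_alert([0.4, 1.2], [0.3, 0.35]))  # True: score 0.54 > 0.5
```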

ONC’s Role in Advancing the Development and Use of Technology for Decision Support

ONC has supported a certification criterion for clinical decision support (CDS) since the Certification Program’s inception. Like the technology it represents, this criterion has evolved over time.

In 2010, ONC first understood CDS to be more than a “rule” or an “alert”: rather, CDS comprises a variety of functions that help improve clinical performance and outcomes. In 2012 (as part of the 2014 Edition rulemaking), we noted that a CDS intervention should be interpreted more broadly, and we established requirements for certified health IT to support distinct “newer types” of interventions, including “evidence-based decision support interventions” and “linked referential CDS.”

We also established requirements that these new CDS intervention types provide “source attribute” information, such as bibliographic citations, to promote transparency. These source attribute data points allow users to evaluate an intervention, which in turn improves its utility for patient care. This requirement was consistent with recommendations made by a former ONC Advisory Committee, and the availability of source attribute information for user review would later be identified by NAM as a best practice in designing CDS.
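As an illustration only, source attribute information can be thought of as structured metadata attached to an intervention. The field names below are hypothetical, not the Certification Program’s actual schema.

```python
# Hypothetical illustration of "source attribute" metadata attached to a
# decision support intervention. Field names are illustrative, not the
# Certification Program's actual schema.
intervention = {
    "name": "Example evidence-based reminder",
    "type": "evidence-based decision support intervention",
    "source_attributes": {
        "developer": "Example Developer, Inc.",
        "bibliographic_citation": "Example J, et al. Example Journal. 2020.",
        "funding_source": "None reported",
        "release_date": "2021-06-01",
    },
}

# Surfacing these attributes for user review lets clinicians evaluate an
# intervention before relying on it.
for key, value in intervention["source_attributes"].items():
    print(f"{key}: {value}")
```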

ONC’s policy goal has been and continues to be centered on ensuring that certified health IT can support broad categories of decision support intervention types, while being agnostic as to the intended purpose of such decision support. This approach has led to a dynamic and flourishing landscape of decision support technologies, varied in purpose and scope, ranging from patient safety and clinical management to administrative and documentation functions.

Transparency for ML in Health Care

Many stakeholders have called for transparency initiatives related to ML and other predictive models, including their use in health care. Greater transparency could serve as the tipping point that unlocks the vast potential of ML and related technologies in health care while also ensuring their fair, appropriate, valid, effective, and safe use.

Stay tuned: in a future post, we plan to discuss potential challenges for stakeholders to consider (such as the information asymmetry that comes with so-called “black box” models) as they explore, support, and foster advancements in the use of predictive models and the decision support interventions they drive in health care.

This is part of the Artificial Intelligence & Machine Learning Blog Series.