The American Medical Informatics Association has issued new recommendations for how clinical decision support tools that adapt their algorithms as they are trained on new data should be overseen to ensure safety and efficacy.
WHY IT MATTERS
In a board-approved position paper, published in the Journal of the American Medical Informatics Association, the clinical informatics group offers suggestions to the U.S. Food and Drug Administration and other policymakers for how such “adaptive CDS” should be managed as it evolves, given the “unique challenges and considerations” it represents.
AMIA says the vast array of emerging tools and applications – now and in the future – needs a concrete framework “to ensure safe and effective use of AI-driven CDS for patient care and facilitate a wider discussion of policies needed to build trust in the broader use of AI in healthcare.”
“Debates about the scope and force of oversight for the safety and effectiveness of CDS have tended to emphasize legal regulation, of which little exists, and institutional governance, which frequently is wholly lacking,” the authors said. “Organizational leaders have recognized content creation, analytics and reporting, and governance and management as critical components in the development of CDS, but achieving all 3 in sufficient depth remains challenging for organizations.”
AMIA is focused on two use cases of adaptive CDS:
- Those sold to customers for use in a healthcare setting (“marketed ACDS”).
- Those developed in-house by healthcare systems for their own use (“self-developed ACDS”).
While the FDA has oversight of most marketed adaptive CDS tools (even as the agency is still sorting out exactly what that means and what shape it will take), “self-developed ACDS is likely unregulated by any federal entity and is already used routinely without oversight by any authoritative body – public, private, or non-profit,” the AMIA position paper notes.
“The current policy and oversight landscape for Adaptive CDS is inadequate,” said Dr. Joseph Kannry, AMIA Public Policy Committee chair and a co-author of the paper. “Gaps in federal jurisdiction of Adaptive CDS have left patients subject to algorithmic bias and potentially exposed to patient safety issues. In this paper we present a policy framework that spans the design and development, implementation, evaluation, and ongoing maintenance of Adaptive CDS.”
Specifically, the group puts the focus on two must-haves:
- First, it makes the case that “transparency in how Adaptive CDS is trained is paramount.” The new framework requires standards for how decision support algorithms are trained, “including the semantics and provenance of training datasets … necessary for validation prior to deployment.” To prevent the introduction of bias, the group also calls for visibility into how the data fueling the models was acquired, what criteria guided its selection, and which attributes could influence how the models perform on new data.
- Second, AMIA calls for “communications standards to convey specific attributes of how the model was trained, how it is designed, and how it should operate in situ.” Such guidelines are necessary to “objectively compare, evaluate, and guide ongoing maintenance of the algorithm,” it argues.
Toward those ends, beyond regulatory bodies such as the FDA, the position paper advocates the creation of new groups to govern AI deployments within specific healthcare organizations and calls for a new system of oversight across institutions. It also suggests launching centers of excellence to carry forward work on the development, testing and evaluation of safe adaptive CDS.
THE LARGER TREND
“Clinical decision support represents one of the most important applications of computing to patient care and continues to advance as does the widespread deployment of EHRs and the professionalization of clinical informatics,” said AMIA in the position paper.
New and emerging decision support systems are now able to “adapt to dynamic and growing bodies of knowledge and data,” powered by artificial intelligence and machine learning and made available as software as a medical device.
“These trends offer significant promise for reducing alert fatigue, improving cognitive load, and delivering better evidence to point-of-care decisions in the form of Adaptive CDS.”
But first, it’s imperative to ensure data integrity and algorithmic transparency – and, as recent reports have shown, even FDA-cleared products can’t guarantee that.
The adverse effects of algorithmic bias – with “under-developed and potentially biased models” worsening COVID-19 health disparities for people of color, for instance – are by now well-recognized.
The FDA understands that, and says it is making it a priority. This past month, the agency put forth an action plan focused on AI-enabled software as a medical device, and said it plans a “multi-pronged approach” to advancing oversight of machine learning-enabled devices, with an eye toward ensuring patient safety, algorithm transparency and real-world results.
ON THE RECORD
“The informatics community invented CDS, and AMIA members have championed the advancement of CDS for decades,” said Patricia C. Dykes, AMIA board chair and program director of research at the Brigham and Women’s Center for Patient Safety Research and Practice, in a statement.
“An exponential growth in health data, combined with growing capacities to store and analyze such data through cloud computing and machine learning, obligates the informatics community to lead a discussion on ways to ensure safe, effective CDS in such a dynamic landscape,” she said.
“The use of AI in healthcare presents clinicians and patients with opportunities to improve care in unparalleled ways,” added AMIA Public Policy Committee Member Carolyn Petersen, lead author on the position paper. “Equally unparalleled is the urgency to create safeguards and oversight mechanisms for the use of machine learning-driven applications for patient care.”
Twitter: @MikeMiliardHITN
Email the writer: [email protected]
Healthcare IT News is a HIMSS publication.