Edward Lee, MD, quoted in Becker’s Health IT on the need for diverse data to inform AI algorithms
Edward Lee, MD, executive vice president and chief information officer of The Permanente Federation, told Becker’s Health IT that training artificial intelligence (AI) algorithms on insufficiently diverse data can result in bias.
“At a time when we are incorporating more and more AI in medicine, this bias can inadvertently contribute to the widening of health care disparities,” said Dr. Lee, who also serves as associate executive director of The Permanente Medical Group. “One of the first steps we need to take is to be intentional in looking for bias. If we don’t look, we’ll never find it, so understanding that AI bias can be part of any algorithm is essential.”
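To make Dr. Lee’s point about intentionally looking for bias concrete, here is a minimal sketch (not from the article) of one common way to “look”: comparing a model’s sensitivity across patient subgroups and flagging any group that lags well behind. The data records, field names, stand-in model, and tolerance threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a subgroup bias audit. All fields, data, and
# thresholds below are illustrative assumptions, not from the article.
from collections import defaultdict

def subgroup_recall(records, predict):
    """Compute recall (sensitivity) per subgroup for a binary classifier."""
    hits = defaultdict(int)       # true positives per subgroup
    positives = defaultdict(int)  # actual positives per subgroup
    for r in records:
        if r["label"] == 1:
            positives[r["group"]] += 1
            if predict(r["features"]) == 1:
                hits[r["group"]] += 1
    return {g: hits[g] / positives[g] for g in positives}

def flag_gaps(recall_by_group, tolerance=0.05):
    """Flag subgroups whose recall falls well below the best-performing group."""
    best = max(recall_by_group.values())
    return {g: r for g, r in recall_by_group.items() if best - r > tolerance}

# Toy example: the gap in group "B" surfaces only because we looked for it.
records = [
    {"features": [0.9], "label": 1, "group": "A"},
    {"features": [0.2], "label": 1, "group": "B"},
]
model = lambda x: 1 if x[0] > 0.5 else 0  # stand-in for a trained model
print(flag_gaps(subgroup_recall(records, model)))  # -> {'B': 0.0}
```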
Dr. Lee was 1 of 3 leading health system executives who provided perspectives on the composition of the datasets that health care systems use to train AI-based medical devices. Over the past decade, hospitals have increasingly adopted AI tools to reduce inefficiencies, optimize clinician workflows, and expedite diagnostics. However, as the story notes, AI-powered medical tools should be developed and regulated with a diverse range of patients in mind.
“Because bias can be introduced at multiple points throughout the algorithm development process, careful consideration is needed during all of the steps,” Dr. Lee said. “This can start as early as building a diverse team that can bring different perspectives and expand thinking about the way data is collected, curated, and analyzed. Additional key mitigating steps are including as broad a dataset as possible, and continually validating and revalidating results of an AI algorithm to confirm the output makes clinical sense.”
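As an illustration of the “continually validating and revalidating” step Dr. Lee describes, below is a short, hypothetical sketch of a recurring check that compares a model’s current performance on freshly labeled cases against the baseline recorded at deployment. The baseline figure, sample pairs, metric choice, and acceptable-drop threshold are assumptions made for the example, not details from the article.

```python
# Hypothetical sketch of periodic revalidation: compare current metrics on
# fresh labeled cases against an accepted baseline, and raise an alert when
# performance degrades. Numbers and thresholds are illustrative assumptions.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that agree."""
    return sum(p == y for p, y in pairs) / len(pairs)

def revalidate(baseline_accuracy, recent_pairs, max_drop=0.03):
    """Return (ok, current): ok is False if accuracy dropped beyond max_drop."""
    current = accuracy(recent_pairs)
    return (baseline_accuracy - current <= max_drop), current

# Example: baseline accuracy of 0.91 recorded at deployment; performance on
# newly labeled cases has slipped, so the check fails and prompts review.
ok, current = revalidate(0.91, [(1, 1), (0, 1), (1, 1), (0, 0), (1, 0)])
if not ok:
    print(f"Revalidation failed: accuracy {current:.2f} -- review before further clinical use")
```

In practice such a check would run on a schedule against a live model and clinically adjudicated labels; the point of the sketch is only that revalidation is an ongoing comparison against a known-good baseline, not a one-time sign-off.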
Note: To read the full article, visit the Becker’s Health IT website.