Auditing Framework Provides Insights into ‘Black Box’ Medical AI


By Shania Kennedy

February 16, 2024 – Researchers from Stanford University and the University of Washington have developed an auditing framework designed to shed light on the ‘black box’ decision-making processes of healthcare artificial intelligence (AI) models.

The ‘black box’ problem is a persistent issue in which users of an AI tool cannot see how it makes decisions. Because the system’s inner workings are invisible, users often have a harder time trusting and accepting the model’s outputs. This lack of trust is a major barrier to AI implementation in healthcare and has led stakeholders to push for greater explainability in these tools.

To that end, the researchers set out to develop an auditing approach that reveals how these models arrive at their inferences. The framework combines human expertise with generative AI to assess classifiers, algorithms that categorize data inputs.

To test the approach, the research team studied five dermatology AI classifiers, tasking them with characterizing images of skin lesions as either “likely benign” or “likely malignant.” From there, trained generative AI models paired with each classifier were used to create modified lesion images designed to appear either “more benign” or “more malignant” to each classifier. Then, two dermatologists...
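The counterfactual step at the heart of the audit can be illustrated with a toy sketch. The classifier, scoring rule, and random-search perturbation below are hypothetical stand-ins, not the researchers' trained dermatology models or generative networks; the sketch only shows the general idea of nudging an image until a classifier scores it as "more benign" or "more malignant" and then comparing the before-and-after scores.

```python
# Illustrative sketch only: a toy stand-in for the auditing idea described above,
# not the Stanford/UW implementation. The real study paired trained generative
# models with five dermatology classifiers; here a random-search perturbation
# plays the "generative" role and the classifier is a hypothetical scoring function.
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(image: np.ndarray) -> float:
    """Return a hypothetical probability that a lesion image is malignant.

    Stands in for a trained dermatology AI classifier; it scores darker,
    higher-contrast images as more 'likely malignant'.
    """
    darkness = 1.0 - image.mean()
    contrast = image.std()
    logit = 4.0 * darkness + 3.0 * contrast - 2.0
    return 1.0 / (1.0 + np.exp(-logit))

def counterfactual(image: np.ndarray, target: str, steps: int = 200) -> np.ndarray:
    """Nudge the image so the classifier sees it as 'more benign' or 'more malignant'.

    A crude random-search substitute for the generative models used in the study.
    """
    sign = 1.0 if target == "more malignant" else -1.0
    current = image.copy()
    best_score = toy_classifier(current)
    for _ in range(steps):
        candidate = np.clip(current + rng.normal(0, 0.02, size=image.shape), 0.0, 1.0)
        score = toy_classifier(candidate)
        if sign * (score - best_score) > 0:  # keep changes that move the score the intended way
            current, best_score = candidate, score
    return current

# Audit step: compare the classifier's score on the original and modified images.
lesion = rng.uniform(0.3, 0.7, size=(64, 64))
for target in ("more benign", "more malignant"):
    modified = counterfactual(lesion, target)
    print(f"{target}: {toy_classifier(lesion):.2f} -> {toy_classifier(modified):.2f}")
```

In the study itself, the modified images produced this way were then reviewed by clinicians, whose judgments about what changed help reveal which visual features each classifier is actually relying on.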
