Background
As artificial intelligence (AI) becomes more common in health care, ensuring these models are accurate and fair for all patient groups is crucial. Our project tackles a key question: how can health care providers trust that AI models will work effectively for their specific patient populations?
Project Description and Objectives
Our project aims to develop an innovative, ethical, and scalable approach to assessing and refining AI models for local patient populations. We will create a framework and toolkit for responsible AI development in health care, incorporating performance monitoring and correction mechanisms to ensure AI models are locally optimized. Initially, we will focus on skin cancer detection in a primary care setting as a proof of concept. However, our approach is designed to be applicable across various health conditions and specialties.
Our objective is to develop and validate a broadly applicable method for AI model evaluation, refinement, and maintenance that enhances provider trust and adoption. By doing so, we aim to ensure that AI tools are accurate, fair, and trustworthy, ultimately improving patient outcomes across diverse communities.
Research Methods
Our research methods will leverage advanced data processing and the ethical use of local population data to enhance AI models, making them more representative and effective for specific communities. We will begin by collaborating closely with community stakeholders, including primary care physicians, specialists, patient advocates, and other key representatives. This collaboration will help us develop an ethical framework to guide data collection, governance, technology development, validation, and implementation.
We will develop methods for extracting and integrating clinical data to establish a robust “local ground truth” for model evaluation. Our platform will incorporate continuous monitoring of model performance and adaptation mechanisms to create locally optimized models. Additionally, we will create innovative ways to quantify and communicate model uncertainty to clinicians and other stakeholders, thereby improving trust and adoption.
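The monitoring and uncertainty ideas above lend themselves to a concrete illustration. Below is a minimal Python sketch, not our actual platform, showing one way to score a deployed classifier against a locally curated ground truth, stratified by patient subgroup, and to express per-case uncertainty as predictive entropy. The subgroup labels, the 0.80 AUROC floor, and the synthetic data are illustrative assumptions, not project specifications.

```python
"""Illustrative sketch: subgroup-level performance monitoring against a
local ground truth, plus a simple per-case uncertainty measure."""
import numpy as np
from sklearn.metrics import roc_auc_score


def subgroup_auroc(y_true, y_score, groups):
    """Compute AUROC separately for each patient subgroup."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            results[g] = None  # AUROC is undefined without both classes
        else:
            results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results


def flag_underperforming(results, floor=0.80):
    """Return subgroups whose AUROC falls below a (hypothetical) local floor."""
    return [g for g, auc in results.items() if auc is not None and auc < floor]


def predictive_entropy(probs):
    """Shannon entropy of predicted class probabilities: one simple way to
    summarize per-case model uncertainty for clinicians."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    y_true = rng.integers(0, 2, n)                    # synthetic local labels
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, n), 0, 1)
    groups = rng.choice(["clinic_A", "clinic_B"], n)  # illustrative strata

    per_group = subgroup_auroc(y_true, y_score, groups)
    print("AUROC by subgroup:", per_group)
    print("Flagged for review:", flag_underperforming(per_group))

    probs = np.column_stack([1 - y_score, y_score])
    print("Mean predictive entropy:", predictive_entropy(probs).mean())
```

In practice, checks like these would run continuously as new locally labeled cases accrue, and flagged subgroups would trigger the adaptation mechanisms described above.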
Our interdisciplinary team, with expertise in health disparities research, AI, health informatics, data science, and clinical practice, is well positioned to tackle the complex challenges of creating reliable, locally optimized AI models for health care. This collaborative approach will mitigate the inherent risks of such an ambitious and transformative agenda. Ultimately, our work has the potential to transform the landscape of AI use in health care, ensuring these powerful tools remain accurate, fair, and trustworthy as they support critical medical decisions across diverse populations.
Project Team
Assistant Dean for Health Product Innovation, Dell Medical School; Managing Director, CoLab at Dell Med; Associate Professor, Medical Education
Director of Research & Education, Office of Health Equity, Dell Medical School
Director of Health Equity Strategy & Transformation, Office of Health Equity, Dell Medical School
Chief Data Officer; Assistant Vice Provost and Director of Institutional Reporting, Research, Information and Surveys, UT Austin
Professor, Ernest Cockrell, Jr. Memorial Chair in Engineering, Cockrell School of Engineering, Department of Aerospace Engineering and Engineering Mechanics
Area Head for Virtual Production; Assistant Professor of Practice, Moody College of Communication