How the IC² Institute is Advancing Responsible, Human-Complementary AI in Health Care
October 13, 2025
With three major initiatives underway — and more in the works — the IC² Institute is doubling down on its efforts to advance the responsible, ethical development of AI in health care. The three IC² initiatives, described below, are deepening our collective understanding of how AI is actually being used and perceived, how AI is affecting the doctor-patient relationship and clinical decision-making, and what AI could mean for the future of work.
- AI for Safety-Net Providers – The Institute recently conducted a survey of doctors, nurses and other practitioners working in safety-net settings. The 230 respondents, who work in both urban and rural areas across the state, shared their views on AI and identified factors that would influence patient and provider trust in AI applications.
- AI Projects to Improve Health Outcomes – In partnership with Dell Medical School, IC² is sponsoring four interdisciplinary research projects that promise to improve health outcomes using AI. Three of the projects aim to increase diagnostic accuracy and speed — in colorectal disease, epilepsy, and skin cancer — and the fourth is focused on predicting cardiovascular and brain health among tweens and teens.
- AI in Clinical Practice – This fall, IC² is wrapping up a series of in-depth interviews with doctors who are working at the forefront of AI. The interviewees come from top medical schools and practices throughout the country. The doctors are sharing how, specifically, they are incorporating AI into their workflows, how AI is impacting the doctor-patient relationship, and where they draw the line between human and machine capabilities.
Taken together, the three initiatives will make significant contributions to how researchers and clinicians understand the exploding influence and use of AI in health care. A recent study conducted by the American Medical Association found that 66% of doctors reported using AI in their practice in 2024 – up significantly from the 38% who reported using AI in 2023. Most are using AI for administrative tasks such as documenting billing codes and visit notes; a smaller number are using it to generate patient-facing health recommendations and deploy patient-facing chatbots.
While the use of AI in health care is rapidly expanding, there is little in the way of governance, leading many doctors to feel that they are practicing without institutional guidance. IC² Institute Executive Director S. Craig Watkins explains, “To date, there’s no real governance mechanism. Not even the best medical schools in the world have a clear governance framework for how they will integrate AI into the clinical workflow and decision-making.”
As IC² continues to engage in deep conversations with frontline clinicians, it is illuminating the path for practitioners who are experimenting with AI and fellow thought leaders who are wrestling with its ethical, social and economic implications. And, as it rolls out actionable insights and recommendations, the Institute is helping to chart a responsible future for AI in medicine.
Mitigating Bias | Addressing Social Context

Dr. Joga Ivatury uses a joystick to guide an endoscope during a colonoscopy.
The Institute is examining AI in health care from several distinct angles — one of these is bias mitigation. Despite AI's enormous potential to improve health care, bias in AI systems can perpetuate, or even worsen, health disparities. For example, a predictive algorithm may perform differently in a rural setting than in an urban one. Similarly, a model that has been trained on data from an insured population may be less reliable when applied to uninsured populations.
As a result, Watkins explains, we must be attentive to social context when deploying AI solutions. He adds, “Developers and clinicians must be vigilant at every step along the AI deployment process in order to mitigate things such as hallucinations in large language models, AI-generated solutions that lack clinical validation, and overreliance on AI when making clinical decisions.”
To further probe these issues, IC² has challenged the four IC²/Dell Med-sponsored research teams to submit monthly reports detailing where and how they detect deficiencies in the design and deployment of AI and how best to mitigate them. Once the teams conclude their work, the Institute will outline a set of best practices for reducing the risks posed by AI in health care.
AI in Safety-Net Settings
IC²’s second area of focus is the use of AI in safety-net settings — urban and rural health centers where practitioners serve underserved and vulnerable populations. AI has the potential to be a great equalizer: freeing up doctors to see more patients, making patient education more accessible, predicting which patients might be at risk for certain conditions, and expanding access to health care services.
But AI’s potential may not be realized. Will community health centers be able to afford the latest technologies and develop the capacity to use them effectively? Will inadequate infrastructure prevent AI adoption in rural areas? Can AI technologies adapt to the language and cultural differences that may exist in under-resourced areas? These are the types of questions IC² hopes to answer with its research.
The Future of Work: Human + Machine
The final lens through which IC² is examining AI is its impact on clinical decision-making and, more broadly, the future of work. While much of the current health AI discussion centers on back-office efficiencies, the Institute is focusing on a critical, emerging area: the impact of AI on clinical decision-making, diagnosis, and the doctor-patient relationship.
The adoption of generative AI systems is an essential aspect of the Institute’s current and future work. Watkins notes, “We know that Gen AI will impact clinical decision-making — we’re exploring how and to what effect.”
As the IC² researchers engage with clinicians, they are examining how doctors are drawing the line between tasks that should remain exclusively human versus tasks that can be handed over to AI. They are also looking for empirical evidence of AI leading to the development of new skills and types of human expertise.
Watkins believes that AI may, in fact, give rise to a novel set of human skills and tasks that will be of service to humanity — and could make health care professionals even more valuable in an AI-augmented clinical environment.
He explains, “Will AI create an apocalyptic, jobless future? That’s a common narrative, but I don’t subscribe to it. There’s a whole range of skills and forms of intelligence that humans have that are difficult to assign to an algorithm.”
What’s Ahead
The Institute will continue to carry out its health AI research through a new graduate student research fellowship, collaborative research with clinician-researchers at Dell Med and beyond, and a future call for proposals from UT faculty. (Details for the faculty CFP will be announced in early 2026.)
Already, the IC² research is helping to shape discourse regarding the future of AI in health care. Down the road, Watkins hopes that the Institute’s insights and recommendations will inform an AI/ethics curriculum in medical schools.
And Watkins is open to applying IC²’s investigative methods and insights to other spheres. He explains:
“We’re exploring responsible development of AI in a way that complements human work and expands opportunities for human intelligence. Health care is just one laboratory for exploring this — and there’s no domain where the stakes are higher. But there are other laboratories, other high-stakes environments, including the educational system and the financial system, that will also require rigorous and sustained research.”
__________________________________________________________________________________________
PARTNER WITH US
Together with our partners, the IC² Institute is investigating how to develop and deploy ethical, human-complementary AI systems that will transform health care for the better. We welcome policymakers, funders, health care professionals, faculty and students to join us in this mission!
Learn more about our research.