PROJECT DESCRIPTION AND OBJECTIVES
With funding from the Episcopal Health Foundation, researchers sought to understand how safety-net providers in rural and underserved areas perceive health AI. The research centered on three core questions.
WHY IS THIS RESEARCH IMPORTANT?
Health AI promises the delivery of care that is more predictive, preventive, and personalized. But little research or experimentation in Health AI has focused on safety-net providers. We believe this is a missed opportunity, one that perpetuates a core concern about AI: the degree to which its design excludes participation from diverse populations and perspectives.
RESEARCH METHODS
The IC² Institute surveyed health care practitioners in rural and urban areas across Texas. The map depicts the locations of survey respondents.
In addition to a literature review and substantial interviews with thought leaders, researchers conducted a survey of clinicians and health care administrators across Texas. The survey posed questions aimed at discerning participants’ understanding of AI and machine learning and evaluating participants’ perceptions of the usefulness of AI to their work. A total of 229 practitioners, from cities and towns all across Texas, completed the survey.
FINDINGS
See the full report for detailed findings.
Nearly three-quarters of survey respondents, 73%, were either somewhat or very familiar with AI. Just over a quarter, 27%, reported that they were either somewhat or completely unfamiliar with AI.
About half of respondents, 48%, reported that they somewhat or completely trust AI. The remaining 52% were neutral, distrustful, or completely distrustful, with 4% reporting that they completely distrust AI. As noted in the paper, a low level of trust in AI is a major barrier to AI adoption in safety-net health care settings. Of note: a high degree of trust was associated with survey participants who were more familiar with AI technologies in health care and who perceived themselves to have a better understanding of AI and machine learning in general. Providers with low trust cited a need to build AI experience and exposure, and they raised concerns about data bias and privacy/security.
Just under half of respondents, 45%, believed that their patients would be responsive to AI tools being used in their care, compared with 30% who believed their patients would be somewhat or very unresponsive to AI and 31% who said they were neutral. Practitioners noted several primary barriers for patients typically seen in safety-net settings: technology access, technological literacy, and cultural differences such as language.
When it comes to integrating AI technology into health care, more than half of respondents, 57%, were neutral about or not very confident in their organization's ability to integrate AI into their workflow. Respondents cited a number of challenges: concerns about data privacy/security; insufficient training/knowledge; inadequate funding for AI implementation; inadequate technical infrastructure; and resistance to change among providers and patients.
When asked what they perceived to be the top benefit of AI in safety-net service settings, participants most often cited three benefits: streamlining of administrative tasks, enhanced patient outcomes, and improved diagnostic accuracy.
ADDITIONAL INSIGHTS: TRUST AND HUMAN-CENTERED CARE
Through in-depth interviews and open-ended survey questions, practitioners frequently raised the issue of trust and identified a handful of factors that would enhance trust in Health AI: mitigating bias; AI literacy for practitioners; policy/governance to ensure ethical development of AI; evidence that AI helps deliver more effective and efficient health care; extending care to patients in underserved areas by way of remote patient monitoring and tele-health; and successful navigation of linguistic and cultural differences.
A common theme across their responses is the importance of keeping human-centered care at the core of health care. Practitioners are adamant that health care must remain deeply empathetic and responsive to the unique clinical and social needs of each patient.
RECOMMENDATIONS
Based on our findings, we have outlined four recommendations to guide leaders in developing a holistic and strategic approach to AI that will improve care for rural and safety-net populations. See the full report for an expanded version of the recommendations, including action steps.