During the COVID-19 pandemic, the disproportionate effects of the virus and its most severe outcomes on minority communities have cast a spotlight on racial bias in health care, raising questions about how health care providers can reduce bias and improve health equity for all patients.
Racial bias is not always as obvious as one might think; sometimes it is as subtle as the language used to describe a patient during a visit.
In a study published online on Jan. 19 in Health Affairs, researchers at the University of Chicago Medicine used machine learning to search the electronic health records of over 18,000 adult patients, including over 40,000 history and physical notes, for sentences containing a negative patient descriptor such as “resistant” or “noncompliant.”
After controlling for socioeconomic and health characteristics, the study found that Black patients were 2.54 times as likely as white patients to have at least one negative descriptor in their records. Other groups more likely to have negative descriptors included patients on government insurance such as Medicare or Medicaid and those who were unmarried.
The study was inspired by a desire to better understand the ways in which biased language may affect patient care.
“The language we use can exacerbate existing health disparities,” said Michael Sun, first author on the study and a third-year student at the University of Chicago Pritzker School of Medicine. “Patients can see these notes and if they see that they’ve been identified as being defensive or angry, they might not come back to see that provider again. Or, they might not feel empowered to speak up regarding their health care needs out of a fear of being viewed negatively by their providers.”
In the study, the researchers used a machine learning algorithm to comb electronic health records collected at an urban academic medical center from January 2019 through October 2020. They defined a list of negative descriptors that included terms such as aggressive, combative, defensive, hysterical and resistant. Using natural language processing techniques, they were able to parse out the contexts in which these words were used to negatively describe patients or their behaviors. Compared with demographically matched white patients, Black patients were more than twice as likely to be described in their charts using some of these negative terms.
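The core of such a descriptor search can be sketched in greatly simplified form. The study itself relied on a trained natural language processing model to decide whether a descriptor actually characterizes the patient negatively; the toy version below (an illustration, not the authors' method) substitutes simple keyword matching with a small negation window, so that phrases like "not combative" are not counted. The descriptor and negation-cue lists here are illustrative, not the study's full lists.

```python
import re

# Illustrative subset of negative descriptors drawn from the article;
# the study's full list was longer.
NEGATIVE_DESCRIPTORS = {
    "aggressive", "combative", "defensive",
    "hysterical", "resistant", "noncompliant",
}

# Simple negation cues -- a crude stand-in for the study's NLP model,
# which distinguished negative usage from negated usage.
NEGATION_CUES = {"not", "no", "denies", "without", "never"}

def flag_negative_descriptors(note: str) -> list:
    """Return descriptors used non-negated in a clinical note.

    A descriptor preceded by a negation cue within a 3-token window
    (e.g. 'not combative') is skipped.
    """
    tokens = re.findall(r"[a-z']+", note.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok in NEGATIVE_DESCRIPTORS:
            window = tokens[max(0, i - 3):i]
            if not any(cue in window for cue in NEGATION_CUES):
                hits.append(tok)
    return hits

print(flag_negative_descriptors("Patient was combative and resistant to exam."))
# Negated usage is excluded:
print(flag_negative_descriptors("Patient is alert, not combative."))
```

A real pipeline would also need to handle misspellings, descriptors applied to someone other than the patient (e.g. a family member), and longer-range negation, which is why the researchers used a machine learning approach rather than keyword matching.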
“I was surprised to find that some words I expected to see didn’t show up very often—words such as ‘difficult,’” said Sun. “That shows that providers are coding our language better than we think. And, interestingly, we saw that there was a relative decrease in the use of negative terms after the COVID-19 pandemic began. That raises new questions about how the pandemic affected medical practice and how providers have interacted with and cared for patients during this time.”
The researchers hope that this study will help illuminate areas that are easily infiltrated by unconscious bias and help encourage health care providers to reconsider how they think and talk about their patients.
“Everyday bias is happening in our world, whether people know it or not,” said Sun. “This work helps to show what biases exist, even within institutions and professions that are striving to be good. These biases affect patient care and they matter to the patients. As physicians, we come from a position of privilege and power. It’s our responsibility to advocate for our patients and enact change within our own organizations.”
Additional authors include Tomasz Oliwa, a senior scientific software engineer in the Center for Research Informatics at UChicago, and Prof. Monica E. Peek and Asst. Prof. Elizabeth L. Tung of UChicago Medicine.
Citation: “Negative Patient Descriptors: Documenting Racial Bias In The Electronic Health Record,” Sun et al., Health Affairs, Jan. 19, 2022.
Funding: National Heart, Lung and Blood Institute, Chicago Center for Diabetes Translation Research, National Institute of Diabetes and Digestive and Kidney Diseases