New UChicago initiative aims to improve health-care algorithms for underrepresented groups

Chicago Booth researchers release free guide to help identify, correct algorithmic bias

Algorithms have become increasingly pervasive as organizations in both the public and private sectors have sought to automate tasks that once required human intelligence. From facial recognition to creditworthiness decisions to medical assessments, decision-makers rely on algorithms to inform and sharpen their own judgment.

But the use of algorithms in so many domains has been accompanied by equally pervasive concerns that those algorithms may not produce equitable outcomes. What if algorithms produce results that are biased, intentionally or unintentionally, against a subset of people, particularly underrepresented groups such as women and people of color? Applied in contexts with enormous human consequences, and at tremendous scale, a biased algorithm could do significant harm.

Researchers with Chicago Booth’s Center for Applied Artificial Intelligence (CAAI) have seen the kind of harm even well-intentioned algorithms can produce. In a 2019 study, Sendhil Mullainathan—the Roman Family University Professor of Computation and Behavioral Science and the center’s faculty director—found that an algorithm used to evaluate millions of patients across the United States for enrollment in care-management programs was biased against Black patients, excluding many who should have qualified for those programs. Mullainathan co-authored the study with Ziad Obermeyer of the University of California, Berkeley; Brian Powers of Boston’s Brigham and Women’s Hospital; and Christine Vogeli of Partners HealthCare.

The publication of that research in the journal Science generated significant engagement from policymakers and health-care organizations: New York state, for instance, launched an investigation of one major health system over its use of the algorithm. Building on this impact, the CAAI established the Algorithmic Bias Initiative, with the goal of helping the creators, users, and regulators of health-care algorithms apply the insights of research to their own work.

Thanks in part to the visibility of the Science study, interest from the health-care industry has been strong. “We didn’t really have to do much outreach,” said Emily Bembeneck, the director of the CAAI. “People came to us.”

Since its founding in November 2019, the initiative has worked with individual health-care systems and other groups to help them tackle such tasks as taking stock of the algorithms they use, evaluating their organizational structures with algorithmic management in mind, and scrutinizing algorithms themselves for bias. When algorithms go wrong, the culprit is often label-choice bias, the same shortcoming identified in the Science study: the algorithm predicts its target variable accurately, but that variable is a poor proxy for the decision it’s being used to make. In the Science study, for example, the algorithm accurately predicted patients’ future health-care costs, but because less is spent on Black patients at the same level of health need, cost understated which patients most needed care.

Health care makes a good setting for studying and combating algorithmic bias, Mullainathan said, because the stakes involved—personal health, comfort and pain, even life and death—are so high, because algorithms are widely used throughout the industry, and because “health care is a domain notorious for inequities.”

“Historically disadvantaged groups have struggled with systemic and individual racism [in health care] for decades,” Mullainathan said. “Against this backdrop, we are at a crossroads. Algorithms can do a lot of harm or good. They will either reify (or even worsen) existing biases, or they can—if carefully built—help to fix the inequities in health care.”

A playbook for preventing bias

To scale up the help it can provide to organizations working in health care, in June the initiative released the Algorithmic Bias Playbook, an action-oriented guide synthesizing many of the insights the initiative has drawn both from research and from experience in the field. The free playbook offers a framework for identifying, correcting, and preventing algorithmic bias in four steps:

  1. Creating an inventory of all the algorithms being used by a given organization
  2. Screening the algorithms for bias
  3. Retraining or suspending the use of biased algorithms
  4. Establishing organizational structures to prevent future bias

The playbook guides users through every step, breaking each down into discrete actions and offering practical advice and examples drawn from the CAAI’s work with health-care organizations. Although its specific focus is health care, “the lessons we’ve learned are very general,” the authors write. “We have applied them in follow-on work in financial technology, criminal justice, and a range of other fields.”

Bembeneck is one of the authors of the playbook, along with Mullainathan; Obermeyer; Rebecca Nissan of ideas42, a nonprofit that focuses on applying behavioral science insights; Michael Stern of the health-care startup Forward; and Stephanie Eaneff of Woebot Health, which provides virtual mental health resources. Bembeneck said that the playbook can serve as a blueprint not only for organizations hoping to improve their use of algorithms, but also for future regulation of algorithms: “We think one of the best ways to address algorithmic bias is better regulation.”

To further encourage the implementation of best practices for algorithmic management in health care, the CAAI will co-sponsor a conference with Booth’s Healthcare Initiative focused on helping those working in health care to take concrete steps toward eliminating algorithmic bias. Taking place in Chicago and online in early spring, the conference will bring together policymakers, health-care providers, payers, providers of A.I. software, and technical experts from outside the health-care industry—groups that may not be in regular contact with each other but could nonetheless benefit from an opportunity for dialogue.

Matthew J. Notowidigdo, professor of economics and co-director of the Healthcare Initiative, said that he’s eager to connect health-care experts with those who have backgrounds in machine learning. Health care poses some unique challenges when it comes to algorithms, he said—for instance, privacy restrictions may limit how data can be shared, including with the designers of algorithms—but “I’m of the belief that there’s a lot that health care can learn from other settings” where algorithms are used.

The conference will feature user stories from organizations that have put the Algorithmic Bias Playbook to use. Panels will focus on topics such as data sharing and building teams for algorithmic management. The conference’s organizers also emphasize the importance of allowing attendees to network, particularly given their diverse professional backgrounds, so they have the opportunity to share knowledge and build connections that can help them take action within their organizations.

The playbook and conference, as well as the ongoing support the CAAI offers through the Algorithmic Bias Initiative, reflect rising concern in parts of the health-care industry about how algorithms are used. Bembeneck said that her experience with the Algorithmic Bias Initiative has shown her that many health-care organizations—and, consequently, many of the vendors that supply algorithmic products—are acutely aware of the importance of equitable A.I.

“Not only because they don’t want to be on the wrong side of the law,” she said, “but there’s a keen desire from everyone we’ve talked to that they want to give better health care.”

—This story was first published by the Booth School of Business.