From identifying potential job candidates to predicting future criminals, computer algorithms have taken on the role of making complex decisions in place of humans. Many social sectors, such as the healthcare industry, have invested in the development and expansion of computer software in an effort to reduce operating costs and minimize the influence of human bias.
One of the most widespread applications of computer software in medicine is the “healthcare risk management algorithm,” an example of a risk-prediction system that is used to guide the treatment of millions of Americans.
This system aids healthcare and insurance companies in identifying which patients would benefit from “high-risk care management programs.” These programs seek to improve the care of individuals with more complex healthcare needs, such as those with chronic illnesses, by providing them with more specialized care and individualized resources.
The goal of closely monitoring these patients is ultimately to prevent serious complications, thereby avoiding more expensive healthcare costs down the road. Given the high upfront cost of these programs, an effective computer algorithm is critical to ensuring that resources are devoted to the people who will benefit the most.
However, research led by Ziad Obermeyer at the University of California, Berkeley reveals that this widely used risk-prediction software displays the same racial and socio-economic biases it was created to overcome. This study, which utilized self-reported healthcare records from a major hospital, found that the algorithm identified healthier white patients as candidates for high-risk care management programs over less healthy Black patients because white patients were more costly on average.
This bias arises because the algorithm relies on the assumption that healthcare cost is directly correlated with patient illness. Since sicker patients are more likely to require more frequent and expensive care, cost is used as the main factor in identifying candidates who would benefit the most from more specialized treatment programs.
However, this simplification holds only if medical care is equally accessible to all people. The extensive literature documenting racial and ethnic disparities within the US healthcare system consistently cites socio-economic and racial status as determinants of the quality and amount of care a patient receives. Black patients therefore cost less on average not because they are healthier overall, but because they receive less medical care.
The researchers found that if the algorithm were instead modified so that it no longer used cost as an indicator of treatment need, the percentage of Black patients receiving additional specialized care would increase from 17.7% to 46.5%, closing a significant gap in treatment among racial groups.
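The effect of this design choice can be illustrated with a minimal sketch (hypothetical patients and numbers, not the study's actual model or data): ranking the same pool of patients by predicted cost versus by burden of illness selects different people for the program.

```python
# Hypothetical patient records: (id, annual cost in dollars, chronic conditions).
# Patients B and D are the sickest but, having received less care, cost less.
patients = [
    ("A", 12000, 1),
    ("B", 4000, 4),
    ("C", 9000, 2),
    ("D", 3000, 5),
]

def top_k(records, key, k=2):
    """Return the ids of the k patients ranked highest by the given proxy."""
    return [pid for pid, *_ in sorted(records, key=key, reverse=True)[:k]]

# Proxy 1: healthcare cost, the label the studied algorithm predicted.
by_cost = top_k(patients, key=lambda p: p[1])

# Proxy 2: a direct measure of illness, e.g. count of chronic conditions.
by_health = top_k(patients, key=lambda p: p[2])

print(by_cost)    # ['A', 'C'] — the biggest spenders are flagged
print(by_health)  # ['D', 'B'] — the sickest patients are flagged
```

Under the cost proxy, the two sickest patients are passed over entirely; swapping the prediction target changes who is enrolled without touching any other part of the pipeline, which is the kind of correction the researchers describe.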
While Obermeyer’s research represents a major achievement in identifying the inadvertent bias of widely used healthcare software, further investigations are likely to be limited. The issue is that many of these large healthcare algorithms are proprietary, which significantly restricts the amount of information that independent researchers can access. Without greater access to these algorithms, it would be extremely difficult for researchers to understand the exact source of these biases.
This inadvertent bias not only reveals a significant flaw evident in widely used healthcare software, but could be linked to the larger racial and socioeconomic disparities in the US healthcare system. Other research has linked the inequality in healthcare access to factors such as lack of transportation, doctors’ unconscious attitudes towards patients of color, and a distrust in the healthcare system.
A study published in Social Science & Medicine revealed that socio-demographic characteristics influence how physicians view their patients. Researchers found that Black patients were viewed more negatively than white patients on a number of factors, including intelligence, personality, abilities, and likelihood of adhering to medical advice.
These judgments, often automatic and unconscious, lead many people of color to be wary of healthcare providers. This lack of trust not only strains doctor-patient relationships but ultimately results in lower-quality patient care. Combined with practical barriers, such as time constraints and distance from medical facilities, it contributes to gaps in healthcare access even among individuals with insurance.
As our society becomes increasingly dependent on technology to make complex ethical decisions, many experts fear that passively accepting automated systems as completely objective will further the socioeconomic and racial disparities embedded within the healthcare system.
Ruha Benjamin, Associate Professor of African American Studies at Princeton University, warns against the danger of these automated systems that "hide, speed and deepen racial discrimination behind a veneer of technical neutrality."
However, it is important not to let the flaws of the current risk-prediction algorithm discourage the use of technological systems in healthcare. The reality is that it would be impossible to manage millions of patient records without the extensive use of computer algorithms. Rather than dismissing this relatively new technology because of its current flaws, software developers are working to correct the algorithm's programmed assumptions in order to reduce disparities within the healthcare system.