Most film and TV fans are happy when their favorite streaming service recommends a good movie based on their previous viewing habits. Welcome to the age of algorithms. Unfortunately, although these recommendation models may surface entertaining options you hadn’t considered, there’s a price to be paid: they usually reinforce a narrow view of the world of arts and entertainment. While that may not matter much to the average movie fan, it can pose real problems when such algorithms are used to suggest news stories on current events, shape your opinions on healthcare issues, or influence the decisions of medical professionals.
Evidence strongly suggests that social media sites like Facebook reinforce users’ prejudices and belief systems by feeding them stories that match their existing views. Users who lean liberal tend to be served content with a liberal slant, while those with a conservative perspective are fed stories that agree with theirs, in effect creating an echo chamber that’s sometimes referred to as a “filter bubble.”
Independent studies examining whether this filter bubble actually exists, based on an analysis of data from Facebook users, were recently published in Science and Nature. They found that “the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political ‘filter bubbles’ that reinforced people’s worldviews.” On the other hand, exposure to this echo chamber did not appear to increase polarization among users.
Healthcare’s bias
As we have pointed out in the past, the data sets being used by many healthcare organizations are also biased and likely suffer from the same reinforcement problem. This has resulted in a long list of disparities that have impacted patient care. In 2022, BMJ Health & Care Informatics published a review outlining several ways in which algorithms can be biased, thereby reinforcing prejudices against certain segments of the population, including people of color, women, and persons in lower socioeconomic groups.
One of the most glaring examples of such prejudice was documented by Ziad Obermeyer and his colleagues at the School of Public Health, University of California, Berkeley, and elsewhere. They identified over 43,000 white and more than 6,000 Black patients who were part of risk-based contracts that determined their eligibility for insured medical care. They found that at each risk score, Black patients were sicker than their white counterparts, based on their signs and symptoms. However, the commercial data set used to determine their eligibility to receive care did not recognize the greater disease burden in Black patients because it assigned risk scores based on total healthcare costs accrued in a year.
It doesn’t take a data scientist with an in-depth knowledge of algorithms to recognize the problem with this reasoning. As we pointed out in our BMJ HCI review: “Using this metric as a proxy for their medical need was flawed because the lower cost among Blacks may have been due to less access to care, which in turn resulted from their distrust of the healthcare system and direct racial discrimination from providers.”
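A minimal, hypothetical simulation (our own illustration, not the actual commercial algorithm or the study’s data; the group sizes, condition counts, and cost figures are invented) shows how a cost proxy misranks need when one group’s costs are suppressed by reduced access to care:

```python
import random
import statistics

random.seed(0)

def make_patients(group, n, cost_per_condition):
    """Hypothetical patients: both groups share the same disease-burden
    distribution, but accrue different annual costs per chronic condition."""
    patients = []
    for _ in range(n):
        conditions = random.randint(1, 6)  # true disease burden
        patients.append({"group": group,
                         "conditions": conditions,
                         "cost": conditions * cost_per_condition})
    return patients

# Illustrative assumption: Group B accrues 30% less cost per condition --
# not because its members are healthier, but because they access less care.
patients = make_patients("A", 1000, 1000) + make_patients("B", 1000, 700)

# Cost-based "risk score": enroll the top quartile of spenders
# in the care-management program.
cutoff = sorted(p["cost"] for p in patients)[int(0.75 * len(patients))]
enrolled = [p for p in patients if p["cost"] >= cutoff]

# At the same dollar cutoff, enrolled Group B patients are sicker on average:
# they needed more chronic conditions to cross the same spending threshold.
sick_a = statistics.mean(p["conditions"] for p in enrolled if p["group"] == "A")
sick_b = statistics.mean(p["conditions"] for p in enrolled if p["group"] == "B")
print(f"mean conditions among enrolled: A={sick_a:.2f}, B={sick_b:.2f}")
```

Because spending, not sickness, sets the cutoff, the lower-cost group is both under-enrolled and sicker at every score — the same pattern the researchers observed.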
Unfortunately, this reinforcement of social stereotypes persists in healthcare. On a more positive note, a long list of stakeholders is chipping away at these biases. The Coalition for Health AI, which was created to advocate for equitable, representative AI, includes many well-respected healthcare organizations, including Mayo Clinic, Mass General Brigham, Providence, AdventHealth, University of California Health System, and University of North Carolina Health, among others.
These healthcare providers, in conjunction with several high-profile technology companies and federal agencies, can turn the tide, which in turn will create a more balanced, fair ecosystem.
This piece was written by John Halamka, MD, President, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.