Addressing inherent racial and gender bias in artificial intelligence

(Riddhi Jani)

Artificial intelligence (AI) uses algorithms to find associations in data that can be used to make predictions. 
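In the simplest terms, that pattern is "fit, then predict." A toy sketch of it (the numbers are invented, not real patient data):

```python
# Toy illustration of the basic AI pattern: learn an association from
# data, then use it to predict. All values here are made up.
from sklearn.linear_model import LogisticRegression

X = [[18], [25], [47], [52], [60], [71]]   # hypothetical patient ages
y = [0, 0, 0, 1, 1, 1]                     # hypothetical outcome labels

model = LogisticRegression().fit(X, y)     # find the association
print(model.predict([[30], [65]]))         # predict for new patients
```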

While there is enormous potential for AI to improve the speed and effectiveness of diagnosis and treatment, its use in healthcare has led to unintended consequences, such as systems that consistently underestimate the health risks of Black patients, according to research published in Science.

Racial and gender bias enters AI algorithms through datasets that do not accurately reflect the populations they are meant to represent, and through the unconscious biases held by researchers and clinicians. When an AI model is trained on data from marginalized groups who have received a lower standard of care, that discrimination is built into the model.
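A minimal simulated sketch of one such mechanism, label bias: if historical spending is used as a proxy for health need, and spending has been lower for an underserved group at equal need, then a model trained on that proxy learns to score the group as lower risk. Every number and name below is invented for illustration, not drawn from any study.

```python
# Simulated example of label bias. True need is identical across groups,
# but the historical label (spending) reflects unequal access to care.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)        # 0 = well-served, 1 = underserved
need = rng.normal(50, 10, n)         # true health need: same distribution

# Assumed disparity baked into the label: less is spent per unit of need
# on the underserved group.
spending = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 2, n)

# Features a real system might see: utilization records that track
# spending rather than need.
features = (spending + rng.normal(0, 2, n)).reshape(-1, 1)

model = LinearRegression().fit(features, spending)
risk_score = model.predict(features)

# At the same true need, the underserved group receives a lower "risk"
# score: the discrimination is learned from the label, not added later.
mask = np.abs(need - 60) < 1
print("score, well-served at need ~60:", risk_score[mask & (group == 0)].mean())
print("score, underserved at need ~60:", risk_score[mask & (group == 1)].mean())
```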

According to Bret Nestor, a University of Toronto PhD candidate studying the use of AI in healthcare, “There are known biases in healthcare that manifest in AI — the models are a direct reflection of how society already operates, not how society aspires to operate.”

Research on the subject also found that sex and gender differences were not considered in most biomedical AI algorithms, despite many documented differences in how health conditions present across sexes and genders.

These algorithms are created by engineers and statisticians — two fields that lack gender and racial diversity. Only 26 per cent of AI and data science professionals identify as women, and they are over-represented in lower-status jobs.

“The lack of women in AI research excludes key perspectives and experiences,” says Anshikha Kumar, a fourth-year global health student and project lead at Empowering Women in Health. “AI research will not be as helpful as we hypothesize if we are unable to deliver social justice.”

Dr. Ian Stedman, an assistant professor and member of SickKids’ Research Ethics Board and CIHR Institute of Genetics Advisory Board, states that AI is not yet used as widely in Canada as in the U.S., largely because Canada’s healthcare system is publicly funded. 

Developing AI in the public sphere can offer opportunities, including the chance not to repeat past mistakes.

Calls for more accurate data collection have emerged during the pandemic. According to Dr. Nicola Bragazzi, a scientist, medical doctor, and biomathematician at York’s Laboratory for Industrial and Applied Mathematics, “COVID-19 has shown, once again, how diseases are not gender-neutral but gendered, affecting some populations in a disproportionate way.”

If data is not disaggregated by race, sex, and gender, then differences between marginalized groups and identities are hidden and may bias any model built on that data.
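A toy illustration with invented numbers: an aggregate figure can look acceptable while a smaller subgroup fares much worse, and only disaggregating reveals the gap.

```python
# Made-up numbers showing how aggregation hides a subgroup disparity.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A"] * 800 + ["B"] * 200,
    "correct": [1] * 720 + [0] * 80 + [1] * 120 + [0] * 80,
})

print("overall accuracy:", df["correct"].mean())   # 0.84 -- looks fine
print(df.groupby("group")["correct"].mean())       # A: 0.90, B: 0.60
```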

To Dr. Stedman, tackling this hinges upon “patient-centred research where doctors sit around the table with the people being impacted before they do the work to eliminate bias.”

Dr. Stedman stresses the importance of bringing the provinces together to build a Canada-wide health database that would provide a larger and more diverse dataset than provinces could generate on their own.

“Representatives of these populations and other relevant stakeholders should be actively and directly engaged in every step of the statistical process to ensure the risk of biases (gender gap, race gap, sex and gender minority gap, disability gap, etc. — overall, what I call the ‘social vulnerability gap’) is significantly reduced and AI is used in a meaningful and inclusive way,” adds Dr. Bragazzi.

Finally, transparency and accountability are crucial. 

Dr. Stedman continues, “In healthcare AI, it is incredibly important that we have, and that we enforce, high standards of development, testing, deployment, and if necessary, post-market oversight. 

“If an algorithm is going to continue to learn and improve, then we need to make sure that we stop using it to inform healthcare decisions if it has learned something that causes it to become inaccurate at making predictions for a particular population that we previously thought it capable of making.”
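One hedged sketch of what that kind of post-market check could look like, assuming audit results are already disaggregated by population; the threshold values, group names, and should_halt helper are all hypothetical:

```python
# Hypothetical post-deployment monitor: if a continually learning model
# degrades below its validated accuracy for any population, flag it so
# it stops informing healthcare decisions for that group.
from typing import Mapping

VALIDATED_ACCURACY = 0.85   # performance demonstrated at deployment
TOLERANCE = 0.05            # allowed drift before the model is pulled

def should_halt(per_group_accuracy: Mapping[str, float]) -> list[str]:
    """Return the populations for which the model is no longer fit for use."""
    floor = VALIDATED_ACCURACY - TOLERANCE
    return [g for g, acc in per_group_accuracy.items() if acc < floor]

# Invented audit results, disaggregated by population.
recent_audit = {"population_A": 0.88, "population_B": 0.74, "population_C": 0.86}
failing = should_halt(recent_audit)
if failing:
    print(f"Halt clinical use: model degraded for {failing}")
```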

A report by the Artificial Intelligence and Society Task Force, released in November 2021, recommended the development of a centralized AI research space at York, capable of attracting a diverse and multidisciplinary team.
According to Dr. Stedman, a member of the task force, “While the university has not yet committed any funding towards the realization of this space, the fact that this task force was assembled in the first place makes it seem highly likely that the university is considering a major move in this direction.”

About the Author

By Fiona Harris, Contributor
