AI in Global Health
It is no secret that the deployment of global health interventions favors high-income countries. After all, global health is dominated by them: nearly every major global health agency, like the WHO, and research funder, like the US NIH, is led by experts from high-income countries.
However, the rise of AI technology gives hope that it could help address health challenges unique to low- and middle-income countries (LMICs), provided we proceed with caution.
In recent years, a greater focus has been placed on achieving the UN’s Sustainable Development Goal (SDG) of Good Health and Well-Being through AI applications. This SDG’s targets include reducing preventable premature mortality, achieving universal health coverage, and improving testing and treatment programmes for infectious diseases1.
The UN now hosts the AI for Good Global Summit and AI for Health sessions annually to identify emerging AI applications in global health, especially for LMICs2.
However, LMICs face distinct health challenges that undermine health-related SDGs. For example, these resource-poor countries are more likely to have acute health workforce shortages, weak public health surveillance systems, and a lack of educational resources.
Addressing these challenges is especially important to accomplish the SDGs, as people living in developing countries face a higher risk of premature death and live a higher proportion of their lives in poor health3.
Thankfully, there are a number of AI initiatives moving from pilot to scale in LMICs that aim to do so. For example, Naps and Nibbles is a mobile app covering child sleep, breastfeeding, and nutrition for Indian parents4. Aero Therapeutics, developed in Ethiopia, aims to “help physicians in low-resource settings treat neonatal respiratory issues” with their affordable, sustainable devices5.
While these interventions were rolled out before the emergence of SARS-CoV-2, many AI-driven health interventions are now being rapidly deployed in response to the pandemic without appropriate safeguards6.
Inadequate ethical safeguards may leave LMICs vulnerable to biases that widen health inequities. According to Schwalbe and Wahl’s landmark Lancet review, “Artificial Intelligence and the Future of Global Health,” greater caution is needed in the development and deployment of AI-enabled interventions.
Ethical issues in AI development and deployment are greater in LMICs than in high-income countries. A number of experts have also raised concerns that some AI applications could exacerbate ethnic, socioeconomic, and gender-related inequities7.
Even within the United States, there are examples of AI exacerbating systemic inequalities. A 2016 report found that an application relying on arrest records, postal codes, and socioeconomic data to assess the likelihood of reoffending was biased against Black defendants8.
A 2019 report on a health-related application found similar biases. The application predicted how long an individual’s hospital stay would be so that patients most likely to be eligible for discharge could be moved to doctors’ priority lists. Researchers found that postal codes were the greatest determinant of length of stay9.
Predominantly Black neighborhoods were correlated with longer stays, meaning hospital resources would be diverted away from poor, Black patients and toward wealthy, white ones10.
In developing countries, the development and deployment of AI solutions will require increased attention to ethical issues. A lack of equitable access to datasets representative of the target population compounds machine learning’s tendency to acquire stereotyped biases from written text11. Cultural prejudices, such as racial bias, can also be reflected in aspects of AI design.
While a number of regulatory standards are already in place, they are not sufficient. As Schwalbe and Wahl note, most AI studies report approval by institutional review boards, but few describe how they address the ethical concerns raised by large datasets, and fewer still report on the usability of these tools12.
Further, many health-related AI applications deployed in LMICs are built around available data and tools rather than around the greatest health concerns13. At every stage of development and deployment, cultural differences, literacy, available IT infrastructure, and the needs of the population must be accounted for.
To that end, the Lancet review, among other publications, calls for the incorporation of human-centered design, the inclusion of the beneficiaries of these applications in the design process, a needs-based rather than tools-based approach, and the development and implementation of global ethical and regulatory standards. Especially during the ongoing COVID-19 pandemic, we must ensure that ethical standards are met through human-centered design in addition to addressing algorithmic and data biases. Otherwise, as Schwalbe states, “we risk undermining the vulnerable populations we are trying to support14.”