Biases embedded in artificial intelligence systems increasingly used in healthcare risk deepening discrimination against older people, the World Health Organisation warned on Wednesday.
AI technologies hold enormous potential for improving care for older people, but they also carry significant risk, the UN health agency said in a policy brief.
“Encoding of stereotypes, prejudice, or discrimination in AI technology or their manifestation in its use could undermine… the quality of health care for older people,” it said.
The brief highlighted how AI systems rely on large historical datasets containing information about people that is collected, shared, merged, and analysed in often opaque ways.
The datasets themselves can be faulty or discriminatory, reflecting, for instance, existing biases in healthcare settings, where ageist practices are widespread.
Doctor Vania de la Fuente Nunez, of the WHO’s Healthy Ageing unit, pointed to practices seen during the Covid-19 pandemic, when a patient’s age was sometimes allowed to determine whether they could access oxygen or a bed in a crowded intensive care unit.
If such discriminatory patterns are reflected in the datasets used to train AI algorithms, they can become entrenched.
AI algorithms can solidify existing disparities in health care and “systematically discriminate on a much larger scale than biased individuals”, the policy brief warned.
In addition, the brief pointed out that datasets used to train AI algorithms often exclude or significantly underrepresent older people.
Since the health predictions and diagnoses produced are based on data from younger people, they could miss the mark for older populations, it said.
The brief meanwhile stressed that there were real benefits to be gained from AI systems in the care of older people, including remote monitoring of people susceptible to falls or other health emergencies.
AI technologies can mimic human supervision by collecting data on individuals from monitors and wearable sensors embedded in devices such as smartwatches.
They can compensate for understaffing, and the continuous data collection offers the possibility of better predictive analysis of disease progression and health risks.
But Wednesday’s brief cautioned that they risked reducing contact between caregivers and older people.
“This can limit the opportunities that we may have to reduce ageism through intergenerational contact,” De la Fuente Nunez said.
She cautioned that those designing and testing new AI technologies for the health sector also risk embedding society’s pervasive ageist attitudes, especially since older people are rarely included in the design and testing process.
(AFP)