<p>Biases embedded in artificial intelligence systems increasingly used in healthcare risk deepening discrimination against older people, the World Health Organization warned on Wednesday.</p>
<p>AI technologies hold enormous potential for improving care for older people, but they also carry significant risks, the UN health agency said in a policy brief.</p>
<p>"Encoding of stereotypes, prejudice, or discrimination in AI technology or their manifestation in its use could undermine... the quality of health care for older people," it said.</p>
<p>The brief highlighted how AI systems rely on large historical datasets containing information about people that is collected, shared, merged and analysed in often opaque ways.</p>
<p>The datasets themselves can be faulty or discriminatory, reflecting, for instance, existing biases in healthcare settings, where ageist practices are widespread.</p>
<p>Dr Vania de la Fuente Nunez, of the WHO's Healthy Ageing unit, pointed to practices seen during the Covid-19 pandemic of allowing a patient's age to determine whether they could access oxygen or a bed in a crowded intensive care unit.</p>
<p>If such discriminatory patterns are reflected in the datasets used to train AI algorithms, they can become entrenched.</p>
<p>AI algorithms can solidify existing disparities in health care and "systematically discriminate on a much larger scale than biased individuals", the policy brief warned.</p>
<p>In addition, the brief pointed out that datasets used to train AI algorithms often exclude or significantly underrepresent older people.</p>
<p>Since the health predictions and diagnoses produced are based on data from younger people, they could miss the mark for older populations, it said.</p>
<p>At the same time, the brief stressed that there are real benefits to be gained from AI systems in the care of older people, including remote monitoring of those susceptible to falls or other health emergencies.</p>
<p>AI technologies can mimic human supervision by collecting data on individuals from monitors and wearable sensors embedded in devices such as smartwatches.</p>
<p>They can compensate for understaffing, and continuous data collection offers the possibility of better predictive analysis of disease progression and health risks.</p>
<p>But Wednesday's brief cautioned that such systems risk reducing contact between caregivers and older people.</p>
<p>"This can limit the opportunities that we may have to reduce ageism through intergenerational contact," De la Fuente Nunez said.</p>
<p>She cautioned that those designing and testing new AI technologies for the health sector also risk reflecting pervasive ageist attitudes in society, especially since older people are rarely included in the process.</p>