AI in health — can we program human dignity?


Friday, 05 August, 2022


A new survey designed to understand the role of dignity in machine-assisted medical treatment has brought the thorny issue of integrating artificial intelligence (AI) into human health care back into the spotlight.

Many clinical settings already use different applications of AI to analyse test results, speed up diagnoses and even guide treatment decisions — but not everyone is comfortable with outsourcing decisions about their health care to a machine.

“We need to think more deeply about the impact on people and about the way they feel about AI making various decisions about their health,” said Associate Professor Paul Formosa, a philosophy and ethics scholar in the Department of Philosophy at Macquarie University and a member of the Centre for Agency, Values and Ethics (CAVE).

If having AI involved in their health care makes people feel dehumanised, patients may struggle to accept its decisions or recommendations, no matter how accurate or efficient they are, Formosa said.

“Image recognition is something AI is very good at — and there are examples where AI trained on hundreds of thousands of images of eye retinas, for example, can perform as well as or even better than humans in detecting certain diseases,” he said.

However, just because AI is effective at some functions doesn’t mean it is warranted in all circumstances, he said.

Preferring humans

Formosa and fellow researchers recently surveyed more than 470 people about different healthcare scenarios involving AI or human decision-makers, asking them whether they felt they were treated in a dignified and respectful way.

Respondents showed some clear preferences about how and where they would like to see AI used in health care, he said.

“People have a general preference for ‘assistive AI’, where the AI is part of the decision-making process, rather than autonomous, where it’s making decisions without a clinician,” Formosa said.

Respondents also preferred to have a human decision-maker when a diagnosis was being made, he said.

“People are concerned that AI, compared to humans, can’t account for their uniqueness, and that some things can’t just be reduced to a number.”

However, there were signs that people were less concerned when AI was involved in decisions about resource allocation — for example, in getting an appointment with a specialist — because it was seen as fair and impartial, provided that the outcome was positive; in other words, that they were able to obtain an appointment.

Reducing stigma

Formosa said the results reflect the preferences of the majority, but the area is complex and there are some patients, and some situations, for which AI may be preferred.

For example, research shows that many people have negative reactions to seeking medical treatment for conditions they feel they will be judged for — such as smoking-related illnesses or mental illness.

“People may be more comfortable interacting with an AI for these conditions,” he said, adding that this is an area for future research.

Perceptions of being ‘dehumanised’ can also depend on how the AI is integrated into the healthcare process, he said.

“If you only interact with an AI — it takes your symptoms and delivers diagnosis and treatment decisions — that could be quite dehumanising; but if you give your symptoms to a human doctor, who then sends the data to the AI, receives the results and diagnosis, and then presents the results to you personally, the perception can be quite different,” he said.

Skill shortage

Formosa said that before integrating AI into health care, we need to step back and consider the more general question: should we offload ethical decisions to AI and machines?

“We need to drill down and work out what scenarios are fitting for AI, and where it is not appropriate.”

Offloading certain roles to AI can have broader impacts, he said, including deciding which skills we prioritise for humans to retain.

“In diagnostic cases, for example, if doctors give over certain tasks to AI then they may lose those skills, so that also means we are making decisions about what skills really matter. And over time, this could impact the safety of these technologies, too.”

Image credit: ©stock.adobe.com/au/whyframeshot
