Overcoming and mitigating ethical issues raised by artificial intelligence in health and medicine: The search continues

As the implementation of artificial intelligence (AI)-based innovations in health and care services becomes increasingly common, it is ever more pressing to address the ethical challenges associated with AI in healthcare and to find appropriate solutions. In the cross-journal BMC collection Ethics of Artificial Intelligence in Health and Medicine, we urge research communities, industry, policy makers and other stakeholders to join forces in tackling the grand challenge of realising ethical and fair AI in health and medicine.

Artificial intelligence and machine learning techniques hold great potential for solving complex real-world problems. They can facilitate clinical decision making by providing actionable insights through 'learning' from large volumes of patient data. Among other examples, deep learning algorithms have proved able to accurately identify head CT scan abnormalities requiring urgent attention, significantly increasing the efficiency of health services.

Encouraged by such exciting developments, AI is increasingly seen as a promising means of realising high-performing medicine in the near future, and is widely hoped to come to the rescue of overstretched health systems across the world in the aftermath of the COVID-19 pandemic. However, this promise comes with critical and alarming caveats for clinical decision making: there is a consensus that AI models, particularly those built on data-driven technologies, are subject to, or themselves cause, bias and discrimination, exacerbating existing health inequalities.

Health inequities

Recent studies show that without proper mitigation of potential bias against underrepresented groups such as women and ethnic minorities, the implementation of AI in healthcare can have life-or-death consequences. A study by Straw and Wu showed that AI models built to identify people at high risk of liver disease from blood tests are twice as likely to miss the disease in women as in men.
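To make such disparities visible, the first step is simply to report performance disaggregated by subgroup rather than only in aggregate. Below is a minimal, self-contained Python sketch on entirely synthetic data (not the blood-test records analysed by Straw and Wu), showing how sensitivity, reported per sex, can reveal a gap that a single overall figure would hide.

```python
# Minimal sketch: auditing a classifier's sensitivity (recall) by sex.
# Synthetic data for illustration only; the disease signal is made
# deliberately weaker in women to mimic the kind of gap Straw and Wu found.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
sex = rng.choice(["F", "M"], size=n)
disease = rng.binomial(1, 0.3, size=n)
# The biomarker shifts less with disease in women than in men.
shift = np.where(sex == "M", 2.0, 0.8)
biomarker = rng.normal(0, 1, size=n) + disease * shift

X = biomarker.reshape(-1, 1)
X_tr, X_te, y_tr, y_te, sex_tr, sex_te = train_test_split(
    X, disease, sex, test_size=0.3, random_state=0
)
y_pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)

# Sensitivity (recall on the diseased class), disaggregated by sex:
for g in ["F", "M"]:
    mask = sex_te == g
    print(g, f"sensitivity = {recall_score(y_te[mask], y_pred[mask]):.2f}")
```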

This is because data-driven AI models make inferences by finding 'patterns' in the data they analyse, yet disparities, such as those along racial and ethnic lines, have long existed in health and care. Without effective mitigation approaches, inferences learnt from such biased data inevitably channel the embedded inequities into the decisions the models make.

Apart from data-embedded structural health inequities, the under-representation of minorities in health datasets creates a real technical challenge for machine learning to reach sensible conclusions for such groups, creating another potential source of inequities exacerbated by AI. An insufficient number of samples from a minority group will cause computational models to make inaccurate predictions for that group. Unfortunately, such situations are pervasive for ethnic minority groups in healthcare datasets, which do not always reflect actual population diversity: machine learning models trained on such data will draw inaccurate conclusions about the incidence or risk of a disease within a specific population subset, as the sketch below illustrates.
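This failure mode is easy to reproduce. The following sketch, again on purely synthetic data, trains one model on a dataset in which a minority subgroup contributes only 5% of the records and happens to follow a different feature-outcome relationship; the model's accuracy stays high for the majority and collapses for the minority.

```python
# Minimal sketch: under-representation degrading predictions for a minority
# subgroup whose feature-outcome relationship differs from the majority's.
# Entirely synthetic; group sizes and effect sizes are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def sample(n, flip):
    """Draw a subgroup whose feature-outcome relationship has sign `flip`."""
    x = rng.normal(0, 1, size=(n, 1))
    y = ((x[:, 0] * flip + rng.normal(0, 0.5, size=n)) > 0).astype(int)
    return x, y

# Majority group: 95% of training data; minority group: 5%.
x_maj, y_maj = sample(1900, flip=1.0)
x_min, y_min = sample(100, flip=-1.0)
model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh samples from each group:
for name, flip in [("majority", 1.0), ("minority", -1.0)]:
    x_te, y_te = sample(1000, flip)
    print(name, f"accuracy = {accuracy_score(y_te, model.predict(x_te)):.2f}")
```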

In addition to biases embedded in data, biases can also arise from methodological choices in AI development and deployment. Obermeyer and colleagues analysed an algorithm widely employed in the US that uses health costs as a proxy for health needs, and demonstrated that it falsely concludes that Black patients are healthier than equally sick White patients. Technically speaking, bias can be unintentionally and easily induced through feature selection (the choice of input variables) and label determination (the choice of target variables) during AI model development.
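The cost-as-proxy failure can be illustrated in a few lines. The hypothetical simulation below (not Obermeyer and colleagues' actual model or data) gives two groups identical health needs but lets one group access less care and so incur lower costs; a regression trained to predict cost then systematically under-scores that group.

```python
# Minimal sketch of label-choice bias in the spirit of Obermeyer et al.:
# equal need, unequal access, cost used as the training label.
# All variables are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 5000
group = rng.choice(["A", "B"], size=n)              # two demographic groups
need = rng.gamma(2.0, 1.0, size=n)                  # true need, same distribution for both
access = np.where(group == "A", 1.0, 0.6)           # group B accesses less care
cost = need * access + rng.normal(0, 0.1, size=n)   # observed spending

# Features: a noisy measure of need plus prior utilisation,
# which reflects access as much as it reflects need.
prior_visits = rng.poisson(need * access * 3)
X = np.column_stack([need + rng.normal(0, 0.5, size=n), prior_visits])

risk_score = LinearRegression().fit(X, cost).predict(X)

# At the same true need, group B receives a lower predicted "risk":
for g in ["A", "B"]:
    mask = group == g
    print(g, f"mean need = {need[mask].mean():.2f},",
          f"mean risk score = {risk_score[mask].mean():.2f}")
```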

Patient care

Apart from potential bias and inequity, which are already well-established ethical concerns associated with the use of AI in health care, there are numerous other challenges that may have a significant impact on patient care. AI has the potential to affect not only diagnosis but also prevention, treatment, and disease management at a systems scale, raising broader questions about its role in public health, for example in anticipating epidemics and providing patient support. AI is data-driven, and healthcare data are often difficult (or even impossible) to anonymise, raising worries about privacy and data protection for patients.

Questions of legal accountability and moral responsibility in AI-driven decision making have been raised, drawing attention to potential changes in the doctor-patient relationship. A study by Hallowell et al., recently published in BMC Medical Ethics, explores the conditions under which trust could be placed in AI medical tools, highlighting that relational and epistemic trust is crucial, as clinicians' positive experience is directly correlated with future patients' trust in AI tools. The study emphasises the need for deliberate and meticulous steps in designing trustworthy, confidence-worthy AI processes.

Increasingly data-heavy medical practice may require novel skills from the next generation of medical professionals, with consequences for the future of medical education. Addressing these issues will require the implementation of critical frameworks such as embedded ethics in the development of medical AI.

The need for tools to mitigate or overcome ethical issues

All in all, although the use of AI in medicine may bring high rewards, it is currently associated with high risk: its consequences and implications are high-stakes, including widening the social gap in health services and further fragmenting an already divided society. Some tools are already available for general bias audit or designed particularly for healthcare applications, and focused research communities are emerging, such as the Health Equity group at the Alan Turing Institute and the independent community of Data Science for Health Equity.
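As one concrete example of such general-purpose audit tooling, the open-source fairlearn library can disaggregate standard metrics by a sensitive attribute in a few lines. The sketch below substitutes illustrative random arrays for real model outputs and patient data.

```python
# Minimal sketch of a per-group bias audit with fairlearn's MetricFrame.
# y_true, y_pred and sex are random stand-ins, not real clinical data.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame

rng = np.random.default_rng(3)
y_true = rng.binomial(1, 0.3, size=500)
y_pred = rng.binomial(1, 0.3, size=500)   # stand-in for a model's predictions
sex = rng.choice(["F", "M"], size=500)

audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "sensitivity": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(audit.by_group)      # each metric disaggregated by sex
print(audit.difference())  # largest between-group gap for each metric
```

Reporting such disaggregated figures alongside headline accuracy is a small step, but it makes between-group gaps visible before a model reaches deployment.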

Fairness, accountability, bias mitigation, and explainability or interpretability are major aspects that shape the perception and use of AI in ethically sensitive fields such as medicine, and methods endowed with these features may play a significant role in overcoming mistrust, although they cannot guarantee trust.

However, we still do not have a clear picture of data-embedded and AI-induced bias in healthcare and its implications for our society: the biases of AI models are not quantified and reported with the same enthusiastic attention as their accuracies, let alone mitigated effectively before deployment. Similarly, studies of the impact of AI use in healthcare, from multiple practical and theoretical perspectives, remain relatively limited compared to the impressive expansion of the field: more discussion is needed for a fair and critical consideration of these new technologies.

For these reasons, a cross-journal collection, Ethics of Artificial Intelligence in Health and Medicine, has been launched in collaboration with BMC Medical Ethics and BMC Medical Informatics and Decision Making. The collection welcomes studies focused on the technical assessment and evaluation of AI-based medical decision-making methods with regard to these ethics-relevant features, as well as more theoretical considerations of the medical use of AI-based methods. This includes, but is not limited to, the presentation of novel AI-based methods able to fulfil such ethical requirements and tools able to mitigate issues such as bias, the elaboration of novel ethics-relevant metrics, research on the attitudes and perceptions of physicians and the public regarding AI implementation, the ethics of AI-associated privacy and surveillance, and the ethical challenges surrounding the implementation of medical AI.
