Confronting 5 Major Ethical Risks of Artificial Intelligence in Healthcare

The DICE Group
Dec 23, 2019

By Sasha Mitts

In healthcare, we’re obligated to care for people, as well as innovate and scale our offerings to the communities we serve as they grow in size and complexity. Technology and thoughtful design make this possible, but we’re also responsible for holding these systems to rigorous, evolving ethical standards in order to prevent unintended consequences. We cannot allow the momentum of technology to overwhelm our guidance of it.

A breakdown of the impact that AI has on patient experience.

Perhaps no technology should both worry and thrill us as much as artificial intelligence (AI). For decades, people have hoped, feared and speculated about the implications of building another highly complex system capable of learning and manifesting a will of its own. What we have done remarkably little of is create ethical structures around:

  • Designing and implementing AI
  • Establishing how, when and what types of AI can access which data
  • Regulating how humans and AI will meet, socialize and set boundaries with each other
  • Understanding the rights we hold against, or in preference to, AI

While we are not near the era of a robot takeover, we can’t predict the future. Nor do we have the luxury of waiting to consider these ethical questions. Very real AI exists in our daily lives, and we’re already lagging in developing what needs to be a carefully constructed set of principles for using this technology.

This responsibility is felt twice over in healthcare, where the obligation to make these choices conscientiously is that much greater. Given the technology’s potential to impact patient care, I’d like to point out five risks AI and machine learning (ML) could pose to medicine:

Perpetuating bias in healthcare with non-representative data

In medicine, we have seen algorithms that can only reliably diagnose or treat white patients because they were taught to make decisions using non-representative data.

For example, in some medical schools, training materials for dermatological conditions like rashes and Lyme disease disproportionately represent white skin. If AI algorithms are trained to make diagnoses primarily from those images, they could have a limited understanding of how the same condition looks on dark skin, leading to biased results and diagnostic gaps.

We have also seen ML systems learn bias in banking, law enforcement and management. In these examples, AI has learned to treat African Americans as untrustworthy and disproportionately criminal, and women as unsuitable for management positions.

A learning system is inherently limited by the data you give it. If we teach systems to make decisions from non-representative data, we will get back worrisome recommendations.

Equitability in the age of AI relies on data sets that proportionally represent the population through truly diverse sampling. These systems ideally resist surface-level cognitive or social biases, rather than perpetuating discrimination by accepting them as correct.
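As a concrete illustration, one basic safeguard is to audit a trained model’s performance for each demographic subgroup rather than trusting a single aggregate accuracy number. The sketch below uses entirely synthetic data and hypothetical column names (such as skin_tone and has_condition); it stands in for the kind of check a team might run on a real diagnostic dataset.

```python
# Minimal sketch (synthetic data, hypothetical column names): audit how a
# diagnostic classifier performs across demographic subgroups.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a dermatology dataset: two features, a diagnosis
# label, and a "skin_tone" attribute we want to audit against.
n = 2_000
df = pd.DataFrame({
    "feature_1": rng.normal(size=n),
    "feature_2": rng.normal(size=n),
    "skin_tone": rng.choice(["light", "dark"], size=n, p=[0.9, 0.1]),  # imbalanced on purpose
    "has_condition": rng.integers(0, 2, size=n),
})

X = df[["feature_1", "feature_2"]]
y = df["has_condition"]

# Stratify the split on the sensitive attribute so both groups appear in the test set.
X_train, X_test, y_train, y_test, grp_train, grp_test = train_test_split(
    X, y, df["skin_tone"], test_size=0.3, stratify=df["skin_tone"], random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

# Report sensitivity (recall) per subgroup: a large gap between groups is a red flag.
for group in ["light", "dark"]:
    mask = (grp_test == group).to_numpy()
    print(group, "n =", mask.sum(),
          "recall =", round(recall_score(y_test[mask], pred[mask]), 3))
```

A per-group report like this won’t fix an unrepresentative dataset, but it makes the gap visible before the model reaches a patient.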

Treating patients like numbers instead of people

How we treat patients matters. A system informed by massive volumes of data will naturally trend toward reducing people to their most predictive variables.

In many cases, healthcare already leans toward the industrial. For example, electronic health records (EHRs) routinely pull doctors away from patients in order to complete clinical documentation and input data.

AI and ML could be another strong push in that direction, allowing or even requiring doctors to spend more time hands-off while technology manages disease. If AI turns healthcare into a system even more concerned with risk management, with doctors tracking numbers all day, patient care risks becoming increasingly impersonal instead of an opportunity for true connection.

Finding a balance between human interactions and AI is key to this technology’s success.

Diverging goals between humans and computers

As computer systems are given a wider field of view and included in more kinds of decisions, their behaviors and ethics may begin to diverge from ours.

For example, what if a system recommends an inadequate treatment for a patient in order to reduce the risk of exposing other patients to the same disease?

What about intentionally recommending a treatment that will result in death, but will produce viable organs for transplant? Is this good or bad, and where do we draw the line?

Time is running out. We must carefully consider the rights, information and influence we give machines now. The more we lean on AI solutions, the less we’ll be able to impartially consider and prepare for their consequences, intended and otherwise.

Is artificial intelligence technological magic?

Creating an impenetrable black box

In most cases, it’s impossible to determine what an AI took into account, and to what extent, when making a decision. This makes early detection of bias, error, wrongdoing and other concerns much more difficult.

In healthcare, doctors make diagnoses by understanding the mechanisms of body systems and asking why something might occur. If we can’t ask a machine those questions, we can’t learn from or teach it in a targeted manner, leading to distrust of these systems within the medical community.
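That opacity is not entirely absolute. For illustration only, here is a minimal sketch of one way practitioners try to peek inside such a model: permutation importance, which measures how much a model’s score drops when each input is scrambled. The data is synthetic and the feature names are purely illustrative; the technique is a coarse probe of what a model leans on, not a real explanation of its reasoning.

```python
# Minimal sketch (synthetic data, illustrative feature names): probe which
# inputs a trained model relies on, using permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a handful of clinical measurements.
X, y = make_classification(n_samples=1_000, n_features=5, n_informative=3, random_state=0)
feature_names = ["age", "systolic_bp", "bmi", "heart_rate", "glucose"]  # illustrative only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a rough, model-agnostic view into an otherwise opaque predictor.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name:12s} importance = {mean:.3f} ± {std:.3f}")
```

Even with tools like this, the output is a ranking of inputs, not the “why” a clinician can interrogate, which is exactly the gap that breeds distrust.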

A lack of information may instill public fear and distrust

Even if we build the most ethical AI in the world, the public’s reluctance to engage with these systems can limit the technology’s impact and image.

When sensitive data and learning technology are both in play, “build it and they will come” is an unproductive approach. The public must be educated on the benefits, risks and challenges of AI so that people have an informed perspective on how it applies to their health, and to other fields besides.

What do you want AI to look like in healthcare?

Who gets to decide when change is good, and using which criteria? When do we welcome innovation, and when should we push back?

We need to answer these questions around AI to guide our exploration of this new frontier.

Bias, diverging ethical systems, uninformed decision-making, and misunderstandings are all major risks. It is our responsibility as medical technologists, clinicians, and innovators to make sure these systems are equitable, accessible, beneficial and do no harm to our patients.

That’s why we founded the DICE AI Lab, an initiative to pursue research, development and education on artificial intelligence and predictive analytics in healthcare.

If you’re interested in staying in touch and exploring more of these ideas with The DICE Group, subscribe to our monthly newsletter.

Sasha Mitts

Sasha is a design researcher at The DICE Group, focused on the intersection of emerging technologies and basic human needs. In his free time, he enjoys fermenting foods, exploring the world on foot, and asking endless hypothetical questions.
