Explainable AI in Health Care
Explanations are important for fostering trust in an AI system as well as between patients and doctors. An explanation can also be used to improve the model and remove bias.
AI systems are built on training data that may not be representative of the presented case, leading to biased decisions. A good example is a hospital system for detecting skin cancer that has been trained mainly on images of Caucasian skin rather than dark skin. Such a system could easily fail to detect cancer accurately in dark-skinned people, leading to serious consequences. If the diagnosis comes with an explanation, it could reveal this biased training data and help make the necessary changes.
Explainable AI techniques fall into three broad categories: feature importance, counterfactual explanations, and natural language explanations.
Feature importance: A hospital admission system may use features such as age and past discharges to recommend whether a patient should be readmitted. In the case of the skin cancer detection system, it may use features like the size, shape, and color of the lesion to make a diagnosis. The explanation should then show why the diagnosis was made, and the doctor would have an opportunity to review or explain it.
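As a minimal sketch of this idea, the following Python snippet trains a small model on hypothetical readmission data (the feature names and values are invented for illustration) and prints an importance score for each input feature:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical readmission records: [age, prior admissions, length of stay (days)]
    feature_names = ["age", "prior_admissions", "length_of_stay"]
    X = np.array([
        [72, 3, 10],
        [45, 0, 2],
        [80, 5, 14],
        [33, 1, 3],
        [67, 2, 7],
        [55, 0, 1],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = patient was readmitted

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Show how much each feature contributed to the model's decisions
    for name, score in zip(feature_names, model.feature_importances_):
        print(f"{name}: {score:.2f}")

In practice, scores like these would be surfaced alongside each recommendation so the clinician can see which factors weighed most heavily.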
Counterfactual explanations: This is similar to "what if" scenario analysis: it shows the potential impact of changing some parameters such as age, dosage, or gender. It would, for example, show that a person in their 80s would respond differently to the same treatment.
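A simple way to sketch this, assuming a trained predictive model is available, is to change one input and compare the model's outputs. The model, feature names, and data below are all hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical treatment-response data: [age, dosage in mg]
    X = np.array([[30, 50], [40, 60], [50, 70], [60, 80], [70, 90], [80, 100]])
    y = np.array([0, 0, 0, 1, 1, 1])  # 1 = adverse response
    model = LogisticRegression().fit(X, y)

    patient = np.array([[82, 80]])
    counterfactual = patient.copy()
    counterfactual[0, 0] = 45  # what if the patient were 45 instead of 82?

    print("risk as is:     ", model.predict_proba(patient)[0, 1])
    print("risk if younger:", model.predict_proba(counterfactual)[0, 1])

The gap between the two probabilities is the "what if" explanation: it shows how much the patient's age alone drives the predicted risk.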
Natural language explanations: Here the emphasis is on providing plain-language explanations that patients and doctors can understand. This can come in handy with medical devices, such as a pacemaker explaining that it changed the heart rate because the patient's heart rate had become too slow.
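One common approach is template-based generation: the device fills a plain-language sentence with the values that triggered its action. The function and thresholds below are a hypothetical sketch, not a real device API:

    def explain_rate_change(observed_bpm, new_bpm, threshold_bpm):
        # Hypothetical plain-language explanation for a pacemaker adjustment
        if observed_bpm < threshold_bpm:
            return (f"Your heart rate dropped to {observed_bpm} beats per minute, "
                    f"below the safe limit of {threshold_bpm}. The pacemaker "
                    f"raised it to {new_bpm} beats per minute.")
        return "Your heart rate was normal, so no adjustment was made."

    print(explain_rate_change(42, 60, 50))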
Explanations should be transparent and meaningful. A patient may only need to know that their pacemaker increased a heart rate that was dropping, while the cardiologist can get a more detailed explanation. The explanation should also be accurate and consistent with the model and its data; otherwise it will become confusing and undermine trust. The system should also be able to tell when the wrong parameters are used in its model.
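One way to sketch this audience-appropriate detail, assuming the device logs each intervention (the event fields below are invented for illustration), is to generate different explanations for each audience from the same underlying record:

    def explain(event, audience):
        # event: hypothetical record of a single pacemaker intervention
        if audience == "patient":
            return ("Your pacemaker raised your heart rate because "
                    "it had dropped too low.")
        if audience == "cardiologist":
            return (f"Bradycardia episode at {event['time']}: rate fell to "
                    f"{event['observed_bpm']} bpm (threshold "
                    f"{event['threshold_bpm']} bpm); pacing raised the rate "
                    f"to {event['target_bpm']} bpm.")
        raise ValueError(f"unknown audience: {audience}")

    event = {"observed_bpm": 42, "threshold_bpm": 50,
             "target_bpm": 60, "time": "02:14"}
    print(explain(event, "patient"))
    print(explain(event, "cardiologist"))

Because both explanations are generated from the same event record, they stay consistent with each other and with the underlying model data.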
Explainable AI is a useful and rapidly growing field with great potential in healthcare.