Explainable Artificial Intelligence (XAI) in Healthcare: Addressing Techniques and Challenges

Authors

  • Sumit, Research Scholar, Department of Computer Science & Application, Maharshi Dayanand University (Rohtak)
  • Dr. Amrinder Kaur, Assistant Professor, Department of Computer Science & Application, Maharshi Dayanand University (Rohtak)

Abstract

The growing use of Artificial Intelligence (AI) models in high-stakes applications such as healthcare has driven the demand for transparency and explainability. This demand arises from the "black-box" nature of many AI models, and from the fact that wrong predictions can have severe consequences in a critical sector like healthcare. Explainable Artificial Intelligence (XAI) encompasses the techniques and methodologies used to develop AI models whose results and predictions users can comprehend. The successful integration of AI models in healthcare depends on their explainability and interpretability, and gaining the trust of healthcare professionals requires models that are transparent about how they reach their outcomes. This paper provides an overview of XAI in healthcare, including techniques, challenges, opportunities, and emerging trends, in order to clarify the realistic applications of XAI in the field. The study discusses innovative perspectives and upcoming trends that can help researchers and practitioners adopt and implement transparent and trustworthy AI-driven solutions in the healthcare sector.
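The techniques surveyed in the paper include post-hoc, model-agnostic explainers such as LIME (Ribeiro et al., 2016) and SHAP (Lundberg et al., 2020), both cited below. As a purely illustrative sketch of what such a technique looks like in practice, the snippet applies TreeSHAP to a synthetic tabular classifier; the feature names, data, and model are hypothetical placeholders, not drawn from the paper, and the `shap` and `scikit-learn` packages are assumed to be installed.

```python
# Minimal, illustrative sketch (not from the paper): applying a post-hoc
# XAI method, TreeSHAP (Lundberg et al., 2020), to a "black-box" clinical
# risk classifier. All data and feature names are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)
feature_names = ["age", "hba1c", "egfr", "systolic_bp"]  # hypothetical features
X = rng.normal(size=(500, len(feature_names)))
# Synthetic outcome loosely driven by two of the features, plus noise.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles,
# quantifying how much each input feature pushed a prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # attributions for five "patients"
print(np.shape(shap_values))
```

Such per-prediction attributions are one concrete way a clinician could inspect which inputs drove an individual risk score, which is the kind of local explanation the paper's survey covers.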

References

Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://aaai.org/papers/11491-anchors-high-precision-model-agnostic-explanations

Albahri, A., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L., Albahri, O., Alamoodi, A., Bai, J., Salhi, A., Santamaría, J., Ouyang, C., Gupta, A., Gu, Y., & Deveci, M. (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156–191. https://doi.org/10.1016/j.inffus.2023.03.008

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2019). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K., & Samek, W. (2015). On Pixel-Wise explanations for Non-Linear Classifier decisions by Layer-Wise relevance propagation. PLoS ONE, 10(7), e0130140. https://doi.org/10.1371/journal.pone.0130140

Carmody, J., Shringarpure, S., & Van De Venter, G. (2021). AI and privacy concerns: A smart meter case study. Journal of Information, Communication and Ethics in Society, 19(4), 492–505. https://doi.org/10.1108/JICES-04-2021-0042

Combi, C., Amico, B., Bellazzi, R., Holzinger, A., Moore, J. H., Zitnik, M., & Holmes, J. H. (2022). A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine, 133, 102423. https://doi.org/10.1016/j.artmed.2022.102423

Farhood, H., Najafi, M., & Saberi, M. (2024). Improving deep learning transparency: Leveraging the power of LIME Heatmap. In Lecture notes in computer science (pp. 72– 83). https://doi.org/10.1007/978-981-97-0989-2_7

Fatima, S., Ali, S., & Kim, H. (2023). A comprehensive review on multiple instance learning. Electronics, 12(20), 4323. https://doi.org/10.3390/electronics12204323

Feizi, N., Tavakoli, M., Patel, R. V., & Atashzar, S. F. (2021). Robotics and AI for teleoperation, Tele-Assessment, and Tele-Training for Surgery in the era of COVID-19: existing challenges, and future vision. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.610677

Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M., & Hussain, A. (2023). Interpreting Black-Box Models: A review on Explainable Artificial intelligence. Cognitive Computation, 16(1), 45–74. https://doi.org/10.1007/s12559-023-10179-8

Holzinger, A., Biemann, C., Pattichis, C. S., & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain? arXiv (Cornell University). https://doi.org/10.48550/arxiv.1712.09923

Ivanovs, M., Kadikis, R., & Ozols, K. (2021). Perturbation-based methods for explaining deep neural networks: A survey. Pattern Recognition Letters, 150, 228– 234. https://doi.org/10.1016/j.patrec.2021.06.030

Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viégas, F., & Sayres, R. (2017). Interpretability beyond feature attribution: Quantitative testing with Concept Activation Vectors (TCAV). arXiv. https://arxiv.org/abs/1711.11279

Lesley, U., & Hernández, A. K. (2024). Improving XAI Explanations for Clinical Decision-Making – Physicians’ perspective on local explanations in healthcare. In Lecture notes in computer science (pp. 296–312). https://doi.org/10.1007/978-3-031-66535-6_32

Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., & Lee, S. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 56–67. https://doi.org/10.1038/s42256-019-0138-9

Ou, S., Tsai, M., Lee, K., Tseng, W., Yang, C., Chen, T., Bin, P., Chen, T., Lin, Y., Sheu, W. H., Chu, Y., & Tarng, D. (2023). Prediction of the risk of developing end-stage renal diseases in newly diagnosed type 2 diabetes mellitus using artificial intelligence algorithms. BioData Mining, 16(1). https://doi.org/10.1186/s13040-023-00324-2

Petsiuk, V., Das, A., & Saenko, K. (2018). RISE: Randomized input sampling for explanation of black-box models. arXiv. https://arxiv.org/abs/1806.07421

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: explaining the predictions of any classifier. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1602.04938

Salehi, A. W., Khan, S., Gupta, G., Alabduallah, B. I., Almjally, A., Alsolai, H., Siddiqui, T., & Mellit, A. (2023). A study of CNN and transfer learning in Medical imaging: Advantages, challenges, future scope. Sustainability, 15(7), 5930. https://doi.org/10.3390/su15075930

Shi, Q., Dong, B., He, T., Sun, Z., Zhu, J., Zhang, Z., & Lee, C. (2020). Progress in wearable electronics/photonics—Moving toward the era of artificial intelligence and internet of things. InfoMat, 2(6), 1131–1162. https://doi.org/10.1002/inf2.12122

Simonyan, K., Vedaldi, A., & Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv. https://arxiv.org/abs/1312.6034

Tjoa, E., & Guan, C. (2020). A survey on Explainable Artificial Intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813. https://doi.org/10.1109/tnnls.2020.3027314

Yang, Z., Shao, J., & Yang, Y. (2023). An improved CycleGAN for data augmentation in person re-identification. Big Data Research, 34, 100409. https://doi.org/10.1016/j.bdr.2023.100409

Published

2025-07-21

Issue

Vol. 4 No. 1 (2025)

Section

Articles

How to Cite

Explainable Artificial Intelligence (XAI) in Healthcare: Addressing Techniques and Challenges. (2025). International Journal of Sustainable Development Through AI, ML and IoT, 4(1). https://ijsdai.com/index.php/IJSDAI/article/view/83
