A STUDY OF INTERPRETABLE AI AND ML WITHIN HEALTHCARE SYSTEMS

Abstract

The healthcare sector is particularly sensitive because it concerns individuals' lives, so decisions must be made with great care and grounded in robust evidence. Nevertheless, most AI and ML systems are complex and offer little insight into how problems are solved or why particular decisions are recommended. This lack of interpretability is a primary factor hindering the widespread adoption of many AI and ML models in practical settings such as healthcare. It would therefore be advantageous for AI and ML models to provide explanations that enable physicians to make informed, data-driven decisions, ultimately improving the quality of care. Recently, numerous initiatives have proposed interpretable machine learning (IML) models that are more user-friendly and applicable in real-world scenarios. This paper delivers a thorough survey of interpretable AI and ML models and their applications in healthcare. It addresses the essential characteristics and theoretical foundations required for developing IML, emerging technologies, and the top ten application areas within healthcare.

This paper was published in The Bioscan.

Licence: https://creativecommons.org/licenses/by/4.0