Explainable Artificial Intelligence for Drug Discovery and Development -- A Comprehensive Survey
The field of drug discovery has experienced a remarkable transformation with
the advent of artificial intelligence (AI) and machine learning (ML)
technologies. However, as these AI and ML models are becoming more complex,
there is a growing need for transparency and interpretability of the models.
Explainable Artificial Intelligence (XAI) is a novel approach that addresses
this issue and provides a more interpretable understanding of the predictions
made by machine learning models. In recent years, there has been an increasing
interest in the application of XAI techniques to drug discovery. This review
article provides a comprehensive overview of the current state of the art in
XAI for drug discovery, covering the main XAI methods, their challenges and
limitations, and their applications in target identification, compound design,
and toxicity prediction.
Furthermore, the article suggests potential future research directions for the
application of XAI in drug discovery. The aim of this review article is to
provide a comprehensive understanding of the current state of XAI in drug
discovery and its potential to transform the field.Comment: 13 pages, 3 figure
The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
Artificial intelligence (AI) has the potential to revolutionize the drug
discovery process, offering improved efficiency, accuracy, and speed. However,
the successful application of AI is dependent on the availability of
high-quality data, the addressing of ethical concerns, and the recognition of
the limitations of AI-based approaches. In this article, the benefits,
challenges and drawbacks of AI in this field are reviewed, and possible
strategies and approaches for overcoming the present obstacles are proposed.
The use of data augmentation, explainable AI, and the integration of AI with
traditional experimental methods, as well as the potential advantages of AI in
pharmaceutical research are also discussed. Overall, this review highlights the
potential of AI in drug discovery and provides insights into the challenges and
opportunities for realizing its potential in this field.
Note from the human-authors: This article was created to test the ability of
ChatGPT, a chatbot based on the GPT-3.5 language model, to assist human authors
in writing review articles. The text generated by the AI following our
instructions (see Supporting Information) was used as a starting point, and its
ability to automatically generate content was evaluated. After conducting a
thorough review, human authors practically rewrote the manuscript, striving to
maintain a balance between the original proposal and scientific criteria. The
advantages and limitations of using AI for this purpose are discussed in the
last section.
Comment: 11 pages, 1 figure
Artificial Intelligence and Patient-Centered Decision-Making
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we want to argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
A historical perspective of biomedical explainable AI research
The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust into the AI decision-making process. Such methods can be broadly categorized into two main types: post hoc explanations and inherently interpretable algorithms. We aimed to analyze the possible associations between COVID-19 and the push of explainable AI (XAI) to the forefront of biomedical research. We automatically extracted from the PubMed database biomedical XAI studies related to concepts of causality or explainability and manually labeled 1,603 papers with respect to XAI categories. To compare the trends pre- and post-COVID-19, we fit a change point detection model and evaluated significant changes in publication rates. We show that the advent of COVID-19 in the beginning of 2020 could be the driving factor behind an increased focus on XAI, playing a crucial role in accelerating an already evolving trend. Finally, we discuss the future societal use and impact of XAI technologies and potential future directions for those who pursue fostering clinical trust with interpretable machine learning models.
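The change point analysis described above can be sketched in a minimal form: for each candidate split of a count series, fit one Poisson rate before and one after, and keep the split that maximizes the combined log-likelihood. This is only an illustrative single-change-point variant under an assumed Poisson model; the paper's actual model and publication data differ, and the monthly counts below are made up.

```python
import math

def poisson_loglik(counts, rate):
    # Log-likelihood of counts under Poisson(rate), dropping the
    # factorial term, which is constant across candidate splits.
    if rate <= 0:
        return float("-inf")
    return sum(c * math.log(rate) - rate for c in counts)

def best_change_point(counts):
    # Try every split point; fit one rate to the left segment and one
    # to the right; return the split with the highest total likelihood.
    best_split, best_ll = None, float("-inf")
    for k in range(1, len(counts)):
        left, right = counts[:k], counts[k:]
        ll = (poisson_loglik(left, sum(left) / len(left)) +
              poisson_loglik(right, sum(right) / len(right)))
        if ll > best_ll:
            best_split, best_ll = k, ll
    return best_split

monthly = [3, 2, 4, 3, 2, 9, 11, 10, 12, 9]  # rate jumps after index 5
print(best_change_point(monthly))  # prints 5
```

A significance test (e.g., a likelihood-ratio comparison against a no-change model) would then decide whether the detected jump in publication rate is meaningful rather than noise.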