Artificial Intelligence Technologies for COVID-19 De Novo Drug Design
The recent COVID-19 crisis has provided important lessons for academia and industry regarding digital reorganization. Among the most striking of these lessons is the huge potential of data analytics and artificial intelligence. The crisis dramatically accelerated the adoption of analytics and artificial intelligence, and this momentum is predicted to continue through the 2020s and beyond. Drug development is a costly and time-consuming business, and only a minority of approved drugs generate returns exceeding their research and development costs. As a result, there is a strong drive to make drug discovery cheaper and faster. With modern algorithms and hardware, it is not surprising that artificial intelligence and other computational simulation tools can help drug developers. In only two years of COVID-19 research, many novel molecules have been designed or identified using artificial intelligence methods, with remarkable results in terms of speed and effectiveness. This paper reviews the most significant research on artificial intelligence in de novo drug design for COVID-19 pharmaceutical research.
Advances in De Novo Drug Design: From Conventional to Machine Learning Methods
De novo drug design is a computational approach that generates novel molecular structures from atomic building blocks with no a priori relationships. Conventional methods include structure-based and ligand-based design, which depend on the properties of the active site of a biological target or on its known active binders, respectively. Artificial intelligence, including machine learning, is an emerging field that has positively impacted the drug discovery process. Deep reinforcement learning is a subdivision of machine learning that combines artificial neural networks with reinforcement-learning architectures. This method has successfully been employed to develop novel de novo drug design approaches using a variety of artificial networks, including recurrent neural networks, convolutional neural networks, generative adversarial networks, and autoencoders. This review article summarizes advances in de novo drug design, from conventional growth algorithms to advanced machine-learning methodologies, and highlights hot topics for further development. Peer reviewed.
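The reinforcement-learning loop described above can be sketched in miniature. In this toy REINFORCE example, the "molecule" is just a string over a hypothetical three-token fragment vocabulary, and the reward simply counts occurrences of an (assumed) desirable fragment "B". A real system would use an RNN policy over SMILES tokens and a docking or QSAR model as the reward; everything here is a simplified stand-in.

```python
import math
import random

VOCAB = ["A", "B", "C"]
logits = {t: 0.0 for t in VOCAB}  # tabular policy: one logit per token

def probs():
    # Softmax over the current logits.
    z = sum(math.exp(v) for v in logits.values())
    return {t: math.exp(logits[t]) / z for t in VOCAB}

def sample_token():
    r, acc = random.random(), 0.0
    for t, p in probs().items():
        acc += p
        if r <= acc:
            return t
    return VOCAB[-1]

def reward(seq):
    return seq.count("B")  # stand-in for a predicted property score

random.seed(0)
lr = 0.1
for _ in range(500):
    seq = [sample_token() for _ in range(8)]  # generate a candidate "molecule"
    R = reward(seq)
    p = probs()
    for t in VOCAB:
        # REINFORCE: reward-weighted gradient of the log-probability
        grad = sum((1.0 if s == t else 0.0) - p[t] for s in seq)
        logits[t] += lr * R * grad / len(seq)

best = max(logits, key=logits.get)
print(best, {t: round(v, 2) for t, v in logits.items()})
```

After training, the policy concentrates on the rewarded fragment, which is the essential feedback loop the deep-RL approaches in the review scale up with neural policies.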
Explainable Artificial Intelligence for Drug Discovery and Development -- A Comprehensive Survey
The field of drug discovery has experienced a remarkable transformation with the advent of artificial intelligence (AI) and machine learning (ML) technologies. However, as these AI and ML models become more complex, there is a growing need for transparency and interpretability. Explainable Artificial Intelligence (XAI) is an approach that addresses this issue and provides a more interpretable understanding of the predictions made by machine learning models. In recent years, there has been increasing interest in applying XAI techniques to drug discovery. This review article provides a comprehensive overview of the current state of the art in XAI for drug discovery, covering the main XAI methods, their applications (including target identification, compound design, and toxicity prediction), and the challenges and limitations of XAI techniques in this setting. Furthermore, the article suggests potential future research directions. The aim of this review is to provide a comprehensive understanding of the current state of XAI in drug discovery and its potential to transform the field. Comment: 13 pages, 3 figures
AI in drug discovery and its clinical relevance
The COVID-19 pandemic has emphasized the need for novel drug discovery processes. However, the journey from conceptualizing a drug to its eventual implementation in clinical settings is a long, complex, and expensive process, with many potential points of failure. Over the past decade, a vast growth in medical information has coincided with advances in computational hardware (cloud computing, GPUs, and TPUs) and the rise of deep learning. Medical data generated from large molecular screening profiles, personal health or pathology records, and public health organizations could benefit from analysis by Artificial Intelligence (AI) approaches to speed up and prevent failures in the drug discovery pipeline. We present applications of AI at various stages of drug discovery pipelines, including the inherently computational approaches of de novo design and prediction of a drug's likely properties. Open-source databases and AI-based software tools that facilitate drug design are discussed, along with their associated problems of molecule representation, data collection, complexity, labeling, and disparities among labels. How contemporary AI methods, such as graph neural networks, reinforcement learning, and generative models, along with structure-based methods (i.e., molecular dynamics simulations and molecular docking), can contribute to drug discovery applications and the analysis of drug responses is also explored. Finally, recent developments and investments in AI-based start-up companies for biotechnology and drug design, and their current progress, hopes, and promises, are discussed in this article.
Other information: Published in Heliyon. License: https://creativecommons.org/licenses/by/4.0/. See article on publisher's website: https://doi.org/10.1016/j.heliyon.2023.e17575
Artificial intelligence in the discovery of new drugs
Final project for the Integrated Master's degree in Pharmaceutical Sciences, 2022, Universidade de Lisboa, Faculdade de Farmácia. The discovery and development of drug products is a complex area involving numerous stages, making it a very time-consuming and expensive process. With the increase in data digitization, artificial intelligence has recently expanded its application across many sectors of society, and the pharmaceutical sector has taken advantage of it and implemented its use in this process.
Notable improvements in computational power, combined with developments in artificial intelligence technology, could transform the drug development process, since they can be applied to support all of its steps – drug discovery, preclinical development, clinical development, and postmarketing – including various applications in property or activity prediction, such as physicochemical properties; absorption, distribution, metabolism, excretion, and toxicity; quantitative structure-property relationships (QSPR); and quantitative structure-activity relationships (QSAR).
Recently, drug design has entered the era of big data, and machine learning methods have gradually evolved into deep learning methods with stronger and more effective big data processing, leading to the combination of artificial intelligence and computer-assisted drug design technology.
De novo design helps drug discovery projects by creating novel pharmaceutically active agents with desired properties in a cost- and time-efficient manner.
The main benefit of artificial intelligence is that it decreases the time needed for drug development and therefore the costs associated with the process, improves the return on investment, and may even lead to a cost reduction for the end user.
Since a considerable amount of training data is required for deep learning to succeed, and access to such data is not always adequate for artificial intelligence to be effective, there is still considerable room for improvement in method accuracy, despite its rising success.
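The QSPR/QSAR idea mentioned above can be illustrated with a minimal example: fitting a one-descriptor linear model (property ≈ a·descriptor + b) by ordinary least squares. The descriptor values and "measured" properties below are made-up numbers chosen purely for illustration; real pipelines use many descriptors and learned nonlinear models.

```python
# Hypothetical data: one molecular descriptor vs. one measured property.
xs = [1.0, 2.0, 3.0, 4.0]  # e.g. a made-up lipophilicity descriptor
ys = [2.1, 3.9, 6.1, 7.9]  # e.g. a made-up activity value

# Ordinary least squares for slope a and intercept b.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def predict(x):
    # Predict the property for a new, unseen descriptor value.
    return a * x + b

print(round(a, 3), round(b, 3), round(predict(5.0), 3))
```

Once fitted, such a model predicts the property for compounds that were never synthesized, which is what makes QSPR/QSAR attractive before committing to wet-lab work.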
Comprehensive evaluation of deep and graph learning on drug-drug interactions prediction
Recent advances and achievements in artificial intelligence (AI), as well as in deep and graph learning models, have established their usefulness in biomedical applications, especially for drug-drug interactions (DDIs). A DDI is a change in the effect of one drug due to the presence of another drug in the human body, and DDIs play an essential role in drug discovery and clinical research. Predicting DDIs through traditional clinical trials and experiments is an expensive and time-consuming process. To apply advanced AI and deep learning correctly, developers and users face various challenges, such as the availability and encoding of data resources and the design of computational methods. This review summarizes chemical-structure-based, network-based, NLP-based, and hybrid methods, providing an updated and accessible guide for the broad research and development community across domains. We introduce widely used molecular representations and describe the theoretical frameworks of graph neural network models for representing molecular structures. We present the advantages and disadvantages of deep and graph learning methods through comparative experiments, discuss potential technical challenges, and highlight future directions of deep and graph learning models for accelerating DDI prediction. Comment: Accepted by Briefings in Bioinformatics
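The graph-based molecular representation described above can be sketched as a single message-passing round over an adjacency list, using plain Python. The 4-atom "molecule", its one-dimensional node features, and the fixed mixing weights are all toy assumptions; real GNN libraries use learned weight matrices and multi-dimensional features.

```python
# Hypothetical 4-atom molecular graph as an adjacency list.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
h = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}  # initial per-atom features

def message_pass(h, adj, w_self=0.5, w_neigh=0.5):
    # One round: h'_v = w_self * h_v + w_neigh * mean of neighbour features.
    out = {}
    for v, neigh in adj.items():
        m = sum(h[u] for u in neigh) / len(neigh)
        out[v] = w_self * h[v] + w_neigh * m
    return out

h1 = message_pass(h, adj)
# Sum pooling turns the per-atom features into a graph-level embedding,
# which downstream models would use to score a drug pair for interaction.
readout = sum(h1.values())
print(h1, readout)
```

Stacking several such rounds lets each atom's feature absorb information from progressively larger neighbourhoods, which is why these models capture substructure effects relevant to DDIs.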
VARIABLES IMPACTING GFR ESTIMATION METHOD FOR DRUG DOSING IN CKD: ARTIFICIAL NEURAL NETWORK PREDICTION MODEL
Objective: This study aimed to measure concordance between different renal function estimates in terms of drug doses and to determine potentially significant clinical differences.
Methods: Around one hundred and eighty patients (≥ 18 y) with chronic kidney disease (CKD) were eligible for inclusion in this study. A paired-proportion cohort design was used together with an artificial intelligence model. CKD patients were refined into those receiving drugs adjusted for renal function. Concordance or discordance between the Cockcroft-Gault (CG) and Modification of Diet in Renal Disease (MDRD) equations was assessed against references, and the dosing tier of each drug was determined. A validated artificial neural network (ANN) was one outcome of interest. Variable impacts and performed reassignments were compared to evaluate the factors that affect the accuracy of kidney function estimation for better drug dosing.
Results: The best ANN model classified most cases to CG as the best dosing method (79 vs. 72). The probability was 85%, and the top performance was slightly above 93%. Creatinine levels and CKD staging were the most important factors in determining the best dosing method between CG and MDRD. Ideal and actual body weights were second (24%), whereas the drug class or the specific drug was an important third factor (14%).
Conclusion: Among the many variables that affect the optimal dosing method, the top three are probably CKD staging, weight, and the drug itself. The contrasting CKD stages produced by the different methods can be used to recognize patterns and to identify and predict the best dosing tactics in CKD patients.
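The two estimating equations compared in the study above are standard formulas and can be computed directly. The sketch below implements the Cockcroft-Gault equation and the 4-variable IDMS-traceable MDRD Study equation; the example patient values are made up. Note that the two return different units (mL/min vs. mL/min/1.73 m²), one reason their dosing tiers can disagree.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_egfr(age, scr_mg_dl, female, black=False):
    """eGFR (mL/min/1.73 m^2), 4-variable IDMS-traceable MDRD Study equation."""
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: 70-year-old woman, 60 kg, serum creatinine 1.4 mg/dL.
cg = cockcroft_gault(70, 60, 1.4, female=True)
mdrd = mdrd_egfr(70, 1.4, female=True)
print(round(cg, 1), round(mdrd, 1))
```

For borderline patients, the two estimates can straddle a renal dosing cutoff (e.g. 30 or 50 mL/min in many drug labels), which is exactly the discordance the ANN model in the study tries to resolve.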
Improvements in Molecular Mechanics Sampling and Energy Models
The process of bringing drugs to market continues to be a slow and expensive affair. Despite recent advances in technology, the cost, both in monetary terms and in the time between target identification and the arrival of a new drug on the market, continues to increase. High-throughput screening is a first step towards testing a large number of possible bioactive compounds very quickly. However, the space of possible small molecules is limitless, and high-throughput screening is limited both by the size of available libraries and by the cost of running such a large number of experiments. Therefore, advancements in computational drug screening are necessary in order to maintain the current rate of progress in modern medicine.
Computational drug design, or computer-assisted drug design, offers a possible way of addressing some of the shortfalls of conventional high-throughput screening. Using computational methods, it is possible to estimate parameters such as the binding affinity of any small molecule, even one not currently present in any small-molecule library, without first investing in the often slow and expensive process of finding a synthetic pathway. Computational methods can be used to screen similar molecules, or mutations in small-molecule space, seeking to increase binding affinity to the protein target, and thereby efficacy, while simultaneously minimizing binding affinity to other proteins, thus decreasing cross-reactivity and reducing toxicity and harmful side effects.
Computational biology methods of drug research can be broadly classified in a number of different ways. One of the most common classifications is according to the methods used to identify possible drug compounds and later optimize those leads. The first broad category is informatics- or artificial-intelligence-based approaches. In these approaches, artificial intelligence methods such as neural networks, support vector machines, and quantitative structure-activity relationships (QSAR) are used to identify chemical or structural properties that contribute heavily to binding affinity.
The next category, ligand-based approaches, is very useful when there are a large number of known binders for a specific family of proteins. In this approach, the ligands are clustered using a metric of chemical similarity, and new compounds that occupy a similar chemical space are likely to also bind strongly to the protein of interest.
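The chemical-similarity metric at the heart of the ligand-based idea can be sketched with the Tanimoto coefficient on binary fingerprints, here represented simply as sets of "on" bit positions. The fingerprints and the 0.5 cutoff below are made-up illustrations; real workflows derive fingerprints from structures with a cheminformatics toolkit.

```python
def tanimoto(a, b):
    # Tanimoto (Jaccard) similarity of two binary fingerprints,
    # given as sets of set-bit positions: |A & B| / |A | B|.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

known_binder = {1, 4, 7, 9, 12}        # fingerprint of a known active ligand
candidates = {
    "cand_A": {1, 4, 7, 9, 13},        # shares 4 of 6 distinct bits
    "cand_B": {2, 5, 8},               # shares nothing
}
for name, fp in candidates.items():
    s = tanimoto(known_binder, fp)
    print(name, round(s, 3), "similar" if s >= 0.5 else "dissimilar")
```

Candidates scoring above the chosen similarity cutoff occupy a nearby region of chemical space and become priority picks for testing against the protein of interest.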
The final class of computational drug design methods, and the one explored in this thesis, is the diverse class known as structural methods. In the most general sense, these approaches use a sampling method to sample a number of protein or protein-small-molecule conformations, together with an energy model or scoring function, to measure quantities that would be very difficult and/or expensive to measure experimentally. In this thesis, a number of different sampling methods applicable to different questions in computational biology are presented. Additionally, an improved algorithm for evaluating implicit solvent effects is presented, and a number of improvements in the performance, reliability, and utility of the molecular mechanics program used are discussed.
AI and OR in management of operations: history and trends
The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective on OR and AI. An extensive survey of applications of AI techniques for operations management, covering over 1200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as a source; hence, it may not cover all relevant journals but includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance, and fault diagnosis. Each of the four areas is further categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic, and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.
AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing
Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good. Comment: to appear in Paladyn. Journal of Behavioral Robotics; accepted on 27-10-201