Revolutionizing Pharmaceuticals: applications and potential of Generative Artificial Intelligence in drug discovery
Artificial intelligence (AI) has emerged as a transformative tool in the pharmaceutical industry, revolutionizing the traditional drug discovery and development process. Through advanced generative techniques, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), the exploration and design of novel, viable therapeutic molecules have been enhanced. Additionally, AI facilitates the optimization of these molecules toward desirable properties and accelerates the identification of therapeutic targets through deep analysis of biomedical and genomic data sets. One of the most significant advances has been drug repurposing, where AI unlocks the hidden potential of known drugs for new therapeutic indications.
Attribute Regularized Soft Introspective VAE: Towards Cardiac Attribute Regularization Through MRI Domains
Deep generative models have emerged as influential instruments for data generation and manipulation. Enhancing the controllability of these models by selectively modifying data attributes has been a recent focus. Variational Autoencoders (VAEs) have shown promise in capturing hidden attributes but often produce blurry reconstructions, and controlling these attributes across different imaging domains is difficult in medical imaging. Recently, the Soft Introspective VAE has leveraged the benefits of both VAEs and Generative Adversarial Networks (GANs), which have demonstrated impressive image synthesis capabilities, by incorporating an adversarial loss into VAE training. In this work, we propose the Attributed Soft Introspective VAE (Attri-SIVAE), which incorporates an attribute regularized loss into the Soft-Intro VAE framework. We evaluate the proposed method experimentally on cardiac MRI data from different domains, such as various scanner vendors and acquisition centers. The proposed method matches the state-of-the-art Attribute regularized VAE in reconstruction and regularization performance but, unlike the compared method, also maintains the same regularization level when tested on a different dataset.
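The abstract does not spell out the attribute regularized loss. As a rough illustration, the attribute-regularization term used in the AR-VAE line of work (which Attri-SIVAE builds on) penalizes disagreement between the sign of pairwise attribute differences and a soft sign of pairwise differences along a designated latent dimension. A minimal plain-Python sketch, with the function name `attribute_reg_loss` and parameter `delta` chosen here for illustration, not taken from the paper:

```python
import math

def attribute_reg_loss(z_dim, attrs, delta=1.0):
    """AR-VAE-style pairwise attribute-regularization penalty (sketch).

    z_dim : values of one latent dimension for a batch of samples
    attrs : the corresponding ground-truth attribute values
    Encourages z_dim to vary monotonically with attrs.
    """
    n = len(z_dim)
    total = 0.0
    for i in range(n):
        for j in range(n):
            # soft sign of the latent difference vs. hard sign of the attribute difference
            d_z = math.tanh(delta * (z_dim[i] - z_dim[j]))
            s_a = (attrs[i] > attrs[j]) - (attrs[i] < attrs[j])  # sign in {-1, 0, 1}
            total += (d_z - s_a) ** 2
    return total / (n * n)

# A latent dimension aligned with the attribute is penalized less than a reversed one.
aligned = attribute_reg_loss([0.1, 0.9, 2.0], [10, 20, 30])
misaligned = attribute_reg_loss([2.0, 0.9, 0.1], [10, 20, 30])
```

In a full model this term would be added, with a weighting coefficient, to the Soft-Intro VAE objective; the exact formulation in the paper may differ.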
Partition-based Model Representation Learning
Modern machine learning draws on both classical statistics and modern computation. On the one hand, the field has become rich and fast-growing; on the other hand, the differing conventions of different schools have become harder and harder to communicate across over time. Often the question is not who is absolutely right or wrong, but from which angle one should approach a problem. This motivates a unifying machine learning framework that can hold the different schools under the same umbrella. We propose one such framework and call it ``representation learning''.
Representations are for the data, and a representation is almost identical to a statistical model. Philosophically, however, we distinguish representations from classical statistical modeling in that (1) representations are interpretable to the scientist, (2) representations convey the pre-existing subject view that the scientist holds toward the data before seeing it (in other words, representations may not align with the true data-generating process), and (3) representations are task-oriented.
To build such a representation, we propose to use partition-based models. Partition-based models are easy to interpret and useful for uncovering interactions between variables. The major challenge, however, lies in computation, since the number of partitions can grow exponentially with the number of variables. To address this, we need a model/representation selection method over different partition models; we propose to use the I-Score with the backward dropping algorithm to achieve this goal.
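The I-Score and backward dropping algorithm admit a compact sketch. The version below uses one common unnormalized form of the I-score, in which samples are partitioned into cells by the joint values of the selected discrete variables; the function names and the toy data are illustrative and not taken from the thesis:

```python
from collections import defaultdict

def i_score(X, y, vars_):
    """Unnormalized influence (I-) score of a variable subset.

    Cells are the joint value combinations of the selected variables;
    I = sum over cells of n_cell^2 * (mean_y_cell - mean_y)^2.
    """
    n = len(y)
    ybar = sum(y) / n
    cells = defaultdict(list)
    for row, yi in zip(X, y):
        cells[tuple(row[v] for v in vars_)].append(yi)
    return sum(len(c) ** 2 * (sum(c) / len(c) - ybar) ** 2 for c in cells.values())

def backward_dropping(X, y, vars_):
    """Greedily drop the variable whose removal maximizes the I-score;
    return the best-scoring subset seen along the way."""
    current = list(vars_)
    best_set, best_score = list(current), i_score(X, y, current)
    while len(current) > 1:
        score, drop = max((i_score(X, y, [v for v in current if v != d]), d)
                          for d in current)
        current.remove(drop)
        if score >= best_score:
            best_set, best_score = list(current), score
    return best_set

# Toy data: y is the XOR of variables 0 and 1; variable 2 is pure noise.
X = [(0,0,0),(0,1,1),(1,0,0),(1,1,1),(0,0,1),(0,1,0),(1,0,1),(1,1,0)]
y = [0, 1, 1, 0, 0, 1, 1, 0]
```

On this toy XOR example, neither variable 0 nor 1 is informative alone, but the pair jointly determines y, so backward dropping recovers the subset {0, 1}; this joint-detection behavior is exactly what makes the method attractive for interaction search.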
In this work, we explore the connection between the I-Score variable selection methodology and other existing methods, and we extend the idea to develop other objective functions that can be used in further applications. We apply our ideas to three datasets: a genome-wide association study (GWAS), the New York City Vision Zero data, and the MNIST handwritten digit database.
In these applications, we show that the interpretability of the representations can be useful in practice and provides practitioners with far more intuition when explaining their results. We also present a novel way to view causal inference problems through the lens of partition-based models.
We hope this work serves as an initiative for approaching problems from a different angle and for bringing interpretability into consideration when building a model, so that the model can more easily be used to communicate with people from other fields.
Predictive Coding: a Theoretical and Experimental Review
Predictive coding offers a potentially unifying account of cortical function, postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of research has arisen, based both on empirically testing improved and extended theoretical and mathematical models of predictive coding, and on evaluating their potential biological plausibility for implementation in the brain, as well as the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in the field, exists. Here, we provide a comprehensive review of the core mathematical structure and logic of predictive coding, complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely used backpropagation of error algorithm, and we survey the close relationships between predictive coding and modern machine learning techniques.
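The core idea of minimizing prediction errors against a generative model can be illustrated with the textbook single-variable Gaussian example familiar from predictive-coding tutorials (a generic illustration, not code from this review): a latent estimate is nudged by two precision-weighted prediction errors, one from the observation and one from the prior, until it settles at the Bayesian posterior mean.

```python
def infer(x, v_prior, var_x=1.0, var_p=1.0, lr=0.05, steps=2000):
    """Minimal predictive-coding inference for a scalar Gaussian model.

    Generative model: x ~ N(mu, var_x), with prior mu ~ N(v_prior, var_p).
    Performs gradient descent on the prediction-error energy
    F = (x - mu)^2 / (2*var_x) + (mu - v_prior)^2 / (2*var_p).
    """
    mu = v_prior  # start inference at the prior mean
    for _ in range(steps):
        eps_x = (x - mu) / var_x        # bottom-up (sensory) prediction error
        eps_p = (mu - v_prior) / var_p  # top-down (prior) prediction error
        mu += lr * (eps_x - eps_p)      # descend the energy gradient on mu
    return mu

# With equal variances the estimate converges to the precision-weighted
# average of observation and prior: (2.0 + 0.0) / 2 = 1.0.
mu_hat = infer(x=2.0, v_prior=0.0)
```

The same error-driven dynamics, stacked hierarchically with learned weights, is what the more elaborate models surveyed in the review build on.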
Machine learning in bioprocess development: From promise to practice
Fostered by novel analytical techniques, digitalization, and automation, modern bioprocess development yields large amounts of heterogeneous experimental data containing valuable process information. In this context, data-driven methods such as machine learning (ML) approaches have high potential to rationally explore large design spaces while exploiting experimental facilities most efficiently. The aim of this review is to demonstrate how ML methods have been applied so far in bioprocess development, especially in strain engineering and selection, bioprocess optimization, scale-up, and the monitoring and control of bioprocesses. For each topic, we highlight successful application cases and current challenges, and we point out domains that can potentially benefit from technology transfer and further progress in the field of ML.