1,105 research outputs found
Closed-loop optimization of fast-charging protocols for batteries with machine learning.
Simultaneously optimizing many design parameters in time-consuming experiments causes bottlenecks in a broad range of scientific and engineering disciplines [1,2]. One such example is process and control optimization for lithium-ion batteries during materials selection, cell manufacturing and operation. A typical objective is to maximize battery lifetime; however, conducting even a single experiment to evaluate lifetime can take months to years [3-5]. Furthermore, both large parameter spaces and high sampling variability [3,6,7] necessitate a large number of experiments. Hence, the key challenge is to reduce both the number and the duration of the experiments required. Here we develop and demonstrate a machine learning methodology to efficiently optimize a parameter space specifying the current and voltage profiles of six-step, ten-minute fast-charging protocols for maximizing battery cycle life, which can alleviate range anxiety for electric-vehicle users [8,9]. We combine two key elements to reduce the optimization cost: an early-prediction model [5], which reduces the time per experiment by predicting the final cycle life using data from the first few cycles, and a Bayesian optimization algorithm [10,11], which reduces the number of experiments by balancing exploration and exploitation to efficiently probe the parameter space of charging protocols. Using this methodology, we rapidly identify high-cycle-life charging protocols among 224 candidates in 16 days (compared with over 500 days using exhaustive search without early prediction), and subsequently validate the accuracy and efficiency of our optimization approach. Our closed-loop methodology automatically incorporates feedback from past experiments to inform future decisions and can be generalized to other applications in battery design and, more broadly, other scientific domains that involve time-intensive experiments and multi-dimensional design spaces.
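The closed loop described above (propose a protocol, run a shortened experiment scored by the early predictor, update the surrogate, repeat) can be sketched in miniature. This is not the paper's method: the Gaussian-process Bayesian optimization is replaced by a simple UCB bandit rule over a small hypothetical protocol space, and the early-prediction model is stubbed as a noisy lookup; all values are invented for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical "true" cycle lives for 8 candidate protocols
# (the paper's space has 224 candidates); peak is at protocol 3.
TRUE_LIFE = {p: 900 - 30 * (p - 3) ** 2 for p in range(8)}

def early_prediction(protocol):
    """Stand-in for the early-outcome predictor: a noisy estimate of
    final cycle life from only the first few cycles (noise invented)."""
    return TRUE_LIFE[protocol] + random.gauss(0, 5)

def ucb_round(estimates, counts, t, beta=2.0):
    """One exploration/exploitation step; UCB stands in for the paper's
    Bayesian optimization over a surrogate model."""
    def score(p):
        if counts[p] == 0:
            return float("inf")          # try each protocol once first
        return estimates[p] + beta * math.sqrt(math.log(t + 1) / counts[p])
    return max(estimates, key=score)

estimates = {p: 0.0 for p in TRUE_LIFE}
counts = {p: 0 for p in TRUE_LIFE}
for t in range(60):                      # closed loop: pick, test, update
    p = ucb_round(estimates, counts, t)
    y = early_prediction(p)              # short experiment, not full lifetime
    counts[p] += 1
    estimates[p] += (y - estimates[p]) / counts[p]  # running mean

best = max(estimates, key=estimates.get)
print("best protocol:", best)
```

Because each "experiment" costs only a few cycles and the bandit concentrates trials on promising protocols, the loop finds the best protocol with far fewer full-lifetime tests than exhaustive search would need.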
Biomedical knowledge discovery through continuous representations of multi-relational graphs
Knowledge graphs are multi-relational graph structures that organize data in a way that is not only queryable but that also allows implicit knowledge to be inferred by humans and, particularly, machines. In recent years, new methods have been developed to maximize the knowledge that can be extracted from these structures, especially in the machine learning field. Knowledge graph embedding (KGE) strategies map the data of these graphs to a lower-dimensional space to facilitate downstream tasks such as link prediction or node classification. In this work, the capabilities and limitations of using these techniques to derive new knowledge from pre-existing biomedical networks were explored, since this is a field that has not only seen efforts toward converting its large knowledge bases into knowledge graphs, but that can also make use of the predictive capabilities of these models to accelerate research. To this end, several KGE models were studied, and a pipeline was created to obtain and train such models on different biomedical datasets. The results show that these models can make accurate predictions on some datasets, but that their performance can be hampered by inherent characteristics of the networks. Additionally, with the knowledge acquired during this research, a notebook was created that aims to be an entry point for other researchers interested in exploring this field. (Mestrado em Engenharia de Computadores e Telemática; Master's in Computer and Telematics Engineering)
Learning with multiple pairwise kernels for drug bioactivity prediction
Motivation: Many inference problems in bioinformatics, including drug bioactivity prediction, can be formulated as pairwise learning problems, in which one is interested in making predictions for pairs of objects, e.g. drugs and their targets. Kernel-based approaches have emerged as powerful tools for solving problems of that kind, and multiple kernel learning (MKL) in particular offers promising benefits, as it enables integrating various types of complex biomedical information sources in the form of kernels, along with learning their importance for the prediction task. However, the immense size of pairwise kernel spaces remains a major bottleneck, making the existing MKL algorithms computationally infeasible even for small numbers of input pairs. Results: We introduce pairwiseMKL, the first method for time- and memory-efficient learning with multiple pairwise kernels. pairwiseMKL first determines the mixture weights of the input pairwise kernels and then learns the pairwise prediction function. Both steps are performed efficiently without explicit computation of the massive pairwise matrices, making the method applicable to large pairwise learning problems. We demonstrate the performance of pairwiseMKL in two related tasks of quantitative drug bioactivity prediction, using up to 167,995 bioactivity measurements and 3,120 pairwise kernels: (i) prediction of anticancer efficacy of drug compounds across a large panel of cancer cell lines; and (ii) prediction of target profiles of anticancer compounds across their kinome-wide target spaces. We show that pairwiseMKL provides accurate predictions using solutions that are sparse in terms of selected kernels, and it therefore also automatically identifies the data sources relevant for the prediction problem.
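The structural trick that makes pairwise kernels tractable can be illustrated directly: for product-type pairwise kernels, an entry of the enormous pairwise kernel matrix factorizes into a drug-kernel entry times a cell-line-kernel entry, so entries can be evaluated on demand instead of materializing the full matrix. The sketch below uses invented toy features and plain RBF kernels; it shows the factorization idea only, not the pairwiseMKL algorithm itself.

```python
import math

# Toy drug and cell-line feature vectors (hypothetical values)
drugs = {"d1": [0.2, 0.9], "d2": [0.4, 0.1], "d3": [0.8, 0.5]}
cells = {"c1": [1.0, 0.0, 0.3], "c2": [0.2, 0.7, 0.9]}

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def pairwise_kernel(pair_a, pair_b):
    """Kronecker-product pairwise kernel: the similarity of two
    (drug, cell) pairs is K_drug(d, d') * K_cell(c, c'). The full
    pairwise matrix over all n_drugs * n_cells pairs never needs to
    be stored; each entry is computed from the two small kernels."""
    (da, ca), (db, cb) = pair_a, pair_b
    return rbf(drugs[da], drugs[db]) * rbf(cells[ca], cells[cb])

k_same = pairwise_kernel(("d1", "c1"), ("d1", "c1"))  # identical pairs
k_diff = pairwise_kernel(("d1", "c1"), ("d2", "c2"))  # different pairs
print(k_same, k_diff)
```

With, say, 500 drugs and 300 cell lines, the explicit pairwise matrix would have (500 × 300)² = 2.25 × 10¹⁰ entries per kernel, which is exactly the memory wall the abstract refers to; the factorized evaluation above sidesteps it.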
Toward real-world automated antibody design with combinatorial Bayesian optimization
Antibodies are multimeric proteins capable of highly specific molecular recognition. The complementarity determining region 3 of the antibody variable heavy chain (CDRH3) often dominates antigen-binding specificity. Hence, it is a priority to design optimal antigen-specific CDRH3 sequences to develop therapeutic antibodies. The combinatorial structure of CDRH3 sequences makes it impossible to query binding-affinity oracles exhaustively. Moreover, antibodies are expected to have high target specificity and developability. Here, we present AntBO, a combinatorial Bayesian optimization framework utilizing a CDRH3 trust region for the in silico design of antibodies with favorable developability scores. The in silico experiments on 159 antigens demonstrate that AntBO is a step toward practically viable in vitro antibody design. In under 200 calls to the oracle, AntBO suggests antibodies outperforming the best binding sequence from 6.9 million experimentally obtained CDRH3s. Additionally, AntBO finds a very-high-affinity CDRH3 in only 38 protein designs while requiring no domain knowledge.
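A rough intuition for searching a combinatorial sequence space under a tight oracle budget with a trust region can be sketched as follows. This is not AntBO: the Bayesian surrogate is replaced by greedy hill-climbing within a Hamming-distance-1 neighborhood of the incumbent (a crude trust region), and the binding oracle, target sequence, and budget are invented toys.

```python
import random

random.seed(2)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
LENGTH = 8
TARGET = "ACDKWYHH"                 # hidden optimum of the toy oracle

def oracle(seq):
    """Toy binding-affinity oracle (invented): positions matching a
    hidden target. Real oracles are expensive, hence the call budget."""
    return sum(a == b for a, b in zip(seq, TARGET))

def neighbors(seq, n=20):
    # Trust region: candidates within Hamming distance 1 of the
    # incumbent, loosely mimicking a restricted CDRH3 search region.
    out = []
    for _ in range(n):
        s = list(seq)
        i = random.randrange(LENGTH)
        s[i] = random.choice(ALPHABET)
        out.append("".join(s))
    return out

best = "".join(random.choice(ALPHABET) for _ in range(LENGTH))
best_score, calls = oracle(best), 1
while calls < 600 and best_score < LENGTH:
    for cand in neighbors(best):
        s = oracle(cand)
        calls += 1
        if s > best_score:           # accept only strict improvements
            best, best_score = cand, s
print(best_score, calls)
```

The budget accounting (every oracle query is counted) mirrors how sample efficiency is reported in the abstract; a Bayesian surrogate spends that budget far more cleverly than this greedy sketch does.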
Big-Data Science in Porous Materials: Materials Genomics and Machine Learning
By combining metal nodes with organic linkers we can potentially synthesize
millions of possible metal organic frameworks (MOFs). At present, we have
libraries of over ten thousand synthesized materials and millions of in-silico
predicted materials. The fact that we have so many materials opens many
exciting avenues to tailor make a material that is optimal for a given
application. However, from an experimental and computational point of view we
simply have too many materials to screen using brute-force techniques. In this
review, we show that having so many materials allows us to use big-data methods
as a powerful technique to study these materials and to discover complex
correlations. The first part of the review gives an introduction to the
principles of big-data science. We emphasize the importance of data collection,
methods to augment small data sets, how to select appropriate training sets. An
important part of this review are the different approaches that are used to
represent these materials in feature space. The review also includes a general
overview of the different ML techniques, but as most applications in porous
materials use supervised ML our review is focused on the different approaches
for supervised ML. In particular, we review the different method to optimize
the ML process and how to quantify the performance of the different methods. In
the second part, we review how the different approaches of ML have been applied
to porous materials. In particular, we discuss applications in the field of gas
storage and separation, the stability of these materials, their electronic
properties, and their synthesis. The range of topics illustrates the large
variety of topics that can be studied with big-data science. Given the
increasing interest of the scientific community in ML, we expect this list to
rapidly expand in the coming years.Comment: Editorial changes (typos fixed, minor adjustments to figures
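As a toy illustration of the supervised-ML workflow the review covers (descriptor features, a training/test split, and quantified performance), the sketch below fits a least-squares model to synthetic material descriptors and reports test RMSE. The two descriptors, the target property, and all data are invented for illustration; real MOF studies use far richer feature representations.

```python
import random

random.seed(0)

# Synthetic "materials": two descriptors (think pore size and surface
# area, in arbitrary units) and a noisy target property.
X = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(80)]
y = [2.0 * a - 1.0 * b + 0.5 + random.gauss(0, 0.05) for a, b in X]

train_X, test_X = X[:60], X[60:]      # hold out 20 materials for testing
train_y, test_y = y[:60], y[60:]

def fit_ridge(X, y, lam=1e-3):
    """Ridge regression with intercept via the normal equations,
    solved by Gaussian elimination (a minimal stand-in for the
    supervised models the review surveys)."""
    Z = [[1.0] + row for row in X]    # prepend bias column
    d, n = len(Z[0]), len(Z)
    A = [[sum(Z[k][i] * Z[k][j] for k in range(n))
          + (lam if i == j else 0.0) for j in range(d)] for i in range(d)]
    b = [sum(Z[k][i] * y[k] for k in range(n)) for i in range(d)]
    for i in range(d):                # forward elimination
        for j in range(i + 1, d):
            f = A[j][i] / A[i][i]
            for k in range(i, d):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    w = [0.0] * d
    for i in reversed(range(d)):      # back substitution
        w[i] = (b[i] - sum(A[i][k] * w[k] for k in range(i + 1, d))) / A[i][i]
    return w

w = fit_ridge(train_X, train_y)
preds = [w[0] + w[1] * a + w[2] * b for a, b in test_X]
rmse = (sum((p - t) ** 2 for p, t in zip(preds, test_y)) / len(test_y)) ** 0.5
print("test RMSE:", round(rmse, 4))
```

Held-out RMSE is one of the performance-quantification measures the review discusses; keeping the test materials out of training is what makes the number an honest estimate of screening accuracy.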
SVSBI: Sequence-based virtual screening of biomolecular interactions
Virtual screening (VS) is an essential technique for understanding biomolecular interactions, particularly in drug design and discovery. The best-performing VS models depend vitally on three-dimensional (3D) structures, which are not available in general but can be obtained from molecular docking. However, current docking accuracy is relatively low, rendering VS models unreliable. We introduce sequence-based virtual screening (SVS) as a new generation of VS models for modeling biomolecular interactions. The SVS model utilizes advanced natural language processing (NLP) algorithms and optimized deep-embedding strategies to encode biomolecular interactions without invoking 3D structure-based docking. We demonstrate the state-of-the-art performance of SVS on four regression datasets involving protein-ligand binding, protein-protein binding, protein-nucleic acid binding, and ligand inhibition of protein-protein interactions, and on five classification datasets for protein-protein interactions in five biological species. SVS has the potential to dramatically change the current practice in drug discovery and protein engineering.
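The core idea, encoding a sequence as a fixed-length vector without any 3D structure or docking, can be illustrated with a deliberately crude stand-in for the learned NLP embeddings: a normalized bag-of-2-mers vector. The peptide sequences below are invented, and a real SVS-style model would feed such embeddings of both interaction partners into a downstream predictor.

```python
import math
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"
# Fixed vocabulary of all 400 amino-acid 2-mers
VOCAB = {"".join(p): i for i, p in enumerate(product(AMINO, repeat=2))}

def kmer_embedding(seq, k=2):
    """L2-normalized bag-of-2-mers vector: a crude stand-in for a
    learned sequence embedding; no 3D information is used."""
    v = [0.0] * len(VOCAB)
    for i in range(len(seq) - k + 1):
        v[VOCAB[seq[i:i + k]]] += 1.0
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def cosine(a, b):
    """Cosine similarity of two unit-normalized embeddings."""
    return sum(x * y for x, y in zip(a, b))

# Two similar invented peptides versus a dissimilar one
a = kmer_embedding("MKTAYIAKQR")
b = kmer_embedding("MKTAYIAKQL")   # one residue changed
c = kmer_embedding("GGGGGGGGGG")   # unrelated homopolymer
print(cosine(a, b), cosine(a, c))
```

Because the embedding lives in a fixed-dimensional space, any regressor or classifier can be trained on top of it, which is the sense in which sequence-based screening bypasses unreliable docking poses.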