Randomized SMILES strings improve the quality of molecular generative models
Recurrent neural networks (RNNs) trained on sets of molecules represented as unique (canonical) SMILES strings have shown the capacity to create large chemical spaces of valid and meaningful structures. Herein we perform an extensive benchmark on models trained with subsets of GDB-13 of different sizes (1 million, 10,000 and 1,000 molecules), with different SMILES variants (canonical, randomized and DeepSMILES), with two different recurrent cell types (LSTM and GRU) and with different hyperparameter combinations. To guide the benchmarks, new metrics were developed that define how well a model has generalized the training set. The generated chemical space is evaluated with respect to its uniformity, closedness and completeness. Results show that models using LSTM cells trained with 1 million randomized SMILES, a non-unique molecular string representation, are able to generalize to larger chemical spaces than the other approaches and represent the target chemical space more accurately. Specifically, a model trained with randomized SMILES was able to generate almost all molecules from GDB-13 with a quasi-uniform probability. Models trained on smaller samples show an even bigger improvement when trained with randomized SMILES. Additionally, models were trained on molecules obtained from ChEMBL and again illustrate that training with randomized SMILES leads to models with a better representation of drug-like chemical space. Namely, the model trained with randomized SMILES was able to generate at least double the number of unique molecules, with the same distribution of properties, compared to one trained with canonical SMILES.
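The core idea of randomized SMILES is that the same molecule admits many valid string serializations, depending on which atom the traversal starts from and the order in which branches are visited (in practice RDKit exposes this via `Chem.MolToSmiles(mol, doRandom=True)`). The toy sketch below, using a hypothetical hand-coded adjacency list rather than a real cheminformatics toolkit, illustrates how one molecular graph yields several distinct SMILES-like strings:

```python
import random

# Toy branched "molecule": isobutane as an adjacency list
# (atom index -> neighbours); all atoms are carbon here.
MOL = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
SYMBOL = {i: "C" for i in MOL}

def random_serialization(mol, rng):
    """Emit one SMILES-like string via a depth-first walk that starts
    at a random atom and visits neighbours in random order."""
    start = rng.choice(list(mol))
    seen = set()

    def walk(atom):
        seen.add(atom)
        nbrs = [n for n in mol[atom] if n not in seen]
        rng.shuffle(nbrs)
        out = SYMBOL[atom]
        for i, n in enumerate(nbrs):
            sub = walk(n)
            # the last branch continues the chain; earlier ones are parenthesized
            out += sub if i == len(nbrs) - 1 else "(" + sub + ")"
        return out

    return walk(start)

rng = random.Random(0)
variants = {random_serialization(MOL, rng) for _ in range(50)}
# Distinct strings such as 'C(C)(C)C' and 'CC(C)C' all encode isobutane.
print(sorted(variants))
```

Training on many such variants per molecule is what the benchmarked "randomized SMILES" models see, in contrast to the single canonical string per molecule.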
Deep Generative Model for Sparse Graphs using Text-Based Learning with Augmentation in Generative Examination Networks
Graphs and networks are a key research tool for a variety of science fields,
most notably chemistry, biology, engineering and the social sciences. Modeling
and generation of graphs with efficient sampling remain a key challenge. In
particular, the non-uniqueness, high dimensionality of the vertices and local
dependencies of the edges may render the task challenging. We apply our
recently introduced method, Generative Examination Networks (GENs) to create
the first text-based generative graph models using one-line text formats as
graph representation. In our GEN, an RNN generative model for a one-line text
format learns autonomously to predict the next available character. The
training is stopped by an examination mechanism that checks the percentage of
valid graphs generated. We achieved moderate to high validity
using dense g6 strings (random 67.8 +/- 0.6, canonical 99.1 +/- 0.2). Based on
these results we have adapted the widely used SMILES representation for
molecules to a new input format, which we call linear graph input (LGI). Apart
from the benefits of a short, compressible text format, a major advantage is
the possibility to randomize and augment the format. The generative
models are evaluated for overall performance and for reconstruction of the
property space. The results show that LGI strings are very well suited for
machine-learning and that augmentation is essential for the performance of the
model in terms of validity, uniqueness and novelty. Lastly, the format can
address both smaller and larger datasets of graphs, can easily be adapted to
assign other meanings to the characters used in the LGI string, and can
address sparse graph problems in other fields of science.
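The dense g6 (graph6) strings used above are a standard one-line text format for simple undirected graphs: the adjacency matrix's upper triangle is packed, column by column, into 6-bit groups offset into printable ASCII. A minimal sketch of the encoder for small graphs (n < 63), written against the published graph6 format:

```python
def to_graph6(n, edges):
    """Encode a simple undirected graph on n vertices as a graph6
    string (n < 63 only). The size byte is chr(63 + n); adjacency bits
    are read in column order and packed 6 at a time, each group +63."""
    assert 0 <= n < 63
    adj = {(min(a, b), max(a, b)) for a, b in edges}
    # upper-triangle bits in column order: (0,1), (0,2), (1,2), (0,3), ...
    bits = [1 if (i, j) in adj else 0 for j in range(1, n) for i in range(j)]
    bits += [0] * (-len(bits) % 6)          # right-pad to a multiple of 6
    chars = [chr(63 + n)]                    # size byte
    for k in range(0, len(bits), 6):
        value = int("".join(map(str, bits[k:k + 6])), 2)
        chars.append(chr(63 + value))
    return "".join(chars)

# The triangle K3 encodes to 'Bw'
print(to_graph6(3, [(0, 1), (0, 2), (1, 2)]))
```

Because the vertex numbering is arbitrary, relabeling the vertices yields different g6 strings for the same graph, which is exactly the degree of freedom the paper's randomization and augmentation exploit.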
Learning by stochastic serializations
Complex structures are typical in machine learning. Tailoring learning
algorithms for every structure requires an effort that may be saved by defining
a generic learning procedure adaptive to any complex structure. In this paper,
we propose to map any complex structure onto a generic form, called
serialization, over which we can apply any sequence-based density estimator. We
then show how to transfer the learned density back onto the space of original
structures. To expose the learning procedure to the structural particularities
of the original structures, we take care that the serializations reflect
accurately the structures' properties. Enumerating all serializations is
infeasible. We propose an effective way to sample representative serializations
from the complete set of serializations which preserves the statistics of the
complete set. Our method is competitive with or better than state-of-the-art
learning algorithms that have been specifically designed for given structures.
In addition, since the serialization involves sampling from a combinatorial
process, it provides considerable protection from overfitting, which we
clearly demonstrate in a number of experiments.
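The key sampling step above can be sketched in miniature. Taking an unordered set as the simplest "complex structure" (the paper treats richer structures), each permutation of its elements is one serialization; drawing permutations uniformly at random samples from the n!-sized combinatorial set without enumerating it, while preserving its statistics:

```python
import random
from collections import Counter

def sample_serializations(structure, k, rng):
    """Draw k serializations of an unordered structure (here: a set of
    tokens) uniformly from the n! possible orderings, without ever
    enumerating the full combinatorial set."""
    items = sorted(structure)
    draws = []
    for _ in range(k):
        perm = items[:]
        rng.shuffle(perm)          # uniform over all permutations
        draws.append(tuple(perm))
    return draws

rng = random.Random(42)
draws = sample_serializations({"a", "b", "c"}, 6000, rng)
# Uniform sampling preserves the statistics of the complete set:
# each token leads a serialization about 1/3 of the time.
firsts = Counter(s[0] for s in draws)
print(firsts)
```

A sequence-based density estimator trained on such draws sees the structure through many serializations, which is the source of the overfitting protection claimed in the abstract.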
A de novo molecular generation method using latent vector based generative adversarial network
Deep learning methods applied to drug discovery have been used to generate novel structures. In this study, we propose a new deep learning architecture, LatentGAN, which combines an autoencoder and a generative adversarial neural network for de novo molecular design. We applied the method in two scenarios: one to generate random drug-like compounds and another to generate target-biased compounds. Our results show that the method works well in both cases. Sampled compounds from the trained model can largely occupy the same chemical space as the training set and also include a substantial fraction of novel compounds. Moreover, the drug-likeness score of compounds sampled from LatentGAN is similar to that of the training set. Lastly, generated compounds differ from those obtained with a recurrent neural network-based generative model, indicating that both methods can be used complementarily.
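The LatentGAN data flow is: train an autoencoder mapping SMILES to latent vectors, train a GAN on those latent vectors, then push generator samples back through the decoder. The sketch below only shows that pipeline shape; the encoder, decoder, and generator are hypothetical stubs (character codes, nearest-neighbour lookup, Gaussian noise), not the paper's actual heteroencoder or adversarial networks, so unlike the real model this stub decoder can only return training molecules:

```python
import random
import statistics

def encode(smiles):
    # Stub "encoder": fixed-width vector of character codes, not a
    # learned latent representation.
    return [ord(c) / 100.0 for c in smiles.ljust(8)[:8]]

def decode(latent, vocabulary):
    # Stub "decoder": nearest training molecule in the toy latent space.
    return min(vocabulary,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(encode(s), latent)))

training_set = ["CCO", "CCN", "CCC", "CCCl"]
latents = [encode(s) for s in training_set]

# Stub "generator": the real model trains a GAN to match the latent
# distribution; here we just sample noise around the mean latent.
mean = [statistics.mean(col) for col in zip(*latents)]
rng = random.Random(0)
generated_latent = [m + rng.gauss(0, 0.01) for m in mean]

print(decode(generated_latent, training_set))
```

The point of the indirection is that the GAN never sees discrete SMILES at all: it operates purely on continuous latent vectors, which sidesteps the non-differentiability of text generation.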
Faster and more diverse de novo molecular optimization with double-loop reinforcement learning using augmented SMILES
Combining generative deep learning models with reinforcement learning can
effectively generate new molecules with desired properties. By employing a
multi-objective scoring function, thousands of high-scoring molecules can be
generated, making this approach useful for drug discovery and material science.
However, the application of these methods can be hindered by computationally
expensive or time-consuming scoring procedures, particularly when a large
number of function calls are required as feedback in the reinforcement learning
optimization. Here, we propose the use of double-loop reinforcement learning
with simplified molecular-input line-entry system (SMILES) augmentation to improve
the efficiency and speed of the optimization. By adding an inner loop that
augments the generated SMILES strings to non-canonical SMILES for use in
additional reinforcement learning rounds, we can both reuse the scoring
calculations on the molecular level, thereby speeding up the learning process,
as well as offer additional protection against mode collapse. We find that
employing between 5 and 10 augmentation repetitions is optimal for the scoring
functions tested and is further associated with an increased diversity in the
generated compounds, improved reproducibility of the sampling runs and the
generation of molecules of higher similarity to known ligands.
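The reuse of "scoring calculations on the molecular level" comes down to caching: augmented (randomized) SMILES of one molecule are different strings but the same molecule, so the expensive score is computed once per canonical form and replayed in every inner-loop round. A minimal stdlib sketch of that idea, with a hypothetical `canonicalize` stub standing in for a real toolkit's canonicalization:

```python
import functools

CALLS = {"n": 0}  # count expensive-oracle invocations to show the reuse

def expensive_score(canonical):
    CALLS["n"] += 1
    return len(canonical)          # stand-in for docking, QSAR, etc.

def canonicalize(smiles):
    # Stub: in practice RDKit would map any randomized SMILES of a
    # molecule to its unique canonical form; these toy variants just
    # share a sorted-character key.
    return "".join(sorted(smiles))

@functools.lru_cache(maxsize=None)
def score(canonical):
    return expensive_score(canonical)

# Inner loop: five augmented SMILES of the same toy molecule
variants = ["CC(C)C", "C(C)CC", "CC(C)C", "C(C)CC", "CC(C)C"]
rewards = [score(canonicalize(v)) for v in variants]
print(rewards, CALLS["n"])  # five rewards, but only one expensive call
```

Each inner-loop round still provides a fresh reinforcement learning signal (a different string with the same reward), which is what yields the speed-up and the extra protection against mode collapse described above.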
LibINVENT: Reaction-based Generative Scaffold Decoration for in Silico Library Design
Because of the strong relationship between a molecule's desired activity and its structural core, the screening of focused, core-sharing chemical libraries is a key step in lead optimization. Despite the plethora of current research focused on in silico methods for molecule generation, to our knowledge, no tool capable of designing such libraries has been proposed. In this work, we present a novel tool for de novo drug design called LibINVENT. It is capable of rapidly proposing chemical libraries of compounds sharing the same core while maximizing a range of desirable properties. To further help the process of designing focused libraries, the user can list specific chemical reactions that can be used for the library creation. LibINVENT is therefore a flexible tool for generating virtual chemical libraries for lead optimization in a broad range of scenarios. Additionally, the shared core ensures that the compounds in the library are similar, possess desirable properties, and can also be synthesized under the same or similar conditions. The LibINVENT code is freely available in our public repository at https://github.com/MolecularAI/Lib-INVENT. The code necessary for data preprocessing is further available at https://github.com/MolecularAI/Lib-INVENT-dataset.
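Scaffold decoration of the kind LibINVENT performs can be pictured as filling open attachment points on a shared core with generated R-groups. The toy below does this by plain string substitution on `[*]` markers; a real tool works on molecular graphs with valence and reaction-filter checks, so this is only an illustration of the input/output shape:

```python
def decorate(scaffold, decorations):
    """Toy scaffold decoration: fill each '[*]' attachment point in the
    scaffold with the next decoration, left to right."""
    out = scaffold
    for d in decorations:
        assert "[*]" in out, "more decorations than attachment points"
        out = out.replace("[*]", d, 1)
    return out

# A benzene scaffold with two open positions, decorated two ways;
# every product shares the same core, as in a focused library.
scaffold = "c1cc([*])ccc1[*]"
print(decorate(scaffold, ["O", "N"]))   # -> 'c1cc(O)ccc1N'
print(decorate(scaffold, ["Cl", "F"]))  # -> 'c1cc(Cl)ccc1F'
```

Enumerating many decoration sets against one scaffold is what produces a core-sharing virtual library; the generative model's job is to propose decorations that score well while remaining compatible with the user's listed reactions.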