Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets
Recently, pre-trained foundation models have enabled significant advancements
in multiple fields. In molecular machine learning, however, where datasets are
often hand-curated and hence typically small, the lack of datasets with
labeled features, and of codebases to manage those datasets, has hindered the
development of foundation models. In this work, we present seven novel
datasets, grouped by size into three categories: ToyMix, LargeMix and
UltraLarge. These datasets push the boundaries in both the scale and the
diversity of supervised labels for molecular learning. They cover nearly 100
million molecules and over 3000 sparsely defined tasks, totaling more than 13
billion individual labels of both quantum and biological nature. In comparison,
our datasets contain 300 times more data points than the widely used OGB-LSC
PCQM4Mv2 dataset, and 13 times more than the quantum-only QM1B dataset. In
addition, to support the development of foundational models based on our
proposed datasets, we present the Graphium graph machine learning library which
simplifies the process of building and training molecular machine learning
models for multi-task and multi-level molecular datasets. Finally, we present a
range of baseline results as a starting point of multi-task and multi-level
training on these datasets. Empirically, we observe that performance on
low-resource biological datasets improves when models are also trained on
large amounts of quantum data. This indicates that there may be potential in
multi-task and multi-level training of a foundation model followed by
fine-tuning on resource-constrained downstream tasks.
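With over 3000 sparsely defined tasks, most label entries are missing for any given molecule, so training hinges on masking undefined labels out of the loss. Below is a minimal sketch of such a masked multi-task loss in PyTorch; the encoder, head, feature dimension, and tensor shapes are illustrative assumptions, not Graphium's actual API.

```python
import torch
import torch.nn as nn

n_tasks = 3000  # order of magnitude from the datasets described above

# Illustrative shared encoder + multi-task head (assumed shapes, not
# Graphium's actual architecture or API).
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU())
head = nn.Linear(256, n_tasks)

def masked_multitask_loss(features, labels, label_mask):
    """MSE computed only over the sparse labels that are defined.

    features:   (batch, 128) molecular features
    labels:     (batch, n_tasks) targets, arbitrary where undefined
    label_mask: (batch, n_tasks) 1.0 where a label exists, else 0.0
    """
    preds = head(encoder(features))
    per_label = (preds - labels) ** 2 * label_mask
    # Normalize by the number of defined labels to keep the scale stable.
    return per_label.sum() / label_mask.sum().clamp(min=1.0)

# Toy batch: 4 molecules with roughly 1% of labels defined.
x = torch.randn(4, 128)
y = torch.zeros(4, n_tasks)
mask = (torch.rand(4, n_tasks) < 0.01).float()
print(masked_multitask_loss(x, y, mask))
```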
Real-World Molecular Out-Of-Distribution: Specification and Investigation
This study presents a rigorous framework for investigating Molecular Out-Of-Distribution (MOOD) generalization in drug discovery. The concept of MOOD is first clarified through a problem specification that demonstrates how the covariate shifts encountered during real-world deployment can be characterized by the distribution of sample distances to the training set. We find that these shifts can cause performance to drop by up to 60% and uncertainty calibration to degrade by up to 40%. This leads us to propose a splitting protocol that aims to close the gap between deployment and testing. Using this protocol, we then conduct a thorough investigation of the impact of model design, model selection, and dataset characteristics on MOOD performance and uncertainty calibration. We find that appropriate representations and algorithms with built-in uncertainty estimation are crucial for improving performance and uncertainty calibration. This study sets itself apart by its exhaustiveness and opens an exciting avenue for benchmarking meaningful algorithmic progress in molecular scoring. All related code can be found on GitHub at https://github.com/valence-labs/mood-experiments.
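Since the protocol above characterizes covariate shift by sample distances to the training set, a natural proxy is the Tanimoto distance from each test molecule to its nearest training neighbor. Here is a minimal sketch using RDKit Morgan fingerprints; the fingerprint settings and the helper name are assumptions, not necessarily the paper's exact choices.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def nearest_train_distance(test_smiles, train_smiles):
    """Tanimoto distance from each test molecule to its closest training
    molecule: an assumed proxy for distance-to-training-set."""
    fp = lambda smi: AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smi), radius=2, nBits=2048)
    train_fps = [fp(s) for s in train_smiles]
    dists = []
    for smi in test_smiles:
        sims = DataStructs.BulkTanimotoSimilarity(fp(smi), train_fps)
        dists.append(1.0 - max(sims))  # distance = 1 - max similarity
    return dists

# Toy usage: ethanol tested against two small training molecules.
print(nearest_train_distance(["CCO"], ["CCN", "CCCO"]))
```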
datamol-io/datamol: 0.11.5
Fixes:
- Improve the ChEMBL drugs dataset (PR #214)
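For context, the dataset touched by this fix can be pulled straight from the library. A minimal sketch, assuming the accessor is dm.data.chembl_drugs() (the release note does not name the function, so treat this as an assumption and verify against your installed datamol version):

```python
import datamol as dm

# Assumption: the curated ChEMBL drugs dataset improved in PR #214 is
# exposed as dm.data.chembl_drugs(); check the datamol docs if not.
df = dm.data.chembl_drugs()
print(df.shape)
print(df.head())
```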
datamol-io/datamol: 0.12.2
Features:
- Add rdkit as a PyPI dependency (PR #219)
datamol-io/datamol: 0.12.1
Features:
- Added a function to get the number of stereoisomers (PR #217)
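The stereoisomer count is straightforward to reproduce via RDKit, which datamol wraps. A minimal sketch using the known RDKit call; the datamol-side helper added in PR #217 is not named in the note, so only the RDKit route is shown and the equivalence is an assumption.

```python
import datamol as dm
from rdkit.Chem.EnumerateStereoisomers import GetStereoisomerCount

# Molecule with two stereocenters and one stereogenic double bond.
mol = dm.to_mol("CC(O)C(N)C=CC")

# Known RDKit route to the number of possible stereoisomers; the
# datamol helper from PR #217 presumably wraps something similar.
print(GetStereoisomerCount(mol))  # e.g. 8 for this molecule
```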
datamol-io/datamol: 0.12.0
Features:
- Compatibility with the latest RDKit 2023.09 (PR #216)
datamol-io/datamol: 0.12.3
Uncategorized:
- Allow additional arguments for colors in lasso highlighting (PR #223)
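To illustrate what the color change touches, here is a minimal sketch of lasso highlighting with datamol's dm.viz.lasso_highlight_image; the color_list keyword is an assumption inferred from this release note, so the exact argument name may differ.

```python
import datamol as dm

mol = dm.to_mol("CC(=O)Oc1ccccc1C(=O)O")  # aspirin

# Lasso-style highlighting of a substructure match.
# Assumption: PR #223 allows passing custom colors, sketched here as a
# color_list of RGB tuples; check the datamol docs for the exact keyword.
img = dm.viz.lasso_highlight_image(
    mol,
    ["C(=O)O"],  # pattern(s) to highlight
    color_list=[(1.0, 0.2, 0.2)],
)
```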