
    Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets

    Full text link
    Recently, pre-trained foundation models have enabled significant advancements in multiple fields. In molecular machine learning, however, where datasets are often hand-curated and hence typically small, the lack of datasets with labeled features, and of codebases to manage those datasets, has hindered the development of foundation models. In this work, we present seven novel datasets categorized by size into three distinct collections: ToyMix, LargeMix and UltraLarge. These datasets push the boundaries in both the scale and the diversity of supervised labels for molecular learning. They cover nearly 100 million molecules and over 3000 sparsely defined tasks, totaling more than 13 billion individual labels of both quantum and biological nature. In comparison, our datasets contain 300 times more data points than the widely used OGB-LSC PCQM4Mv2 dataset, and 13 times more than the quantum-only QM1B dataset. In addition, to support the development of foundation models based on our proposed datasets, we present the Graphium graph machine learning library, which simplifies the process of building and training molecular machine learning models for multi-task and multi-level molecular datasets. Finally, we present a range of baseline results as a starting point for multi-task and multi-level training on these datasets. Empirically, we observe that performance on low-resource biological datasets improves when also training on large amounts of quantum data. This indicates that there may be potential in multi-task and multi-level training of a foundation model and fine-tuning it to resource-constrained downstream tasks.
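    The abstract's mention of thousands of sparsely defined tasks implies that most molecules lack labels for most tasks. Below is a minimal, hypothetical sketch (plain PyTorch, not Graphium's actual API; all names are illustrative) of the common way such sparse multi-task supervision is handled: missing labels are encoded as NaN and masked out of the loss.

        # Sketch only: shared trunk, per-task heads, and a loss over labeled entries.
        import torch
        import torch.nn as nn

        class MultiTaskHead(nn.Module):
            """Hypothetical multi-task model: shared trunk, one output per task."""
            def __init__(self, in_dim: int, hidden: int, n_tasks: int):
                super().__init__()
                self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
                self.heads = nn.Linear(hidden, n_tasks)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.heads(self.trunk(x))

        def masked_mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
            """MSE over labeled entries only; NaN marks a missing task label."""
            mask = ~torch.isnan(target)
            return ((pred[mask] - target[mask]) ** 2).mean()

        # Toy usage: 32 molecules embedded as 64-d vectors, 8 sparsely labeled tasks.
        model = MultiTaskHead(in_dim=64, hidden=128, n_tasks=8)
        x = torch.randn(32, 64)
        y = torch.randn(32, 8)
        y[torch.rand_like(y) < 0.7] = float("nan")  # ~70% of labels missing
        loss = masked_mse(model(x), y)
        loss.backward()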

    Real-World Molecular Out-Of-Distribution: Specification and Investigation

    No full text
    This study presents a rigorous framework for investigating Molecular Out-Of-Distribution (MOOD) generalization in drug discovery. The concept of MOOD is first clarified through a problem specification that demonstrates how the covariate shifts encountered during real-world deployment can be characterized by the distribution of sample distances to the training set. We find that these shifts can cause performance to drop by up to 60% and uncertainty calibration to degrade by up to 40%. This leads us to propose a splitting protocol that aims to close the gap between deployment and testing. Then, using this protocol, a thorough investigation is conducted to assess the impact of model design, model selection and dataset characteristics on MOOD performance and uncertainty calibration. We find that appropriate representations and algorithms with built-in uncertainty estimation are crucial to improving performance and uncertainty calibration. This study sets itself apart through its exhaustiveness and opens an exciting avenue for benchmarking meaningful algorithmic progress in molecular scoring. All related code can be found on GitHub at https://github.com/valence-labs/mood-experiments.
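    To make the idea of characterizing shift by distances to the training set concrete, here is a hedged sketch (NumPy, with placeholder features; not the paper's exact protocol, which lives in the linked repository) of a split that holds out the samples farthest from the rest of the data as an OOD-flavored test set.

        # Sketch only: distance-based split on placeholder molecular features.
        import numpy as np

        rng = np.random.default_rng(0)
        feats = rng.normal(size=(500, 32))  # placeholder molecular fingerprints

        # Leave-one-out distance of every sample to its nearest neighbor.
        d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        nn_dist = d.min(axis=1)

        # Samples far from everything else mimic deployment-time covariate shift:
        # hold out the farthest 20% as the test split, train on the rest.
        test_idx = np.argsort(nn_dist)[-100:]
        train_idx = np.setdiff1d(np.arange(len(feats)), test_idx)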

    The gametic synapse: RNA transfer to the bovine oocyte

    No full text
    Even after several decades of quiescent storage in the ovary, the female germ cell is capable of reinitiating transcription to build the reserves that are essential to support early embryonic development. In the current model of mammalian oogenesis, bilateral communication between the gamete and the surrounding cells is limited to paracrine signaling and the direct transfer of small molecules via gap junctions at the ends of the somatic cells' projections that are in contact with the oolemma. The purpose of this work was to explore the role of cumulus cell projections as a means of conductance of large molecules, including RNA, to the mammalian oocyte. By studying nascent RNA with confocal and transmission electron microscopy in combination with transcript detection, we show that the somatic cells surrounding the fully grown bovine oocyte contribute to the maternal reserves by actively transferring large cargo, including mRNA and long noncoding RNA. This occurrence was further demonstrated by the reconstruction of cumulus-oocyte complexes in which transfected cumulus cells transferred a synthetic transcript. We propose that selective transfer of transcripts occurs, a delivery supported by a remarkable synapse-like vesicular trafficking connection between the cumulus cells and the gamete. This unexpected exogenous contribution to the maternal stores offers a new perspective on the determinants of female fertility.

    Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets (Ultra Large Dataset)

    No full text
    Recently, pre-trained foundation models have shown significant advancements in multiple fields. However, the lack of datasets with labeled features and of codebases has hindered the development of a supervised foundation model for molecular tasks. Here, we have carefully curated seven datasets specifically tailored for node- and graph-level prediction tasks to facilitate supervised learning on molecules. Moreover, to support the development of multi-task learning on our proposed datasets, we created the Graphium graph machine learning library. Our dataset collection encompasses two distinct categories. Firstly, the ToyMix category modifies three small existing datasets with additional data for multi-task learning. Secondly, the LargeMix category includes four large-scale datasets with 344M graph-level data points and 409M node-level data points from ∼5M unique molecules. Finally, the UltraLarge dataset contains 2,210M graph-level data points and 2,031M node-level data points coming from 86M molecules. Hence, our datasets represent an order-of-magnitude increase in data volume compared to other 2D-GNN datasets. In addition, recognizing that molecule-related tasks often span multiple levels, we have designed our library to explicitly support multi-tasking, offering a diverse range of multi-level representations, i.e., representations at the graph, node, edge, and node-pair levels. We equipped the library with an extensive collection of models and features to cover different levels of molecule analysis. By combining our curated datasets with this versatile library, we aim to accelerate the development of molecule foundation models. Datasets and code are available at https://github.com/datamol-io/graphium. This upload includes the latest version of the UltraLarge dataset described in the paper; due to its large size, it is uploaded independently.
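    As a concrete illustration of what multi-level supervision means in practice, here is a small hypothetical sketch (plain PyTorch, not Graphium's schema; all names are illustrative) in which a single encoder feeds both a node-level head and a graph-level head, the latter obtained by mean-pooling the node embeddings into a graph readout.

        # Sketch only: one encoder, predictions at both the node and graph level.
        import torch
        import torch.nn as nn

        class MultiLevelModel(nn.Module):
            """Hypothetical model emitting node-level and graph-level outputs."""
            def __init__(self, in_dim: int, hidden: int,
                         n_node_tasks: int, n_graph_tasks: int):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
                self.node_head = nn.Linear(hidden, n_node_tasks)    # per-node output
                self.graph_head = nn.Linear(hidden, n_graph_tasks)  # per-graph output

            def forward(self, node_feats: torch.Tensor):
                h = self.encoder(node_feats)                # (n_nodes, hidden)
                node_out = self.node_head(h)                # node-level predictions
                graph_out = self.graph_head(h.mean(dim=0))  # mean-pool readout
                return node_out, graph_out

        # Toy usage: a single 20-atom molecule with 16-d atom features.
        model = MultiLevelModel(in_dim=16, hidden=32, n_node_tasks=3, n_graph_tasks=5)
        node_out, graph_out = model(torch.randn(20, 16))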