ChemRL-GEM: Geometry Enhanced Molecular Representation Learning for Property Prediction
Effective molecular representation learning is of great importance to
facilitate molecular property prediction, which is a fundamental task for the
drug and material industry. Recent advances in graph neural networks (GNNs)
have shown great promise in applying GNNs for molecular representation
learning. Moreover, a few recent studies have also demonstrated successful
applications of self-supervised learning methods to pre-train the GNNs to
overcome the problem of insufficient labeled molecules. However, existing GNNs
and pre-training strategies usually treat molecules as topological graph data
without fully utilizing the molecular geometry information. Yet the
three-dimensional (3D) spatial structure of a molecule, also known as its
molecular geometry, is one of the most critical factors determining its
physical, chemical, and biological properties. To this end, we propose a novel
Geometry Enhanced Molecular representation learning method (GEM) for Chemical
Representation Learning (ChemRL). First, we design a geometry-based GNN
architecture that simultaneously models atoms, bonds, and bond angles in a
molecule. Specifically, we devise two graphs for each molecule: the first
encodes atom-bond relations, and the second encodes bond-angle relations.
Moreover, on top of the devised GNN architecture, we propose several
novel geometry-level self-supervised learning strategies to learn spatial
knowledge by utilizing the local and global molecular 3D structures. We compare
ChemRL-GEM with various state-of-the-art (SOTA) baselines on different
molecular benchmarks and show that ChemRL-GEM significantly outperforms
all baselines in both regression and classification tasks. For example, the
experimental results show an overall improvement of 8.8% on average over the
SOTA baselines on the regression tasks, demonstrating the superiority of the
proposed method.
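To make the two-graph design concrete, below is a minimal, self-contained sketch of the data construction the abstract describes: an atom-bond graph whose edges carry bond lengths, and a bond-angle graph whose nodes are bonds and whose edges carry the angle between bonds sharing an atom. The water molecule, its coordinates, and all helper names here are illustrative assumptions, not taken from the paper or its codebase.

```python
import math

# Hypothetical toy input (not from the paper): a water molecule with
# 3D coordinates in angstroms and a bond list of atom-index pairs.
atoms = ["O", "H", "H"]
coords = [(0.0, 0.0, 0.0), (0.9572, 0.0, 0.0), (-0.2399, 0.9266, 0.0)]
bonds = [(0, 1), (0, 2)]  # O-H1, O-H2

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def angle_deg(center, p1, p2):
    """Angle in degrees at `center` between the rays to p1 and p2."""
    v1 = [x - c for x, c in zip(p1, center)]
    v2 = [x - c for x, c in zip(p2, center)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Graph 1: atom-bond graph -- nodes are atoms, edges are bonds,
# with the bond length as the edge feature.
atom_bond_edges = {(i, j): dist(coords[i], coords[j]) for i, j in bonds}

# Graph 2: bond-angle graph -- nodes are bonds; two bonds sharing an
# atom are connected, with the bond angle as the edge feature.
bond_angle_edges = {}
for a in range(len(bonds)):
    for b in range(a + 1, len(bonds)):
        shared = set(bonds[a]) & set(bonds[b])
        if shared:
            c = shared.pop()
            p1 = next(i for i in bonds[a] if i != c)
            p2 = next(i for i in bonds[b] if i != c)
            bond_angle_edges[(a, b)] = angle_deg(coords[c], coords[p1], coords[p2])

print(atom_bond_edges)   # bond lengths for the two O-H bonds
print(bond_angle_edges)  # H-O-H angle, roughly 104.5 degrees
```

In a real pipeline these two graphs would be fed to separate message-passing layers; the sketch only shows how the geometric features are derived from coordinates.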
Pre-Training on Large-Scale Generated Docking Conformations with HelixDock to Unlock the Potential of Protein-ligand Structure Prediction Models
Protein-ligand structure prediction is an essential task in drug discovery,
predicting the binding interactions between small molecules (ligands) and
target proteins (receptors). Although conventional physics-based docking tools
are widely utilized, their accuracy is compromised by limited conformational
sampling and imprecise scoring functions. Recent advances have incorporated
deep learning techniques to improve the accuracy of structure prediction.
Nevertheless, the experimental validation of docking conformations remains
costly, which raises concerns regarding the generalizability of these deep
learning-based methods given the limited training data. In this work, we show
that by pre-training a geometry-aware SE(3)-equivariant neural network on
large-scale docking conformations generated by traditional physics-based docking
tools and then fine-tuning it on a limited set of experimentally validated
receptor-ligand complexes, we can achieve outstanding performance. This process
involved the generation of 100 million docking conformations, consuming roughly
1 million CPU core days. The proposed model, HelixDock, aims to acquire the
physical knowledge encapsulated by the physics-based docking tools during the
pre-training phase. HelixDock has been benchmarked against both physics-based
and deep learning-based baselines, showing that it outperforms its closest
competitor by over 40% in terms of RMSD. HelixDock also exhibits enhanced performance
on a dataset that poses a greater challenge, thereby highlighting its
robustness. Moreover, our investigation reveals the scaling laws governing
pre-trained structure prediction models, indicating a consistent enhancement in
performance with increases in model parameters and pre-training data. This
study illuminates the strategic advantage of leveraging a vast and varied
repository of generated data to advance the frontiers of AI-driven drug
discovery.
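Since the abstract reports gains in terms of RMSD, the sketch below shows the basic form of that metric for a predicted ligand pose: the root-mean-square deviation over matched atom coordinates. This is a simplified illustration, assuming a fixed atom correspondence and no symmetry correction or structural alignment; it is not the evaluation code used by HelixDock.

```python
import math

def pose_rmsd(pred, ref):
    """Root-mean-square deviation between two poses, each given as a
    list of (x, y, z) atom coordinates in matched order (angstroms)."""
    assert len(pred) == len(ref), "poses must have the same atom count"
    sq_sum = sum(
        sum((p - r) ** 2 for p, r in zip(pa, ra))
        for pa, ra in zip(pred, ref)
    )
    return math.sqrt(sq_sum / len(pred))

# Toy example: a 3-atom "ligand" whose predicted pose is the reference
# translated by 1 angstrom along x, so the RMSD is exactly 1.0.
ref = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
pred = [(1.0, 0.0, 0.0), (2.5, 0.0, 0.0), (1.0, 1.5, 0.0)]
print(pose_rmsd(pred, ref))  # 1.0
```

Benchmark protocols commonly report the fraction of poses with RMSD below a threshold (e.g., 2 angstroms); the 40% improvement claimed above is relative to the closest baseline on such an RMSD-based measure.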