
    Analyzing Learned Molecular Representations for Property Prediction

    Advancements in neural machinery have led to a wide range of algorithmic solutions for molecular property prediction. Two classes of models in particular have yielded promising results: neural networks applied to computed molecular fingerprints or expert-crafted descriptors, and graph convolutional neural networks that construct a learned molecular representation by operating on the graph structure of the molecule. However, recent literature has yet to clearly determine which of these two methods is superior when generalizing to new chemical space, and prior research has rarely benchmarked these new models against those already deployed in industrial research settings. In this paper, we benchmark models extensively on 19 public and 16 proprietary industrial datasets spanning a wide variety of chemical endpoints. In addition, we introduce a graph convolutional model that consistently matches or outperforms models using fixed molecular descriptors, as well as previous graph neural architectures, on both public and proprietary datasets. Our empirical findings indicate that while approaches based on these representations have yet to reach the level of experimental reproducibility, our proposed model nevertheless offers significant improvements over models currently used in industrial workflows.
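    The abstract above describes graph convolutional models that build a learned molecular representation by operating on the bond structure of the molecule. The sketch below is a generic, illustrative message-passing step in NumPy, not the authors' architecture; the chain-shaped toy molecule, the feature and hidden sizes, and the sum-then-ReLU update rule are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy molecule: 4 atoms, each with an 8-dim feature vector, plus an
# adjacency matrix encoding the bonds (here a simple chain 0-1-2-3).
num_atoms, feat_dim, hidden_dim = 4, 8, 16
atom_feats = rng.normal(size=(num_atoms, feat_dim))
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

W = rng.normal(scale=0.1, size=(feat_dim, hidden_dim))  # learned in practice

# One message-passing step: each atom aggregates its bonded neighbours'
# features, mixes them with its own, and applies a nonlinearity.
messages = adjacency @ atom_feats                      # sum over neighbours
hidden = np.maximum(0, (atom_feats + messages) @ W)    # ReLU update

# Readout: pool per-atom states into a single molecule-level vector,
# which a downstream regressor would map to the predicted property.
molecule_repr = hidden.sum(axis=0)
print(molecule_repr.shape)  # (16,)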

    Efficient deep ensembles by averaging neural networks in parameter space

    Although deep ensembles provide large accuracy boosts relative to individual models, their use is not widespread in environments in which computational resources are limited, as deep ensembles require storing M models and performing M forward passes at prediction time. We propose a novel, computationally efficient alternative, which we name permAVG. Deep ensembles cannot simply be averaged in parameter space, because the individual models find distinct, and possibly distant, local optima. permAVG exploits the symmetries of the loss landscape by learning permutations such that all M models can be permuted into the same local optimum and can thereafter safely be averaged.
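    Per the abstract, permAVG learns permutations so that all M models land in the same basin before their parameters are averaged. As a hedged illustration of the underlying idea only (not the paper's learning procedure), the sketch below aligns the hidden units of two toy one-hidden-layer MLPs with a Hungarian matching on weight similarity, then averages the aligned parameters; the shapes, the matching cost, and the function names are all assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 5, 8, 3

def random_mlp():
    # Stand-in for a trained model: W1 maps input -> hidden, W2 hidden -> output.
    return {"W1": rng.normal(size=(d_hidden, d_in)),
            "W2": rng.normal(size=(d_out, d_hidden))}

A, B = random_mlp(), random_mlp()

# Cost of matching hidden unit i of A to unit j of B: negative similarity
# of their incoming and outgoing weight vectors.
cost = -(A["W1"] @ B["W1"].T + A["W2"].T @ B["W2"])
row, col = linear_sum_assignment(cost)   # optimal permutation of B's units

# Apply the permutation: reorder rows of W1 and columns of W2 consistently.
# For elementwise activations this leaves B's function unchanged.
B_perm = {"W1": B["W1"][col], "W2": B["W2"][:, col]}

# Only after alignment is a parameter-space average meaningful: both
# models now (ideally) sit in the same loss basin.
avg = {k: 0.5 * (A[k] + B_perm[k]) for k in A}
print(avg["W1"].shape, avg["W2"].shape)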

    Subnetwork ensembling and data augmentation: Effects on calibration

    Deep learning models based on convolutional neural networks are known to be uncalibrated, that is, they are either overconfident or underconfident in their predictions. Safety-critical applications of neural networks, however, require models to be well-calibrated, and there are various methods in the literature for increasing model performance and calibration. Subnetwork ensembling exploits the over-parametrization of modern neural networks by fitting several subnetworks into a single network, gaining the benefits of ensembling without additional computational cost. Data augmentation methods have also been shown to improve model performance in terms of both accuracy and calibration. However, ensembling and data augmentation seem orthogonal to each other, and the total effect of combining the two is not well understood; in fact, the literature is inconsistent on this point. Through an extensive set of empirical experiments, we show that combining subnetwork ensemble methods with data augmentation methods does not degrade model calibration.
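    The calibration such experiments measure is commonly quantified with the expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence with its empirical accuracy. A minimal NumPy sketch of that standard metric follows; the bin count and the toy data are illustrative and not taken from the paper.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: max predicted probability per sample, in [0, 1].
    correct: 1 if the prediction was right, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Gap between how often the model is right in this bin and
            # how confident it claims to be, weighted by bin size.
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy example: an overconfident model (high confidence, mediocre accuracy).
conf = np.array([0.95, 0.90, 0.92, 0.88, 0.97])
hit = np.array([1, 0, 1, 0, 1])
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")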