    PDBench: Evaluating Computational Methods for Protein Sequence Design

    Proteins perform critical processes in all living systems: converting solar energy into chemical energy, replicating DNA, forming the basis of highly performant materials, sensing, and much more. While an incredible range of functionality has been sampled in nature, it accounts for a tiny fraction of the possible protein universe. If we could tap into this pool of unexplored protein structures, we could search for novel proteins with useful properties to apply to the environmental and medical challenges facing humanity. This is the purpose of protein design. Sequence design is an important aspect of protein design, and many successful methods have been developed for it. Recently, deep-learning methods that frame sequence design as a classification problem have emerged as a powerful approach. Beyond their reported improvement in performance, their primary advantage over physics-based methods is that the computational burden is shifted from the user to the developers, thereby making the design methods more accessible. Despite this trend, the tools for assessing and comparing such models remain quite generic. The goal of this paper is both to address the timely problem of evaluation and to shine a spotlight, within the machine-learning community, on specific assessment criteria that will accelerate impact. We present a carefully curated benchmark set of proteins and propose a number of standard tests to assess the performance of deep-learning-based methods. Our robust benchmark provides biological insight into the behaviour of design methods, which is essential for evaluating their performance and utility. We compare five existing models with two novel models for sequence prediction. Finally, we test the designs produced by these models with AlphaFold2, a state-of-the-art structure-prediction algorithm, to determine whether they are likely to fold into the intended 3D shapes.
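
    Since these models frame sequence design as a 20-class classification problem over the canonical amino acids, standard classification metrics can be computed per residue position. The paper defines its own benchmark tests; the sketch below is only a minimal illustration of two common metrics, overall sequence recovery and macro-averaged recall, with hypothetical inputs.

        import numpy as np

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical residue classes

        def sequence_recovery(native: str, designed: str) -> float:
            """Fraction of positions where the designed residue matches the native one."""
            assert len(native) == len(designed)
            return sum(n == d for n, d in zip(native, designed)) / len(native)

        def macro_recall(native: str, designed: str) -> float:
            """Recall averaged over residue classes, so rare residues count equally."""
            recalls = []
            for aa in AMINO_ACIDS:
                positions = [i for i, n in enumerate(native) if n == aa]
                if positions:  # skip classes absent from this protein
                    hits = sum(designed[i] == aa for i in positions)
                    recalls.append(hits / len(positions))
            return float(np.mean(recalls))

        # Hypothetical example: score a designed sequence against the native one.
        print(sequence_recovery("MKTAYIAKQR", "MKTAYLAKQR"))  # 0.9
        print(macro_recall("MKTAYIAKQR", "MKTAYLAKQR"))       # 0.875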

    Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols

    Vast quantities of magnetic resonance images (MRI) are routinely acquired in clinical practice but, to speed up acquisition, these scans are typically of a quality that is sufficient for clinical diagnosis yet sub-optimal for large-scale precision medicine, computational diagnostics, and large-scale collaborative neuroimaging research. Here, we present a critic-guided framework to upsample low-resolution (often 2D) full MRI scans to help overcome these limitations. We incorporate feature-importance and self-attention methods into our model to improve its interpretability. We evaluate our framework on paired low- and high-resolution brain MRI structural full scans (i.e., T1-weighted, T2-weighted, and FLAIR sequences input simultaneously) obtained in clinical and research settings from scanners manufactured by Siemens, Philips, and GE. We show that the upsampled MRIs are qualitatively faithful to the ground-truth high-quality scans (PSNR = 35.39; MAE = 3.78E−3; NMSE = 4.32E−10; SSIM = 0.9852; mean normal-appearing gray/white matter intensity-ratio differences ranging from 0.0363 to 0.0784 for FLAIR, from 0.0010 to 0.0138 for T1-weighted, and from 0.0156 to 0.074 for T2-weighted sequences). Automatic raw segmentation of tissues and lesions from the super-resolved images yields fewer false positives and higher accuracy than segmentation from interpolated images for protocols represented by more than three scan sets in the training sample, making our approach a strong candidate for practical application in clinical and collaborative research.
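
    The image-similarity figures quoted above follow standard definitions; a minimal sketch of how such metrics can be computed is given below, using NumPy for PSNR, MAE, and a common normalized-MSE definition, and scikit-image for SSIM. The paper may normalize NMSE differently, and the arrays here are hypothetical stand-ins for a ground-truth scan and a super-resolved output.

        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
            """Peak signal-to-noise ratio, in dB."""
            mse = np.mean((ref - test) ** 2)
            return float(10.0 * np.log10(data_range ** 2 / mse))

        def mae(ref: np.ndarray, test: np.ndarray) -> float:
            """Mean absolute error."""
            return float(np.mean(np.abs(ref - test)))

        def nmse(ref: np.ndarray, test: np.ndarray) -> float:
            """MSE normalized by the energy of the reference image."""
            return float(np.sum((ref - test) ** 2) / np.sum(ref ** 2))

        # Hypothetical stand-ins, both scaled to [0, 1].
        hr = np.random.rand(256, 256).astype(np.float32)  # ground-truth high-resolution slice
        sr = np.clip(hr + 0.01 * np.random.randn(256, 256).astype(np.float32), 0.0, 1.0)

        print(psnr(hr, sr), mae(hr, sr), nmse(hr, sr))
        print(structural_similarity(hr, sr, data_range=1.0))  # SSIM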

    Astro2020 Science White Paper: Primordial Non-Gaussianity

    Our current understanding of the Universe is established through pristine measurements of structure in the cosmic microwave background (CMB) and of the distribution and shapes of galaxies tracing the large-scale structure (LSS) of the Universe. One key ingredient underlying cosmological observables is the assumption that the field sourcing the observed structure is, to high precision, initially Gaussian. Nevertheless, a minimal deviation from Gaussianity is perhaps the most robust theoretical prediction of models that explain the observed Universe; it is necessarily present even in the simplest scenarios. In addition, most inflationary models produce far higher levels of non-Gaussianity. Since non-Gaussianity directly probes the dynamics of the early Universe, a detection would represent a monumental discovery in cosmology, providing clues about physics at energy scales as high as the GUT scale.
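
    The abstract does not write out the parametrization, but the minimal deviation from Gaussianity searched for in CMB and LSS data is conventionally quantified by the local-type amplitude f_NL, which perturbs an initially Gaussian potential phi as (the standard local ansatz, not specific to this white paper):

        \Phi(\mathbf{x}) = \phi(\mathbf{x}) + f_{\mathrm{NL}} \left[ \phi^{2}(\mathbf{x}) - \langle \phi^{2} \rangle \right]

    Even the simplest single-field slow-roll scenarios predict a small but non-zero f_NL, while multi-field models can produce f_NL of order unity or larger, which is why a measurement at this precision can discriminate between inflationary scenarios.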

    Dark Matter Science in the Era of LSST

    Astrophysical observations currently provide the only robust, empirical measurements of dark matter. In the coming decade, astrophysical observations will guide other experimental efforts, while simultaneously probing unique regions of dark matter parameter space. This white paper summarizes astrophysical observations that can constrain the fundamental physics of dark matter in the era of LSST. We describe how astrophysical observations will inform our understanding of the fundamental properties of dark matter, such as particle mass, self-interaction strength, non-gravitational interactions with the Standard Model, and compact object abundances. Additionally, we highlight theoretical work and experimental/observational facilities that will complement LSST to strengthen our understanding of the fundamental characteristics of dark matter.