
    Molecular Surface Mesh Generation by Filtering Electron Density Map

    Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces, such as the Van der Waals surface or the Solvent Excluded Surface, can be useful for several applications. We propose a fast and parameterizable algorithm that produces good visual quality meshes representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is computed as the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to its Fourier transform. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
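    A minimal sketch of this pipeline, assuming NumPy and scikit-image are available: a density map built as the maximum of Gaussians around atom centers, an ideal low-pass filter applied in Fourier space, and marching cubes on the inverse transform. The grid size, Gaussian width, cutoff frequency, and iso-level below are illustrative placeholders, not the parameters used in the paper.

    import numpy as np
    from skimage.measure import marching_cubes

    def surface_mesh(atom_centers, grid_shape=(64, 64, 64), voxel=1.0,
                     sigma=1.5, cutoff=0.25, iso=0.5):
        # Density map: maximum of Gaussian functions placed around atom centers.
        coords = np.indices(grid_shape).astype(float) * voxel
        density = np.zeros(grid_shape)
        for c in atom_centers:
            d2 = sum((coords[i] - c[i]) ** 2 for i in range(3))
            density = np.maximum(density, np.exp(-d2 / (2 * sigma ** 2)))

        # Ideal low-pass filter applied to the Fourier transform of the map.
        freqs = np.meshgrid(*[np.fft.fftfreq(n, d=voxel) for n in grid_shape],
                            indexing="ij")
        radius = np.sqrt(sum(f ** 2 for f in freqs))
        spectrum = np.fft.fftn(density)
        spectrum[radius > cutoff] = 0.0
        filtered = np.fft.ifftn(spectrum).real

        # Marching cubes on the filtered map yields the surface mesh.
        verts, faces, _, _ = marching_cubes(filtered, level=iso, spacing=(voxel,) * 3)
        return verts, faces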

    Mindboggling morphometry of human brains

    Mindboggle (http://mindboggle.info) is an open source brain morphometry platform that takes in preprocessed T1-weighted MRI data and outputs volume, surface, and tabular data containing label, feature, and shape information for further analysis. In this article, we document the software and demonstrate its use in studies of shape variation in healthy and diseased humans. The number of different shape measures and the size of the populations make this the largest and most detailed shape analysis of human brains ever conducted. Brain image morphometry shows great potential for providing much-needed biological markers for diagnosing, tracking, and predicting progression of mental health disorders. Very few software algorithms provide more than measures of volume and cortical thickness, while more subtle shape measures may provide more sensitive and specific biomarkers. Mindboggle computes a variety of (primarily surface-based) shapes: area, volume, thickness, curvature, depth, Laplace-Beltrami spectra, Zernike moments, etc. We evaluate Mindboggle’s algorithms using the largest set of manually labeled, publicly available brain images in the world and compare them against state-of-the-art algorithms where they exist. All data, code, and results of these evaluations are publicly available.
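    As a small illustration (not Mindboggle's own code), two of the simplest surface-based measures listed above, total area and enclosed volume, can be computed directly from a triangle mesh given as vertex coordinates and face indices:

    import numpy as np

    def mesh_area_and_volume(verts, faces):
        v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
        cross = np.cross(v1 - v0, v2 - v0)
        # Total surface area: half the sum of the triangle cross-product norms.
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        # Enclosed volume via the divergence theorem; assumes a closed mesh
        # with consistently oriented faces.
        volume = np.abs(np.einsum("ij,ij->i", v0, cross).sum()) / 6.0
        return area, volume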

    Regression applied to protein binding site prediction and comparison with classification

    Background: The structural genomics centers provide hundreds of protein structures of unknown function. Therefore, developing methods that determine a protein's function automatically is imperative. The function of a protein can be determined by studying the network of its physical interactions. In this context, identifying a potential binding site between proteins is of primary interest. In the literature, methods for predicting the location of a potential binding site are generally based on classification tools. The aim of this paper is to show that regression tools are more efficient than classification tools for patch-based binding site predictors. For this purpose, we developed a patch-based binding site localization method usable with either regression or classification tools. Results: We compared the predictive performance of regression tools with that of machine learning classifiers. Using leave-one-out cross-validation, we showed that regression tools provide better predictions than classification ones. Among regression tools, the Multilayer Perceptron ranked highest in prediction quality. We also compared the predictive performance of our patch-based method using the Multilayer Perceptron with that of three other methods available through web servers. Our method performed similarly to the other methods. Conclusion: Regression is more efficient than classification when applied to our binding site localization method. When possible, using regression instead of classification in other existing binding site predictors will probably improve results. Furthermore, the method presented in this work is flexible because the size of the predicted binding site is adjustable. This adaptability is useful when either the false positive or the false negative rate has to be limited.
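    A minimal sketch of this comparison, assuming patch feature vectors and per-patch scores are already computed (the random arrays below are placeholders, not the paper's data): the same leave-one-out protocol is run with a Multilayer Perceptron used once as a regressor on continuous scores and once as a classifier on thresholded labels, and both outputs are scored with the AUC.

    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.neural_network import MLPRegressor, MLPClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 10))        # hypothetical patch descriptors
    y = rng.uniform(size=80)             # hypothetical overlap with the true site
    y_bin = (y > 0.5).astype(int)        # thresholded labels for classification

    loo = LeaveOneOut()
    reg = cross_val_predict(MLPRegressor(max_iter=2000, random_state=0), X, y, cv=loo)
    clf = cross_val_predict(MLPClassifier(max_iter=2000, random_state=0), X, y_bin,
                            cv=loo, method="predict_proba")[:, 1]
    print("regression AUC:    ", roc_auc_score(y_bin, reg))
    print("classification AUC:", roc_auc_score(y_bin, clf))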

    Protein surface properties using signal processing and statistical tools

    This thesis is dedicated to the study of protein structure properties using 3D image processing tools. Proteins are macromolecules that govern almost all life processes by interacting with other molecules. Understanding and predicting these interaction mechanisms, namely protein docking, is of major interest for practical applications such as diagnosis and drug design. Applications related to protein docking involve a high number of degrees of freedom and, moreover, a huge amount of information about protein structures is available. Consequently, it is generally time-consuming to perform protein docking or to screen databases for a protein with some required properties. There is thus a need for fast and efficient algorithms for protein structure analysis. The contributions of this work mainly concern the localization of sites of interest on protein surfaces. To do so, surfaces were modeled as 3D polygonal meshes and algorithms were developed to extract features with a trade-off between short execution time and accuracy. The main improvements concern the generation of protein surface meshes, the approximation of geodesic distances (i.e., distances along the surface), and the computation of the travel depth (a descriptor of the depth of surface hollows). The locations of sites of interest were predicted by combining protein surface properties using machine learning tools. In this context, several classification and regression tools were compared on benchmark data sets. It was shown that, for patch-based methods, regression is more appropriate than classification. The method was also validated on the particular case of antigen epitopes (i.e., the parts recognized by antibodies) and provided better predictions than existing methods. (FSA 3) -- UCL, 201
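    One ingredient mentioned above, the approximation of geodesic distances, can be illustrated with a standard graph-based shortcut (not the specific algorithm developed in the thesis): weight each mesh edge by its Euclidean length and run Dijkstra's algorithm from a source vertex, assuming NumPy and SciPy are available.

    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import dijkstra

    def approx_geodesic_distances(verts, faces, source):
        # Undirected edges of every triangle, deduplicated.
        edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
        edges = np.unique(np.sort(edges, axis=1), axis=0)
        lengths = np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)
        graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])),
                           shape=(len(verts), len(verts)))
        # Shortest paths along mesh edges approximate distances along the surface.
        return dijkstra(graph, directed=False, indices=source)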

    Identification of Relevant Properties for Epitopes Detection Using a Regression Model.

    A B-cell epitope is a part of an antigen that is recognized by a specific antibody or B-cell receptor. Detecting the immunogenic region of the antigen is useful in numerous immunodetection and immunotherapeutic applications. The aim of this paper is to find relevant properties to discriminate the location of potential epitopes from the rest of the protein surface. The most relevant properties, identified using two evaluation approaches, are the geometric properties, followed by the conservation score and some chemical properties, such as the proportion of glycine. The selected properties are used in a patch-based epitope localization method that includes a Single Layer Perceptron for regression. The output of this Single Layer Perceptron is used to construct a probability map on the antigen surface. The predictive performance of the method is assessed by computing the AUC using cross-validation on two benchmark datasets and by computing the AUC and the precision on a third, independent test set.
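    A minimal sketch of the regression step, assuming per-patch feature vectors (geometric properties, conservation score, amino-acid proportions, ...) and overlap scores are already computed; the random arrays and the scikit-learn estimator below are placeholders, not the authors' implementation. A single-layer perceptron for regression amounts to a linear model whose per-patch output can be spread over the antigen surface as a probability map and scored with the AUC.

    import numpy as np
    from sklearn.linear_model import SGDRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))       # hypothetical patch properties
    y = rng.uniform(size=200)           # hypothetical epitope overlap per patch

    model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=5000, random_state=1))
    model.fit(X, y)
    scores = model.predict(X)           # per-patch scores -> surface probability map
    print("AUC:", roc_auc_score((y > 0.5).astype(int), scores))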

    Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms estimating the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach has a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description feasible for large macromolecules. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
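    A minimal sketch of the surface-based idea, assuming NumPy and SciPy (not the paper's exact algorithm): SES mesh vertices that lie on the convex hull receive depth zero, and the Travel Depth of every other vertex is approximated by a multi-source Dijkstra search over the mesh edge graph.

    import numpy as np
    from scipy.spatial import ConvexHull
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import dijkstra

    def travel_depth(verts, faces, tol=1e-6):
        # Zero-depth sources: vertices lying (numerically) on a hull facet plane.
        hull = ConvexHull(verts)
        on_hull = np.zeros(len(verts), dtype=bool)
        for eq in hull.equations:                      # facet planes: n.x + d = 0
            on_hull |= np.abs(verts @ eq[:3] + eq[3]) < tol
        sources = np.flatnonzero(on_hull)

        edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
        edges = np.unique(np.sort(edges, axis=1), axis=0)
        lengths = np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)
        graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])),
                           shape=(len(verts), len(verts)))
        # Distance to the nearest zero-depth vertex approximates the Travel Depth.
        return dijkstra(graph, directed=False, indices=sources, min_only=True)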

    Fast and accurate travel depth estimation for protein active site prediction

    Active site prediction, well known in drug design and medical diagnosis, is a major step in the study and prediction of interactions between proteins. The specialized literature provides studies of common physicochemical and geometric properties shared by active sites. Among these properties, this paper focuses on the travel depth, which plays a major part in the binding with other molecules. The travel depth of a point on the protein solvent excluded surface (SES) can be defined as the length of the shortest path accessible to a solvent molecule between this point and the protein convex hull. Existing algorithms estimating this depth are based on the sampling of a bounding box volume surrounding the studied protein. These techniques use huge amounts of memory and processing time, and the precision of their estimations strongly depends on the chosen sampling rate. The contribution of this paper is a surface-based algorithm that only takes samples of the protein SES into account instead of the whole volume. We show that this technique allows a more accurate prediction and is at least 50 times faster. A validation of this method is also proposed through experiments with a statistical classifier that takes as inputs the travel depth and other physicochemical and geometric measures for active site prediction.
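    A minimal sketch of the validation step, assuming per-point descriptors are already computed; the data, feature layout, and the choice of a random forest below are illustrative placeholders for the statistical classifier mentioned above.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    features = np.column_stack([
        rng.uniform(size=300),            # travel depth (hypothetical values)
        rng.normal(size=(300, 4)),        # other physicochemical / geometric measures
    ])
    is_active_site = rng.integers(0, 2, size=300)

    clf = RandomForestClassifier(n_estimators=200, random_state=2)
    print("ROC AUC:", cross_val_score(clf, features, is_active_site,
                                      cv=5, scoring="roc_auc").mean())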