
    ÎŒMatch: 3D shape correspondence for biological image data

    Modern microscopy technologies allow imaging biological objects in 3D over a wide range of spatial and temporal scales, opening the way for a quantitative assessment of morphology. However, establishing a correspondence between the objects to be compared, a necessary first step of most shape analysis workflows, remains challenging for soft-tissue objects that lack striking features suitable for landmarking. To address this issue, we introduce the ÎŒMatch 3D shape correspondence pipeline. ÎŒMatch implements a state-of-the-art correspondence algorithm initially developed for computer graphics and packages it in a streamlined pipeline that includes tools to carry out all steps, from input data pre-processing to classical shape analysis routines. Importantly, ÎŒMatch does not require any landmarks on the object surface and establishes correspondence in a fully automated manner. Our open-source method is implemented in Python and can be used to process collections of objects described as triangular meshes. We quantitatively assess the validity of ÎŒMatch on a well-known benchmark dataset and further demonstrate its reliability by reproducing published results previously obtained through manual landmarking.
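As a rough illustration of what a mesh-level correspondence produces (and not of the ÎŒMatch API or algorithm itself), the sketch below matches the vertices of two triangular meshes by nearest neighbour after a simple normalisation; the file names are placeholders, and ÎŒMatch instead relies on the landmark-free computer-graphics correspondence method mentioned above.

```python
# Minimal illustration of a point-to-point correspondence between two
# triangular meshes (not the ÎŒMatch pipeline; file names are placeholders).
import numpy as np
import trimesh
from scipy.spatial import cKDTree

source = trimesh.load("object_a.ply", process=False)
target = trimesh.load("object_b.ply", process=False)

def normalize(mesh):
    # Centre at the origin and scale to unit radius so naive matching is meaningful.
    v = mesh.vertices - mesh.vertices.mean(axis=0)
    mesh.vertices = v / np.linalg.norm(v, axis=1).max()
    return mesh

source, target = normalize(source), normalize(target)

# A correspondence is a map from source vertices to target vertices.
# Here: nearest neighbour in Euclidean space, purely for illustration.
tree = cKDTree(target.vertices)
_, correspondence = tree.query(source.vertices)   # index into target.vertices

# Per-vertex distances can then feed classical shape analysis routines.
distances = np.linalg.norm(source.vertices - target.vertices[correspondence], axis=1)
print("mean vertex distance:", distances.mean())
```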

    A Geometric Approach for Deciphering Protein Structure from Cryo-EM Volumes

    Electron Cryo-Microscopy, or cryo-EM, is an area that has received much attention in the recent past. Compared to the traditional methods of X-Ray Crystallography and NMR Spectroscopy, cryo-EM can be used to image much larger complexes, in many different conformations, and under a wide range of biochemical conditions, because it does not require the complex to be crystallisable. However, cryo-EM reconstructions are limited to intermediate resolutions, with the state of the art being 3.6 Å, where secondary structure elements can be visually identified but not individual amino acid residues. This lack of atomic-level resolution creates new computational challenges for protein structure identification. In this dissertation, we present a suite of geometric algorithms to address several aspects of protein modeling using cryo-EM density maps. Specifically, we develop novel methods to capture the shape of density volumes as geometric skeletons. We then use these skeletons to find the secondary structure elements (SSEs) of a given protein, to identify the correspondence between these SSEs and those predicted from the primary sequence, and to register high-resolution protein structures onto the density volume. In addition, we designed and developed Gorgon, an interactive molecular modeling system that integrates the above methods with other interactive routines to generate reliable and accurate protein backbone models.
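As a much simplified illustration of capturing a density volume's shape as a geometric skeleton (not the dissertation's algorithm), the sketch below thresholds a cryo-EM map and applies voxel thinning with scikit-image; the file name and contour level are placeholders.

```python
# Simplified sketch: threshold a cryo-EM density map and extract a voxel
# skeleton. The dissertation's skeletonization is more involved; 'map.mrc'
# and the contour level are placeholder assumptions.
import mrcfile
import numpy as np
from skimage.morphology import skeletonize

with mrcfile.open("map.mrc") as mrc:           # cryo-EM density volume
    density = np.asarray(mrc.data, dtype=np.float32)

contour_level = 1.0                            # placeholder isosurface threshold
mask = density > contour_level                 # binary volume of the molecule

# One-voxel-thick curve network approximating the shape; helix- and
# sheet-like segments of such a skeleton can later be classified as SSEs.
skeleton = skeletonize(mask)
print("skeleton voxels:", int(skeleton.sum()))
```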

    3D Shape Descriptor-Based Facial Landmark Detection: A Machine Learning Approach

    Facial landmark detection on 3D human faces has numerous applications in the literature, such as establishing point-to-point correspondence between 3D face models, which is itself a key step for a wide range of applications like 3D face detection and authentication, matching, reconstruction, and retrieval, to name a few. Two groups of approaches, namely knowledge-driven and data-driven approaches, have been employed for facial landmarking in the literature. Knowledge-driven techniques are the traditional approaches that have been widely used to locate landmarks on human faces. In these approaches, a user with sufficient knowledge and experience usually defines the features to be extracted as the landmarks. Data-driven techniques, on the other hand, take advantage of machine learning algorithms to detect prominent features on 3D face models. Besides their key advantages, both categories of techniques have limitations that prevent them from generating the most reliable results. In this work we propose to combine the strengths of the two approaches to detect facial landmarks in a more efficient and precise way. The suggested approach consists of two phases. First, some salient features of the faces are extracted using expert systems; these points are then used as the initial control points in the well-known Thin Plate Spline (TPS) technique to deform the input face towards a reference face model. Second, by exploring and utilizing multiple machine learning algorithms, another group of landmarks is extracted. The data-driven landmark detection step is performed in a supervised manner, providing an information-rich set of training data in which a set of local descriptors is computed and used to train the algorithm. We then use the detected landmarks to establish point-to-point correspondence between the 3D human faces, mainly using an improved version of the Iterative Closest Point (ICP) algorithm. Furthermore, we propose to use the detected landmarks for 3D face matching applications.
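For readers unfamiliar with ICP, the following is a minimal, generic point-to-point ICP sketch; the thesis uses an improved ICP variant initialised from the detected landmarks, which is not reproduced here.

```python
# Minimal point-to-point ICP between two 3D point sets (e.g. face scans),
# shown as a plain illustration of the standard algorithm.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iterations=50):
    """Align source (N, 3) to target (M, 3); returns the transformed source points."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)           # closest-point correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    return current
```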

    Mineral identification using data-mining in hyperspectral infrared imagery

    Infrared imaging applications in geology are mainly hyperspectral. They enable, among other things, mineral identification, mapping, and core logging. Most often these acquisitions are carried out in situ, either with airborne sensors or with portable devices. The discovery of indicator minerals has greatly improved mineral exploration, partly thanks to the use of portable instruments. In this context, the development of automated systems would increase both the quality of exploration and the precision with which indicators are detected. This is the setting of the work carried out in this doctorate. The subject was the use of machine learning methods applied to the analysis (processing) of hyperspectral images acquired at infrared wavelengths, with the goal of identifying small mineral grains used as mineralogical indicators. A potential application of this research would be a software tool to assist sample analysis during mineral exploration. The experiments were conducted in the laboratory in the thermal infrared range (Long Wave InfraRed, LWIR), from 7.7 ÎŒm to 11.8 ÎŒm. These trials led to a proposed method for computing continuum removal. The method relies on non-negative matrix factorization (NMF): using a rank-1 factorization, the down-welling radiance can be estimated and then compared and analysed against other, more common methods. Analysing the resulting spectra against several existing spectral libraries demonstrated the suppression of the continuum. The experiments leading to this result were conducted using an Infragold plate and a LWIR macro lens. Automatic identification of grains of different materials such as pyrope, olivine and quartz was then undertaken. In a comparison between supervised and unsupervised approaches, the latter proved more suitable because its behaviour is independent of the training stage. To confirm the quality of these results, four experiments were carried out. In a first experiment, two algorithms were evaluated for clustering using a False Colour Composite (FCC) approach. This trial showed convergence up to twenty times faster, as well as significantly improved identification compared with results from the literature. However, trials on LWIR data showed poor prediction of the grain surface when grains were irregular and mineral aggregates were present. The second experiment was a comparative quantitative analysis between two ground-truth (GT) databases, named rigid-GT and observed-GT (rigid-GT: manual labelling of the region; observed-GT: manual labelling of the pixels). The accuracy of the results was 1.5 times better when the observed-GT database was used rather than rigid-GT. For the last two experiments, data from a scanning electron microscope (SEM) and from X-ray fluorescence (XRF) microscopy were added. These data introduced information about both the mineral aggregates and the grain surfaces. The results were compared with automatic mineral identification techniques using ArcGIS, which showed promising performance for automatic identification and was also used for GT validation. Overall, the four methods of this thesis represent beneficial methodologies for mineral identification. They have the advantage of being non-destructive, relatively accurate and computationally inexpensive, which could qualify them for use in laboratory conditions or in the field.
The geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, airborne or portable instrument surveys, and core logging. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, the development of faster and more mechanized systems increases the precision of identifying mineral indicators and avoids possible misclassification. Therefore, the objective of this thesis was to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could be applied in different circumstances to assist geological analysis and mineral exploration. The experiments were conducted in laboratory conditions in the long-wave infrared (7.7 ÎŒm to 11.8 ÎŒm, LWIR), with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal. The approach applies Non-negative Matrix Factorization (NMF), using a rank-1 factorization to estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra and enable the spectra to be compared with spectral libraries. Afterwards, to obtain an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine and quartz grains. The results indicated that the unsupervised approach was more suitable because its behavior is independent of a training stage. Once these results were obtained, two algorithms were tested to create False Color Composites (FCC) using a clustering approach. The results of this comparison indicate significant computational efficiency (more than 20 times faster) and promising performance for mineral identification. Finally, the reliability of the automated LWIR hyperspectral mineral identification was tested, and the difficulty of identifying irregular grain surfaces and mineral aggregates was verified. The results were compared to two different ground truths (GT), rigid-GT and observed-GT, for quantitative evaluation; observed-GT increased the accuracy by up to 1.5 times compared with rigid-GT.
The samples were also examined by micro X-ray fluorescence (XRF) and scanning electron microscopy (SEM) in order to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The results of XRF imagery were compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and were used for GT validation. Overall, the four methods in this thesis (1. continuum removal; 2. classification or clustering methods for mineral identification; 3. two algorithms for clustering of mineral spectra; 4. reliability verification) represent beneficial methodologies for identifying minerals. They have the advantages of being non-destructive, relatively accurate, and of low computational complexity, which makes them suitable for identifying and assessing mineral grains in laboratory conditions or in the field.
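A minimal sketch of the rank-1 NMF continuum-removal idea described above, assuming a non-negative LWIR cube stored as a NumPy array; the file name, cube layout, and the division-based removal step are assumptions rather than the thesis implementation.

```python
# Rank-1 NMF continuum estimation on a LWIR hyperspectral cube (sketch).
import numpy as np
from sklearn.decomposition import NMF

cube = np.load("lwir_cube.npy")               # placeholder: (rows, cols, bands), non-negative
rows, cols, bands = cube.shape
spectra = cube.reshape(-1, bands)             # one spectrum per pixel

# Rank-1 factorisation: W @ H is a smooth multiplicative background
# (continuum / down-welling contribution) shared across pixels.
model = NMF(n_components=1, init="nndsvda", max_iter=500)
W = model.fit_transform(spectra)              # per-pixel scale, shape (pixels, 1)
H = model.components_                         # common spectral shape, shape (1, bands)
continuum = W @ H

# Continuum-removed spectra are then comparable with spectral libraries.
removed = spectra / np.maximum(continuum, 1e-12)
removed_cube = removed.reshape(rows, cols, bands)
```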

    Novel Approaches to the Representation and Analysis of 3D Segmented Anatomical Districts

    Nowadays, image processing and 3D shape analysis are an integral part of clinical practice and have the potential to support clinicians with advanced analysis and visualization techniques. Both approaches provide visual and quantitative information to medical practitioners, even if from different points of view. Indeed, shape analysis is aimed at studying the morphology of anatomical structures, while image processing focuses more on the tissue or functional information provided by the pixel/voxel intensity levels. Despite the progress made by research in both fields, a junction between these two complementary worlds is missing. When working with 3D models and analyzing shape features, the information about the volume surrounding the structure is lost, since a segmentation process is needed to obtain the 3D shape model; however, the 3D nature of the anatomical structure is represented explicitly. With volume images, instead, the tissue information related to the imaged volume is the core of the analysis, while the shape and morphology of the structure are only implicitly represented, and thus not conveyed clearly enough. The aim of this Thesis work is the integration of these two approaches in order to increase the amount of information available to physicians, allowing a more accurate analysis of each patient. An augmented visualization tool able to provide information on both the anatomical structure shape and the surrounding volume through a hybrid representation could reduce the gap between the two approaches and provide a more complete anatomical rendering of the subject. To this end, given a segmented anatomical district, we propose a novel mapping of volumetric data onto the segmented surface. The grey levels of the image voxels are mapped through a volume-surface correspondence map, which defines a grey-level texture on the segmented surface. The resulting texture mapping is coherent with the local morphology of the segmented anatomical structure and provides an enhanced visual representation of the anatomical district. The integration of volume-based and surface-based information in a unique 3D representation also supports the identification and characterization of morphological landmarks and pathology evaluations. The main research contributions of the Ph.D. activities and Thesis are:
    • the development of a novel integration algorithm that combines surface-based (segmented 3D anatomical structure meshes) and volume-based (MRI volumes) information; the integration supports different criteria for the grey-level mapping onto the segmented surface;
    • the development of methodological approaches for using the grey-level mapping together with morphological analysis, with the final goal of solving problems in real clinical tasks, such as the identification of (patient-specific) ligament insertion sites on bones from segmented MR images, the characterization of the local morphology of bones/tissues, and the early diagnosis, classification, and monitoring of musculoskeletal pathologies;
    • the analysis of segmentation procedures, with a focus on the tissue classification process, in order to reduce operator dependency and to overcome the absence of a real gold standard for the evaluation of automatic segmentations;
    • the evaluation and comparison of (unsupervised) segmentation methods, finalized to define a novel segmentation method for low-field MR images and for the local correction/improvement of a given segmentation.
The proposed method is simple yet effectively integrates information derived from medical image analysis and 3D shape analysis. Moreover, the algorithm is general enough to be applied to different anatomical districts independently of the segmentation method, imaging technique (such as CT), or image resolution. The volume information can easily be integrated into different shape analysis applications, taking into consideration not only the morphology of the input shape but also the real context in which it is embedded, in order to solve clinical tasks. The results obtained by this combined analysis have been evaluated through statistical analysis.
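A minimal sketch of the volume-to-surface grey-level mapping idea, assuming the mesh vertices live in the scanner (world) coordinates of the MR image; the file names are placeholders, and the thesis supports richer mapping criteria than this per-vertex trilinear lookup.

```python
# Sketch: sample MR voxel grey levels at each mesh vertex to define a
# per-vertex grey-level "texture" on the segmented surface (illustrative only).
import numpy as np
import nibabel as nib
import trimesh
from scipy.ndimage import map_coordinates

volume = nib.load("mri_volume.nii.gz")                      # MR image (placeholder path)
data = volume.get_fdata()
mesh = trimesh.load("segmented_bone.ply", process=False)    # segmented surface (placeholder)

# Bring vertices (world/mm coordinates) into voxel index space via the
# inverse of the image affine, then sample the volume at those locations.
inv_affine = np.linalg.inv(volume.affine)
homog = np.c_[mesh.vertices, np.ones(len(mesh.vertices))]
voxel_coords = (homog @ inv_affine.T)[:, :3]

grey_per_vertex = map_coordinates(data, voxel_coords.T, order=1)   # trilinear

# Store the grey levels as vertex colours for an enhanced visual representation.
g = grey_per_vertex - grey_per_vertex.min()
g = (255 * g / max(g.max(), 1e-12)).astype(np.uint8)
mesh.visual.vertex_colors = np.column_stack([g, g, g, np.full_like(g, 255)])
mesh.export("textured_surface.ply")
```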

    DEEP LEARNING METHODS FOR PREDICTION OF AND ESCAPE FROM PROTEIN RECOGNITION

    Protein interactions drive diverse processes essential to living organisms, and thus numerous biomedical applications center on understanding, predicting, and designing how proteins recognize their partners. While the number of interactions of interest unfortunately still vastly exceeds the capabilities of experimental determination methods, computational methods promise to fill the gap. My thesis pursues the development and application of computational methods for several protein interaction prediction and design tasks. First, to improve protein-glycan interaction specificity prediction, I developed GlyBERT, which learns biologically relevant glycan representations encapsulating the components most important for glycan recognition within their structures. GlyBERT encodes glycans with a branched biochemical language and employs an attention-based deep language model to embed the correlation between local and global structural contexts. This approach enables the development of predictive models from limited data, supporting applications such as lectin binding prediction. Second, to improve protein-protein interaction prediction, I developed a unified geometric deep neural network, ‘PInet’ (Protein Interface Network), which leverages the best properties of both data- and physics-driven methods, learning and utilizing models capturing both geometrical and physicochemical molecular surface complementarity. In addition to obtaining state-of-the-art performance in predicting protein-protein interactions, PInet can serve as the backbone for other protein-protein interaction modeling tasks such as binding affinity prediction. Finally, I turned from prediction to design, addressing two important tasks in the context of antibody-antigen recognition. The first problem is to redesign a given antigen to evade antibody recognition, e.g., to help biotherapeutics avoid pre-existing immunity or to focus vaccine responses on key portions of an antigen. The second problem is to design a panel of variants of a given antigen to use as “bait” in experimental identification of antibodies that recognize different parts of the antigen, e.g., to support classification of immune responses or to help select among different antibody candidates. I developed a geometry-based algorithm to generate variants to address these design problems, seeking to maximize utility subject to experimental constraints. During the design process, the algorithm accounts for and balances the effects of candidate mutations on antibody recognition and on antigen stability. In retrospective case studies, the algorithm demonstrated promising precision, recall, and robustness in finding good designs. This work represents the first algorithm to systematically design antigen variants for characterization and evasion of polyclonal antibody responses.
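As a schematic of the design trade-off described in the last part of this abstract (maximising escape from antibody recognition while preserving antigen stability), the sketch below runs a greedy selection over candidate mutations; the scoring callables, thresholds, and overall loop are hypothetical stand-ins, not the thesis algorithm.

```python
# Schematic greedy antigen-design loop: pick mutations that most reduce
# predicted antibody recognition while keeping a predicted stability penalty
# within a budget. All scoring functions here are hypothetical placeholders.
from typing import Callable, List, Tuple

Mutation = Tuple[int, str]   # (sequence position, new amino acid)

def design_escape_variant(
    candidates: List[Mutation],
    recognition_loss: Callable[[List[Mutation]], float],   # higher = more escape
    stability_penalty: Callable[[List[Mutation]], float],   # higher = less stable
    max_mutations: int = 5,
    stability_budget: float = 2.0,
) -> List[Mutation]:
    selected: List[Mutation] = []
    while len(selected) < max_mutations:
        best, best_gain = None, 0.0
        for m in candidates:
            if m in selected:
                continue
            trial = selected + [m]
            if stability_penalty(trial) > stability_budget:
                continue                       # would destabilise the antigen
            gain = recognition_loss(trial) - recognition_loss(selected)
            if gain > best_gain:
                best, best_gain = m, gain
        if best is None:                       # no admissible mutation improves escape
            break
        selected.append(best)
    return selected
```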
