
    MRFalign: Protein Homology Detection through Alignment of Markov Random Fields

    Sequence-based protein homology detection has been extensively studied, and so far the most sensitive methods are based on comparison of protein sequence profiles, which are derived from a multiple sequence alignment (MSA) of sequence homologs in a protein family. A sequence profile is usually represented as a position-specific scoring matrix (PSSM) or a hidden Markov model (HMM), and accordingly PSSM-PSSM or HMM-HMM comparison is used for homolog detection. This paper presents a new homology detection method, MRFalign, consisting of three key components: 1) a Markov Random Field (MRF) representation of a protein family; 2) a scoring function measuring the similarity of two MRFs; and 3) an efficient ADMM (Alternating Direction Method of Multipliers) algorithm for aligning two MRFs. Compared to an HMM, which can only model very short-range residue correlations, an MRF can model long-range residue interaction patterns and thus encode information about the global 3D structure of a protein family. Consequently, MRF-MRF comparison for remote homology detection should be much more sensitive than HMM-HMM or PSSM-PSSM comparison. Experiments confirm that MRFalign outperforms several popular HMM- or PSSM-based methods in terms of both alignment accuracy and remote homology detection, and that MRFalign works particularly well for mainly-beta proteins. For example, tested on the SCOP40 benchmark (8353 proteins) for homology detection, PSSM-PSSM and HMM-HMM succeed on 48% and 52% of proteins, respectively, at the superfamily level, and on 15% and 27% of proteins, respectively, at the fold level. In contrast, MRFalign succeeds on 57.3% and 42.5% of proteins at the superfamily and fold levels, respectively. This study implies that long-range residue interaction patterns are very helpful for sequence-based homology detection. The software is available for download at http://raptorx.uchicago.edu/download/.
    Comment: Accepted by both RECOMB 2014 and PLOS Computational Biology
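    The profile-profile (PSSM-style) comparison that MRFalign improves on can be sketched as global dynamic programming over per-position frequency columns. This is a minimal illustration of the baseline only; MRFalign's actual scoring additionally includes pairwise (long-range) MRF terms and the ADMM alignment algorithm, which are omitted here. The column score and gap penalty are simplified assumptions, not the paper's scoring function.

```python
# Minimal sketch of PSSM-PSSM comparison: align two sequence profiles
# (lists of per-position residue frequency vectors) with Needleman-Wunsch
# dynamic programming. MRFalign generalises this by adding pairwise
# (long-range) interaction terms to the scoring function.

def column_score(p, q):
    """Similarity of two profile columns: inner product of frequencies."""
    return sum(a * b for a, b in zip(p, q))

def align_profiles(P, Q, gap=-0.1):
    """Global alignment score of two profiles (lists of columns)."""
    n, m = len(P), len(Q)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + column_score(P[i - 1], Q[j - 1]),
                          F[i - 1][j] + gap,      # gap in Q
                          F[i][j - 1] + gap)      # gap in P
    return F[n][m]
```

    A profile aligned against itself scores higher than against a shuffled profile, which is the signal a homology detector thresholds on.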

    Setting intelligent city tiling strategies for urban shading simulations

    Accurately assessing the solar potential of all building surfaces in cities, including shading and multiple reflections between buildings, is essential for urban energy modelling. However, since the number of surface interactions and radiation exchanges increases exponentially with the scale of the district, innovative computational strategies are needed, some of which are introduced in the present work. These strategies should offer the best compromise between result accuracy and computational efficiency, i.e. computational time and memory requirements. In this study, different approaches that may be used for the computation of urban solar irradiance over large areas are presented. Two concrete urban case studies of different densities have been used to compare and evaluate three different methods: the Perez Sky model, the Simplified Radiosity Algorithm, and a new scene tiling method implemented in our urban simulation platform SimStadt, used for feasible estimations on a large scale. To quantify the influence of shading, the new concept of the Urban Shading Ratio has been introduced and used for this evaluation process. In high-density urban areas, this index may reach 60% for facades and 25% for roofs. Tiles of 500 m width and 200 m overlap are a minimum requirement in this case to compute solar irradiance with acceptable accuracy. In medium-density areas, tiles of 300 m width and 100 m overlap fully meet the accuracy requirements. In addition, the solar potential for various solar energy thresholds, as well as the monthly variation of the Urban Shading Ratio, have been quantified for both case studies, distinguishing between roofs and facades of different orientations.
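    The tiling strategy above can be illustrated with a small sketch: cover a district with overlapping square tiles so that each tile is simulated independently while shading from buildings in the overlap band is still captured. The function below is an assumed interface for illustration, not SimStadt's API; it uses the 500 m width / 200 m overlap figures reported for high-density areas.

```python
# Sketch of the scene-tiling idea: generate tile start coordinates along
# one axis so that tiles of a given width overlap by a given margin.
# A 500 m tile with 200 m overlap advances by a 300 m stride.

def tile_origins(extent, width=500.0, overlap=200.0):
    """Return 1-D tile start coordinates covering [0, extent]."""
    stride = width - overlap
    origins, x = [], 0.0
    while x + width < extent:
        origins.append(x)
        x += stride
    last = max(extent - width, 0.0)   # final tile flush with the border
    if not origins or origins[-1] != last:
        origins.append(last)
    return origins
```

    A 2-D tiling is simply the Cartesian product of the origins along both axes; the overlap margin is what keeps shadows cast by buildings just outside a tile from being missed.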

    Comparison of the accuracy of voxel based registration and surface based registration for 3D assessment of surgical change following orthognathic surgery

    Purpose: Superimposition of two-dimensional preoperative and postoperative facial images, including radiographs and photographs, is used to evaluate the surgical changes after orthognathic surgery. Recently, three-dimensional (3D) imaging has been introduced, allowing more accurate analysis of surgical changes. Surface-based registration and voxel-based registration are commonly used methods for 3D superimposition. The aim of this study was to evaluate and compare the accuracy of the two methods.
    Materials and methods: Pre-operative and 6-month post-operative cone beam CT (CBCT) images of 31 patients were randomly selected from the orthognathic patient database at the Dental Hospital and School, University of Glasgow, UK. Voxel-based registration was performed on the DICOM (Digital Imaging and Communications in Medicine) images using Maxilim software (Medicim-Medical Image Computing, Belgium). Surface-based registration was performed on the soft and hard tissue 3D models using VRMesh (VirtualGrid, Bellevue City, WA). The accuracy of the superimposition was evaluated by measuring the mean value of the absolute distance between the two 3D image surfaces. The results were statistically analysed using a paired Student t-test, ANOVA with post-hoc Duncan test, a one-sample t-test and Pearson correlation coefficient test.
    Results: The results showed no significant statistical difference between the two superimposition methods (p<0.05). However, surface-based registration showed high variability in the mean distances between the corresponding surfaces compared to voxel-based registration, especially for soft tissue. Within each method there was a significant difference between superimposition of the soft and hard tissue models.
    Conclusions: There were no significant statistical differences between the two registration methods, and any difference was unlikely to be of clinical significance. Voxel-based registration was associated with less variability. Registering on the soft tissue in isolation from the hard tissue may not be a true reflection of the surgical change.
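    The accuracy metric used above, the mean absolute distance between two registered surfaces, can be sketched in a few lines. This illustration represents each surface as a plain list of 3D points rather than a full mesh, so it computes a nearest-point distance instead of a true point-to-triangle distance; the structure of the metric is the same.

```python
# Hedged sketch of the accuracy metric: mean over points of surface A of
# the distance to the closest point of surface B (a simplified stand-in
# for the mesh-to-mesh distance computed by the registration software).
import math

def mean_surface_distance(surface_a, surface_b):
    """surface_a, surface_b: lists of (x, y, z) tuples."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return sum(nearest(p, surface_b) for p in surface_a) / len(surface_a)
```

    Note the metric is asymmetric (A-to-B is not B-to-A); tools usually report both directions or their average.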

    NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding

    Research on depth-based human activity analysis has achieved outstanding performance and demonstrated the effectiveness of 3D representations for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, a realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding. [The dataset is available at: http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp]
    Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
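    The one-shot evaluation protocol mentioned above can be illustrated with a generic nearest-exemplar classifier: each novel class is represented by a single exemplar embedding, and a test sample takes the class of the closest exemplar. This is a common baseline for such protocols, not the APSR framework proposed in the paper, and the embedding vectors here are invented for illustration.

```python
# Generic one-shot classification sketch: assign a test embedding to the
# class of its nearest single-exemplar embedding.
import math

def one_shot_classify(sample, exemplars):
    """exemplars: dict mapping class name -> one embedding per novel class."""
    return min(exemplars, key=lambda c: math.dist(sample, exemplars[c]))
```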

    See the Difference: Direct Pre-Image Reconstruction and Pose Estimation by Differentiating HOG

    The Histogram of Oriented Gradients (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state-of-the-art approaches. We observe that the associated feature computation is piecewise differentiable, and therefore many pipelines that build on HOG can be made differentiable. This lends itself to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of ∇HOG based on the auto-differentiation toolbox Chumpy and show applications to pre-image visualization and pose estimation, which extends the existing differentiable renderer OpenDR pipeline. Both applications improve on the respective state-of-the-art HOG-based approaches.
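    The piecewise differentiability claimed above comes from how HOG votes gradients into orientation bins: each pixel's gradient magnitude is split between the two nearest bins with linear interpolation weights, so the histogram is differentiable in the gradient magnitudes everywhere and in the orientations except at bin boundaries. The plain-Python sketch below shows this soft binning for a single cell; it is an illustration of the principle, not the ∇HOG/Chumpy implementation.

```python
# Orientation histogram of one HOG cell with linear (soft) bin voting.
# Both votes are linear in the gradient magnitude, which is what makes
# the descriptor piecewise differentiable.
import math

def cell_histogram(gx, gy, n_bins=9):
    """gx, gy: per-pixel gradient components of one cell."""
    hist = [0.0] * n_bins
    for dx, dy in zip(gx, gy):
        mag = math.hypot(dx, dy)
        ang = math.atan2(dy, dx) % math.pi   # unsigned orientation in [0, pi)
        pos = ang / math.pi * n_bins         # continuous bin coordinate
        lo = int(pos) % n_bins
        hi = (lo + 1) % n_bins
        w = pos - int(pos)                   # linear interpolation weight
        hist[lo] += (1.0 - w) * mag
        hist[hi] += w * mag
    return hist
```

    Because the total magnitude is conserved across the two bins, the mapping from gradients to histogram is smooth almost everywhere, which is exactly what an auto-differentiation toolbox needs.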

    Controlling the Gaze of Conversational Agents

    We report on a pilot experiment that investigated the effects of different eye-gaze behaviours of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like behaviour with two other versions. In one of the other versions the shifts in gaze were kept minimal, and in the other version the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the systems. Despite this restriction, we found that participants who conversed with the agent that behaved according to the human-like patterns appreciated the agent more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.
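    The three gaze conditions compared in the pilot (human-like shifts tied to dialogue state, minimal shifts, random shifts) can be sketched as a simple gaze-target policy. All names, states, and behaviours below are invented for this hypothetical illustration and are not taken from the system described in the abstract.

```python
# Hypothetical sketch of the three gaze conditions as a target-selection
# policy: "human-like" ties gaze to the dialogue state, "minimal" almost
# never shifts, and "random" shifts arbitrarily.
import random

def next_gaze_target(condition, dialogue_state, rng=random):
    if condition == "human-like":
        # look at the user while listening, avert when starting to speak
        return "user" if dialogue_state == "listening" else "away"
    if condition == "minimal":
        return "user"
    if condition == "random":
        return rng.choice(["user", "away"])
    raise ValueError(condition)
```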