
    Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data

    Localization is a key requirement for mobile robot autonomy and human-robot interaction. Vision-based localization is accurate and flexible; however, it incurs a high computational burden that limits its application on many resource-constrained platforms. In this paper, we address the problem of performing real-time localization in large-scale 3D point cloud maps of ever-growing size. While most systems using multi-modal information reduce localization time by employing side-channel information in a coarse manner (e.g., WiFi for a rough prior position estimate), we propose to interweave the map with rich sensory data. This multi-modal approach achieves two key goals simultaneously. First, it enables us to harness additional sensory data to localize against a map covering a vast area in real time; second, it allows us to roughly localize devices that are not equipped with a camera. The key to our approach is a localization policy based on a sequential Monte Carlo estimator. The localizer uses this policy to attempt point-matching only in nodes where it is likely to succeed, significantly increasing the efficiency of the localization process. The proposed multi-modal localization system is evaluated extensively in a large museum building. The results show that our multi-modal approach not only increases localization accuracy but also significantly reduces computational time.
    Comment: Presented at the IEEE-RAS International Conference on Humanoid Robots (Humanoids) 201
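The policy the abstract describes can be illustrated with a minimal sequential Monte Carlo (particle filter) sketch: cheap side-channel observations weight particles over map nodes, and the expensive visual point-matching is attempted only in nodes where the posterior mass concentrates. This is a hedged illustration, not the authors' implementation; the function names, the `side_channel_likelihood` callback, and the 0.2 threshold are all assumed for the example.

```python
import random
from collections import Counter

def resample(particles, weights, n):
    """Multinomial resampling proportional to weight."""
    return random.choices(particles, weights=weights, k=n)

def smc_localization_step(particles, side_channel_likelihood, match_threshold=0.2):
    # 1. Weight each particle (a candidate map node) by the cheap
    #    side-channel likelihood, e.g. a WiFi signal model.
    weights = [side_channel_likelihood(p) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # 2. Aggregate per-node posterior mass.
    mass = Counter()
    for p, w in zip(particles, weights):
        mass[p] += w

    # 3. Attempt expensive visual point-matching only in likely nodes.
    candidates = [node for node, m in mass.items() if m >= match_threshold]

    # 4. Resample particles for the next time step.
    return resample(particles, weights, len(particles)), candidates
```

In use, only the nodes returned in `candidates` would be handed to the point-matching backend, which is what keeps the per-frame cost bounded as the map grows.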

    An Unsupervised Approach to Modelling Visual Data

    For very large visual datasets, producing expert ground-truth data for training supervised algorithms can represent a substantial human effort. In these situations there is scope for the use of unsupervised approaches that can model collections of images and automatically summarise their content. The primary motivation for this thesis comes from the problem of labelling large visual datasets of the seafloor obtained by an Autonomous Underwater Vehicle (AUV) for ecological analysis. It is expensive to label this data, as taxonomical experts for the specific region are required, whereas automatically generated summaries can be used to focus the efforts of experts, and inform decisions on additional sampling. The contributions in this thesis arise from modelling this visual data in entirely unsupervised ways to obtain comprehensive visual summaries. Firstly, popular unsupervised image feature learning approaches are adapted to work with large datasets and unsupervised clustering algorithms. Next, using Bayesian models the performance of rudimentary scene clustering is boosted by sharing clusters between multiple related datasets, such as regular photo albums or AUV surveys. These Bayesian scene clustering models are extended to simultaneously cluster sub-image segments to form unsupervised notions of “objects” within scenes. The frequency distribution of these objects within scenes is used as the scene descriptor for simultaneous scene clustering. Finally, this simultaneous clustering model is extended to make use of whole image descriptors, which encode rudimentary spatial information, as well as object frequency distributions to describe scenes. This is achieved by unifying the previously presented Bayesian clustering models, and in so doing rectifies some of their weaknesses and limitations. Hence, the final contribution of this thesis is a practical unsupervised algorithm for modelling images from the super-pixel to album levels, and is applicable to large datasets.
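The scene-descriptor idea above, representing a scene by the frequency distribution of its unsupervised "object" labels, can be sketched in a few lines. This is an illustrative reconstruction under assumptions: the label strings and the fixed vocabulary are hypothetical, and the thesis's Bayesian machinery is not reproduced here.

```python
from collections import Counter

def scene_descriptor(segment_labels, vocabulary):
    """Normalized histogram of object labels over a fixed vocabulary.

    segment_labels: object-cluster labels assigned to a scene's segments.
    vocabulary: the fixed, ordered set of object labels shared across scenes.
    """
    counts = Counter(segment_labels)
    total = sum(counts.values()) or 1  # guard against empty scenes
    return [counts[v] / total for v in vocabulary]
```

Descriptors built this way are directly comparable across scenes, which is what allows them to drive the scene-level clustering the abstract describes.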

    Large scale multifactorial likelihood quantitative analysis of BRCA1 and BRCA2 variants: An ENIGMA resource to support clinical variant classification

    The multifactorial likelihood analysis method has demonstrated utility for quantitative assessment of variant pathogenicity for multiple cancer syndrome genes. Independent data types currently incorporated in the model for assessing BRCA1 and BRCA2 variants include clinically calibrated prior probability of pathogenicity based on variant location and bioinformatic prediction of variant effect, co-segregation, family cancer history profile, co-occurrence with a pathogenic variant in the same gene, breast tumor pathology, and case-control information. Research and clinical data for multifactorial likelihood analysis were collated for 1,395 BRCA1/2 predominantly intronic and missense variants, enabling classification based on posterior probability of pathogenicity for 734 variants: 447 were classified as (likely) benign and 94 as (likely) pathogenic; 248 classifications were new or considerably altered relative to ClinVar submissions. Classifications were compared with information not yet included in the likelihood model, and evidence strengths were aligned to those recommended for ACMG/AMP classification codes. Altered mRNA splicing or function relative to known nonpathogenic variant controls was moderately to strongly predictive of variant pathogenicity. Variant absence in population datasets provided supporting evidence for variant pathogenicity. These findings have direct relevance for BRCA1 and BRCA2 variant evaluation, and justify the need for gene-specific calibration of evidence types used for variant classification.
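The combination step underlying multifactorial likelihood analysis is Bayes' rule on odds: a calibrated prior probability of pathogenicity is converted to odds, multiplied by a likelihood ratio from each independent evidence type, and converted back to a posterior probability. The sketch below shows only that arithmetic; the prior and likelihood-ratio values in the test are illustrative, not figures from the study.

```python
def posterior_probability(prior, likelihood_ratios):
    """Combine a prior probability with independent likelihood ratios.

    posterior odds = prior odds * product of LRs, assuming the evidence
    types are independent (as the multifactorial model requires).
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

For example, a variant with a low prior can still reach a high posterior if several evidence types each contribute a likelihood ratio above 1, which is why the model can reclassify variants that any single data type would leave uncertain.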