361 research outputs found

    RANSAC for Robotic Applications: A Survey

    Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust method for estimating the parameters of a model from data contaminated by a sizable percentage of outliers. In its simplest form, the process starts by sampling the minimum amount of data needed to perform an estimation, followed by an evaluation of the estimate's adequacy, and repeats until some stopping criterion is met. Multiple variants have been proposed that modify this workflow, typically tweaking one or several of these steps to improve computing time or the quality of the parameter estimates. RANSAC is widely applied in robotics, for example to find geometric shapes (planes, cylinders, spheres, etc.) in point clouds or to estimate the best transformation between different camera views. In this paper, we present a review of the current state of the art of the RANSAC family of methods, with a special interest in applications in robotics. This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), and the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737.
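The sample-evaluate-repeat loop described in this abstract can be sketched in a few lines. The following Python line-fitting example is illustrative only; the function name, inlier tolerance, and iteration count are our own choices, not taken from any surveyed variant:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.1, rng=None):
    """Fit a 2D line y = a*x + b to `points` (shape (n, 2)) with RANSAC."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        # 1. Sample the minimal data needed for an estimate: two points.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue  # vertical pair, cannot be written as y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Evaluate adequacy: count points within the inlier tolerance.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        n_inliers = int((residuals < inlier_tol).sum())
        # 3. Keep the candidate with the most support so far.
        if n_inliers > best_inliers:
            best_model, best_inliers = (a, b), n_inliers
    return best_model, best_inliers
```

A fixed iteration count serves as the stopping criterion here; many of the surveyed variants instead adapt the number of iterations to the estimated inlier ratio, or modify the sampling and evaluation steps.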

    Renewing the respect for similarity

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction—critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience. In support of this stance, the present paper (1) offers a discussion of conceptual, mathematical, computational, and empirical aspects of similarity, as applied to the problems of visual object and scene representation, recognition, and interpretation, (2) mentions some key computational problems arising in attempts to put similarity to use, along with their possible solutions, (3) briefly states a previously developed similarity-based framework for visual object representation, the Chorus of Prototypes, along with the empirical support it enjoys, (4) presents new mathematical insights into the effectiveness of this framework, derived from its relationship to locality-sensitive hashing (LSH) and to concomitant statistics, (5) introduces a new model, the Chorus of Relational Descriptors (ChoRD), that extends this framework to scene representation and interpretation, (6) describes its implementation and testing, and finally (7) suggests possible directions in which the present research program can be extended in the future.
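As an illustration of "hashing that respects local similarity", the sketch below implements random-hyperplane LSH for cosine similarity, one standard LSH family; the function name and parameters are our own, not from the paper:

```python
import numpy as np

def simhash(vectors, n_bits=32, seed=0):
    """Random-hyperplane LSH for cosine similarity.

    Each bit records which side of a random hyperplane a vector falls on;
    vectors separated by a small angle disagree on few bits, so similar
    items land in the same (or a nearby) hash bucket.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, vectors.shape[1]))
    bits = (vectors @ planes.T) > 0                  # (n, n_bits) sign pattern
    weights = 1 << np.arange(n_bits, dtype=np.int64)
    return bits.astype(np.int64) @ weights           # one integer code per vector
```

The probability that one hyperplane separates two vectors is proportional to the angle between them, which is what makes the Hamming distance between codes track cosine similarity.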

    Super-resolution: A comprehensive survey


    Object-Aware Tracking and Mapping

    Reasoning about geometric properties of digital cameras and optical physics enabled researchers to build methods that localise cameras in 3D space from a video stream, while – often simultaneously – constructing a model of the environment. Related techniques have evolved substantially since the 1980s, leading to increasingly accurate estimations. Traditionally, however, the quality of results is strongly affected by the presence of moving objects, incomplete data, or difficult surfaces – i.e. surfaces that are not Lambertian or lack texture. One insight of this work is that these problems can be addressed by going beyond geometrical and optical constraints, in favour of object-level and semantic constraints. Incorporating specific types of prior knowledge in the inference process, such as motion or shape priors, leads to approaches with distinct advantages and disadvantages. After introducing relevant concepts in Chapter 1 and Chapter 2, methods for building object-centric maps in dynamic environments using motion priors are investigated in Chapter 5. Chapter 6 addresses the same problem as Chapter 5, but presents an approach which relies on semantic priors rather than motion cues. To fully exploit semantic information, Chapter 7 discusses the conditioning of shape representations on prior knowledge and the practical application to monocular, object-aware reconstruction systems.

    Lifting-based Global Optimisation over Riemannian Manifolds: Theory and Applications in Cryo-EM

    We propose ellipsoidal support lifting (ESL), a lifting-based optimisation scheme for approximating the global minimiser of a smooth function over a Riemannian manifold. Under a uniqueness assumption on the minimiser we show several theoretical results, in particular an error bound with respect to the global minimiser. Additionally, we use the developed theory to integrate the lifting-based optimisation scheme into an alternating minimisation method for joint homogeneous volume reconstruction and rotation estimation in single particle cryogenic-electron microscopy (Cryo-EM), where typically tens of thousands of manifold-valued minimisation problems have to be solved. The joint recovery method arguably overcomes the typical trade-off between noise-robustness and data-consistency while remaining computationally feasible, and is used to test both the theoretical predictions and the algorithmic performance through numerical experiments with Cryo-EM data.

    Deep Clustering: A Comprehensive Survey

    Cluster analysis plays an indispensable role in machine learning and data mining. Learning a good data representation is crucial for clustering algorithms. Recently, deep clustering, which can learn clustering-friendly representations using deep neural networks, has been broadly applied in a wide range of clustering tasks. Existing surveys of deep clustering mainly focus on single-view settings and network architectures, ignoring the complex application scenarios of clustering. To address this issue, in this paper we provide a comprehensive survey of deep clustering organised by data source. For different data sources and initial conditions, we systematically distinguish the clustering methods in terms of methodology, prior knowledge, and architecture. Concretely, deep clustering methods are introduced according to four categories: traditional single-view deep clustering, semi-supervised deep clustering, deep multi-view clustering, and deep transfer clustering. Finally, we discuss the open challenges and potential future opportunities in different fields of deep clustering.
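The pipeline shared by the methods this survey covers (learn an embedding, then cluster in that space) can be caricatured as follows. This is a deliberately minimal sketch: the random-projection "encoder" is a stand-in for a trained deep network, and all names are our own:

```python
import numpy as np

def encode(x, dim=2, seed=0):
    """Stand-in encoder: a fixed random projection. In an actual deep
    clustering method this is a neural network trained jointly with the
    clustering objective, so the embedding becomes clustering-friendly."""
    w = np.random.default_rng(seed).standard_normal((x.shape[1], dim))
    return x @ w

def kmeans(z, k, n_iters=50, seed=0):
    """Plain Lloyd's k-means in the embedding space."""
    rng = np.random.default_rng(seed)
    centers = z[rng.choice(len(z), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to its nearest centre, then recompute centres.
        labels = ((z[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.stack([z[labels == c].mean(0) if (labels == c).any()
                            else centers[c] for c in range(k)])
    return labels

cluster_ids = lambda x, k: kmeans(encode(x), k)
```

The four surveyed categories differ mainly in how the encoder is trained (unsupervised, with partial labels, across views, or transferred from another domain), not in this overall shape.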

    Estimation of probability distribution on multiple anatomical objects and evaluation of statistical shape models

    The estimation of shape probability distributions of anatomic structures is a major research area in medical image analysis. The statistical shape descriptions estimated from training samples provide the mean shapes and the geometric shape variations of such structures. These are key components in many applications. This dissertation presents two approaches to the estimation of a shape probability distribution of a multi-object complex. Both approaches are applied to objects in the male pelvis, and show improvement in the estimated shape distributions of the objects. The first approach is to estimate the shape variation of each object in the complex in terms of two components: the object's variation independent of the effect of its neighboring objects, and the neighbors' effect on the object. The neighbors' effect on the target object is interpreted using the idea on which linear mixed models are based. The second approach is to estimate a conditional shape probability distribution of a target object given its neighboring objects. The estimation of the conditional probability is based on principal component regression. This dissertation also presents a measure to evaluate an estimated shape probability distribution with regard to its predictive power, that is, the ability of a statistical shape model to describe unseen members of the population. This aspect of statistical shape models is of key importance to any application that uses shape models. The measure can be applied to PCA-based shape models and can be interpreted as the ratio of the variation of new data explained by the retained principal directions estimated from the training data. This measure was applied to shape models of synthetic warped ellipsoids and of right hippocampi. According to two surface distance measures and a volume overlap measure, it was empirically verified that the predictive measure reflects what happens in the ambient space where the model lies.
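The predictive measure described in this abstract (the ratio of held-out variation explained by retained training directions) can be written down directly for a PCA-based model. The sketch below assumes each shape is a flattened coordinate vector; the function name and interface are our own:

```python
import numpy as np

def predictive_explained_ratio(train, held_out, n_components):
    """Fraction of held-out shape variation explained by the top
    `n_components` principal directions estimated from the training set.

    Rows are shape samples (flattened coordinates). The measure is the
    squared norm of the held-out data projected onto the retained training
    eigenvectors, divided by its total squared norm about the training mean.
    """
    mean = train.mean(axis=0)
    # Principal directions of the training shapes (rows of vt, orthonormal).
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:n_components]
    centered = held_out - mean
    projected = centered @ basis.T @ basis   # component inside the retained subspace
    return float((projected ** 2).sum() / (centered ** 2).sum())
```

Because the retained directions are orthonormal, the ratio lies in [0, 1]; it approaches 1 when the model's subspace captures essentially all variation in the unseen shapes.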