6 research outputs found

    Quadtree-based eigendecomposition for pose estimation in the presence of occlusion and background clutter

    Includes bibliographical references (pages 29-30).
    Eigendecomposition-based techniques are popular for a number of computer vision problems, e.g., object recognition and pose estimation, because they are purely appearance based and require few on-line computations. Unfortunately, they typically require an unobstructed view of the object whose pose is being detected: the presence of occlusion and background clutter precludes the normalizations that are usually applied and significantly alters the appearance of the object under detection. This work presents an algorithm based on applying eigendecomposition to a quadtree representation of the image dataset used to describe the appearance of an object. This allows decisions concerning the pose of an object to be based only on those portions of the image in which the algorithm has determined that the object is not occluded. The accuracy and computational efficiency of the proposed approach are evaluated on 16 different objects, with up to 50% of each object occluded, and on images of ships in a dockyard.
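
    As a rough illustration of the block-wise idea in this abstract, the sketch below applies PCA per quadtree block and lets only low-reconstruction-error (putatively unoccluded) blocks vote on the pose. The fixed-depth split, the error threshold, and the voting rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def quadtree_blocks(img, depth=2):
    """Split a square image into 4**depth equal blocks (a fixed-depth
    quadtree; any adaptive subdivision criterion is not reproduced here)."""
    n = 2 ** depth
    h, w = img.shape[0] // n, img.shape[1] // n
    return [img[i*h:(i+1)*h, j*w:(j+1)*w].ravel().astype(float)
            for i in range(n) for j in range(n)]

def fit_block_eigenspaces(train_imgs, depth=2, k=5):
    """Fit one PCA eigenspace per quadtree block across all training poses."""
    per_pose = [quadtree_blocks(im, depth) for im in train_imgs]
    models = []
    for b in range(len(per_pose[0])):
        X = np.stack([blocks[b] for blocks in per_pose])   # poses x pixels
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        W = Vt[:k]                                         # k eigen-images
        models.append((mu, W, (X - mu) @ W.T))             # per-pose coefficients
    return models

def estimate_pose(img, models, depth=2, occ_thresh=0.5):
    """Vote over poses using only blocks whose reconstruction error is small;
    a large error is read as evidence that the block is occluded/cluttered."""
    votes = []
    for block, (mu, W, coeffs) in zip(quadtree_blocks(img, depth), models):
        c = (block - mu) @ W.T                             # project the block
        resid = np.linalg.norm((block - mu) - c @ W)       # reconstruction error
        if resid < occ_thresh * (np.linalg.norm(block - mu) + 1e-9):
            votes.append(int(np.linalg.norm(coeffs - c, axis=1).argmin()))
    return np.bincount(votes).argmax() if votes else None
```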

    Temporal nonlinear dimensionality reduction


    Continuity Properties of the Appearance Manifold for Mobile Robot Position Estimation

    This paper presents a method for solving the problem of mobile robot localisation in an indoor environment. It is based on an approach in which the scene is "learned" by taking images from a large number of positions (x, y) and orientations (θ) along a 2D grid. The positioning problem is then reduced to a problem of associating an unknown input image with a "learned" image. A brute-force solution requires too many image correlations and would be computationally too expensive for a system running in real time. To overcome this problem, the image set is compressed using Principal Components Analysis, converting the search problem into an addressing problem. The aim is to obtain a low-dimensional subspace in which the visual workspace is represented as a continuous appearance manifold. This allows the current robot pose to be determined by projecting an unknown input image into the eigenspace and comparing its position to the appearance manifold. In this paper…
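
    A minimal sketch of the eigenspace pipeline this abstract describes, assuming a training set of views with known (x, y, θ) poses; the nearest-neighbour match stands in for the paper's comparison against the continuous appearance manifold, and the subspace dimension d is an illustrative choice.

```python
import numpy as np

def build_eigenspace(train_imgs, poses, d=10):
    """Compress the learned image set with PCA; the rows of the returned
    projection matrix trace out the appearance manifold in eigenspace."""
    X = np.stack([im.ravel() for im in train_imgs]).astype(float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:d]                                   # d principal eigen-images
    return mu, W, (X - mu) @ W.T, np.asarray(poses)

def localise(img, mu, W, manifold, poses):
    """Project an unknown view into the eigenspace and return the pose of
    the nearest learned point on the manifold."""
    z = (img.ravel().astype(float) - mu) @ W.T
    return poses[np.linalg.norm(manifold - z, axis=1).argmin()]
```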

    Robot environment learning with a mixed-linear probabilistic state-space model

    This thesis proposes the use of a probabilistic state-space model with mixed-linear dynamics for learning to predict a robot's experiences. It is motivated by a desire to bridge the gap between traditional models with predefined objective semantics on the one hand, and the biologically-inspired "black box" behavioural paradigm on the other. A novel EM-type algorithm for the model is presented, which is less computationally demanding than the Monte Carlo techniques developed for use in (for example) visual applications. The algorithm's E-step is slightly approximative, but an extension is described which would in principle make it asymptotically correct. Investigation using synthetically sampled data shows that the uncorrected E-step can in any case make correct inferences about quite complicated systems. Results collected from two simulated mobile robot environments support the claim that mixed-linear models can capture both discontinuous and continuous structure in the world in an intuitively natural manner; while they proved to perform only slightly better than simpler autoregressive hidden Markov models on these simple tasks, it is possible to claim tentatively that they might scale more effectively to environments in which trends over time play a larger role. Bayesian confidence regions, easily obtained from the mixed-linear model, proved to be an effective guard against over-confident predictions outside the model's area of competence. A section on future extensions discusses how the model's easy invertibility could be harnessed to the ultimate aim of choosing actions, from a continuous space of possibilities, which maximise the robot's expected payoff over several steps into the future.
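
    For concreteness, here is a toy generative sketch of a mixed-linear (switching linear-Gaussian) state-space model of the kind the abstract refers to: a discrete Markov regime picks which linear dynamics drive the continuous state, capturing continuous structure within a regime and discontinuities at switches. The regimes, matrices, and noise levels are invented for illustration; the thesis's EM learning and approximate E-step are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

P_switch = np.array([[0.95, 0.05],
                     [0.05, 0.95]])               # regime transition matrix
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),          # regime 0: smooth drift
     np.array([[0.9, -0.2], [0.2, 0.9]])]         # regime 1: turning motion
Q = 0.01 * np.eye(2)                              # process noise covariance

def simulate(T=200):
    """Sample a trajectory: continuous evolution within a regime,
    discontinuous behaviour when the hidden regime switches."""
    s, x, traj = 0, np.zeros(2), []
    for _ in range(T):
        s = rng.choice(2, p=P_switch[s])          # hidden discrete switch
        x = A[s] @ x + rng.multivariate_normal(np.zeros(2), Q)
        traj.append((s, x.copy()))
    return traj
```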

    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, and other public institutions; such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization approaches, namely visual localization, inertial navigation systems, and radio localization. For the mapping of indoor environments and the localization of autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized in order to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
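
    As a small illustration of the sensor-fusion point above, the sketch below fuses inertial predictions with sparse absolute radio fixes (WiFi/UWB-style) in a 1-D constant-velocity Kalman filter. The motion model and all noise values are assumptions for illustration, not a specific system from the collection.

```python
import numpy as np

def fuse_inertial_radio(accels, radio_fixes, dt=0.02):
    """Inertial data drives the predict step; sparse absolute position
    fixes correct the accumulated dead-reckoning drift."""
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
    Qn = np.diag([1e-4, 1e-3])               # process noise (inertial drift)
    H = np.array([[1.0, 0.0]])               # a radio fix observes position
    R = np.array([[0.5]])                    # radio fix variance
    track = []
    for t, a in enumerate(accels):
        x = F @ x + np.array([0.5 * dt * dt, dt]) * a    # predict with accel
        P = F @ P @ F.T + Qn
        if t in radio_fixes:                             # sparse correction
            y = np.atleast_1d(radio_fixes[t]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

# Example: stationary accelerometer with two absolute fixes at 1.0 m.
track = fuse_inertial_radio(np.zeros(500), {100: 1.0, 300: 1.0})
```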