
    Pose estimation from corresponding point data

    Full text link

    Fast no ground truth image registration accuracy evaluation: Comparison of bootstrap and Hessian approaches

    Get PDF
    Image registration algorithms provide a displacement field between two images. We consider the problem of estimating the accuracy of the calculated displacement field from the input images only, without assuming any specific model for the deformation. We compare two algorithms: the first is based on bootstrap resampling; the second, a new method, uses an estimate of the criterion's Hessian matrix. We also present a block matching strategy using multiple window sizes, where the final result is obtained by fusing partial results, controlled by the accuracy estimates for the blocks involved. Both accuracy estimation methods and the new registration strategy are experimentally compared on synthetic as well as real medical ultrasound data.
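
    As a rough illustration of the bootstrap idea only (not the paper's implementation), the sketch below re-estimates a block's displacement under bootstrap-resampled pixel weights and reports the spread of the estimates as an accuracy measure; the SSD criterion, window size, and search range are arbitrary placeholder choices.

```python
import numpy as np

def match_block(fixed, moving, weights, search=5):
    """Integer displacement minimising weighted SSD over a small search range."""
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum(weights * (fixed - shifted) ** 2)
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

def bootstrap_accuracy(fixed, moving, n_boot=50, seed=0):
    """Std. dev. of displacements re-estimated under bootstrap pixel weights."""
    rng = np.random.default_rng(seed)
    n = fixed.size
    estimates = []
    for _ in range(n_boot):
        draws = rng.integers(0, n, size=n)                  # resample pixels with replacement
        weights = np.bincount(draws, minlength=n).reshape(fixed.shape)
        estimates.append(match_block(fixed, moving, weights))
    return np.asarray(estimates, float).std(axis=0)         # (sigma_y, sigma_x)

# Toy usage: a noisy block whose true displacement is (2, -1).
rng = np.random.default_rng(1)
fixed = rng.standard_normal((32, 32))
moving = np.roll(np.roll(fixed, -2, axis=0), 1, axis=1) + 0.1 * rng.standard_normal((32, 32))
print(bootstrap_accuracy(fixed, moving))
```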

    Optical See-Through Head Mounted Display Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise

    Get PDF
    Augmented Reality (AR) is a technique by which computer generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to the human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level to enable display techniques such as stereoscopy to function properly [SOURCE]. Thus, vital to AR, calibration methodology is an important research area. While great achievements have already been made, there are some properties of current calibration methods for augmenting vision which do not translate from their traditional use in automated camera calibration to their use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user introduced head orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head mounted display.
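
    A minimal sketch of this kind of Monte Carlo experiment, assuming a basic homogeneous DLT solved by SVD and modelling the user-introduced alignment noise simply as Gaussian pixel noise; the scene, camera matrix, trial count, and noise level below are made-up placeholders.

```python
import numpy as np

def dlt(X, x):
    """Estimate P (3x4, up to scale) from 3D points X (n,3) and pixels x (n,2)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    P = Vt[-1].reshape(3, 4)                      # right singular vector = solution
    return P / P[2, 3]                            # fix the scale for comparison

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3)) + [0, 0, 5]  # calibration points in front of camera
P_true = np.hstack([np.diag([800.0, 800.0, 1.0]), [[320], [240], [1]]])
xh = (P_true @ np.column_stack([X, np.ones(20)]).T).T
x = xh[:, :2] / xh[:, 2:]                         # noise-free projections

spread = []
for _ in range(500):                              # Monte Carlo trials
    x_noisy = x + rng.normal(0, 1.0, size=x.shape)   # alignment noise, in pixels
    spread.append(dlt(X, x_noisy).ravel())
print(np.std(spread, axis=0).reshape(3, 4))       # std. dev. of each DLT parameter
```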

    Robust extended Kalman filtering for camera pose tracking using 2D to 3D lines correspondences

    Get PDF
    In this paper we present a new robust camera pose estimation approach based on 3D line tracking. We use an Extended Kalman Filter (EKF) to incrementally update the camera pose in real time. The principal contributions of our method include, first, the extension of the RANSAC scheme in order to achieve a robust matching algorithm that associates 2D edges from the image with the 3D line segments from the input model, and, second, a new framework for camera pose estimation using 2D-3D straight lines within an EKF. Experimental results on real image sequences are presented to evaluate the performance and the feasibility of the proposed approach.
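
    For readers unfamiliar with the filter itself, a generic EKF predict/update cycle is sketched below; the motion and measurement models in the toy usage are placeholder stand-ins, not the paper's 6-DoF pose parameterisation or its line-based measurement function.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle.

    x, P : state mean and covariance
    z    : measurement vector (e.g. stacked 2D line parameters)
    f, F : motion model and its Jacobian
    h, H : measurement model and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict step: propagate mean and covariance through the motion model.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update step: correct the prediction with the measurement innovation.
    y = z - h(x_pred)                                  # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R           # innovation covariance
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new

# Toy usage: constant-position model with a direct (noisy) observation of the state.
x, P = np.zeros(3), np.eye(3)
f = lambda x: x
F = lambda x: np.eye(3)
h = lambda x: x
H = lambda x: np.eye(3)
Q, R = 0.01 * np.eye(3), 0.1 * np.eye(3)
x, P = ekf_step(x, P, np.array([0.1, -0.2, 0.05]), f, F, h, H, Q, R)
print(x)
```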

    Positional estimation techniques for an autonomous mobile robot

    Get PDF
    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

    From 3D Point Clouds to Pose-Normalised Depth Maps

    Get PDF
    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, (iv) generation of either a pose aligned or a pose normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface and this is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
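
    Stage (iii) reduces rotational alignment to finding a circular shift between two periodic 1D signals. A minimal sketch of that 1D correlation step, using synthetic signals rather than actual isoradius contour curvature samples:

```python
import numpy as np

def circular_shift_estimate(a, b):
    """Shift k (in samples) that best aligns b to a, via FFT cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return int(np.argmax(corr))

n = 360                                    # one sample per degree around the contour
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
signal = np.cos(3 * theta) + 0.5 * np.sin(7 * theta)
rotated = np.roll(signal, 42)              # unknown rotation of 42 degrees
print(circular_shift_estimate(rotated, signal))   # -> 42
```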

    Self-supervised learning of depth-based navigation affordances from haptic cues

    Get PDF
    This paper presents a ground vehicle capable of exploiting haptic cues to learn navigation affordances from depth cues. A simple pan-tilt telescopic antenna and a Kinect sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback, respectively. With the antenna, the robot determines whether an object is traversable; the interaction outcome is then associated with the object's depth-based descriptor. Later on, the robot uses this acquired knowledge to predict whether a newly observed object is traversable just by inspecting its depth-based appearance. A set of field trials shows the ability of the robot to progressively learn which elements of the environment are traversable.
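
    A minimal sketch of such a self-supervised loop, assuming a plain k-nearest-neighbour memory in place of whatever learner the paper uses, with synthetic two-dimensional descriptors as placeholders:

```python
import numpy as np

class TraversabilityMemory:
    """Stores (depth descriptor, haptic outcome) pairs; predicts by kNN vote."""
    def __init__(self):
        self.X, self.y = [], []

    def record(self, descriptor, traversable):
        """Associate a depth-based descriptor with a haptic interaction outcome."""
        self.X.append(np.asarray(descriptor, float))
        self.y.append(bool(traversable))

    def predict(self, descriptor, k=3):
        """Majority vote among the k nearest stored experiences."""
        X = np.stack(self.X)
        d = np.linalg.norm(X - np.asarray(descriptor, float), axis=1)
        nearest = np.argsort(d)[:k]
        return np.mean([self.y[i] for i in nearest]) > 0.5

mem = TraversabilityMemory()
rng = np.random.default_rng(0)
for _ in range(50):                        # e.g. tall rigid obstacles: not traversable
    mem.record(rng.normal([1.0, 0.2], 0.1), traversable=False)
for _ in range(50):                        # e.g. low compliant vegetation: traversable
    mem.record(rng.normal([0.2, 1.0], 0.1), traversable=True)
print(mem.predict([0.25, 0.9]))            # -> True
```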

    Mind the Gap: Norm-Aware Adaptive Robust Loss for Multivariate Least-Squares Problems

    Full text link
    Measurement outliers are unavoidable when solving real-world robot state estimation problems. A large family of robust loss functions (RLFs) exists to mitigate the effects of outliers, including newly developed adaptive methods that do not require parameter tuning. All of these methods assume that residuals follow a zero-mean Gaussian-like distribution. However, in multivariate problems the residual is often defined as a norm, and norms follow a Chi-like distribution with a non-zero mode value. This produces a "mode gap" that impacts the convergence rate and accuracy of existing RLFs. The proposed approach, "Adaptive MB," accounts for this gap by first estimating the mode of the residuals using an adaptive Chi-like distribution. Applying an existing adaptive weighting scheme only to residuals greater than the mode leads to more robust performance and faster convergence times in two fundamental state estimation problems: point cloud alignment and pose averaging.
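
    A minimal sketch of the mode-gap idea, assuming unit-variance Gaussian residual components (so the Chi mode is sqrt(k-1) for k degrees of freedom) and a Huber-style IRLS weight as a stand-in for the paper's adaptive scheme:

```python
import numpy as np

def mode_shifted_weights(res_norms, k, delta=1.0):
    """IRLS weights: full weight up to the Chi mode, Huber-like decay beyond it."""
    mode = np.sqrt(max(k - 1, 0))           # mode of a Chi(k) distribution
    excess = np.maximum(res_norms - mode, 0.0)
    w = np.ones_like(res_norms)
    heavy = excess > delta
    w[heavy] = delta / excess[heavy]        # down-weight only the part above the mode
    return w

rng = np.random.default_rng(0)
k = 3                                       # e.g. 3D residuals in point cloud alignment
inliers = np.linalg.norm(rng.standard_normal((100, k)), axis=1)
outliers = np.linalg.norm(10 + rng.standard_normal((5, k)), axis=1)
r = np.concatenate([inliers, outliers])
w = mode_shifted_weights(r, k)
print(w[:100].mean(), w[100:].mean())       # inliers stay near 1, outliers near 0
```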

    Measurement errors in visual servoing

    Get PDF
    In recent years, a number of hybrid visual servoing control algorithms have been proposed and evaluated. For some time now, it has been clear that the classical control approaches, image-based and position-based, have some inherent problems. Hybrid approaches try to combine them in order to overcome these problems. However, most of the proposed approaches concentrate mainly on the design of the control law, neglecting the issue of errors arising from the sensory system. This work deals with the effect of measurement errors in visual servoing. The particular contribution of this paper is the analysis of the propagation of image error through the pose estimation and the visual servoing control law. We have chosen to investigate the properties of the vision system and their effect on the performance of the control system. Two approaches are evaluated: i) position-based, and ii) 2 1/2 D visual servoing. We believe that our evaluation offers a valid tool to build and analyze hybrid control systems based on, for example, switching [1] or partitioning [2].
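
    A minimal sketch of first-order error propagation of this kind, pushing an image-measurement covariance through a placeholder pose-estimation map via its numerical Jacobian, Sigma_pose = J Sigma_img J^T:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x."""
    x = np.asarray(x, float)
    f0 = np.asarray(f(x), float)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (np.asarray(f(x + d)) - np.asarray(f(x - d))) / (2 * eps)
    return J

def propagate_covariance(f, x, Sigma_in):
    """First-order propagation: Sigma_out = J Sigma_in J^T."""
    J = numerical_jacobian(f, x)
    return J @ Sigma_in @ J.T

# Toy usage: a stand-in "pose from image features" map taking 4 pixel
# coordinates to a 3-vector, with 0.5 px measurement noise (variance 0.25).
f = lambda p: np.array([p[0] + p[2], p[1] - p[3], 0.1 * p[0] * p[1]])
x = np.array([100.0, 120.0, 130.0, 90.0])
Sigma_img = 0.25 * np.eye(4)
print(propagate_covariance(f, x, Sigma_img))   # covariance of the pose estimate
```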