10 research outputs found

    Motion Estimation from Disparity Images

    A new method for 3D rigid motion estimation from stereo is proposed in this paper. The appealing feature of this method is that it directly uses the disparity images obtained from stereo matching. We assume that the stereo rig has parallel cameras and show, in that case, the geometric and topological properties of the disparity images. We then introduce a rigid transformation (called d-motion) that maps two disparity images of a rigidly moving object, show how it is related to the Euclidean rigid motion, and derive a motion estimation algorithm. Experiments show that our approach is simple and more accurate than standard approaches.
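The abstract contrasts the d-motion formulation with standard approaches that first triangulate 3D points from disparity and then estimate the motion between the two point sets. A minimal sketch of that standard baseline, assuming a parallel-camera pinhole rig with focal length `f`, baseline `B`, and principal point `(cu, cv)` (these names are illustrative, not from the paper):

```python
import numpy as np

def disparity_to_points(u, v, d, f, B, cu, cv):
    """Triangulate pixels of a parallel stereo rig: Z = f*B/d (pinhole model)."""
    Z = f * B / d
    X = (u - cu) * Z / f
    Y = (v - cv) * Z / f
    return np.column_stack([X, Y, Z])

def estimate_rigid_motion(P, Q):
    """Closed-form least-squares R, t with Q_i ~= R @ P_i + t (Kabsch/SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps det(R) = +1
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t
```

Here `estimate_rigid_motion` is the generic SVD solution on triangulated points; the paper's d-motion algorithm instead operates on the disparity images directly.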

    An empirical point error model for TLS-derived point clouds

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model, so the magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ=±36.6 and σα=±17.8, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence it is a variable, computed for each point individually by an empirically developed formula, and varies between ±2 mm and ±12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios; these investigations validated the suitability and practicality of the proposed method. This research was funded by TUBITAK - The Scientific and Technological Research Council of Turkey (Project ID: 115Y239) and by the Scientific Research Projects of Bulent Ecevit University (Project ID: 2015-47912266-01).
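The variance-covariance propagation and principal-components steps described above can be sketched as follows, assuming the common spherical parameterization x = ρ·cosα·cosθ, y = ρ·cosα·sinθ, z = ρ·sinα (the paper's exact parameterization and angle units are not given here, so treat this as illustrative):

```python
import numpy as np

def point_error_ellipsoid(rho, theta, alpha, s_rho, s_theta, s_alpha):
    """Propagate a priori precisions of (range, horizontal angle, vertical angle)
    to a 3D point covariance, then get the error-ellipsoid semi-axes and
    directions by eigen-decomposition (principal components).
    Angles in radians; s_* are standard deviations."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    # Jacobian of (x, y, z) = (rho*ca*ct, rho*ca*st, rho*sa) w.r.t. (rho, theta, alpha)
    J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                  [ca * st,  rho * ca * ct, -rho * sa * st],
                  [sa,       0.0,            rho * ca]])
    Sigma_obs = np.diag([s_rho**2, s_theta**2, s_alpha**2])
    Sigma_xyz = J @ Sigma_obs @ J.T          # law of variance-covariance propagation
    w, V = np.linalg.eigh(Sigma_xyz)         # principal components transformation
    semi_axes = np.sqrt(w)                   # 1-sigma ellipsoid semi-axes
    return Sigma_xyz, semi_axes, V
```

The columns of `V` give the ellipsoid axis directions; the anisotropy of the resulting ellipsoids grows with range because the angular terms in the Jacobian scale with ρ.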

    Cramer-Rao Lower Bound for Point Based Image Registration with Heteroscedastic Error Model for Application in Single Molecule Microscopy

    The Cramer-Rao lower bound for the estimation of the affine transformation parameters in a multivariate heteroscedastic errors-in-variables model is derived. The model is suitable for feature-based image registration in which both sets of control points are localized with errors whose covariance matrices vary from point to point. With a focus on the registration of fluorescence microscopy images, the Cramer-Rao lower bound for the estimation of a feature's position (e.g. of a single molecule) in a registered image is also derived. In the particular case where all covariance matrices for the localization errors are scalar multiples of a common positive definite matrix (e.g. the identity matrix), as can be assumed in fluorescence microscopy, simplified expressions for the Cramer-Rao lower bound are given. Under certain simplifying assumptions these expressions are shown to match the asymptotic distributions of a previously presented set of estimators. Theoretical results are verified with simulations and experimental data.
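As a toy illustration of how heteroscedastic covariances enter such a bound (much simpler than the paper's affine errors-in-variables model): for estimating a common translation from points observed with independent Gaussian noise of per-point covariance Σ_i, the Fisher information is the sum of the inverse covariances and the CRLB is its inverse; under the scalar-multiple model Σ_i = s_i²Σ₀ this collapses to Σ₀ divided by the sum of the 1/s_i².

```python
import numpy as np

def translation_crlb(covs):
    """CRLB for a common translation observed through points with independent
    heteroscedastic Gaussian noise: Fisher information is the sum of the
    inverse covariance matrices, and the CRLB is its inverse."""
    info = sum(np.linalg.inv(C) for C in covs)
    return np.linalg.inv(info)
```

Note how a single well-localized point (small s_i) dominates the bound, which is the qualitative behavior the photon-count dependence in the microscopy setting reflects.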

    Registration of Stereo Images by a RANSAC-Based Procedure with a Geometric Constraint on Hypothesis Generation

    An approach for the registration of sparse feature sets detected in two stereo image pairs taken from two different views is proposed. Analogously to many existing image registration approaches, our method consists of initial matching of features using local descriptors, followed by a RANSAC-based procedure. The proposed approach is especially suitable for cases where there is a high percentage of false initial matches. The strategy proposed in this paper is to modify the hypothesis generation step of the basic RANSAC approach by performing a multiple-step procedure that uses geometric constraints in order to reduce the probability of false correspondences in the generated hypotheses. The algorithm needs approximate information about the relative camera pose between the two views; however, the uncertainty of this information is allowed to be rather high. The presented technique is evaluated using both synthetic data and real data obtained with a stereo camera system.
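The idea of constraining hypothesis generation can be sketched for 3D feature sets as follows. This is a simplified single-constraint version (rigid motion must preserve pairwise distances within a sampled triple), not the paper's exact multi-step procedure, and it omits the use of the approximate relative pose:

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rigid transform with Q_i ~= R @ P_i + t."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, n_iter=200, inlier_tol=0.05, dist_tol=0.02, rng=None):
    """RANSAC for a rigid transform between matched 3D point sets P -> Q,
    with a geometric check in the hypothesis-generation step: a sampled
    triple of matches is used only if its pairwise distances are preserved,
    which rejects most hypotheses built from false matches."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(P), bool)
    best_model = None
    for _ in range(n_iter):
        idx = rng.choice(len(P), 3, replace=False)
        dP = np.linalg.norm(P[idx][:, None] - P[idx][None], axis=-1)
        dQ = np.linalg.norm(Q[idx][:, None] - Q[idx][None], axis=-1)
        if np.abs(dP - dQ).max() > dist_tol:   # geometric constraint on the sample
            continue
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = err < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (R, t)
    return best_model, best_inliers
```

The constraint check is cheap relative to model fitting, so discarding inconsistent samples early raises the fraction of iterations spent on all-inlier hypotheses, which is the effect the paper targets for high false-match rates.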

    Analysis of point based image registration errors with applications in single molecule microscopy

    We present an asymptotic treatment of the errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood model and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise in which the covariance matrices are scalar multiples of a known matrix (including the case where they are multiples of the identity), we provide closed-form solutions for the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure, the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show that the asymptotic results are robust to low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
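Under the scalar-multiple noise model, closed-form estimation amounts to weighting each control point by 1/s_i². A simplified sketch (treating the source points as noise-free, so this is ordinary weighted regression rather than the paper's full errors-in-variables treatment; all names are illustrative):

```python
import numpy as np

def weighted_affine_fit(P, Q, s2):
    """Weighted least squares for Q_i ~= A @ P_i + t, where Q_i carries noise
    with covariance s2[i] * I. Each point is weighted by 1/s2[i], the correct
    weighting under the scalar-multiple heteroscedastic model."""
    X = np.hstack([P, np.ones((len(P), 1))])   # design matrix rows [p_i, 1]
    W = 1.0 / np.asarray(s2)
    XtW = X.T * W                              # scales each column i of X.T by W[i]
    # Normal equations: (X^T W X) B = X^T W Q, with B stacking [A^T; t^T]
    B = np.linalg.solve(XtW @ X, XtW @ Q)
    A, t = B[:-1].T, B[-1]
    return A, t
```

In the microscopy setting the s_i² would come from per-molecule localization precision (roughly inverse photon count), which is why the TRE variance depends on photon counts as well as the number of CPs.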

    Comparing Measured and Theoretical Target Registration Error of an Optical Tracking System

    The goal of this thesis is to experimentally measure the accuracy of an optical tracking system used in commercial surgical navigation systems. We measure accuracy by constructing a mechanism that allows a tracked target to move with spherical motion (i.e., there exists a single point on the mechanism, the center of the sphere, that does not change position when the tracked target is moved). We imagine that the center of the sphere is the tip of a surgical tool rigidly attached to the tracked target. The location of the tool tip cannot be measured directly by the tracking system (because it is impossible to attach a tracking marker to the tool tip) and must be calculated from the measured location and orientation of the tracked target. Any measurement error in the tracking system will cause the calculated position of the tool tip to change as the target is moved; the spread of the calculated tool tip positions is a measurement of tracking error called the target registration error (TRE). The observed TRE is compared to an analytic model of TRE to assess the predictions of the analytic model.
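The quantity described, the spread of calculated tool-tip positions under spherical motion, can be sketched as follows, assuming the tracker reports each target pose as a rotation matrix R_i and position p_i and that the tip offset x is fixed in the target frame (names illustrative, not from the thesis):

```python
import numpy as np

def measured_tre(Rs, ps, tip_offset):
    """Compute calculated tool-tip positions tip_i = R_i @ x + p_i from tracked
    target poses, and return their RMS spread about the mean tip position as
    an experimental TRE measure."""
    tips = np.einsum('nij,j->ni', Rs, tip_offset) + ps
    mean = tips.mean(0)
    rms = np.sqrt(((tips - mean) ** 2).sum(1).mean())
    return tips, rms
```

With perfect tracking and true spherical motion every calculated tip coincides with the sphere center, so the RMS spread is zero; any nonzero spread is attributable to tracking error.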

    Fiducial-Based Registration with Anisotropic Localization Error


    Optimal Rigid Motion Estimation and Performance Evaluation with Bootstrap

    A new method for 3D rigid motion estimation is derived under the most general assumption that the measurements are corrupted by inhomogeneous and anisotropic, i.e., heteroscedastic noise. This is the case, for example, when the motion of a calibrated stereo-head is to be determined from image pairs. Linearization in the quaternion space transforms the problem into a multivariate, heteroscedastic errors-in-variables (HEIV) regression, from which the rotation and translation estimates are obtained simultaneously. The significant performance improvement is illustrated, for real data, by comparison with the results of quaternion-, subspace- and renormalization-based approaches described in the literature. Extensive use is made of the bootstrap, an advanced numerical tool from statistics, both to estimate the covariances of the 3D data points and to obtain confidence regions for the rotation and translation estimates. The bootstrap enables an accurate recovery of this information using only the two image pairs serving as input.
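The covariance-by-resampling idea can be sketched generically: resample the point correspondences with replacement, re-estimate the motion each time, and take the empirical covariance of the estimates. This sketch uses a plain SVD estimator as a stand-in for the paper's HEIV regression:

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rigid transform with Q_i ~= R @ P_i + t."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def bootstrap_motion(P, Q, n_boot=500, rng=None):
    """Bootstrap the motion estimate: resample correspondences with
    replacement, re-estimate (R, t) each time, and return the sample
    covariance of the translation estimates."""
    rng = rng or np.random.default_rng(0)
    ts = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(P), len(P))   # bootstrap resample
        _, t = kabsch(P[idx], Q[idx])
        ts.append(t)
    return np.cov(np.array(ts).T)
```

Confidence regions for the rotation can be obtained the same way by collecting, for example, the bootstrap rotation estimates in a minimal parameterization; the appeal is that no noise model beyond the data itself is required.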