
    Minimizing the probabilistic magnitude of active vision errors using genetic algorithm

    Spatial quantization errors result from digitization. These errors become serious when the pixel size is significant compared to the allowable tolerance of the object dimension in the image. When an active sensor is placed to perform inspection, displacement of the sensor in orientation and location is common. The difference between the dimensions observed by the displaced sensor and the actual dimensions is defined as the displacement error. The density functions of the quantization and displacement errors depend on the camera resolution and on the camera locations and orientations. We use a genetic algorithm to minimize the probabilistic magnitude of the errors subject to the sensor constraints, such as the resolution, field-of-view, focus, and visibility constraints. Since the objective and constraint functions are both complicated and nonlinear, traditional nonlinear programming may not be efficient and may become trapped at a local minimum. Using the crossover, mutation, and stochastic selection operations of the genetic algorithm, such trapping can be avoided.
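
    The sketch below illustrates the kind of optimization this abstract describes: a genetic algorithm searching over sensor pose parameters to minimize an expected error magnitude, with poses that violate the sensor constraints penalized. The objective function, constraint check, and all parameter values are illustrative placeholders, not the paper's actual functions.

        import random

        # Hypothetical objective: the probabilistic error magnitude as a
        # function of sensor pose (x, y, z, pan, tilt). Placeholder only.
        def expected_error(pose):
            x, y, z, pan, tilt = pose
            return ((x - 1.0) ** 2 + (y + 0.5) ** 2 + z ** 2
                    + 0.1 * abs(pan) + 0.05 * abs(tilt))

        # Placeholder for the resolution / field-of-view / focus /
        # visibility constraints.
        def feasible(pose):
            return all(-2.0 <= p <= 2.0 for p in pose)

        def fitness(pose):
            return expected_error(pose) if feasible(pose) else 1e9

        def tournament(pop, k=3):
            # Stochastic selection: sample k candidates, keep the fittest.
            return min(random.sample(pop, k), key=fitness)

        def ga(pop_size=60, generations=200, mutation_rate=0.2, dim=5):
            pop = [[random.uniform(-2, 2) for _ in range(dim)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                children = []
                while len(children) < pop_size:
                    a, b = tournament(pop), tournament(pop)
                    cut = random.randrange(1, dim)        # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mutation_rate:   # mutation keeps the
                        i = random.randrange(dim)         # search from stalling
                        child[i] += random.gauss(0.0, 0.2)
                    children.append(child)
                pop = children
            return min(pop, key=fitness)

        best = ga()
        print("best pose:", best, "expected error:", expected_error(best))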

    Quantization error in stereo imaging systems

    This paper presents a stochastic analysis of the quantization error in a stereo imaging system. The probability density function of the range estimation error and the expected value of the range error magnitude are derived in terms of various design parameters, and a measure of relative range error is also proposed.
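
    As a numerical companion to this kind of analysis (not the paper's derivation), the sketch below uses the standard pinhole stereo model z = f*b/d and assumes the disparity quantization error is uniform over one pixel; it estimates the expected range-error magnitude by Monte Carlo and compares it to the first-order approximation E[|dz|] ~ z^2/(4*f*b). All parameter values are assumptions.

        import numpy as np

        f = 700.0      # focal length [pixels], assumed
        b = 0.12       # baseline [m], assumed
        z_true = 5.0   # true range [m], assumed
        d_true = f * b / z_true          # true (continuous) disparity [pixels]

        # Disparity quantization error, uniform on [-0.5, 0.5] pixels.
        eps = np.random.uniform(-0.5, 0.5, 1_000_000)
        z_est = f * b / (d_true + eps)   # range from the quantized disparity
        err = z_est - z_true

        print(f"E[|range error|] ~= {np.mean(np.abs(err)):.4f} m")
        # First-order approximation: |dz| ~= z^2/(f*b) * |eps|,
        # so E[|dz|] ~= z^2 / (4*f*b).
        print(f"first-order estimate: {z_true**2 / (4 * f * b):.4f} m")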

    A Collaborative Visual Localization Scheme for a Low-Cost Heterogeneous Robotic Team with Non-Overlapping Perspectives

    This paper presents and evaluates a relative localization scheme for a heterogeneous team of low-cost mobile robots. An error-state, complementary Kalman filter was developed to fuse analytically derived uncertainty of stereoscopic pose measurements of an aerial robot, made by a ground robot, with the inertial/visual proprioceptive measurements of both robots. Results show that the sources of error (image quantization, asynchronous sensors, and a non-stationary bias) were sufficiently modeled to estimate the pose of the aerial robot. In both simulation and experiments, we demonstrate the proposed methodology with a heterogeneous robot team consisting of a UAV and a UGV tasked with collaboratively localizing themselves while avoiding obstacles in an unknown environment. The team is able to identify a goal location and obstacles in the environment and plan a path for the UGV to the goal location. The results demonstrate localization accuracies of 2 cm to 4 cm, on average, while the robots operate at a distance from each other of between 1 m and 4 m.
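
    For illustration, here is a generic error-state Kalman filter step of the kind the abstract describes, not the authors' exact filter: the error covariance is propagated with linearized dynamics, and a stereoscopic relative-pose measurement corrects the nominal state. All matrices and values are placeholder assumptions; in the paper, the measurement covariance would come from the analytically derived quantization uncertainty.

        import numpy as np

        def predict(P, F, Q):
            # Propagate the error-state covariance with linearized dynamics F
            # (the nominal state itself is propagated separately by the
            # inertial/visual proprioceptive model).
            return F @ P @ F.T + Q

        def update(x_nom, P, z, h, H, R):
            # Fuse a relative-pose measurement z against the prediction
            # h(x_nom), then inject the estimated error into the nominal state.
            y = z - h(x_nom)                    # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            dx = K @ y                          # estimated error state
            x_nom = x_nom + dx                  # error injection
            P = (np.eye(P.shape[0]) - K @ H) @ P
            return x_nom, P

        # Toy 3-DoF position example with identity models (illustrative only).
        x = np.zeros(3)
        P = np.eye(3) * 0.5
        F, Q = np.eye(3), np.eye(3) * 1e-3
        H, R = np.eye(3), np.eye(3) * 4e-4      # ~2 cm std per axis, assumed
        z = np.array([1.02, -0.49, 0.33])       # stereoscopic pose measurement
        P = predict(P, F, Q)
        x, P = update(x, P, z, lambda s: s, H, R)
        print("pose estimate:", x, "per-axis std:", np.sqrt(np.diag(P)))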

    Error analysis and planning accuracy for dimensional measurement in active vision inspection

    This paper discusses the effect of spatial quantization errors and displacement errors on precise dimensional measurement of an edge segment. A probabilistic analysis of the 2D quantization errors is developed in terms of the image resolution: expressions for the mean and variance of these errors are developed, and the probability density function of the quantization error is derived. The position and orientation errors of the active head are assumed to be normally distributed, and a probabilistic analysis in terms of these errors is developed for the displacement errors. By integrating the spatial quantization errors and the displacement errors, we can compute the total error in the active vision inspection system. Based on the developed analysis, we investigate whether a given set of sensor setting parameters in an active system is suitable to obtain a desired accuracy for specific dimensional measurements, and show how to determine sensor positions and view directions that meet the required tolerance and accuracy of inspection.
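
    The sketch below mimics the integration step numerically rather than reproducing the paper's closed-form expressions: endpoint quantization errors are modeled as uniform over one pixel, the displacement-induced error as a zero-mean Gaussian scale error, and the two are combined into a total length-measurement error. Pixel size, edge length, and noise levels are assumed values.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        pixel = 0.1        # pixel size [mm], assumed
        L_true = 25.0      # true edge length [mm], assumed

        # Quantization: each endpoint is located only to within one pixel.
        q = rng.uniform(-pixel / 2, pixel / 2, (n, 2))
        quant_err = q[:, 1] - q[:, 0]    # length error from quantization
        # Analytically: mean 0, variance 2 * pixel^2 / 12.
        print(quant_err.mean(), quant_err.var(), 2 * pixel**2 / 12)

        # Displacement: normally distributed pose errors of the active head,
        # modeled here as a small Gaussian relative scale error (assumed).
        disp_err = L_true * rng.normal(0.0, 2e-3, n)

        total = quant_err + disp_err     # integrated total measurement error
        print(f"total error: mean={total.mean():.5f} mm, "
              f"std={total.std():.5f} mm")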

    On the Accuracy of Point Localisation in a Circular Camera-Array

    Although many advances have been made in light-field and camera-array image processing, there is still a lack of thorough analysis of the localisation accuracy of different multi-camera systems. By considering the problem from a frame-quantisation perspective, we are able to quantify the point localisation error of circular camera configurations. Specifically, we obtain closed-form expressions bounding the localisation error in terms of the parameters describing the acquisition setup. These theoretical results are independent of the localisation algorithm and thus provide fundamental limits on performance. Furthermore, the new frame-quantisation perspective is general enough to be extended to more complex camera configurations.
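
    The paper's closed-form bounds are not reproduced here, but the following sketch measures the same quantity empirically under stated assumptions: N cameras on a circle each measure the bearing to a point, bearings are quantised to an angular step standing in for pixel quantisation, and the point is recovered by least-squares ray intersection. All parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        R, N, delta = 2.0, 8, 1e-3   # circle radius, #cameras, angular quantum

        def localise(p):
            # Least-squares intersection of the quantised bearing rays:
            # solve sum_k (I - d_k d_k^T) (x - o_k) = 0 for x.
            A = np.zeros((2, 2))
            b = np.zeros(2)
            for k in range(N):
                theta = 2 * np.pi * k / N
                o = R * np.array([np.cos(theta), np.sin(theta)])
                ang = np.arctan2(p[1] - o[1], p[0] - o[0])   # true bearing
                ang = np.round(ang / delta) * delta          # quantised bearing
                d = np.array([np.cos(ang), np.sin(ang)])
                M = np.eye(2) - np.outer(d, d)   # projector onto ray normal
                A += M
                b += M @ o
            return np.linalg.solve(A, b)

        pts = rng.uniform(-0.5, 0.5, (2000, 2))
        errs = np.linalg.norm([localise(p) - p for p in pts], axis=1)
        print(f"mean error: {errs.mean():.2e}, worst case: {errs.max():.2e}")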

    Stereo-Based Region-Growing using String Matching

    We present a novel stereo algorithm based on a coarse texture-segmentation preprocessing phase. Matching is performed using string comparison: matching sub-strings correspond to matching sequences of textures, and inter-scanline clustering of matching sub-strings yields regions of matching texture. The shapes of these regions yield information about an object's height, width, and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of the objects in the scene. Such a format is useful for integrating stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization: the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives; occlusions, exposures, and foreshortening effects are not problematic, and the algorithm can deal with certain types of transparencies. It is computationally efficient and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random-dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
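
    A toy sketch of the matching step (not the authors' implementation): each scanline is reduced to a string of coarse texture labels, matching sub-strings between the left and right scanlines are found, and the offset between matched blocks plays the role of disparity. Texture segmentation is assumed to have been done upstream, and Python's difflib stands in for whatever string comparison the paper uses.

        from difflib import SequenceMatcher

        left = "aaabbbbccddddaa"    # texture labels along one left scanline
        right = "aabbbbccdddddaa"   # the same scanline in the right image

        sm = SequenceMatcher(None, left, right, autojunk=False)
        for blk in sm.get_matching_blocks():
            if blk.size >= 2:       # ignore tiny accidental matches
                disparity = blk.a - blk.b
                print(f"texture run {left[blk.a:blk.a + blk.size]!r} "
                      f"left@{blk.a} right@{blk.b} disparity={disparity}")

    Inter-scanline clustering would then group matched runs with similar disparity across adjacent scanlines into the regions the abstract describes.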

    An Analysis of Camera Calibration for Voxel Coloring Including the Effect of Calibration on Voxelization Errors

    This thesis characterizes the problem of relative camera calibration in the context of three-dimensional volumetric reconstruction. The general effects of camera calibration errors on the different parameters of the projection matrix are well understood, and calibration error and Euclidean world error for a single camera can be related via the inverse perspective projection. However, there has been little analysis of camera calibration for a large number of views and of how those errors directly influence the accuracy of recovered three-dimensional models. A specific analysis of how camera calibration error propagates to reconstruction errors in traditional voxel coloring algorithms is presented. A review of the voxel coloring algorithm is included, and the general methods applied in the coloring algorithm are related to camera error. In addition, a specific but common experimental setup used to acquire real-world objects through voxel coloring is introduced. Methods for relative calibration in this specific setup are discussed, as well as a method to measure calibration error. An analysis of the effect of these errors on voxel coloring is presented, along with a discussion of the resulting world-space error.
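
    To make the propagation concrete, the sketch below perturbs a 3x4 projection matrix and measures how far a voxel center's image projection moves; once the shift exceeds a fraction of a pixel, the views sample inconsistent colors and voxels can be wrongly carved or colored. The camera intrinsics, pose, and noise model are illustrative assumptions, not the thesis's setup.

        import numpy as np

        rng = np.random.default_rng(2)

        def project(P, X):
            # Pinhole projection of a 3D point with a 3x4 matrix P.
            x = P @ np.append(X, 1.0)
            return x[:2] / x[2]

        # A simple camera K [R | t] with identity rotation (assumed values).
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [4.0]]])

        voxel = np.array([0.1, -0.2, 1.0])   # a voxel center in world space
        for sigma in (1e-4, 1e-3, 1e-2):     # relative calibration noise
            shifts = [np.linalg.norm(
                          project(P_true * (1 + rng.normal(0, sigma, (3, 4))),
                                  voxel) - project(P_true, voxel))
                      for _ in range(1000)]
            print(f"noise {sigma:.0e} -> mean pixel shift {np.mean(shifts):.3f}")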

    A Novel Approach to 3-D Gaze Tracking Using Stereo Cameras


    Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs

    The ZED camera is a binocular vision system that can provide 3D perception of the world. It can be applied to autonomous robot navigation, virtual reality, tracking, motion analysis, and so on. This paper proposes a mathematical error model for the depth data estimated by the ZED camera at its several operating resolutions. To do so, the ZED is attached to an Nvidia Jetson TK1 board, providing an embedded system that processes the raw data acquired by the ZED from a 3D checkerboard. Corners are extracted from the checkerboard using RGB data, and these points are reconstructed in 3D using disparity data calculated by the ZED camera, yielding a partially ordered, regularly distributed (in 3D space) point cloud of corners whose coordinates are computed by the device software. These corners also have their ideal world (3D) positions known with respect to a coordinate frame origin set empirically on the pattern. The given (computed) coordinates from the camera's data and the known (ideal) coordinates of each corner can thus be compared to estimate the error between the given and ideal point locations of the detected corner cloud. Subsequently, using a curve-fitting technique, we obtain equations that model the RMS (root mean square) error. This procedure is repeated for several resolutions of the ZED sensor and at several distances. Results showed the sensor is effective up to a maximum distance of approximately sixteen meters, in real time, which allows its use in robotic and other online applications.
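
    A minimal sketch of the curve-fitting step, with made-up placeholder measurements rather than the paper's data: the RMS depth error is computed per distance, then a quadratic is fitted, matching the roughly z^2 growth expected of stereo depth error.

        import numpy as np

        # Distances and per-distance RMS depth errors; both arrays are
        # illustrative placeholders, not the paper's measurements.
        dist = np.array([1, 2, 4, 6, 8, 10, 12, 14, 16], dtype=float)  # [m]
        rms = np.array([0.003, 0.008, 0.025, 0.05, 0.09,
                        0.14, 0.21, 0.28, 0.38])                       # [m]

        coeffs = np.polyfit(dist, rms, 2)    # fit rms(z) ~ a*z^2 + b*z + c
        a, b, c = coeffs
        print(f"rms(z) ~= {a:.5f} z^2 + {b:.5f} z + {c:.5f}")
        print(f"predicted RMS at 5 m: {np.polyval(coeffs, 5.0) * 100:.1f} cm")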