
    Structured light techniques for 3D surface reconstruction in robotic tasks

    Robotic tasks such as navigation and path planning can be greatly enhanced by a vision system capable of providing depth perception through fast and accurate 3D surface reconstruction. Focusing on robotic welding tasks, we present a comparative analysis of a novel mathematical formulation for 3D surface reconstruction and discuss the image processing required for reliable detection of patterns in the image. Models are presented for parallel and angled configurations of the light source and image sensor. It is shown that the parallel arrangement requires 35% fewer arithmetic operations to compute a 3D point cloud and is thus more appropriate for real-time applications. Experiments show that the technique is suitable for scanning a variety of surfaces and, in particular, the intended metallic parts for robotic welding tasks.
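The abstract does not give the formulation itself, but the depth computation for a parallel camera/projector arrangement can be sketched as a simple triangulation ratio, which illustrates why it needs so few arithmetic operations per point. All numeric values below are illustrative assumptions, not figures from the paper.

```python
import math

def depth_from_stripe(pixel_offset, focal_px, baseline_mm):
    """Triangulate depth for a parallel camera/light-source arrangement.

    In the parallel configuration depth reduces to one multiply and one
    divide per pixel, which is what makes it attractive for real time.
    pixel_offset: displacement of the detected stripe from its reference
        column, in pixels.
    focal_px:     camera focal length in pixels (illustrative).
    baseline_mm:  camera-to-projector baseline in millimetres (illustrative).
    """
    if pixel_offset == 0:
        return math.inf  # stripe at the reference column: point at infinity
    return focal_px * baseline_mm / pixel_offset

# Illustrative numbers only: f = 800 px, baseline = 60 mm, offset = 12 px.
z = depth_from_stripe(pixel_offset=12.0, focal_px=800.0, baseline_mm=60.0)
```

Running the stripe detector over every image column and applying this ratio yields one point-cloud row per captured frame.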

    Affine Approximation for Direct Batch Recovery of Euclidean Motion From Sparse Data

    We present a batch method for recovering Euclidean camera motion from sparse image data. The main purpose of the algorithm is to recover the motion parameters using as much of the available information and as few computational steps as possible. The algorithm thus places itself in the gap between factorisation schemes, which make use of all available information in the initial recovery step, and sequential approaches, which are able to handle sparseness in the image data. Euclidean camera matrices are approximated via the affine camera model, making the recovery direct in the sense that no intermediate projective reconstruction is made. Using a little-known closure constraint, the FA-closure, we are able to formulate the camera coefficients linearly in the entries of the affine fundamental matrices. The novelty of the presented work is twofold: firstly, the presented formulation allows not only for particularly good conditioning of the estimation of the initial motion parameters but also for an unprecedented diversity in the choice of possible regularisation terms. Secondly, the new autocalibration scheme presented here is in practice guaranteed to yield a least-squares estimate of the calibration parameters. As a by-product, the affine camera model is rehabilitated as a useful model for most cameras and scene configurations, e.g. wide-angle lenses observing a scene at close range. Experiments on real and synthetic data demonstrate the ability to reconstruct scenes which are very problematic for previous structure-from-motion techniques due to local ambiguities and error accumulation.
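The FA-closure itself is not given in the abstract, but its ingredient, the affine fundamental matrix, is a standard object: each correspondence (x, y) <-> (x', y') satisfies one linear constraint a·x' + b·y' + c·x + d·y + e = 0, so the matrix can be estimated as a null vector. The sketch below shows only this standard linear estimation step, not the authors' batch recovery method.

```python
import numpy as np

def affine_fundamental(pts1, pts2):
    """Linearly estimate the affine fundamental matrix F_A.

    For affine cameras, x'^T F_A x = 0 with F_A = [[0,0,a],[0,0,b],[c,d,e]],
    which expands to a*x' + b*y' + c*x + d*y + e = 0. With N >= 4
    correspondences, (a,b,c,d,e) is the right null vector of an N x 5
    design matrix, recovered here via SVD.
    """
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    A = np.column_stack([pts2[:, 0], pts2[:, 1],      # x', y'
                         pts1[:, 0], pts1[:, 1],      # x,  y
                         np.ones(len(pts1))])         # 1
    _, _, vt = np.linalg.svd(A)
    a, b, c, d, e = vt[-1]                            # smallest singular vector
    return np.array([[0.0, 0.0, a],
                     [0.0, 0.0, b],
                     [c,   d,   e]])
```

The matrix is defined only up to scale; the SVD returns a unit-norm solution.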

    Inertial sensor-based knee flexion/extension angle estimation

    A new method for estimating knee joint flexion/extension angles from segment acceleration and angular velocity data is described. The approach uses a combination of Kalman filters and biomechanical constraints based on anatomical knowledge. In contrast to many recently published methods, the proposed approach does not make use of the earth's magnetic field and hence is insensitive to the complex field distortions commonly found in modern buildings. The method was validated experimentally by calculating the knee angle from measurements taken from two IMUs placed on adjacent body segments. In contrast to many previous studies, which have validated their approach during relatively slow activities or over short durations, the performance of the algorithm was evaluated during both walking and running over 5-minute periods. Seven healthy subjects were tested at various speeds from 1 to 5 miles/hour. Errors were estimated by comparing the results against data obtained simultaneously from a 10-camera motion tracking system (Qualysis). The average measurement error ranged from 0.7 degrees for slow walking (1 mph) to 3.4 degrees for running (5 mph). The joint constraint used in the IMU analysis was derived from the Qualysis data. Limitations of the method, its clinical application and its possible extension are discussed.
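The paper's Kalman-filter fusion is not specified in the abstract; as a much-simplified stand-in, a complementary filter illustrates the same idea of fusing an integrated gyroscope rate (responsive but drifting) with a gravity-derived inclination (drift-free but noisy), using no magnetometer. All gains and signal values below are illustrative assumptions.

```python
def fuse_angle(gyro_rate, accel_angle, prev_angle, dt, alpha=0.98):
    """One step of a complementary filter for a single segment angle.

    gyro_rate:   angular velocity about the flexion axis (rad/s).
    accel_angle: segment inclination inferred from the gravity vector (rad);
                 drift-free but noisy during dynamic motion.
    alpha:       blend factor (illustrative); the gyro integral dominates
                 over short timescales, the accelerometer corrects drift.
    No magnetic-field measurement is used, so the estimate is unaffected
    by indoor magnetic disturbances.
    """
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# A stationary segment observed with a constant 0.05 rad/s gyro bias:
# the accelerometer term bounds the drift instead of letting it grow.
angle = 0.0
for _ in range(1000):
    angle = fuse_angle(gyro_rate=0.05, accel_angle=0.0,
                       prev_angle=angle, dt=0.01)
```

A knee flexion/extension angle would then be the difference between the two fused segment angles of the thigh and shank IMUs.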

    On the Issue of Camera Calibration with Narrow Angular Field of View

    This paper considers the issue of calibrating a camera with a narrow angular field of view using standard, perspective methods in computer vision. In doing so, the significance of perspective distortion both for camera calibration and for pose estimation is revealed. Since narrow angular field of view cameras make it difficult to obtain images rich in perspective cues, the accuracy of the calibration results is expectedly low. We therefore propose an alternative method that compensates for this loss by utilizing the pose readings of a robotic manipulator. It facilitates accurate pose estimation by nonlinear optimization, simultaneously minimizing reprojection errors and errors in the manipulator transformations. Accurate pose estimation in turn enables accurate parametrization of a perspective camera.
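The reprojection-error term that such a nonlinear optimization drives down can be sketched with a bare pinhole model; the paper's actual cost additionally includes the manipulator-transformation errors, which are omitted here. The intrinsics below are illustrative, not from the paper.

```python
def reproject(point_3d, focal, cx, cy):
    """Project a 3D point (camera frame, Z > 0) with a pinhole model."""
    X, Y, Z = point_3d
    return (focal * X / Z + cx, focal * Y / Z + cy)

def reprojection_error(points_3d, observed_2d, focal, cx, cy):
    """Mean squared reprojection error over all observations.

    This is the image-side residual a calibration optimizer would
    minimize over the camera parameters (here focal, cx, cy);
    the method in the paper adds manipulator-pose residuals to it.
    """
    total = 0.0
    for p3, (u, v) in zip(points_3d, observed_2d):
        pu, pv = reproject(p3, focal, cx, cy)
        total += (pu - u) ** 2 + (pv - v) ** 2
    return total / len(points_3d)
```

With a narrow field of view the projected points span only a small image region, which is precisely why this residual alone constrains the intrinsics poorly and benefits from the extra manipulator-pose constraints.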