
    Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy

    In this paper we present a simple and robust method for self-correction of camera distortion using single images of scenes which contain straight lines. Since the most common distortion can be modelled as radial distortion, we illustrate the method using the Harris radial distortion model, but the method is applicable to any distortion model. The method is based on transforming the edgels of the distorted image to a 1-D angular Hough space, and optimizing the distortion correction parameters which minimize the entropy of the corresponding normalized histogram. Properly corrected imagery will have fewer curved lines, and therefore less spread in Hough space. Since the method does not rely on any image structure beyond the existence of edgels sharing some common orientations and does not use edge fitting, it is applicable to a wide variety of image types. For instance, it can be applied equally well to images of texture with weak but dominant orientations, or to images with strong vanishing points. Finally, the method is evaluated on both synthetic and real data, revealing that it is particularly robust to noise. Comment: 9 pages, 5 figures. Corrected errors in equation 1.
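    The core of the method, an entropy score over a 1-D angular Hough histogram minimised over the distortion parameter, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the single-parameter undistortion formula, the tangent-propagation step, the bin count, the coordinate scaling and the optimizer bounds are all assumptions.

```python
# Illustrative sketch of the Hough-entropy criterion (not the authors' code).
# Edgels are assumed to be given as positions xy (relative to the distortion
# centre, scaled by the image half-diagonal so ||x|| <= 1) plus an edge
# orientation theta in radians.
import numpy as np
from scipy.optimize import minimize_scalar

def undistort(points, kappa):
    """Single-parameter radial undistortion x_u = x_d / sqrt(1 + kappa*r^2) (assumed form)."""
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points / np.sqrt(1.0 + kappa * r2)

def corrected_orientations(xy, theta, kappa, eps=1e-3):
    """Propagate each edgel's tangent through the undistortion numerically."""
    tangents = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    p0 = undistort(xy, kappa)
    p1 = undistort(xy + eps * tangents, kappa)
    d = p1 - p0
    return np.arctan2(d[:, 1], d[:, 0]) % np.pi   # line orientation is modulo pi

def hough_entropy(kappa, xy, theta, bins=180):
    """Entropy of the normalised 1-D angular Hough histogram."""
    angles = corrected_orientations(xy, theta, kappa)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def calibrate(xy, theta):
    """Search for the distortion parameter that minimises the histogram entropy."""
    res = minimize_scalar(hough_entropy, bounds=(-0.5, 0.5),
                          args=(xy, theta), method="bounded")
    return res.x
```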

    Vision-Guided Mobile Robot Navigation

    This report discusses the use of vision feedback for autonomous navigation by a mobile robot in indoor environments. In particular, we discuss in detail the issues of camera calibration and how binocular and monocular vision may be utilized for self-location by the robot. A noteworthy feature of the monocular approach is that the camera image is compared with a CAD model of the interior of the hallways using the PSEIKI reasoning system, which allows the comparison to take place at different levels of geometric detail.

    Pathfinding and positioning in a labyrinth game using a wide-angle camera

    Alten AB has a technology demonstrator in the form of a motorized, camera-equipped, large-scale labyrinth game. The ball position is controlled by an ABB industrial PLC connected to Android tablets for the user interface and to a camera that serves as the sensor for the ball position. This thesis demonstrates the ability to place a wide-angle camera inside the cabinet, correct the lens distortion caused by the wide-angle lens, and detect the ball using a circular Hough transform. A path is also generated from the ball position to any position in the maze by capturing an image from the camera, generating a map, and then pathfinding with Theta*, an improvement on Dijkstra's pathfinding algorithm. It further demonstrates the feasibility of using the computing power of the camera for both pathfinding and ball positioning.
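    As an illustration of the detection step, the sketch below undistorts a frame and locates the ball with OpenCV's circular Hough transform. The intrinsics, distortion coefficients and Hough parameters are placeholder values, not those used in the thesis.

```python
# Illustrative sketch: undistort a wide-angle frame and find the ball with a
# circular Hough transform. All numeric values below are placeholders.
import cv2
import numpy as np

K = np.array([[400.0, 0.0, 320.0],            # assumed intrinsics from a prior calibration
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])   # assumed distortion coefficients

def find_ball(frame_bgr):
    undistorted = cv2.undistort(frame_bgr, K, dist)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=8, maxRadius=25)
    if circles is None:
        return None
    x, y, r = circles[0][0]                    # strongest circle = ball candidate
    return int(x), int(y), int(r)
```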

    Automated calibration of multi-sensor optical shape measurement system

    A multi-sensor optical shape measurement system (SMS) based on the fringe projection method and temporal phase unwrapping has recently been commercialised as a result of its easy implementation, computer control using a spatial light modulator, and fast full-field measurement. The main advantage of a multi-sensor SMS is the ability to make measurements with 360° coverage without having to mount the measured component on translation and/or rotation stages. However, for greater acceptance in industry, issues relating to user-friendly calibration of the multi-sensor SMS in an industrial environment, so that the measured data can be presented in a single coordinate system, need to be addressed. Calibration of a multi-sensor SMS typically requires a calibration artefact and consequently significant user input to process the calibration data and obtain each sensor's optimal imaging geometry parameters. The imaging geometry parameters provide a mapping from the acquired shape data to real-world Cartesian coordinates. However, the process of obtaining the optimal sensor imaging geometry parameters (a nonlinear numerical optimization known as bundle adjustment) requires labelling regions within each point cloud as belonging to known features of the calibration artefact. This thesis describes an automated calibration procedure which ensures that calibration data is processed through automated feature detection of the calibration artefact, artefact pose estimation, automated control point selection, and finally bundle adjustment itself. [Continues.]
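    The final bundle-adjustment step amounts to minimising reprojection error against the known 3-D control points of the artefact. The sketch below shows a minimal version of that refinement for a single sensor; the pinhole parameterisation and the use of SciPy's Levenberg-Marquardt solver are assumptions made for illustration, not the system's actual model.

```python
# Minimal sketch of the bundle-adjustment step only: refine one sensor's
# imaging-geometry parameters by minimising reprojection error against known
# 3-D control points. Parameterisation (focal length, principal point,
# Rodrigues rotation, translation) is an illustrative assumption.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, points_3d):
    f, cx, cy = params[0:3]
    rvec, tvec = params[3:6], params[6:9]
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    x = f * cam[:, 0] / cam[:, 2] + cx
    y = f * cam[:, 1] / cam[:, 2] + cy
    return np.column_stack([x, y])

def reprojection_residuals(params, points_3d, observed_2d):
    return (project(params, points_3d) - observed_2d).ravel()

def refine_sensor(initial_params, points_3d, observed_2d):
    """Nonlinear refinement of the imaging geometry (the bundle-adjustment step)."""
    result = least_squares(reprojection_residuals, initial_params,
                           args=(points_3d, observed_2d), method="lm")
    return result.x
```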

    Camera calibration in sport event scenarios

    The main goal of this paper is the design of a novel and robust methodology for calibrating cameras from a single image in sport scenarios, such as a soccer field or a basketball or tennis court. In these sport scenarios, the only references we use to calibrate the camera are the lines and circles delimiting the different regions. The first problem we address is the extraction of image primitives, including the challenging problems of shaded regions and lens distortion. From these primitives, we automatically recognise the location of the sport court in the scene by estimating the homography which matches the actual court with its projection onto the image. This is achieved even when only a few primitives are available. Finally, from this homography, we recover the camera calibration parameters. In particular, we estimate the focal length as well as the position and orientation in 3D space. We present experiments on models and real courts which illustrate the accuracy of the proposed methodology.
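    The last step, recovering calibration parameters from the court-to-image homography, can be illustrated as follows. The sketch assumes square pixels and a principal point at the image centre and uses the standard H = K[r1 r2 t] decomposition of a plane-to-image homography; it is not necessarily the exact formulation used in the paper.

```python
# Sketch of the final step only: estimate the homography from court-model /
# image point correspondences (court plane Z = 0), then recover the focal
# length from the orthogonality of the first two rotation columns, assuming
# K = diag(f, f, 1) after shifting the principal point to the image centre.
import cv2
import numpy as np

def focal_from_homography(court_pts, image_pts, image_size):
    w, h = image_size
    H, _ = cv2.findHomography(np.float32(court_pts), np.float32(image_pts))
    # Shift the principal point to the origin: H' = C^-1 H.
    C_inv = np.array([[1, 0, -w / 2], [0, 1, -h / 2], [0, 0, 1]], dtype=float)
    Hc = C_inv @ H
    h1, h2 = Hc[:, 0], Hc[:, 1]
    # r1.r2 = 0 with K = diag(f, f, 1) gives a closed form for f^2.
    num = -(h1[0] * h2[0] + h1[1] * h2[1])
    den = h1[2] * h2[2]
    f2 = num / den
    if f2 <= 0:
        return None, H                         # degenerate view for this constraint
    return np.sqrt(f2), H
```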

    Investigation into the use of the Microsoft Kinect and the Hough transform for mobile robotics

    The Microsoft Kinect sensor is a low-cost RGB-D sensor. In this dissertation, its calibration is fully investigated and the resulting parameters are compared to those given by Microsoft and OpenNI. The calibrated parameters were found to differ from the published ones; therefore, every Kinect should be fully calibrated. The transformation from the raw data to a point cloud is also investigated. The Hough transform is then presented in its 2-dimensional form. The Hough transform is a line-extraction algorithm which uses a voting system. It is compared to the Split-and-Merge algorithm using laser range finder data, and is found to compare well to Split-and-Merge in 2 dimensions. Finally, the Hough transform is extended into 3 dimensions for use with the Kinect sensor. It was found that pre-processing of the Kinect data was necessary to reduce the number of points input into the Hough transform. Three edge detectors are used - the LoG, Canny and Sobel edge detectors. These were compared, and the Sobel detector was found to be the best. The final process was then evaluated, first for speed and then for accuracy. The extracted planes were found to be very inaccurate, and therefore not suitable for obstacle avoidance in mobile robotics. The suitability of the process for SLAM was also investigated; it was found to be unsuitable, as planar environments did not have distinct features which could be tracked, while the complex environment was not planar, and therefore the Hough transform would not work.
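    For reference, the 2-dimensional voting scheme that is compared against Split-and-Merge can be sketched as below. The bin resolutions and the vote threshold are arbitrary illustrative choices, not values from the dissertation.

```python
# Toy 2-D Hough transform for line extraction from 2-D range points
# (e.g. a laser scan). Each point votes for every (rho, theta) line
# passing through it; accumulator peaks are returned as line candidates.
import numpy as np

def hough_lines(points, rho_res=0.05, theta_bins=180, min_votes=20):
    """points: (N, 2) array of x, y coordinates. Returns (rho, theta) peaks."""
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    rho_max = np.linalg.norm(points, axis=1).max() + rho_res
    rho_edges = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((len(rho_edges) - 1, theta_bins), dtype=int)

    # Voting: rho = x*cos(theta) + y*sin(theta) for every theta bin.
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.digitize(rhos, rho_edges) - 1
        acc[rho_idx, np.arange(theta_bins)] += 1

    peaks = np.argwhere(acc >= min_votes)
    return [(rho_edges[i] + rho_res / 2, thetas[j]) for i, j in peaks]
```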