
    Forward Vehicle Collision Warning Based on Quick Camera Calibration

    Forward Vehicle Collision Warning (FCW) is one of the most important functions for autonomous vehicles. In this procedure, vehicle detection and distance measurement are core components, requiring accurate localization and estimation. In this paper, we propose a simple but efficient forward vehicle collision warning framework that aggregates monocular distance measurement and precise vehicle detection. To obtain the forward vehicle distance, we use a quick camera calibration method that needs only three physical points to calibrate the relevant camera parameters. For forward vehicle detection, we propose a multi-scale detection algorithm that uses the calibration result as a distance prior to improve precision. Extensive experiments conducted on our real-scene dataset demonstrate the effectiveness of the proposed framework.
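    The paper does not spell out its distance model here, but monocular forward distance is commonly recovered from a calibrated camera under a flat-road assumption. A minimal sketch of that standard pinhole ground-plane model (function name and parameters are illustrative, not the paper's):

```python
import numpy as np

def ground_plane_distance(v_bottom, f_px, v0, cam_height_m):
    """Estimate forward distance to a vehicle from the image row of its
    ground-contact point, assuming a pinhole camera whose optical axis is
    parallel to a flat road (a hypothetical simplification; the paper's
    three-point calibration is not reproduced here).
    """
    dv = v_bottom - v0  # pixel offset of the contact point below the horizon row
    if dv <= 0:
        raise ValueError("contact point must lie below the horizon row")
    # Similar triangles: Z / cam_height = f / dv
    return f_px * cam_height_m / dv

# A camera 1.2 m above the road, focal length 1000 px, horizon at row 360:
d = ground_plane_distance(v_bottom=460, f_px=1000.0, v0=360.0, cam_height_m=1.2)
# d = 1000 * 1.2 / 100 = 12.0 m
```

The same geometry also explains why a distance prior helps multi-scale detection: the expected pixel height of a vehicle shrinks predictably with distance, so detector scales can be restricted per image row.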

    Biometric imaging: three dimensional imaging of the human hand using coded structured lighting

    In this report the results of applying a three-dimensional range imaging system based on coded structured light are presented, including a description of a new, improved spatial coding scheme. The new scheme increases the number of reference points available and provides a basis for more accurate calculation of their locations. A detailed description is given of the image processing methods used to extract structural information and to identify structural objects in the camera image. In addition, the method used to calculate the locations of reference points with 'subpixel' accuracy is described. Finally, the results of experiments with synthesised and projected structured-light images are presented.
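    The report's specific subpixel method is not given in this abstract; a common way to localize a structured-light reference point below pixel resolution is an intensity-weighted centroid over a local profile. A minimal stand-in sketch (not the report's actual algorithm):

```python
import numpy as np

def subpixel_centroid(intensity):
    """Locate a structured-light spot with subpixel accuracy by taking the
    intensity-weighted centroid of a 1-D intensity profile.  This is an
    illustrative stand-in for the report's (unspecified) subpixel method.
    """
    intensity = np.asarray(intensity, dtype=float)
    idx = np.arange(len(intensity))
    return float((idx * intensity).sum() / intensity.sum())

# A symmetric peak straddling samples 2 and 3 lands exactly between them:
profile = [0, 1, 4, 4, 1, 0]
c = subpixel_centroid(profile)  # 2.5
```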

    Low-cost interactive active monocular range finder

    This paper describes a low-cost interactive active monocular range finder and illustrates the effect of introducing interactivity into the range acquisition process. The range finder consists of only one camera and a laser pointer to which three LEDs are attached. As a user scans the laser along the surfaces of objects, the camera captures the image of the spots (one from the laser, the others from the LEDs), and triangulation is carried out using the camera's viewing direction and the optical axis of the laser. The user interaction allows the range finder to acquire range data whose sampling rate varies across the object depending on the underlying surface structure. Moreover, separating objects from the background and/or finding parts within an object can be achieved using the operator's knowledge of the objects.
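    The triangulation step described above intersects the camera's viewing ray with the laser's optical axis. Because two measured rays in 3-D rarely intersect exactly, a standard formulation returns the midpoint of the shortest segment between them. A minimal sketch of that generic step (the rays and frames here are hypothetical, not the paper's calibration):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two non-parallel 3-D rays
    (e.g. the camera's viewing ray and the laser's optical axis).
    o1, o2 are ray origins; d1, d2 are direction vectors."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    d1d2 = d1 @ d2
    denom = 1.0 - d1d2 ** 2  # assumes the rays are not parallel
    # Ray parameters minimizing |(o1 + t1*d1) - (o2 + t2*d2)|
    t1 = (b @ d1 - (b @ d2) * d1d2) / denom
    t2 = ((b @ d1) * d1d2 - b @ d2) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two rays that meet at (1, 1, 5) reconstruct that point exactly:
p = triangulate_rays(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 5.0]),
                     np.array([0.2, 0.0, 0.0]), np.array([0.8, 1.0, 5.0]))
```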

    Measurement of range of motion of human finger joints, using a computer vision system

    Assessment of finger range of motion (ROM) is often required for monitoring the effectiveness of rehabilitative treatments and for evaluating patients' functional impairment. Several devices are used to measure this motion, such as wire tracing, tracing onto paper, and mechanical and electronic goniometry. Apart from electronic goniometry, these devices are quite cheap; however, their drawbacks are a lack of accuracy and a time-consuming measurement process. The work described in this thesis considers the design, implementation and validation of a new medical measurement system for evaluating the range of motion of the human finger joints, intended to replace the current measurement tools. The proposed system is a non-contact measurement device based on computer vision technology and has many advantages over the existing measurement devices: it achieves better accuracy, can be operated by a semi-skilled person, and saves the evaluator time. The computer vision system in this study consists of CCD cameras to capture the images; a frame grabber to convert the analogue signals from the cameras into digital signals that can be manipulated by a computer; ultraviolet (UV) light to illuminate the measurement space; software to process the images and perform the required computation; and a darkened enclosure to accommodate the cameras and UV light and to shield the working area from undesirable ambient light. Two calibration techniques were used to calibrate the cameras: Direct Linear Transformation (DLT) and Tsai's method. A calibration piece suited to this application was designed and manufactured, and a steel hand model was used to measure the finger joint angles. The average error in measuring the finger angles with this system was around 1 degree, compared with 5 degrees for the existing techniques.
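    Once the calibrated cameras yield 3-D landmark positions, a joint angle reduces to the angle between the two bone segments meeting at the joint. A minimal sketch of that final computation (the landmark names are illustrative, not the thesis's notation):

```python
import numpy as np

def joint_angle_deg(p_prox, p_joint, p_dist):
    """Finger joint angle from three reconstructed 3-D landmark positions
    (a proximal point, the joint centre, and a distal point), computed as
    the angle between the two bone vectors meeting at the joint."""
    u = p_prox - p_joint
    v = p_dist - p_joint
    cos_angle = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point excursions outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Perpendicular bone segments give a 90-degree joint angle:
a = joint_angle_deg(np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 0.0, 0.0]),
                    np.array([0.0, 1.0, 0.0]))  # 90.0
```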

    Self-Calibration of Cameras with Euclidean Image Plane in Case of Two Views and Known Relative Rotation Angle

    The internal calibration of a pinhole camera is given by five parameters that are combined into an upper-triangular 3×3 calibration matrix. If the skew parameter is zero and the aspect ratio is equal to one, then the camera is said to have a Euclidean image plane. In this paper, we propose a non-iterative self-calibration algorithm for a camera with Euclidean image plane in the case where the remaining three internal parameters (the focal length and the principal point coordinates) are fixed but unknown. The algorithm requires a set of N ≥ 7 point correspondences in two views, together with the measured relative rotation angle between the views. We show that the problem generically has six solutions (including complex ones). The algorithm has been implemented and tested both on synthetic data and on a publicly available real dataset. The experiments demonstrate that the method is correct, numerically stable and robust.
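    The Euclidean-image-plane assumption above can be made concrete by writing out the calibration matrix it implies: zero skew and unit aspect ratio leave only the focal length and principal point as unknowns. A short sketch (function name is illustrative):

```python
import numpy as np

def euclidean_image_plane_K(f, cx, cy):
    """Upper-triangular calibration matrix of a pinhole camera with a
    Euclidean image plane: skew = 0 and aspect ratio = 1, so only the
    focal length f and principal point (cx, cy) remain unknown."""
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])

K = euclidean_image_plane_K(800.0, 320.0, 240.0)
# The two focal entries are equal and the skew entry is zero, so of the
# five general internal parameters only three are free, as stated above.
```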