418 research outputs found

    Do-It-Yourself Single Camera 3D Pointer Input Device

    We present a new algorithm for single-camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, bearing a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Comparable systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereo cameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization. Comment: 8 pages, 6 figures, 2018 15th Conference on Computer and Robot Vision
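    The pinhole projection matrix mentioned in the abstract maps 3D points, such as the boundaries of the pointer's colored bands, to pixel coordinates. A minimal sketch of that mapping follows; this is a generic illustration, not the authors' code, and the camera parameters and band positions are assumed values.

    ```python
    import numpy as np

    def project(P, X):
        """Project 3D points X (N,3) with a 3x4 pinhole matrix P to pixel coords (N,2)."""
        Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
        x = (P @ Xh.T).T                            # (N,3) image-space points
        return x[:, :2] / x[:, 2:3]                 # perspective divide

    # Toy camera: identity pose, 800 px focal length, principal point (320, 240)
    K = np.array([[800., 0., 320.],
                  [0., 800., 240.],
                  [0., 0., 1.]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

    # Two band boundaries on a pen, roughly 1 m and 1.1 m from the camera
    bands = np.array([[0.00, 0.0, 1.0],
                      [0.02, 0.0, 1.1]])
    px = project(P, bands)
    ```

    Given such predicted pixel positions, tracking amounts to matching the observed band pattern against these projections under the geometric constraint that the bands are collinear.
    
    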

    Project SLOPE - Study of Lunar Orbiter Photographic Evaluation Final report

    Quantitative measurement methods for evaluating the ability of Lunar Orbiter photographs to detect topographic features.

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic video. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion, and size, the quality of the resulting video may be poor, as such measurements are often arbitrarily defined and inconsistent with the real scene. To address this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow based occlusion reasoning for determining depth order, ii) object segmentation using improved region-growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (against a small library of true stereo image pairs) and depth-ordinal regularization. Comprehensive experiments have validated the effectiveness of the proposed 2D-to-3D conversion method in generating stereoscopic videos with consistent depth measurements for 3D-TV applications.
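    The last stage of any such 2D-to-3D pipeline is to synthesize a second view from the image and its depth (or disparity) map. A minimal sketch of that warping step is shown below; it is a generic depth-image-based rendering illustration, not the authors' pipeline, and the toy image and disparity values are assumed.

    ```python
    import numpy as np

    def render_right_view(left, disparity):
        """Warp the left image by a per-pixel horizontal disparity (in pixels).

        Nearer pixels have larger disparity and shift further; unfilled
        positions remain zero (disocclusion holes a real system must inpaint).
        """
        h, w = left.shape
        right = np.zeros_like(left)
        for y in range(h):
            for x in range(w):
                xr = x - disparity[y, x]
                if 0 <= xr < w:
                    right[y, xr] = left[y, x]
        return right

    # Toy 1x4 row: the two right-most pixels are nearer (disparity 1)
    left = np.array([[1., 2., 3., 4.]])
    disp = np.array([[0, 0, 1, 1]])
    right = render_right_view(left, disp)
    ```

    Consistent depth layers, as the abstract emphasizes, matter precisely because inconsistent disparities across frames produce visible flicker in the synthesized view.
    
    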

    Estimation of image quality factors for face recognition

    Over the past few years, verification and identification of humans using biometrics has gained the attention of researchers and of the general public. Face recognition systems are used by the public and by governments and are applied in many facets of life, including security and the identification of criminals and terrorists. Given the importance of these applications, it is essential that face recognition systems be as accurate as possible. Research has shown that image quality degrades the performance of face recognition systems. Most previous work has focused on designing face recognition algorithms that deal with or compensate for a single effect, such as blur, lighting conditions, pose, or emotion. In this thesis we identify a number of factors influencing recognition performance, conduct an extensive study of the effects of image quality factors on recognition performance, and discuss methods to estimate these quality factors.
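    One widely used estimator for a single quality factor of the kind studied here, blur, is the variance of the Laplacian response: sharp images have strong second-derivative structure, blurred ones do not. The sketch below is a generic illustration of that idea (not the thesis's method); the kernel, image sizes, and the crude box blur are assumptions.

    ```python
    import numpy as np

    def laplacian_variance(img):
        """Blur/focus score: variance of the Laplacian response (higher = sharper)."""
        k = np.array([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]])
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for y in range(h - 2):
            for x in range(w - 2):
                out[y, x] = (k * img[y:y + 3, x:x + 3]).sum()
        return out.var()

    rng = np.random.default_rng(0)
    sharp = rng.random((32, 32))
    # Crude 3x3 box blur (circular shifts, adequate for a toy comparison)
    blur = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    ```

    A quality-aware recognition system can threshold or weight by such scores before matching, rather than compensating for blur after the fact.
    
    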

    COMPENSATION THROUGH PREDICTION FOR ATMOSPHERIC TURBULENCE EFFECTS ON TARGET IMAGING AND HIGH ENERGY LASER BEAM

    Atmospheric turbulence significantly degrades the performance of High Energy Laser (HEL) beams. The three key undesirable effects are: (1) degraded target images used for target tracking; (2) inaccurate HEL pointing; and (3) reduced HEL power during propagation to the target. The current approach to compensating for these turbulence effects uses adaptive optics to measure atmospheric turbulence and correct the aberrations in the optical beam. However, an adaptive optics system has limited performance in strong turbulence, and the additional optical hardware makes the HEL system more complex. Leveraging improvements in Deep Learning algorithms and further developments in Artificial Intelligence, we used Deep Learning and Convolutional Neural Networks to predict atmospheric turbulence and compensate for its negative effects on laser beams. The predicted turbulence can be used for image correction and for HEL beam correction with a deformable mirror, reducing turbulence effects during propagation.
    Military Expert 5, Republic of Singapore Navy. Approved for public release. Distribution is unlimited.
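    Turbulence strength along such a laser path is commonly summarized by the Fried parameter r0 (atmospheric coherence length); for a plane wave through a path of length L with constant refractive-index structure parameter Cn^2, r0 = [0.423 k^2 Cn^2 L]^(-3/5) with k = 2*pi/wavelength. The sketch below illustrates that standard formula only; the wavelength, Cn^2, and path length are assumed values, not figures from the thesis.

    ```python
    import math

    def fried_parameter(wavelength, cn2, path_length):
        """Plane-wave Fried parameter r0 (m) for a constant-Cn^2 horizontal path."""
        k = 2 * math.pi / wavelength            # optical wavenumber (rad/m)
        return (0.423 * k**2 * cn2 * path_length) ** (-3 / 5)

    # Illustrative 1.064 um laser over a 1 km path in moderately strong turbulence
    r0 = fried_parameter(1.064e-6, 1e-14, 1000.0)   # roughly 0.05 m
    ```

    When r0 shrinks well below the transmitter aperture, conventional adaptive optics struggles, which is the strong-turbulence regime motivating the learned compensation approach above.
    
    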

    Global Depth Perception from Familiar Scene Structure

    In the absence of cues for absolute depth measurement, such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene, but it will not reveal the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex due to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: a procedure for absolute depth estimation based on recognition of the whole scene. The shape of the space of a scene and the structures present in it are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.