
    Improved Real-Time Monocular SLAM Using Semantic Segmentation on Selective Frames

    Monocular simultaneous localization and mapping (SLAM) is emerging in advanced driver assistance systems and autonomous driving because a single camera is cheap and easy to install. Conventional monocular SLAM faces two major challenges that lead to inaccurate localization and mapping. First, it is difficult to estimate scale in localization and mapping. Second, conventional monocular SLAM uses inappropriate mapping factors, such as dynamic objects and low-parallax areas, in mapping. This paper proposes an improved real-time monocular SLAM that resolves these challenges by efficiently using deep learning-based semantic segmentation. To achieve real-time execution, we apply semantic segmentation only to downsampled keyframes, in parallel with the mapping processes. In addition, the proposed method corrects the scales of camera poses and three-dimensional (3D) points using the ground plane estimated from road-labeled 3D points together with the real camera height. The proposed method also removes inappropriate corner features labeled as moving objects or lying in low-parallax areas. Experiments with eight video sequences demonstrate that the proposed monocular SLAM system achieves trajectory tracking accuracy significantly better than existing state-of-the-art monocular SLAM systems and comparable to stereo SLAM systems. The proposed system achieves real-time tracking on a standard CPU, potentially with standard GPU support, whereas existing segmentation-aided monocular SLAM does not.
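    The scale-correction step lends itself to a short illustration. The sketch below is only a rough reading of the approach: the function names and the RANSAC plane fit are our own assumptions, not taken from the paper. It fits a ground plane to road-labeled 3D points and rescales the map by the ratio of the real camera height to the estimated one.

```python
import numpy as np

def estimate_scale(road_points, real_camera_height, n_iters=200, tol=0.05):
    """Estimate the monocular-SLAM scale factor from road-labeled 3D points.

    Fits a ground plane with a simple RANSAC loop, takes the camera-to-plane
    distance as the (scale-ambiguous) estimated camera height, and returns
    real_height / estimated_height. Assumes points are expressed in the
    current camera frame, so the camera sits at the origin. Hypothetical
    sketch; not the paper's implementation.
    """
    best_inliers, best_plane = 0, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = road_points[rng.choice(len(road_points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n.dot(sample[0])    # plane: n . x + d = 0, with unit normal n
        dist = np.abs(road_points @ n + d)
        inliers = np.count_nonzero(dist < tol)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, d)
    n, d = best_plane
    estimated_height = abs(d)    # distance from camera origin to the plane
    return real_camera_height / estimated_height

def apply_scale(poses, points, s):
    """Scale camera translations and 3D map points by the common factor s."""
    scaled_poses = [(R, s * t) for R, t in poses]
    return scaled_poses, s * points
```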

    Adopting multiview pixel mapping for enhancing quality of holoscopic 3D scene in parallax barriers based holoscopic 3D displays

    Autostereoscopic multiview 3D displays are well developed and widely available in commercial markets. Substantial improvements have been made using pixel mapping techniques, achieving acceptable 3D resolution with a balanced pixel aspect ratio in lens-array technology. This paper proposes adopting multiview pixel mapping to enhance the quality of the constructed holoscopic 3D scene in parallax-barrier-based holoscopic 3D displays, achieving strong results. Holoscopic imaging technology mimics the imaging system of insects such as the fly, using a single camera equipped with a large number of micro-lenses to capture a scene, offering rich parallax information and an enhanced 3D sensation without the need to wear special eyewear. In addition, pixel mapping and holoscopic 3D rendering tools are developed, including a custom-built holoscopic 3D display, to test the proposed method and carry out a like-for-like comparison. This work has been supported by the European Commission under Grant FP7-ICT-2009-4 (3DVIVANT). The authors wish to express their gratitude and thanks for the support given throughout the project.
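    As a rough illustration of what multiview pixel mapping involves, the sketch below interleaves N views onto a panel using a generic slanted-barrier assignment rule; the slant, offset, and view count are illustrative assumptions, not the paper's actual mapping.

```python
import numpy as np

def view_index_map(width, height, n_views=8, slant=1/3, offset=0.0):
    """Assign each RGB subpixel of the panel to one of n_views views.

    Uses a classic slanted-barrier interleaving rule: the view number
    advances by one per subpixel column and shifts by `slant` subpixels
    per row, wrapping modulo the number of views. Panel geometry here is
    illustrative, not taken from the paper.
    """
    sub_x = np.arange(3 * width)[None, :]   # 3 subpixels (R,G,B) per column
    y = np.arange(height)[:, None]
    return np.floor(sub_x - slant * 3 * y + offset).astype(int) % n_views

def interleave(views):
    """Compose the panel image from a list of per-view images of shape (H, W, 3)."""
    n = len(views)
    h, w, _ = views[0].shape
    vmap = view_index_map(w, h, n_views=n)          # shape (h, 3*w)
    stack = np.stack(views).reshape(n, h, 3 * w)    # flatten RGB into columns
    panel = np.take_along_axis(stack, vmap[None], axis=0)[0]
    return panel.reshape(h, w, 3)
```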

    Enhancement of Underwater Video Mosaics for Post-Processing

    Mosaics of the seafloor, created from still images or video acquired underwater, have proved useful for constructing maps of forensic and archaeological sites, estimating species' abundance, characterizing habitats, and more. Images taken by a camera mounted on a stable platform are registered (first pair-wise, then globally) and assembled into a high-resolution visual map of the surveyed area. While this map is usually sufficient for human orientation and even quantitative measurements, it often contains artifacts that complicate automatic post-processing (for example, extracting shapes for organism counting, or segmenting for habitat characterization). The most prominent artifacts are inter-frame seams caused by inhomogeneous artificial illumination, and local feature misalignments due to parallax effects, the result of attempting to represent a 3D world on a 2D map. In this paper we propose two image processing techniques for enhancing mosaic quality: median-mosaic-based illumination correction, which suppresses the appearance of inter-frame seams, and micro-warping, which reduces the influence of parallax effects.
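    A minimal sketch of the median-mosaic idea follows, assuming the input frames are co-registered 8-bit grayscale images; the per-pixel temporal median approximates the static illumination pattern of the artificial light source, which is then divided out. The function and its parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def illumination_correction(frames):
    """Median-based illumination correction for mosaic frames, as a sketch.

    `frames` is an array of registered, overlapping grayscale frames
    (N, H, W) in frame coordinates. The per-pixel temporal median
    estimates the static illumination pattern; dividing it out flattens
    the lighting so inter-frame seams are suppressed when blending.
    """
    frames = frames.astype(np.float64)
    pattern = np.median(frames, axis=0)        # illumination estimate (H, W)
    pattern /= pattern.mean() + 1e-12          # preserve overall brightness
    corrected = frames / (pattern + 1e-12)
    return np.clip(corrected, 0.0, 255.0), pattern   # assumes 8-bit range
```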

    Objective Evaluation Criteria for Shooting Quality of Stereo Cameras over Short Distance

    Stereo cameras are the basic tools used to obtain stereoscopic image pairs, which can deliver a compelling sense of depth. However, inappropriate shooting conditions may cause discomfort when viewing stereo images. It is therefore necessary to establish perceptual criteria for evaluating the shooting quality of stereo cameras. This article proposes objective quality evaluation criteria based on the characteristics of parallel and toed-in camera configurations. Taking into account their different internal structures and basic shooting principles, this paper focuses on short-distance shooting conditions and establishes assessment criteria for both parallel and toed-in camera configurations. Experimental results show that the proposed evaluation criteria can predict the visual perception of stereoscopic images and effectively evaluate stereoscopic image quality.
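    To make the parallel-configuration geometry concrete, the sketch below computes the on-screen parallax angle of a scene point from the standard disparity relation d = fB/Z and checks it against a commonly cited one-degree comfort bound. Both the formula and the threshold are generic illustrations, not the criteria actually proposed in the paper.

```python
import math

def screen_parallax_deg(depth_m, baseline_m, focal_mm, sensor_width_mm,
                        screen_width_m, viewing_distance_m):
    """Screen parallax (in degrees) of a point at depth_m for a parallel
    stereo rig, a common ingredient of shooting-quality rules.

    Sensor disparity is d = f * B / Z; it is scaled to the display and
    converted to the angle subtended at the viewer. Illustrative only.
    """
    disparity_mm = focal_mm * baseline_m / depth_m        # disparity on sensor
    parallax_m = disparity_mm / sensor_width_mm * screen_width_m
    return math.degrees(math.atan2(parallax_m, viewing_distance_m))

# A frequently cited comfort rule keeps screen parallax within about 1 degree.
p = screen_parallax_deg(depth_m=1.5, baseline_m=0.065, focal_mm=35,
                        sensor_width_mm=36, screen_width_m=1.0,
                        viewing_distance_m=3.0)
print(f"parallax = {p:.2f} deg, comfortable: {abs(p) <= 1.0}")
```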

    Quasar Parallax: a Method for Determining Direct Geometrical Distances to Quasars

    We describe a novel method to determine direct geometrical distances to quasars that can measure the cosmological constant, Lambda, with minimal assumptions. The method is equivalent to geometric parallax, with the `standard length' being the size of the quasar broad emission line region (BELR) as determined from the light-travel-time measurements of reverberation mapping. The effect of a non-zero Lambda on angular diameter is large, 40% at z=2, so mapping angular diameter distance versus redshift will yield Lambda with (relative) ease. In principle these measurements could be made in the UV, optical, near-infrared, or even X-ray bands. Interferometers with a resolution of 0.01 mas are needed to measure the size of the BELR in z=2 quasars; this appears plausible given reasonable short-term extrapolations of current technology. Comment: 13 pages, with 3 figures. ApJ Letters, in press (Dec 20, 2002).
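    The underlying geometry, restated here under the standard angular-diameter-distance relation (our paraphrase, not quoted from the paper), is:

```latex
% Reverberation mapping gives the BELR radius from the light travel time,
% and an interferometer measures the angular size theta of the BELR, so
% the angular diameter distance follows directly, as in classical parallax:
\begin{align}
  R_{\mathrm{BELR}} &= c\,\tau_{\mathrm{reverb}}, \\
  D_A(z) &= \frac{R_{\mathrm{BELR}}}{\theta}
          = \frac{c\,\tau_{\mathrm{reverb}}}{\theta}.
\end{align}
% Comparing the measured D_A(z) against predictions for different values
% of Lambda then constrains the cosmological constant.
```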

    The History of Astrometry

    The history of astrometry, the branch of astronomy dealing with the positions of celestial objects, is a lengthy and complex chronicle, having its origins in the earliest records of astronomical observations more than two thousand years ago, and extending to the high accuracy observations being made from space today. Improved star positions progressively opened up and advanced fundamental fields of scientific enquiry, including our understanding of the scale of the solar system, the details of the Earth's motion through space, and the comprehension and acceptance of Newtonianism. They also proved crucial to the practical task of maritime navigation. Over the past 400 years, during which positional accuracy has improved roughly logarithmically with time, the distances to the nearest stars were triangulated, making use of the extended measurement baseline given by the Earth's orbit around the Sun. This led to quantifying the extravagantly vast scale of the Universe, to a determination of the physical properties of stars, and to the resulting characterisation of the structure, dynamics and origin of our Galaxy. After a period in the middle years of the twentieth century in which accuracy improvements were greatly hampered by the perturbing effects of the Earth's atmosphere, ultra-high accuracies of star positions from space platforms have led to a renewed advance in this fundamental science over the past few years. Comment: 52 pages, 14 figures. To appear in The European Physical Journal: Historical Perspectives on Contemporary Physics.

    Cross-calibration of Time-of-flight and Colour Cameras

    Time-of-flight cameras provide depth information, which is complementary to the photometric appearance of the scene in ordinary images. It is desirable to merge the depth and colour information in order to obtain a coherent scene representation. However, the individual cameras will have different viewpoints, resolutions and fields of view, which means that they must be mutually calibrated. This paper presents a geometric framework for this multi-view and multi-modal calibration problem. It is shown that three-dimensional projective transformations can be used to align depth and parallax-based representations of the scene, with or without Euclidean reconstruction. A new evaluation procedure is also developed; this allows the reprojection error to be decomposed into calibration and sensor-dependent components. The complete approach is demonstrated on a network of three time-of-flight and six colour cameras. The applications of such a system, to a range of automatic scene-interpretation problems, are discussed. Comment: 18 pages, 12 figures, 3 tables.
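    As an illustration of the alignment step, the sketch below estimates a 4x4 projective transform between two 3D point sets by direct linear transformation (DLT). It is a generic solver written under our own assumptions, not the authors' estimator.

```python
import numpy as np

def fit_projective_3d(X, Y):
    """Fit a 4x4 projective transform H mapping homogeneous 3D points
    X -> Y (both shape (N, 3), N >= 5) by direct linear transformation.

    A sketch of the kind of alignment described between depth-based and
    parallax-based reconstructions; 15 degrees of freedom, 3 equations
    per correspondence.
    """
    N = len(X)
    Xh = np.hstack([X, np.ones((N, 1))])          # homogeneous coords (N, 4)
    A = []
    for xh, y in zip(Xh, Y):
        # Each correspondence gives 3 equations: h_i . x - y_i * (h_4 . x) = 0
        for i in range(3):
            row = np.zeros(16)
            row[4 * i:4 * i + 4] = xh
            row[12:16] = -y[i] * xh
            A.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(4, 4)                   # smallest-singular-value solution

def apply_projective_3d(H, X):
    """Apply H to (N, 3) points and dehomogenise."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    Yh = Xh @ H.T
    return Yh[:, :3] / Yh[:, 3:4]
```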