    On the Issue of Camera Calibration with Narrow Angular Field of View

    This paper considers the problem of calibrating a camera with a narrow angular field of view using standard perspective methods in computer vision. In doing so, it reveals the significance of perspective distortion for both camera calibration and pose estimation. Because narrow-field-of-view cameras make it difficult to obtain images rich in perspective cues, the accuracy of the resulting calibration is expectedly low. To compensate for this loss, we propose an alternative method that exploits the pose readings of a robotic manipulator. It achieves accurate pose estimation by nonlinear optimization, simultaneously minimizing reprojection errors and errors in the manipulator transformations. Accurate pose estimation in turn enables accurate parametrization of a perspective camera.
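    A minimal sketch of the joint-refinement idea, assuming a simple pinhole model and using SciPy/OpenCV; the parametrization and function names are illustrative, not the authors' implementation, and the manipulator residual term is omitted:

    ```python
    import cv2
    import numpy as np
    from scipy.optimize import least_squares

    def project(params, pts3d):
        """Pinhole projection: params = [fx, fy, cx, cy, rvec(3), tvec(3)]."""
        fx, fy, cx, cy = params[:4]
        R, _ = cv2.Rodrigues(params[4:7])
        cam = pts3d @ R.T + params[7:10]       # world -> camera frame
        uv = cam[:, :2] / cam[:, 2:3]          # perspective division
        return np.column_stack((fx * uv[:, 0] + cx, fy * uv[:, 1] + cy))

    def residuals(params, pts3d, pts2d):
        """Reprojection residuals; the paper additionally stacks residuals
        from the manipulator transformations into the same cost."""
        return (project(params, pts3d) - pts2d).ravel()

    # refined = least_squares(residuals, x0, args=(pts3d, pts2d))
    ```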

    Optical approach of a hypercatadioptric system depth of field

    A catadioptric system is composed of a mirror and a perspective camera. Because the mirror is curved and the distance between the mirror and the camera is short, parts of the panoramic image remain blurred. In this article, an optical model of a panoramic system using a hyperbolic mirror is presented and its depth of field is analyzed. The impact of the mirror and camera parameters on the quality of the panoramic image is investigated, and a practical method for choosing the camera and mirror is presented. Finally, the article outlines possible directions for future work based on this analysis.
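    For context, the depth-of-field limits of the perspective camera alone follow from standard thin-lens relations; the paper's hyperbolic-mirror analysis builds on top of such optics. A worked sketch (the numeric values are illustrative):

    ```python
    def depth_of_field(f, N, c, s):
        """Near/far limits of acceptable sharpness.
        f: focal length, N: f-number, c: circle of confusion,
        s: focus distance (all in the same unit, e.g. millimetres)."""
        H = f * f / (N * c) + f                   # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)
        far = s * (H - f) / (H - s) if s < H else float("inf")
        return near, far

    # Example: 8 mm lens at f/4, c = 0.02 mm, focused at 500 mm
    print(depth_of_field(8.0, 4.0, 0.02, 500.0))  # approx (310 mm, 1299 mm)
    ```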

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic-lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic-lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the mobile device's camera. Recent approaches counteract this distortion by estimating the user's head position and rendering the scene from the user's perspective. To this end, they usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands high computational resources and therefore commonly degrades application performance beyond the already high computational load of AR. In this paper, we present a method that reduces the computational demands of user perspective rendering by applying lightweight optical flow tracking and estimating the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, head-tracked user perspective rendering, and fixed-point-of-view user perspective rendering.
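    A hedged sketch of the lightweight tracking step, using pyramidal Lucas-Kanade optical flow on front-camera frames as a cheap stand-in for full face tracking; the aggregation of flow vectors into a single median shift is our illustrative simplification, not the paper's exact pipeline:

    ```python
    import cv2
    import numpy as np

    def estimate_user_motion(prev_gray, next_gray, prev_pts):
        """Track sparse features between frames and return a median 2D shift."""
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, prev_pts, None)
        good = status.ravel() == 1
        if not good.any():                        # tracking lost: report no motion
            return np.zeros(2), prev_pts
        shift = np.median(next_pts[good] - prev_pts[good], axis=0).ravel()
        return shift, next_pts[good]

    # prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
    #                                    qualityLevel=0.01, minDistance=8)
    ```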

    The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping

    Many tasks performed by autonomous vehicles, such as road marking detection, object tracking, and path planning, are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D top-down domain. Unfortunately, this leads to unnatural blurring and stretching of objects at greater distances, due to the resolution of the camera, which limits applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g. road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, revealing the underlying road layout more clearly. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach.
    Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 201
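    For reference, the conventional homography-based IPM that the paper improves on can be expressed in a few lines of OpenCV; the corner coordinates below are illustrative and depend on the camera mounting:

    ```python
    import cv2
    import numpy as np

    # Map a road trapezoid in the front-facing view to a top-down rectangle.
    src = np.float32([[520, 460], [760, 460], [1180, 690], [100, 690]])
    dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])
    H = cv2.getPerspectiveTransform(src, dst)
    # bird_eye = cv2.warpPerspective(frame, H, (1280, 720))
    ```

    Anything above the road plane violates the planar assumption behind this warp, which is exactly the source of the blurring and stretching the adversarial approach removes.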

    Generic decoupled image-based visual servoing for cameras obeying the unified projection model

    In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The scheme is based on the surface of object projections onto the unit sphere; such features are invariant to rotational motion, which allows the translational motion to be controlled independently of the rotational motion. Finally, the proposed scheme is validated in experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
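    A minimal sketch of the unified projection model underlying the scheme: a 3D point is lifted onto the unit sphere and then projected perspectively from a center offset by the mirror parameter xi (xi = 0 recovers the classical perspective camera):

    ```python
    import numpy as np

    def unified_project(X, xi):
        """Project a 3D point X to normalized image coordinates."""
        xs = X / np.linalg.norm(X)        # lift onto the unit sphere
        return xs[:2] / (xs[2] + xi)      # perspective step from the offset center

    # xi = 0.0 gives a pinhole camera; larger xi models fisheye/catadioptric optics.
    ```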