
    Mosaics from arbitrary stereo video sequences

    Although mosaics are well established as a compact and non-redundant representation of image sequences, their application still suffers from restrictions on the camera motion or has to deal with parallax errors. We present an approach that allows construction of mosaics from arbitrary motion of a head-mounted camera pair. As there are no parallax errors when creating mosaics of planar objects, our approach first decomposes the scene into planar sub-scenes using stereo vision and creates a mosaic for each plane individually. The power of the presented mosaicing technique is evaluated in an office scenario, including an analysis of the parallax error.
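    The per-plane idea can be made concrete with a small sketch: fit a dominant plane to the stereo point cloud with RANSAC, then warp frames onto that plane's mosaic with the plane-induced homography. The following Python/NumPy sketch is only an illustration under assumed conventions (plane n.X + d = 0 in the first camera frame, Hartley-Zisserman style), not the paper's implementation; the function names are made up for the example.

    import numpy as np

    def fit_plane_ransac(points, n_iters=500, inlier_thresh=0.01, seed=0):
        """Fit one dominant plane n.X + d = 0 to an (N, 3) stereo point cloud."""
        rng = np.random.default_rng(seed)
        best_inliers, best_plane = None, None
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            if np.linalg.norm(n) < 1e-9:
                continue  # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            d = -n @ sample[0]
            inliers = np.abs(points @ n + d) < inlier_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (n, d)
        return best_plane, best_inliers

    def plane_induced_homography(K, R, t, n, d):
        """Homography mapping camera-1 pixels to camera-2 pixels for points on
        the plane n.X + d = 0 expressed in the first camera frame."""
        return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)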

    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

    New vision sensors, such as the Dynamic and Active-pixel Vision Sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with their unconventional output, which consists of a stream of asynchronous brightness changes (called "events") and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e., rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. (Comment: 7 pages, 4 figures, 3 tables)
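    As a rough illustration of working with the text-file release mentioned above, the snippet below loads events and ground-truth poses with NumPy. The file names and column layouts (events as "timestamp x y polarity", poses as timestamp plus position and quaternion) are assumptions for this sketch; the dataset's own documentation is authoritative.

    import numpy as np

    def load_events(path="events.txt"):
        """Assumed format: one event per line as 't x y polarity'."""
        data = np.loadtxt(path)
        t = data[:, 0]                        # seconds
        x, y = data[:, 1].astype(int), data[:, 2].astype(int)
        p = data[:, 3].astype(int)            # 0 = OFF, 1 = ON (assumed)
        return t, x, y, p

    def load_groundtruth(path="groundtruth.txt"):
        """Assumed format: 't px py pz qx qy qz qw' per line (motion-capture poses)."""
        return np.loadtxt(path)

    t, x, y, p = load_events()
    print(f"{t.size} events over {t[-1] - t[0]:.2f} s, {p.mean():.0%} ON events")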

    04251 -- Imaging Beyond the Pinhole Camera

    From 13.06.04 to 18.06.04, the Dagstuhl Seminar 04251 "Imaging Beyond the Pin-hole Camera. 12th Seminar on Theoretical Foundations of Computer Vision" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Robust Techniques for Feature-based Image Mosaicing

    Over the last few decades, image mosaicing for real-time applications has been a challenging field for image processing experts. It has wide applications in video conferencing, 3D image reconstruction, satellite imaging, and several medical as well as computer vision fields. It can also be used for mosaic-based localization, motion detection and tracking, augmented reality, resolution enhancement, generating a large field of view (FOV), etc. In this research work, a feature-based image mosaicing technique using image fusion is proposed. Image mosaicing algorithms fall into two broad categories: direct methods and feature-based methods. Direct methods need a good initialization, whereas feature-based methods do not require initialization during registration. Feature-based techniques primarily follow four steps: feature detection, feature matching, transformation model estimation, and image resampling and transformation. SIFT and SURF are feature-detection algorithms used to accomplish image mosaicing, but each has its own limitations and advantages depending on the application. The proposed method employs these two feature-based image mosaicing techniques to generate an output image that overcomes the limitations of both in terms of image quality. The developed robust algorithm handles the combined effect of rotation, illumination, noise variation, and other minor variations. Initially, the input images are stitched together using the popular stitching algorithms, i.e., the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). To extract the best features from the stitching results, blending is performed by means of the Discrete Wavelet Transform (DWT) using the maximum-selection rule for both the approximation and detail components.
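    A minimal sketch of the kind of pipeline described above might look as follows, using OpenCV for SIFT-based stitching and PyWavelets for DWT fusion with the maximum-selection rule. It is an illustration, not the authors' code: SURF is omitted because it is absent from default OpenCV builds (in their method one would fuse the SIFT-stitched and SURF-stitched results rather than two arbitrary images), and the 0.75 ratio and RANSAC threshold are assumed values.

    import cv2
    import numpy as np
    import pywt

    def stitch_pair(img1, img2):
        """Warp grayscale img2 into img1's frame via SIFT matches + RANSAC homography."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
        src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img1.shape[:2]
        return cv2.warpPerspective(img2, H, (w, h))  # canvas not enlarged, for brevity

    def fuse_dwt_max(a, b, wavelet="db2"):
        """Fuse two aligned images, keeping the larger DWT coefficient
        for both the approximation and the detail sub-bands."""
        ca, (ch, cv_, cd) = pywt.dwt2(a.astype(float), wavelet)
        cb, (dh, dv, dd) = pywt.dwt2(b.astype(float), wavelet)
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused = pywt.idwt2((pick(ca, cb), (pick(ch, dh), pick(cv_, dv), pick(cd, dd))), wavelet)
        return np.clip(fused, 0, 255).astype(np.uint8)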

    A cognitive ego-vision system for interactive assistance

    With increasing computational power and decreasing size, computers nowadays are already wearable and mobile. They have become companions in people's everyday life. Personal digital assistants and mobile phones equipped with adequate software attract a lot of public interest, although the functionality they provide in terms of assistance is little more than a mobile database for appointments, addresses, to-do lists, and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capabilities leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for the underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between the two. Ego-vision systems (EVS) take a user's (visual) perspective and integrate the human into the system's processing loop by means of a shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user's visual perception, and they articulate their knowledge and interpretation by means of augmentations of the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given; benefits and challenges of this paradigm are discussed as well. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of this VAM; the functionality of the integrated system emerges from the coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed and some exemplary processing paths in the system are presented. The system assists users in object manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results of different integrated memory processes are presented, as well as an assessment of the interactive system by means of these user studies.
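    To make the memory idea more tangible, here is a deliberately simplified Python sketch, not the thesis architecture: a shared store of timestamped memory elements and independent memory processes that read, rewrite, and prune it. The element fields, the process interface, and the decay rule are all assumptions chosen for illustration.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class MemoryElement:
        kind: str                  # e.g. "object-hypothesis", "action", "mosaic-patch"
        data: dict
        reliability: float = 0.5   # belief in the hypothesis, updated by processes
        timestamp: float = field(default_factory=time.time)

    class VisualActiveMemory:
        def __init__(self):
            self.elements = []
            self.processes = []    # callables taking the memory as argument

        def insert(self, element):
            self.elements.append(element)

        def query(self, kind):
            return [e for e in self.elements if e.kind == kind]

        def step(self):
            """One cycle: every registered memory process analyzes and modifies the memory."""
            for process in self.processes:
                process(self)

    def decay_process(memory, half_life=10.0):
        """Example process: forget object hypotheses that were never re-confirmed."""
        now = time.time()
        for e in memory.query("object-hypothesis"):
            e.reliability *= 0.5 ** ((now - e.timestamp) / half_life)
        memory.elements = [e for e in memory.elements if e.reliability > 0.05]

    vam = VisualActiveMemory()
    vam.processes.append(decay_process)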

    Computer Vision and Image Understanding xxx

    A compact visual representation, called the 3D layered, adaptive-resolution, and multi-perspective panorama (LAMP), is proposed for representing large-scale 3D scenes with large variations of depth and obvious occlusions. Two kinds of 3D LAMP representations are proposed: the relief-like LAMP and the image-based LAMP. Both types of LAMPs concisely represent almost all the information from a long image sequence. Methods to construct LAMP representations from video sequences with dominant translation are provided. The relief-like LAMP is basically a single extended multi-perspective panoramic view image. Each pixel has a pair of texture and depth values, but a pixel may also have multiple texture-depth pairs to represent occlusion in layers, in addition to adaptive resolution changing with depth. The image-based LAMP, on the other hand, consists of a set of multi-perspective layers, each of which has a pair of 2D texture and depth maps, but with adaptive time-sampling scales depending on the depths of scene points. Several examples of 3D LAMP construction for real image sequences are given. The 3D LAMP is a concise and powerful representation for image-based rendering.
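    The two representations lend themselves to simple data structures. The sketch below is an illustrative reading of the description above (field names and layouts are assumptions, not the paper's): the relief-like LAMP keeps one or more texture-depth pairs per panorama pixel, while the image-based LAMP keeps a list of layers, each a texture map paired with a depth map.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class ReliefLamp:
        """Single multi-perspective panorama; each pixel stores one or more
        (texture, depth) pairs so that occluded layers are preserved."""
        width: int
        height: int
        samples: list = None   # samples[y][x] -> list of (intensity, depth), nearest first

        def __post_init__(self):
            if self.samples is None:
                self.samples = [[[] for _ in range(self.width)] for _ in range(self.height)]

        def add_sample(self, x, y, intensity, depth):
            self.samples[y][x].append((intensity, depth))
            self.samples[y][x].sort(key=lambda s: s[1])   # keep the nearest surface first

    @dataclass
    class ImageBasedLampLayer:
        """One multi-perspective layer: texture map plus per-pixel depth map."""
        texture: np.ndarray    # (H, W) or (H, W, 3)
        depth: np.ndarray      # (H, W), same resolution as the texture

    @dataclass
    class ImageBasedLamp:
        layers: list = field(default_factory=list)   # ordered, e.g. near-to-far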

    Video alignment to a common reference

    Handheld videos often include unintentional motion (jitter) and intentional motion (pan and/or zoom). Human viewers prefer to see jitter removed, creating a smoothly moving camera. For video analysis, in contrast, aligning to a fixed stable background is sometimes preferable. This thesis presents an algorithm that removes both forms of motion using a novel and efficient way of tracking background points while ignoring moving foreground points. The approach is related to image mosaicing, but the result is a video rather than an enlarged still image. It is also related to multiple-object tracking approaches, but simpler, since moving objects need not be explicitly tracked. The algorithm takes a video as input and returns one or several stabilized videos. Videos are broken into parts when the algorithm detects a background change and it becomes necessary to fix upon a new background. We present two techniques in this thesis: one stabilizes the video with respect to the first available frame, and the other stabilizes the video with respect to a best frame. Our approach assumes the person holding the camera is standing in one place and that objects in motion do not dominate the image. Our algorithm performs better than previously published approaches when compared on 1,401 handheld videos from the recently released Point-and-Shoot Face Recognition Challenge (PaSC).
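    For the "first available frame" variant, a rough OpenCV sketch of the alignment step is shown below. It is not the thesis code: foreground rejection here is handled only by RANSAC inside the homography fit, a simplification of the described background-point tracking, and the feature and threshold parameters are assumed values.

    import cv2

    def stabilize_to_first_frame(video_path, out_path):
        cap = cv2.VideoCapture(video_path)
        ok, ref = cap.read()
        assert ok, "could not read the reference (first) frame"
        h, w = ref.shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 cap.get(cv2.CAP_PROP_FPS) or 30.0, (w, h))
        ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
        ref_pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=400,
                                          qualityLevel=0.01, minDistance=8)
        writer.write(ref)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, gray, ref_pts, None)
            good = status.ravel() == 1
            # RANSAC discards tracks on moving foreground objects (background dominates).
            H, _ = cv2.findHomography(pts[good], ref_pts[good], cv2.RANSAC, 3.0)
            writer.write(cv2.warpPerspective(frame, H, (w, h)))
        cap.release()
        writer.release()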