1,138 research outputs found

    Mosaics from arbitrary stereo video sequences

    Although mosaics are well established as a compact and non-redundant representation of image sequences, their application still suffers from restrictions on camera motion or has to deal with parallax errors. We present an approach that allows mosaics to be constructed from arbitrary motion of a head-mounted camera pair. Since there are no parallax errors when creating mosaics of planar objects, our approach first decomposes the scene into planar sub-scenes using stereo vision and creates a mosaic for each plane individually. The power of the presented mosaicing technique is evaluated in an office scenario, including an analysis of the parallax error.
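    The per-plane strategy rests on a standard fact: two views of a planar surface are related by a 3 x 3 homography, so each planar sub-scene can be mosaicked without parallax. A minimal numpy sketch of the point mapping (the function name and matrix values are illustrative, not taken from the paper):

```python
import numpy as np

def apply_homography(H, pts):
    """Map N x 2 image points through a 3 x 3 homography.

    This mapping is parallax-free only when the imaged points lie on one
    plane, which is why the scene is first split into planar sub-scenes.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to pixel coordinates

# A pure image-plane shift is the simplest homography: every corner of a
# 100 x 100 patch moves by (10, 5) pixels in the mosaic frame.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
mosaic_corners = apply_homography(H, corners)
```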

    Seafloor Video Mapping: Modeling, Algorithms, Apparatus

    This paper discusses a technique for constructing a high-resolution image mosaic from a video sequence and synchronously logged camera attitude information. The technique allows one to infer geometric characteristics of the imaged terrain and hence improve mosaic quality and reduce the computational burden. It is demonstrated using numerical modeling and is applied to video data collected on Rainsford Island, Mass. Calculating the transformation that relates consecutive image frames is an essential operation affecting the reliability of the whole mosaicing process. Improvements to the algorithm are suggested that significantly decrease the possibility of convergence to an inappropriate solution.

    Sensor-Assisted Video Mosaicing for Seafloor Mapping

    This paper discusses a proposed processing technique for combining video imagery with auxiliary sensor information. The latter greatly simplifies image processing by reducing the complexity of the transformation model. The mosaics produced by this technique are adequate for many applications, in particular habitat mapping. The algorithm is demonstrated through simulations, and the hardware configuration is described.

    Improvement of Image Alignment Using Camera Attitude Information

    We discuss a proposed technique for incorporating information from a variety of sensors into a video imagery processing pipeline. The auxiliary information simplifies the computations, effectively reducing the number of independent parameters in the transformation model. The mosaics produced by this technique are adequate for many applications, in particular habitat mapping. The algorithm is demonstrated through simulations, and the hardware configuration is described in detail.
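    The parameter reduction in these sensor-assisted pipelines can be made concrete: when an attitude sensor supplies the inter-frame rotation R and the camera intrinsics K are calibrated, the general 8-parameter projective transform collapses to a closed form with no parameters left to estimate from the images. A numpy sketch (the intrinsics and rotation values are invented for illustration; this is the textbook rotation-induced homography, not the papers' exact model):

```python
import numpy as np

def rotation_homography(K, R):
    """Inter-frame homography induced by a pure camera rotation.

    With R supplied by an attitude sensor and K calibrated in advance,
    H = K R K^-1 has no remaining free parameters, versus 8 for a
    general projective transform estimated from the imagery alone.
    """
    return K @ R @ np.linalg.inv(K)

# Illustrative intrinsics and a 5-degree yaw reported by the attitude sensor.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(5.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = rotation_homography(K, R)
```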

    Laryngoscopic Image Stitching for View Enhancement and Documentation - First Experiences

    One known problem in laryngoscopy is the spatially limited view onto the hypopharynx and the larynx through the endoscope. To examine the complete larynx and hypopharynx, the laryngoscope can be rotated about its main axis, giving the physician a complete view. If such examinations are captured as endoscopic video, the examination can be reviewed in detail at a later time. To document the examination with a single representative image, however, a panorama image can be computed for archiving and enhanced documentation. Twenty patients with various clinical findings were examined with a 70° rigid laryngoscope, and the video sequences were digitally stored. The image sequence for each patient was then post-processed using an image stitching tool based on SIFT features, the RANSAC approach and blending. As a result, endoscopic panorama images of the larynx and pharynx were obtained for each video sequence. The proposed approach of image stitching for laryngoscopic video sequences offers a new tool for enhanced visual examination and documentation of morphologic characteristics of the larynx and the hypopharynx.
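    The RANSAC stage of such a stitching pipeline can be sketched in a few lines. The toy below estimates only a 2-D translation between matched keypoints rather than the full homography a laryngoscopic stitcher would fit, but the sample/score/keep loop is identical (all names and thresholds are illustrative):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, rng=None):
    """Robustly estimate a 2-D translation between matched keypoints.

    Same loop as homography RANSAC, with a minimal sample of one
    correspondence instead of four: hypothesize a model from a random
    sample, count inliers, and keep the best hypothesis.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # model from a minimal sample
        residuals = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((residuals < tol).sum())     # score the model
        if inliers > best_inliers:                 # keep the best so far
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

Mismatched SIFT features simply become outliers that never agree with the winning model, which is what makes the estimated transform usable for blending.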

    Low-Cost Compressive Sensing for Color Video and Depth

    A simple and inexpensive (low-power and low-bandwidth) modification is made to a conventional off-the-shelf color video camera, from which we recover multiple color frames for each of the original measured frames; each of the recovered frames can be focused at a different depth. The recovery of multiple frames per measured frame is made possible via high-speed coding, manifested by translating a single coded aperture; the inexpensive translation is achieved by mounting the binary code on a piezoelectric device. To simultaneously recover depth information, a liquid lens is modulated at high speed via a variable voltage. Consequently, during the aforementioned coding process, the liquid lens allows the camera to sweep the focus through multiple depths. In addition to designing and implementing the camera, fast recovery is achieved by an anytime algorithm exploiting the group-sparsity of wavelet/DCT coefficients. Comment: 8 pages, CVPR 201
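    The flavor of sparsity-exploiting, anytime recovery can be illustrated with iterative shrinkage-thresholding (ISTA), a standard sparse solver: every iteration improves the objective, so the loop can be stopped whenever the time budget runs out. This sketch uses plain l1 sparsity and invented problem dimensions; it is a stand-in for, not a reproduction of, the paper's group-sparse algorithm:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink coefficients toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.01, iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.

    Anytime in the same sense as the paper's solver: each pass is one
    gradient step on the data term followed by a shrinkage step, and the
    objective never increases, so intermediate iterates are usable.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x
```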

    A middleware for a large array of cameras

    Large arrays of cameras are increasingly employed to produce the high-quality image sequences needed for motion analysis research, which creates a logistical problem of coordinating and controlling a large number of cameras. In this paper, we present a lightweight multi-agent system for coordinating such camera arrays. The agent framework provides more than a remote sensor access API: it allows reconfigurable and transparent access to cameras, as well as software agents capable of intelligent processing, and it eases maintenance by encouraging code reuse. Additionally, our agent system includes an automatic discovery mechanism at startup and multiple language bindings. Performance tests showed the lightweight nature of the framework while validating its correctness and scalability. Two different camera agents were implemented to provide access to a large array of distributed cameras, and their correct operation was confirmed via several image processing agents.
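    The discovery-at-startup idea can be illustrated with a toy registry: agents announce a capability and an endpoint when they come up, and clients look agents up by capability instead of hard-coding addresses. The class and method names below are hypothetical, not the framework's actual API:

```python
class AgentRegistry:
    """Toy stand-in for the automatic discovery mechanism: agents announce
    themselves at startup; clients discover them by capability.
    """

    def __init__(self):
        self._agents = {}  # capability -> {agent name: endpoint}

    def announce(self, name, capability, endpoint):
        """Called by an agent when it starts up."""
        self._agents.setdefault(capability, {})[name] = endpoint

    def discover(self, capability):
        """Return {agent name: endpoint} for every agent offering the capability."""
        return dict(self._agents.get(capability, {}))
```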