
    On the Two-View Geometry of Unsynchronized Cameras

    We present new methods for simultaneously estimating camera geometry and time shift from video sequences captured by multiple unsynchronized cameras. We develop algorithms for the simultaneous computation of a fundamental matrix or a homography with an unknown time shift between images. Our methods use minimal correspondence sets (eight for the fundamental matrix and four and a half for the homography) and are therefore suitable for robust estimation using RANSAC. Furthermore, we present an iterative algorithm that extends applicability to sequences that are significantly unsynchronized, finding the correct time shift up to several seconds. We evaluated the methods on synthetic data and a wide range of real-world datasets, and the results show broad applicability to the problem of camera synchronization.
    Comment: 12 pages, 9 figures, Computer Vision and Pattern Recognition (CVPR) 2017
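
    The abstract does not spell out the minimal solvers, but the surrounding robust-estimation loop is easy to illustrate. The following Python sketch (NumPy and OpenCV; the trajectory arrays and the integer-shift grid search are assumptions for illustration, not the paper's sub-frame solver) scores candidate time shifts by the RANSAC inlier count of the estimated fundamental matrix:

        import numpy as np
        import cv2

        def estimate_f_with_shift(traj_a, traj_b, max_shift=60):
            """Brute-force search over integer frame shifts between two videos.
            traj_a, traj_b: (num_frames, num_points, 2) arrays of tracked point
            positions (hypothetical inputs). For each candidate shift, estimate
            a fundamental matrix with RANSAC and keep the shift with the most
            inliers. The paper's minimal solvers instead recover a sub-frame
            time shift jointly with the geometry."""
            best_f, best_shift, best_inliers = None, None, -1
            for shift in range(-max_shift, max_shift + 1):
                # Overlapping frame range once the candidate shift is applied.
                lo = max(0, shift)
                hi = min(len(traj_a), len(traj_b) + shift)
                if hi <= lo:
                    continue
                pts_a = traj_a[lo:hi].reshape(-1, 2).astype(np.float64)
                pts_b = traj_b[lo - shift:hi - shift].reshape(-1, 2).astype(np.float64)
                if len(pts_a) < 8:  # the eight-point minimum mentioned above
                    continue
                F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
                if F is not None and mask is not None and int(mask.sum()) > best_inliers:
                    best_f, best_shift, best_inliers = F, shift, int(mask.sum())
            return best_f, best_shift, best_inliers

    Scoring shifts by inlier count reflects the same intuition as the paper: the correct synchronization is the one under which a single epipolar geometry explains the most correspondences.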

    Wireless Software Synchronization of Multiple Distributed Cameras

    We present a method for precisely time-synchronizing the capture of image sequences from a collection of smartphone cameras connected over WiFi. Our method is entirely software-based, has only modest hardware requirements, and achieves an accuracy of less than 250 microseconds on unmodified commodity hardware. It does not use image content and synchronizes cameras prior to capture. The algorithm operates in two stages. In the first stage, we designate one device as the leader and synchronize each client device's clock to it by estimating network delay. Once clocks are synchronized, the second stage initiates continuous image streaming, estimates the relative phase of image timestamps between each client and the leader, and shifts the streams into alignment. We quantitatively validate our results on a multi-camera rig imaging a high-precision LED array and qualitatively demonstrate significant improvements to multi-view stereo depth estimation and stitching of dynamic scenes. We release as open source 'libsoftwaresync', an Android implementation of our system, to inspire new types of collective capture applications.
    Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures
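
    The first stage described above is an NTP-style clock-offset estimation. The Python sketch below shows the generic idea (hypothetical wire format and names; this is not the libsoftwaresync API): the client timestamps a probe before sending (t0) and after the reply arrives (t3), the leader reports its own receive time (t1), and, assuming a symmetric network path, the offset is approximately t1 - (t0 + t3)/2. Keeping only the lowest round-trip-time probes discards samples inflated by queuing delay:

        import struct
        import time

        def estimate_clock_offset(sock, n_probes=100):
            """Estimate leader-clock minus client-clock offset in nanoseconds
            over a connected socket (assumed protocol: client sends b"ping",
            leader replies with its receive timestamp as a big-endian int64)."""
            samples = []
            for _ in range(n_probes):
                t0 = time.monotonic_ns()                   # client send time
                sock.send(b"ping")
                t1 = struct.unpack("!q", sock.recv(8))[0]  # leader receive time
                t3 = time.monotonic_ns()                   # client receive time
                rtt = t3 - t0
                samples.append((rtt, t1 - (t0 + t3) // 2))
            samples.sort()                                 # lowest RTT first
            best = [offset for _, offset in samples[:max(1, n_probes // 10)]]
            return sum(best) // len(best)

    With clocks aligned this way, the second stage can compare image timestamps across devices directly and shift the capture phase of each stream into alignment.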

    Automatic alignment of surgical videos using kinematic data

    Over the past one hundred years, the classic teaching methodology of "see one, do one, teach one" has governed surgical education systems worldwide. With the advent of Operating Room 2.0, recording video, kinematic, and many other types of data during surgery has become an easy task, allowing artificial intelligence systems to be deployed in surgical and medical practice. Recently, surgical videos have been shown to provide a structure for peer coaching, enabling novice trainees to learn from experienced surgeons by replaying those videos. However, the high inter-operator variability in surgical gesture duration and execution makes learning by comparing novice and expert surgical videos very difficult. In this paper, we propose a novel technique to align multiple videos based on the alignment of their corresponding kinematic multivariate time series data. By leveraging the Dynamic Time Warping measure, our algorithm synchronizes a set of videos so that they show the same gesture being performed at different speeds. We believe the proposed approach is a valuable addition to existing learning tools for surgery.
    Comment: Accepted at AIME 2019
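
    Dynamic Time Warping, the measure this method builds on, finds a monotone correspondence between two multivariate series that minimizes cumulative distance. A minimal O(n*m) Python sketch follows (NumPy; the array shapes are assumptions, and the paper's implementation details may differ):

        import numpy as np

        def dtw_path(x, y):
            """Optimal warping path between multivariate series x: (n, d) and
            y: (m, d), using Euclidean point-to-point distance. A minimal
            dynamic-programming sketch, not the paper's implementation."""
            n, m = len(x), len(y)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(x[i - 1] - y[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j - 1],
                                         cost[i - 1, j],
                                         cost[i, j - 1])
            # Backtrack from (n, m) to recover the alignment path.
            path, i, j = [], n, m
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = int(np.argmin([cost[i - 1, j - 1],
                                      cost[i - 1, j],
                                      cost[i, j - 1]]))
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return path[::-1]

    Because kinematic samples are timestamped alongside the video frames, a warping path that aligns two kinematic series can be reused to remap frame indices between the videos, which is what lets the synchronized playback show the same gesture at different speeds.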

    Television noise reduction device

    A noise reduction system that divides the color video signal into its luminance and chrominance components is reported. The luminance component of a given frame is summed with the luminance component of at least one preceding frame, which was stored on a disc recorder. The summation is carried out so as to achieve a signal amplitude equivalent to that of the original signal. The averaged luminance signal is then recombined with the chrominance signal to achieve a noise-reduced television signal.
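
    The described scheme is temporal averaging of the luminance plane only. A modern digital analogue is easy to state in Python/NumPy (the array layout is an assumption; the original device did this with an analog signal chain and a disc recorder):

        import numpy as np

        def temporal_luma_average(frames_yuv, window=2):
            """Average the luminance (Y) plane over the current frame and the
            preceding window-1 frames, leaving chrominance untouched.
            frames_yuv: (num_frames, height, width, 3) array in Y, U, V order
            (an assumed layout). Averaging k frames of uncorrelated noise
            reduces its amplitude by roughly sqrt(k), while taking the mean
            preserves the original signal amplitude, matching the
            normalization the abstract describes."""
            out = frames_yuv.astype(np.float32)
            for t in range(len(frames_yuv)):
                first = max(0, t - window + 1)
                out[t, ..., 0] = frames_yuv[first:t + 1, ..., 0].mean(axis=0)
            return out.astype(frames_yuv.dtype)

    The trade-off, then as now, is motion blur: luminance averaging smears moving content, which is why such filters combine only a small number of frames.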

    Visualization of Accessible Multimedia Content in Web Pages

    Multimedia content is widely present on web sites, yet its consumption by users with disabilities, particularly those who rely on screen readers, is extremely difficult. When the audio track of a multimedia presentation starts, users with visual impairments must listen to both the presentation's audio and the screen reader's audio output, and the two audio streams cannot be controlled with a single volume control. Because of this difficulty in controlling the available audio streams, and because the player's control buttons are often hard for people with disabilities to reach, multimedia content is frequently inaccessible to users with visual impairments. Moreover, dynamic user interfaces pose a critical problem because screen readers cannot detect dynamic content changes. This paper presents some solutions for multimedia content production and distribution in distributed multimedia web presentations.
    Keywords: Accessible multimedia content, Synchronized Accessible Media Interchange

    ACE 16k based stand-alone system for real-time pre-processing tasks

    This paper describes the design of a programmable stand-alone system for real-time vision pre-processing tasks. The system's architecture has been implemented and tested using an ACE16k chip and a Xilinx XC4028XL FPGA. The ACE16k chip consists of an array of 128×128 identical, locally interacting, mixed-signal processing units that operate according to the single instruction, multiple data (SIMD) computing model; it was designed for high-speed image pre-processing tasks requiring moderate accuracy levels (7 bits). Input images are acquired using the optical input capabilities of the ACE16k chip and, after being processed according to a programmed algorithm, are displayed in real time on a TFT screen. The system is designed to store and run different algorithms and to allow changes and improvements. Its main board includes a digital core, implemented on a Xilinx 4028-series FPGA, which comprises a custom programmable control unit, a digital monochrome PAL video generator, and an image memory selector. Video SRAM chips are included to store and access images processed by the ACE16k. Two daughter boards hold the program SRAM, and a video DAC-mixer card is used to generate the composite analog video signal.
    Funding: European Commission IST2001-38097; Ministerio de Ciencia y Tecnología TIC2003-09817-C02-01; Office of Naval Research (USA) N00014021088
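
    Focal-plane array processors of the ACE16k family are commonly programmed with cellular-network-style 3×3 templates: a single template (the "instruction") is applied by all 128×128 cells simultaneously, which is the SIMD behavior the abstract refers to. The Python/SciPy sketch below emulates one generic template step by Euler integration (the template values, bias, and time step are hypothetical; the chip itself evaluates these dynamics in analog hardware rather than in software):

        import numpy as np
        from scipy.signal import convolve2d

        def cnn_template_step(state, input_img, A, B, z, dt=0.1, steps=50):
            """Emulate one cellular-network template on a 2-D array.
            state, input_img: (128, 128) float arrays; A, B: 3x3 feedback and
            control templates; z: scalar bias. Every cell updates
            simultaneously from its 3x3 neighborhood -- single instruction,
            multiple data."""
            for _ in range(steps):
                y = np.clip(state, -1.0, 1.0)  # saturating output nonlinearity
                state = state + dt * (-state
                                      + convolve2d(y, A, mode="same")
                                      + convolve2d(input_img, B, mode="same")
                                      + z)
            return np.clip(state, -1.0, 1.0)

    In the system described here, the FPGA-based control unit sequences such operations as part of a stored algorithm, while the video SRAMs hold the intermediate images moving to and from the chip.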

    Construction of ATS Cloud Console Final Report

    ATS cloud console for rapid analysis of cloud image sequences.