12,043 research outputs found

    Rapid sync acquisition system (Patent)

    System designed to reduce the time required to obtain synchronization in data communication with spacecraft using a pseudonoise code.
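
    The abstract states only the goal, but pseudonoise (PN) code synchronization classically reduces to a sliding-correlation search: the receiver correlates a local replica of the PN sequence against the incoming chips at every offset and locks onto the correlation peak. The sketch below illustrates that idea; the LFSR taps, register length, and 7-chip sequence are illustrative assumptions, not details from the patent.

```python
import numpy as np

def lfsr_sequence(taps, state, length):
    """Generate a pseudonoise (PN) bit sequence from a linear-feedback
    shift register. `taps` are 0-based register positions XORed into
    the feedback bit; `state` is a nonzero initial register."""
    out = []
    reg = list(state)
    for _ in range(length):
        out.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t]
        reg = [fb] + reg[:-1]
    return np.array(out)

def find_code_phase(received, code):
    """Slide the local PN replica over the received chips and return
    the offset with the highest correlation (the sync point).
    Bits are mapped 0/1 -> +1/-1 so aligned chips add coherently."""
    r = 1 - 2 * received.astype(int)
    c = 1 - 2 * code.astype(int)
    scores = [np.dot(np.roll(c, k), r[: len(c)]) for k in range(len(c))]
    return int(np.argmax(scores))

# Toy usage: a 7-chip m-sequence received with an unknown 3-chip delay.
code = lfsr_sequence(taps=[0, 2], state=[1, 0, 0], length=7)
received = np.roll(code, 3)
print(find_code_phase(received, code))  # -> 3
```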

    Parsing a sequence of qubits

    We develop a theoretical framework for frame synchronization, also known as block synchronization, in the quantum domain, which makes it possible to attach classical and quantum metadata to quantum information sent over a noisy channel even when the information source and sink are frame-wise asynchronous. This eliminates the need for frame synchronization at the hardware level and allows qubit sequences to be parsed during quantum information processing. Our framework exploits binary constant-weight codes that are self-synchronizing. Possible applications include asynchronous quantum communication, such as a self-synchronizing quantum network where a user can hop onto the channel at any time, catch the next incoming quantum information with a label indicating the sender, and reply by routing her own quantum information with control qubits for quantum switches, all without assuming prior frame synchronization between users.
    Comment: 11 pages, 2 figures, 1 table. Final accepted version for publication in the IEEE Transactions on Information Theory.
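
    The ingredient the abstract names is a binary constant-weight code that is also self-synchronizing: an out-of-phase window across two concatenated codewords is never itself a valid codeword, so a receiver can recover frame boundaries without prior alignment. The sketch below checks this comma-free property and greedily builds such a code over classical bits; the greedy construction and the toy parameters (n = 6, w = 2) are illustrative assumptions, not the paper's construction.

```python
from itertools import combinations

def constant_weight_words(n, w):
    """All binary words of length n with Hamming weight w."""
    return [tuple(1 if i in ones else 0 for i in range(n))
            for ones in combinations(range(n), w)]

def is_comma_free(codebook):
    """A code is comma-free (block-level self-synchronizing) if no
    codeword occurs inside the concatenation of two codewords at a
    nonzero offset; an out-of-phase parser can then never mistake a
    boundary-straddling window for a valid word."""
    n = len(next(iter(codebook)))
    for a in codebook:
        for b in codebook:
            cat = a + b
            for k in range(1, n):
                if cat[k:k + n] in codebook:
                    return False
    return True

def greedy_comma_free(n, w):
    """Greedily grow a comma-free subset of the weight-w length-n words."""
    chosen = set()
    for word in constant_weight_words(n, w):
        if is_comma_free(chosen | {word}):
            chosen.add(word)
    return chosen

# Toy usage: a comma-free constant-weight code of length 6, weight 2.
print(sorted(greedy_comma_free(6, 2)))
```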

    Wireless Software Synchronization of Multiple Distributed Cameras

    We present a method for precisely time-synchronizing the capture of image sequences from a collection of smartphone cameras connected over WiFi. Our method is entirely software-based, has only modest hardware requirements, and achieves an accuracy of less than 250 microseconds on unmodified commodity hardware. It does not use image content and synchronizes cameras prior to capture. The algorithm operates in two stages. In the first stage, we designate one device as the leader and synchronize each client device's clock to it by estimating network delay. Once clocks are synchronized, the second stage initiates continuous image streaming, estimates the relative phase of image timestamps between each client and the leader, and shifts the streams into alignment. We quantitatively validate our results on a multi-camera rig imaging a high-precision LED array and qualitatively demonstrate significant improvements to multi-view stereo depth estimation and stitching of dynamic scenes. We release 'libsoftwaresync', an Android implementation of our system, as open source to inspire new types of collective capture applications.
    Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures.
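
    The first stage, synchronizing each client's clock to a leader by estimating network delay, is in the spirit of an NTP-style round-trip exchange. The sketch below shows that classical offset estimate with min-RTT filtering across rounds; the filtering choice and the simulated 5 ms offset are our assumptions for illustration, not necessarily what libsoftwaresync does.

```python
import random

def ntp_style_offset(t0, t1, t2, t3):
    """NTP-style clock offset estimate from one round trip:
    t0 = client send, t1 = leader receive, t2 = leader reply,
    t3 = client receive (t0, t3 on the client clock; t1, t2 on the
    leader clock). Assumes roughly symmetric network delay."""
    return ((t1 - t0) + (t2 - t3)) / 2

def estimate_offset(exchange, rounds=50):
    """Run many exchanges and keep the estimate from the round with
    the smallest round-trip time, where asymmetric queuing delay is
    least likely to have skewed the result."""
    best = None
    for _ in range(rounds):
        t0, t1, t2, t3 = exchange()
        rtt = (t3 - t0) - (t2 - t1)
        if best is None or rtt < best[0]:
            best = (rtt, ntp_style_offset(t0, t1, t2, t3))
    return best[1]

# Toy usage: simulate a leader clock 5 ms ahead with jittery delays.
def make_simulated_exchange(true_offset=0.005):
    state = {"t": 0.0}
    def exchange():
        d1 = random.uniform(1e-4, 2e-3)      # client -> leader delay
        d2 = random.uniform(1e-4, 2e-3)      # leader -> client delay
        t0 = state["t"]                      # client send (client clock)
        t1 = t0 + d1 + true_offset           # leader receive (leader clock)
        t2 = t1 + 1e-5                       # leader reply (leader clock)
        t3 = t2 - true_offset + d2           # client receive (client clock)
        state["t"] += 0.01
        return t0, t1, t2, t3
    return exchange

print(round(estimate_offset(make_simulated_exchange()), 4))  # ~0.005
```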

    DDD17: End-To-End DAVIS Driving Dataset

    Event cameras, such as dynamic vision sensors (DVS) and dynamic and active-pixel vision sensors (DAVIS), can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, in this work we investigate the use of the combined DVS and APS streams in end-to-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 contains over 12 h of recordings from a 346x260 pixel DAVIS sensor covering highway and city driving in daytime, evening, night, dry, and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car's on-board diagnostics interface. As an example application, we performed a preliminary end-to-end learning study using a convolutional neural network trained to predict the instantaneous steering angle from DVS and APS visual data.
    Comment: Presented at the ICML 2017 Workshop on Machine Learning for Autonomous Vehicles.
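
    Before a convolutional network can consume the asynchronous DVS stream alongside APS frames, the events are typically binned into fixed-size 2D frames. The sketch below shows one common representation, a per-pixel signed sum of event polarities over a time window, at the 346x260 DAVIS resolution the dataset uses; the window length and the signed-sum choice are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def events_to_frame(events, height=260, width=346, t_start=0.0, t_end=0.05):
    """Accumulate DVS events into a single 2D frame by summing signed
    polarities per pixel over a time window. `events` is an array of
    rows (t, x, y, polarity) with polarity in {-1, +1}. The 346x260
    resolution matches the DAVIS sensor used in DDD17."""
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_end)
    for t, x, y, p in events[mask]:
        frame[int(y), int(x)] += p
    return frame

# Toy usage: three synthetic events inside the window, one outside.
events = np.array([
    [0.010, 10, 20, +1],
    [0.020, 10, 20, +1],
    [0.030, 11, 20, -1],
    [0.900, 11, 20, -1],
])
frame = events_to_frame(events)
print(frame[20, 10], frame[20, 11])  # -> 2.0 -1.0
```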