
    A sparse-to-dense method for 3D optical flow estimation in 3D light microscopy image sequences

    We present a two-stage 3D optical flow estimation method for light microscopy image volumes. The method takes a pair of light microscopy image volumes as input, segments the 2D slices of the source volume into superpixels, and sparsely estimates 3D displacement vectors over the volume pair. A weighted interpolation, which takes edges and motion boundaries into account, then produces a dense 3D flow field. Our experiments show a substantial gain in execution speed, with accuracy evaluated on computer-generated 3D data; promising results on real 3D image sequences are also reported.
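    The densification stage described above can be sketched as follows. This is a minimal illustration using plain Gaussian distance weights over the k nearest sparse matches; the function name, parameters, and use of a k-d tree are assumptions, and the paper's interpolation additionally weights by edge and motion-boundary cues.

```python
import numpy as np
from scipy.spatial import cKDTree

def densify_flow(seed_points, seed_flows, grid_shape, sigma=5.0, k=8):
    """Interpolate sparse 3D flow vectors onto a dense voxel grid.

    seed_points: (N, 3) voxel coordinates of sparse matches.
    seed_flows:  (N, 3) displacement vectors at those points.
    Each output voxel is a distance-weighted average of its k
    nearest seeds (Gaussian weights with scale sigma).
    """
    tree = cKDTree(seed_points)
    zz, yy, xx = np.indices(grid_shape)
    voxels = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3)
    dist, idx = tree.query(voxels, k=k)          # k nearest seeds per voxel
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # Gaussian distance weights
    w /= w.sum(axis=1, keepdims=True)            # normalize per voxel
    dense = (w[..., None] * seed_flows[idx]).sum(axis=1)
    return dense.reshape(*grid_shape, 3)
```

    Voxels near a seed inherit mostly that seed's displacement, which is the behavior an edge-aware variant then modulates so that weights do not leak across object boundaries.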

    4D Temporally Coherent Light-field Video

    Light-field video has recently been used in virtual and augmented reality applications to increase realism and immersion. However, existing light-field methods are generally limited to static scenes because they require acquiring a dense scene representation. The large amount of data and the absence of methods to infer temporal coherence pose major challenges in storage, compression and editing compared to conventional video. In this paper, we propose the first method to extract a spatio-temporally coherent light-field video representation. A novel method to obtain Epipolar Plane Images (EPIs) from a sparse light-field camera array is proposed. EPIs are used to constrain scene flow estimation and obtain 4D temporally coherent representations of dynamic light-fields. Temporal coherence is achieved on a variety of light-field datasets. Evaluation of the proposed light-field scene flow against existing multi-view dense correspondence approaches demonstrates a significant improvement in the accuracy of temporal coherence. Comment: Published in 3D Vision (3DV) 201
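    The EPI construction underlying this approach can be illustrated for the simple case of a dense, rectified horizontal camera row (the function name is hypothetical; obtaining EPIs from a sparse array, the paper's actual contribution, requires first recovering the missing views):

```python
import numpy as np

def horizontal_epi(views, row):
    """Build an Epipolar Plane Image from a horizontal camera row.

    views: (V, H, W) stack of rectified grayscale views, ordered by
    camera position along the baseline. Fixing one image row and
    stacking it across all views yields a (V, W) slice in which each
    scene point traces a straight line whose slope is proportional
    to its disparity, and hence to its inverse depth.
    """
    return np.asarray(views)[:, row, :]
```

    The line structure of EPIs is what makes them a strong constraint: a scene-flow estimate that breaks these lines is, by construction, temporally or spatially inconsistent.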

    3D Flow Field Estimation and Assessment for Live Cell Fluorescence Microscopy

    Motivation: The revolution in light-sheet microscopy enables the concurrent observation of thousands of dynamic processes, from single molecules to cellular organelles, with high spatiotemporal resolution. However, interpreting such multidimensional data requires fully automatic measurement of those motions to link local processes to cellular functions. This includes the design and implementation of image-processing pipelines able to deal with diverse motion types, and 3D visualization tools adapted to the human visual system. Results: Here, we describe a new method for 3D motion estimation that addresses the aforementioned issues. We integrate 3D matching and a variational approach to handle a diverse range of motions without any prior on the shape of moving objects. We compare different similarity measures to cope with intensity ambiguities and demonstrate the effectiveness of the Census signature for both stages. Additionally, we present two intuitive visualization approaches that adapt complex 3D measures into an interpretable 2D view, and a novel way to assess the quality of flow estimates in the absence of ground truth.
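    As an illustration of the Census signature mentioned above, a minimal 3D version might look like this (function names and the neighborhood radius are assumptions, not the paper's implementation):

```python
import numpy as np

def census_3d(vol, radius=1):
    """3D Census signature: for each voxel, a bit vector recording
    whether each neighbor in a (2r+1)^3 window is darker than the
    center. The signature depends only on the local intensity order,
    so matching costs built on it are robust to the intensity
    fluctuations common in fluorescence microscopy."""
    pad = np.pad(vol, radius, mode='edge')
    r = radius
    bits = []
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dz == dy == dx == 0:
                    continue  # skip the center voxel itself
                shifted = pad[r + dz:r + dz + vol.shape[0],
                              r + dy:r + dy + vol.shape[1],
                              r + dx:r + dx + vol.shape[2]]
                bits.append(shifted < vol)
    return np.stack(bits, axis=-1)  # (..., 26) boolean signature

def hamming_cost(sig_a, sig_b):
    """Matching cost between two Census signatures."""
    return np.count_nonzero(sig_a != sig_b, axis=-1)
```

    The Hamming distance between signatures then serves as the similarity measure in both the matching and variational stages.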

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
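    The event stream described above can be sketched with a minimal data structure and the common trick of accumulating events into a signed count image, a typical first step before applying frame-based vision tools (the names here are illustrative, not from any particular camera SDK):

```python
import numpy as np
from collections import namedtuple

# An event encodes when and where brightness changed, and in which
# direction: timestamp t (microseconds), pixel (x, y), polarity p = +1/-1.
Event = namedtuple("Event", ["t", "x", "y", "p"])

def accumulate(events, width, height):
    """Collapse an event stream into a signed event-count image:
    each pixel sums the polarities of the events it fired."""
    img = np.zeros((height, width), dtype=np.int32)
    for e in events:
        img[e.y, e.x] += e.p
    return img
```

    Accumulation discards the fine timing that makes event cameras attractive, which is precisely why the survey also covers representations and processors that keep events asynchronous.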

    10411 Abstracts Collection -- Computational Video

    From 10.10.2010 to 15.10.2010, the Dagstuhl Seminar 10411 "Computational Video" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.