
    Sequence-Level Reference Frames in Video Coding

    The proliferation of low-cost DRAM chipsets now allows for substantially increased decoded picture buffers in advanced video coding standards such as HEVC, VVC, and Google's VP9. At the same time, the increasing prevalence of rapid scene changes and repeated scenes in entertainment and broadcast content suggests that extending the frame referencing interval to tens of minutes, or even the entire video sequence, may offer coding gains, provided that frame similarity can be identified in a computationally and memory-efficient manner. Motivated by these observations, we propose a "stitching" method that defines a reference buffer and a reference frame selection algorithm, extending the referencing interval of inter-frame video coding to the entire length of a video sequence. Our reference frame selection algorithm uses well-established feature descriptor methods that capture frame structural elements in a compact and semantically rich manner. We combine such compact descriptors with a similarity scoring mechanism to select the frames to be "stitched" into the reference picture buffers of advanced inter-frame encoders like HEVC, VVC, and VP9 without breaking standard compliance. Our evaluation on synthetic and real-world video sequences with the HEVC and VVC reference encoders shows that the method offers significant rate gains, with complexity and memory requirements that remain manageable for practical encoders and decoders.
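    As a rough illustration of the selection mechanism described in the abstract, the sketch below pairs a stand-in compact descriptor (a normalized intensity histogram, substituting for the richer feature descriptors the paper builds on) with cosine-similarity scoring to decide which frames enter a bounded reference buffer. The function names, buffer policy, and threshold are hypothetical, not taken from the paper.

        import numpy as np

        def frame_descriptor(frame, bins=64):
            # Compact per-frame descriptor: a normalized intensity histogram.
            # (A stand-in for the semantically richer descriptors in the paper.)
            hist, _ = np.histogram(frame, bins=bins, range=(0, 255), density=True)
            return hist

        def similarity(a, b):
            # Cosine similarity between two descriptors.
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def select_references(frames, buffer_size=4, threshold=0.98):
            # Keep a small buffer of mutually dissimilar frames spanning the whole
            # sequence: a frame is "stitched" in only if no buffered reference
            # already represents it, evicting the oldest entry when full.
            buffer = []  # list of (frame_index, descriptor)
            for idx, frame in enumerate(frames):
                desc = frame_descriptor(frame)
                if any(similarity(desc, d) >= threshold for _, d in buffer):
                    continue  # a similar reference already exists
                if len(buffer) == buffer_size:
                    buffer.pop(0)  # evict the oldest reference
                buffer.append((idx, desc))
            return [idx for idx, _ in buffer]

        # Two repeated scenes: only the first occurrence of each is kept.
        rng = np.random.default_rng(0)
        scene_a = rng.integers(0, 256, size=(120, 160))
        scene_b = rng.integers(0, 256, size=(120, 160))
        print(select_references([scene_a, scene_b, scene_a, scene_b], buffer_size=2))

    The buffer bound reflects the memory trade-off the abstract mentions: the encoder never holds more than a handful of long-term references, however long the sequence is.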

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of each change. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to exploit the outstanding properties of event cameras. We present event cameras from their working principle to the sensors that are available and the tasks they have been applied to, from low-level vision (feature detection and tracking, optical flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based methods, as well as specialized processors for these novel sensors, such as spiking neural networks. Finally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
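    To make the event format above concrete, here is a minimal sketch of an event stream in the common (t, x, y, polarity) layout, together with one simple representation: accumulating per-pixel polarities over a time window into a frame-like image that conventional vision algorithms can consume. The dtype layout and function names are illustrative assumptions, not a specific sensor's API.

        import numpy as np

        # One event per row: timestamp (microseconds), pixel coordinates, and the
        # sign of the brightness change (+1 brighter, -1 darker).
        event_dtype = np.dtype([("t", np.int64), ("x", np.uint16),
                                ("y", np.uint16), ("p", np.int8)])

        def accumulate(events, width, height):
            # Sum event polarities per pixel over a time window: one of several
            # frame-like event representations discussed in the survey literature.
            img = np.zeros((height, width), dtype=np.int32)
            np.add.at(img, (events["y"], events["x"]), events["p"])
            return img

        # Three events at microsecond resolution: two positive at pixel (5, 3),
        # one negative at pixel (2, 1).
        events = np.array([(10, 5, 3, 1), (12, 5, 3, 1), (15, 2, 1, -1)],
                          dtype=event_dtype)
        print(accumulate(events, width=8, height=4))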

    A Brief Survey of Visual Saliency Detection
