
    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
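    For orientation, the two theorems underpinning DVC can be stated compactly. The block below is a standard textbook summary added here for illustration, not material from the reviewed paper; the notation (entropy H, mutual information I, auxiliary variable U) follows the usual information-theoretic conventions.

```latex
% Slepian–Wolf: two correlated sources X and Y are encoded separately and
% decoded jointly. Lossless recovery is possible for any rate pair
% (R_X, R_Y) satisfying
\begin{align}
  R_X &\ge H(X \mid Y), \\
  R_Y &\ge H(Y \mid X), \\
  R_X + R_Y &\ge H(X, Y).
\end{align}
% Wyner–Ziv: lossy coding of X with side information Y available only at the
% decoder. With U obtained from X via a test channel p(u|x) (so U -- X -- Y
% forms a Markov chain) and a reconstruction function \hat{x}(u, y), the
% rate–distortion function is
\begin{equation}
  R_{\mathrm{WZ}}(D) \;=\; \min_{p(u \mid x),\, \hat{x}(u,y)} I(X; U \mid Y)
  \quad \text{s.t.} \quad
  \mathbb{E}\!\left[ d\!\left( X, \hat{X}(U, Y) \right) \right] \le D .
\end{equation}
```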

    Lossless Compression of Neuromorphic Vision Sensor Data Based on Point Cloud Representation

    Visual information varying over time is typically captured by cameras that acquire data via images (frames) equally spaced in time. Using a different approach, Neuromorphic Vision Sensors (NVSs) are emerging visual capturing devices that only acquire information when changes occur in the scene. This results in major advantages in terms of low power consumption, wide dynamic range, high temporal resolution, and lower data rates than conventional video. Although the acquisition strategy already results in much lower data rates than conventional video, such data can be further compressed. To this end, in this paper we propose a lossless compression strategy based on point cloud compression, inspired by the observation that, by appropriately arranging NVS data in a three-dimensional (x, y, t) space, we obtain a point cloud representation of the NVS data. The proposed strategy outperforms the benchmark strategies, resulting in a compression ratio up to 30% higher for the considered datasets.
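    As a rough illustration of the representation idea (not the authors' implementation), the sketch below maps a stream of NVS events into an (x, y, t) point cloud that a generic lossless point cloud codec could then compress; the event-tuple layout and the time-axis scaling factor are assumptions made for the example.

```python
import numpy as np

def events_to_point_cloud(events, time_scale=1e-3):
    """Map NVS events (x, y, timestamp, polarity) to an (x, y, t) point cloud.

    `events` is assumed to be an array of shape (N, 4) with columns
    [x, y, timestamp_us, polarity]; this layout is illustrative only.
    """
    events = np.asarray(events)
    x = events[:, 0].astype(np.float64)
    y = events[:, 1].astype(np.float64)
    # Scale time so the third axis has a resolution comparable to x and y;
    # the scale factor is an assumption, not a value from the paper.
    t = events[:, 2].astype(np.float64) * time_scale
    points = np.stack([x, y, t], axis=1)      # (N, 3) geometry of the cloud
    polarity = events[:, 3].astype(np.int8)   # kept as a per-point attribute
    return points, polarity

# Usage: hand `points` (geometry) and `polarity` (attribute) to a lossless
# point cloud coder, e.g. an octree-based geometry codec.
if __name__ == "__main__":
    demo = np.array([[10, 20, 1500, 1],
                     [11, 20, 1520, 0],
                     [10, 21, 1600, 1]])
    pts, pol = events_to_point_cloud(demo)
    print(pts.shape, pol)
```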

    Microangiogram Video Compression Using Adaptive Prediction

    Coronary angiography is an X-ray examination of the heart's arteries and an essential technique for diagnosing heart damage. Image sequences from digital angiography contain areas of high diagnostic interest, and loss of information due to compression in these regions of interest (ROI) is not tolerable. Commercially available technologies such as JPEG and MPEG do not satisfy medical requirements because of their severe block artifacts. In this paper, a new compression algorithm that achieves a high compression ratio and excellent reconstruction quality for video-rate or sub-video-rate angiograms is developed. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward-adaptive fashion with extremely low side information. Experimental results show that the proposed scheme provides significant improvements in compression efficiency.
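    To make the backward-adaptive idea concrete, the minimal sketch below predicts each pixel from already-coded spatial and temporal neighbours and adapts the predictor weights from those causal samples only, so the decoder can mirror the adaptation and no weight side information needs to be transmitted. The context choice and the LMS-style update are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def backward_adaptive_residuals(frames, mu=1e-6):
    """Prediction residuals from a backward-adaptive linear predictor.

    Each pixel is predicted from its left and upper neighbours in the current
    frame and the co-located pixel in the previous frame.  Weights are updated
    only from already-coded (causal) samples, so encoder and decoder stay in
    sync without transmitting the weights.  Illustrative sketch only.
    """
    frames = np.asarray(frames, dtype=np.float64)
    T, H, W = frames.shape
    w = np.array([0.4, 0.4, 0.2])          # [left, up, temporal] initial weights
    residuals = np.zeros_like(frames)
    for t in range(1, T):
        prev, cur = frames[t - 1], frames[t]
        for i in range(1, H):
            for j in range(1, W):
                ctx = np.array([cur[i, j - 1], cur[i - 1, j], prev[i, j]])
                pred = float(w @ ctx)
                err = cur[i, j] - pred
                residuals[t, i, j] = err
                # Backward adaptation: LMS update driven by the coded error.
                w += mu * err * ctx
    return residuals
```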

    Directional Transforms for Video Coding Based on Lifting on Graphs

    In this work we describe and optimize a general scheme based on lifting transforms on graphs for video coding. A graph is constructed to represent the video signal: each pixel becomes a node in the graph, and links between nodes represent similarity between them. Therefore, spatial neighbors and temporal motion-related pixels can be linked, while non-similar pixels (e.g., pixels across an edge) may not be. Then, a lifting-based transform, in which filtering operations are performed using linked nodes, is applied to this graph, leading to a 3-dimensional (spatio-temporal) directional transform that can be viewed as an extension of wavelet transforms to video. The design of the proposed scheme requires four main steps: (i) graph construction, (ii) graph splitting, (iii) filter design, and (iv) extension of the transform to different levels of decomposition. We focus on the optimization of these steps in order to obtain an effective transform for video coding. Furthermore, based on this scheme, we propose a coefficient reordering method and an entropy coder, leading to a complete video encoder that achieves better coding performance than a motion-compensated temporal filtering wavelet-based encoder and a simple encoder derived from H.264/AVC that makes use of tools similar to our proposed encoder (reference software JM15.1 configured to use 1 reference frame, no subpixel motion estimation, 16 × 16 inter and 4 × 4 intra modes). This work was supported in part by NSF under grant CCF-1018977 and by the Spanish Ministry of Economy and Competitiveness under grants TEC2014-53390-P and TEC2014-52289-R.
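    To illustrate the lifting idea on a graph, the sketch below runs one decomposition level: nodes are split into a prediction set and an update set, each prediction node is predicted from its linked update neighbours (giving detail coefficients), and update nodes are then smoothed with those details (giving the approximation). The simple averaging filters and the example splitting are assumptions for illustration, not the optimized design from the paper.

```python
import numpy as np

def lifting_level(signal, adjacency, predict_set, update_set):
    """One level of a lifting transform on a graph (illustrative sketch).

    signal      : 1-D array, one value per node (e.g. pixel intensities)
    adjacency   : {node: set of linked nodes}, links encode similarity
    predict_set : nodes whose values become detail (high-pass) coefficients
    update_set  : nodes whose values become smooth (low-pass) coefficients
    """
    signal = np.asarray(signal, dtype=np.float64).copy()
    # Predict step: detail = value minus the average of linked update nodes.
    for n in predict_set:
        nbrs = [m for m in adjacency[n] if m in update_set]
        if nbrs:
            signal[n] -= np.mean([signal[m] for m in nbrs])
    # Update step: smooth nodes absorb half of the neighbouring details so the
    # low-pass branch keeps the local average.
    for n in update_set:
        nbrs = [m for m in adjacency[n] if m in predict_set]
        if nbrs:
            signal[n] += 0.5 * np.mean([signal[m] for m in nbrs])
    return signal  # details on predict_set, approximation on update_set

# Example: a 4-node path graph 0-1-2-3, odd nodes predicted from even ones.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
out = lifting_level(np.array([1.0, 2.0, 3.0, 4.0]), adj,
                    predict_set={1, 3}, update_set={0, 2})
print(out)
```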

    Non-disruptive use of light fields in image and video processing

    In the age of computational imaging, cameras capture not only an image but also data. This additional captured data can best be used for photo-realistic renderings, facilitating numerous post-processing possibilities such as perspective shift, depth scaling, digital refocus, 3D reconstruction, and much more. In computational photography, light field imaging technology captures the complete volumetric information of a scene. This technology has the highest potential to accelerate immersive experiences towards close-to-reality, and it has gained significance in both commercial and research domains. However, due to the lack of coding and storage formats and the incompatibility of the tools needed to process and make the data available, light fields are not exploited to their full potential. This dissertation approaches the integration of light field data into image and video processing. Towards this goal, it addresses the representation of light fields using advanced file formats designed for 2D image assemblies, in order to facilitate asset re-usability and interoperability between applications and devices. The novel 5D light field acquisition and the ongoing research on coding frameworks are presented. Multiple techniques for optimised sequencing of light field data are also proposed. As light fields contain the complete 3D information of a scene, large amounts of highly redundant data are captured. Hence, by pre-processing the data using the proposed approaches, excellent coding performance can be achieved.
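    As an illustrative sketch of re-using 2D image tooling for light field data (not the dissertation's actual formats), the example below packs a 4D light field of sub-aperture views into a single 2D atlas image, and alternatively orders the views as a pseudo-video sequence along a snake scan so that consecutive "frames" are similar and a standard codec can exploit the inter-view redundancy; the array shapes and the scan order are assumptions.

```python
import numpy as np

def lightfield_to_atlas(lf):
    """Pack a 4-D light field L[u, v, y, x] of sub-aperture views into one
    2-D atlas image, so ordinary 2-D image formats and codecs can store it."""
    U, V, H, W = lf.shape
    return lf.transpose(0, 2, 1, 3).reshape(U * H, V * W)

def lightfield_to_sequence(lf):
    """Order sub-aperture views as a pseudo-video sequence via a snake scan
    over the (u, v) grid, keeping neighbouring views adjacent in time."""
    U, V, _, _ = lf.shape
    frames = []
    for u in range(U):
        vs = range(V) if u % 2 == 0 else range(V - 1, -1, -1)
        for v in vs:
            frames.append(lf[u, v])
    return np.stack(frames)   # shape (U * V, H, W)

if __name__ == "__main__":
    lf = np.random.rand(5, 5, 32, 32)        # toy 5x5 grayscale light field
    print(lightfield_to_atlas(lf).shape)     # (160, 160)
    print(lightfield_to_sequence(lf).shape)  # (25, 32, 32)
```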