
    Framework for reproducible objective video quality research with case study on PSNR implementations

    Reproducibility is an important and recurrent issue in objective video quality research because the presented algorithms are complex, depend on specific implementations in software packages, or their parameters need to be trained on a particular, sometimes unpublished, dataset. Textual descriptions often lack the required detail, and even for the simple Peak Signal-to-Noise Ratio (PSNR) several variants exist for images and videos, in particular regarding the choice of the peak value and the temporal pooling. This work presents results achieved through the analysis of objective video quality measures evaluated on a reproducible large-scale database containing about 60,000 HEVC-coded video sequences. We focus on PSNR, one of the most widespread measures, considering its two most common definitions. The sometimes largely different results obtained by applying the two definitions highlight the importance of strict reproducibility of research in video quality evaluation in particular. Reproducibility is also often a question of computational power, and PSNR is a computationally inexpensive algorithm running faster than real time. Complex algorithms cannot reasonably be developed and evaluated on the above-mentioned 160 hours of video sequences; therefore, techniques to select subsets of coding parameters are introduced. Results show that an accurate selection can preserve the variety of the results seen on the large database but at much lower complexity. Finally, note that our accompanying SoftwareX paper presents the software framework that allows full reproducibility of all the research results presented here, as well as how the same framework can be used to produce derived work for other measures or indexes proposed by other researchers, whose integration in our open framework we strongly encourage.
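    To make the ambiguity concrete: the two most common PSNR definitions differ in the choice of the peak value (for example 255 versus 2^bitdepth − 1) and in the temporal pooling (averaging per-frame PSNR values versus computing one PSNR from the sequence-wide MSE). The exact definitions compared in the paper are not reproduced in the abstract, so the following is only an illustrative Python/NumPy sketch of two plausible variants; the function names are hypothetical.

```python
import numpy as np

def psnr_frame_mean(ref, dist, peak=255.0):
    """Variant A: compute PSNR per frame, then average over time.

    ref, dist: arrays of shape (frames, height, width) in the same range.
    peak: assumed peak value (e.g. 255 for 8-bit, 2**bitdepth - 1 in general).
    """
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2, axis=(1, 2))
    mse = np.maximum(mse, 1e-12)          # avoid log of zero on identical frames
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def psnr_global_mse(ref, dist, peak=255.0):
    """Variant B: pool the MSE over the whole sequence first, then take one PSNR."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / max(mse, 1e-12)))

# Toy example: when distortion strength varies over time, the two poolings disagree.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(10, 64, 64)).astype(np.float64)
noise_std = np.linspace(1.0, 15.0, 10)[:, None, None]   # per-frame distortion strength
dist = np.clip(ref + rng.normal(size=ref.shape) * noise_std, 0, 255)
print(psnr_frame_mean(ref, dist), psnr_global_mse(ref, dist))
```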

    Reproducible research framework for objective video quality measures using a large-scale database approach

    This work presents a framework to facilitate reproducibility of research in video quality evaluation. Its initial version is built around the JEG-Hybrid database of HEVC-coded video sequences. The framework is modular, organized as pipelined activities, which range from the tools needed to generate the whole database from reference signals up to the analysis of the video quality measures already present in the database. Researchers can re-run, modify, and extend any module, starting from any point in the pipeline, while always achieving perfect reproducibility of the results. The modularity of the structure allows researchers to work on subsets of the database, since some analyses might be too computationally intensive on the full set. For this purpose, the framework also includes a software module to compute interesting subsets, in terms of coding conditions, of the whole database. An example shows how the framework can be used to investigate how small differences in the definition of the widespread PSNR metric can yield very different results, discussed in more detail in our accompanying research paper Aldahdooh et al. (0000). This further underlines the importance of reproducibility for comparing different research works with high confidence. To the best of our knowledge, this framework is the first attempt to bring exact end-to-end reproducibility to video quality evaluation research. (C) 2017 The Authors. Published by Elsevier B.V.
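    The abstract describes the framework only at the architectural level: pipelined, re-runnable modules that exchange intermediate artifacts. As a rough illustration of that shape (not the framework's actual API), here is a minimal Python sketch in which each stage consumes the artifacts produced so far and the pipeline can be re-entered at any stage; all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stage:
    """One pipelined activity: consumes named artifacts, produces new ones."""
    name: str
    run: Callable[[Dict[str, object]], Dict[str, object]]

def run_pipeline(stages: List[Stage], artifacts: Dict[str, object], start_at: str) -> Dict[str, object]:
    """Re-run the pipeline from `start_at`, reusing earlier artifacts unchanged."""
    started = False
    for stage in stages:
        if stage.name == start_at:
            started = True
        if started:
            artifacts.update(stage.run(artifacts))
    return artifacts

# Hypothetical stages mirroring the description: generate coded sequences,
# compute quality measures, then analyze them.
stages = [
    Stage("encode",  lambda a: {"sequences": ["seq_qp22", "seq_qp37"]}),
    Stage("measure", lambda a: {"psnr": {s: 40.0 for s in a["sequences"]}}),
    Stage("analyze", lambda a: {"report": sorted(a["psnr"].items())}),
]
print(run_pipeline(stages, {}, start_at="encode")["report"])
```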

    Algorithms for Reconstruction of Undersampled Atomic Force Microscopy Images


    Experimental evaluation of a video streaming system for Wireless Multimedia Sensor Networks

    Wireless Multimedia Sensor Networks (WMSNs) are emerging as an extension to traditional scalar wireless sensor networks, with the distinctive feature of supporting the acquisition and delivery of multimedia content such as audio, images, and video. In this paper, a complete framework is proposed and developed for streaming video flows in WMSNs. The framework is designed in a cross-layer fashion with three main building blocks: (i) a hybrid DPCM/DCT encoder; (ii) a congestion control mechanism; and (iii) a selective priority automatic repeat request mechanism at the MAC layer. The system has been implemented on the IntelMote2 platform operated by TinyOS and thoroughly evaluated through testbed experiments on multi-hop WMSNs. The source code of the whole system is publicly available to enable reproducible research. © 2011 IEEE
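    The first building block, the hybrid DPCM/DCT encoder, combines temporal prediction (coding the difference from a reference frame, i.e. DPCM) with a spatial DCT and quantization of the residual. The paper's actual encoder is not reproduced here; the following is only a minimal sketch of that general scheme for a single 8x8 block, assuming NumPy and SciPy.

```python
import numpy as np
from scipy.fftpack import dctn, idctn

def encode_block(block, reference, q_step=16.0):
    """DPCM step: predict from the co-located reference block, then DCT and quantize the residual."""
    residual = block.astype(np.float64) - reference.astype(np.float64)
    coeffs = dctn(residual, norm="ortho")
    return np.round(coeffs / q_step)

def decode_block(q_coeffs, reference, q_step=16.0):
    """Inverse path: dequantize, inverse DCT, add the prediction back."""
    residual = idctn(q_coeffs * q_step, norm="ortho")
    return np.clip(reference + residual, 0, 255)

# Toy example: a block that changed slightly with respect to the reference frame.
rng = np.random.default_rng(1)
ref_block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
cur_block = np.clip(ref_block + rng.normal(0, 5, (8, 8)), 0, 255)
rec_block = decode_block(encode_block(cur_block, ref_block), ref_block)
print("max reconstruction error:", float(np.max(np.abs(rec_block - cur_block))))
```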

    Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging

    The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, thereby improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
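    For context on the "standard interpolation methods" the networks are benchmarked against: the simplest baseline just blends the two neighbouring frames pixel-wise, which ignores motion and produces ghosting on moving structures, exactly the failure mode that motion/content-aware networks such as Zooming SlowMo address. A minimal NumPy sketch of that naive baseline (not the CAFI models themselves):

```python
import numpy as np

def linear_midpoint(frame_a, frame_b, t=0.5):
    """Naive temporal interpolation: pixel-wise blend of two consecutive frames.

    This ignores motion entirely, which is why motion-aware (content-aware)
    interpolation networks can do better on moving structures.
    """
    return (1.0 - t) * frame_a.astype(np.float64) + t * frame_b.astype(np.float64)

# Toy example: a bright spot that moves between frames becomes a ghosted double
# image with naive blending instead of a spot at the halfway position.
frame_a = np.zeros((32, 32)); frame_a[10, 10] = 255.0
frame_b = np.zeros((32, 32)); frame_b[10, 20] = 255.0
mid = linear_midpoint(frame_a, frame_b)
print(mid[10, 10], mid[10, 15], mid[10, 20])   # 127.5, 0.0, 127.5 -> ghosting
```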

    Image quality assessment for fake biometric detection: Application to Iris, fingerprint, and face recognition

    To ensure the actual presence of a real legitimate trait, in contrast to a fake self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches, and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits. This work has been partially supported by projects Contexts (S2009/TIC-1485) from CAM, Bio-Shield (TEC2012-34881) from Spanish MECD, TABULA RASA (FP7-ICT-257289) and BEAT (FP7-SEC-284989) from EU, and Cátedra UAM-Telefónica.
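    The abstract gives only the overall recipe: extract general image quality features from the single acquired image and feed them to a classifier that separates real from fake samples. The 25 specific features and the classifier used in the paper are not listed here, so the sketch below uses two generic stand-in features (a PSNR-like score against a smoothed copy and a gradient-based sharpness measure) and scikit-learn's linear discriminant analysis purely to illustrate the shape of such a pipeline; all of it is an assumption, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def quality_features(img):
    """Stand-in image quality features computed against a smoothed copy.

    The real method uses 25 general image quality measures; these two
    (a PSNR-like score and a sharpness proxy) are placeholders only.
    """
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=1.0)
    mse = np.mean((img - blurred) ** 2) + 1e-12
    psnr_like = 10.0 * np.log10(255.0 ** 2 / mse)
    gx, gy = np.gradient(img)
    sharpness = np.mean(np.hypot(gx, gy))
    return np.array([psnr_like, sharpness])

# Hypothetical toy data: label 1 = real trait, 0 = fake/spoofed trait
# (fakes simulated here as extra-smooth images, purely for illustration).
rng = np.random.default_rng(2)
real = [quality_features(rng.normal(128, 40, (64, 64))) for _ in range(20)]
fake = [quality_features(gaussian_filter(rng.normal(128, 40, (64, 64)), 2.0)) for _ in range(20)]
X = np.vstack(real + fake)
y = np.array([1] * 20 + [0] * 20)
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy on toy data:", clf.score(X, y))
```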