
    Adaptive Streaming: A subjective catalog to assess the performance of objective QoE metrics

    Scalable streaming has emerged as a feasible solution to users' heterogeneity problems. SVC is the technology that has served as the definitive impulse for the growth of adaptive streaming systems. These systems seek to improve layer-switching efficiency from the network point of view but, with increasing importance, without jeopardizing user-perceived video quality, i.e., QoE. We have performed extensive subjective experiments to corroborate the preference for adaptive systems over traditional non-adaptive systems. The resulting subjective scores are correlated with the most relevant Full Reference (FR) objective metrics. We obtain an exponential relationship between human decisions and the same decisions expressed as a difference of objective metrics. A strong correlation with subjective scores validates the use of objective metrics as an aid in adaptive decision-making algorithms to improve overall system performance. Results show that, among the evaluated objective metrics, PSNR is the one that provides the worst results in terms of reproducing the human decision.
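The exponential mapping between metric differences and human decisions can be sketched as follows. All numbers are illustrative assumptions, not the paper's data, and the one-parameter model p = 1 - exp(-k·Δ) is a hypothetical form fitted by a simple grid search:

```python
import numpy as np

# Hypothetical data: difference of an objective metric (e.g. delta-PSNR in dB
# between two stream variants) vs. the fraction of viewers preferring the
# better variant. Both arrays are invented for illustration only.
delta_metric = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
preference   = np.array([0.15, 0.28, 0.45, 0.70, 0.92])

# Fit p = 1 - exp(-k * delta) by least-squares grid search over k.
candidates = np.linspace(0.01, 1.0, 1000)
errors = [np.sum((1 - np.exp(-k * delta_metric) - preference) ** 2)
          for k in candidates]
k_best = candidates[int(np.argmin(errors))]

# Predicted preference under the fitted exponential relationship.
predicted = 1 - np.exp(-k_best * delta_metric)
```

A saturating curve of this shape captures the intuition that, beyond some metric difference, nearly all viewers make the same choice.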

    On the Sensor Pattern Noise Estimation in Image Forensics: A Systematic Empirical Evaluation

    Extracting a fingerprint of a digital camera has fertile applications in image forensics, such as source camera identification and image authentication. In the last decade, Photo Response Non-Uniformity (PRNU) has been well established as a reliable unique fingerprint of digital imaging devices. The PRNU noise appears in every image as a very weak signal, and its reliable estimation is crucial for the success rate of the forensic application. In this paper, we present a novel methodical evaluation of 21 state-of-the-art PRNU estimation/enhancement techniques that have been proposed in the literature in various frameworks. The techniques are classified and systematically compared based on their role/stage in the PRNU estimation procedure, manifesting their intrinsic impacts. The performance of each technique is extensively demonstrated over a large-scale experiment to conclude this case-sensitive study. The experiments have been conducted on our own database and a public image database, the 'Dresden Image Database'.
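As background, the basic maximum-likelihood PRNU estimator that the surveyed techniques build on can be sketched in a few lines. The 3x3 box denoiser and the synthetic flat-field images below are simplifying assumptions for the sketch; the literature typically uses wavelet-domain denoisers and real camera images:

```python
import numpy as np

def box_denoise(img):
    """Crude 3x3 box-filter denoiser (a stand-in for the wavelet denoisers
    used in the PRNU literature)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def estimate_prnu(images):
    """Maximum-likelihood PRNU estimate K = sum(W_i * I_i) / sum(I_i^2),
    where W_i = I_i - denoise(I_i) is the noise residual of image I_i."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img in images:
        img = img.astype(float)
        residual = img - box_denoise(img)
        num += residual * img
        den += img * img
    return num / np.maximum(den, 1e-8)

# Synthetic demo: flat-field images modulated by a known fingerprint K_true.
rng = np.random.default_rng(0)
K_true = 0.02 * rng.standard_normal((32, 32))
flats = [128 * (1 + K_true) + rng.standard_normal((32, 32))
         for _ in range(50)]
K_est = estimate_prnu(flats)
corr = np.corrcoef(K_true.ravel(), K_est.ravel())[0, 1]
```

Averaging residuals over many images suppresses the content-dependent noise while the multiplicative fingerprint accumulates, which is why the estimate correlates strongly with the planted `K_true`.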

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems. Consequently, image quality remains a barrier that curbs the full potential of pCLE. Enhancing the image quality of pCLE in real time remains a challenge. The research in this thesis is a response to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that relies on mapping from the low- to the high-resolution space. Due to the lack of high-resolution pCLE data, I proposed to simulate high-resolution data and use it as a ground-truth model based on the pCLE acquisition physics. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach by deploying unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: a robust image quality assessment and a perceptual quality test assessed by clinical experts.
The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
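As an illustration of the regression embedded in the trainable layer mentioned above, a plain (non-trainable) Nadaraya-Watson estimate that maps irregular fibre samples onto a regular grid might look like this. The Gaussian kernel, bandwidth, and toy signal are assumptions for the sketch, not the thesis implementation:

```python
import numpy as np

def nadaraya_watson(points, values, grid, bandwidth=1.0):
    """Reconstruct values on a regular grid from irregular samples with a
    Gaussian Nadaraya-Watson kernel estimate:
        f(x) = sum_i K(x - x_i) v_i / sum_i K(x - x_i).
    `points` is (N, 2), `values` is (N,), `grid` is (M, 2)."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (M, N)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w * values[None, :]).sum(-1) / np.maximum(w.sum(-1), 1e-12)

# Toy demo: sample a smooth function at random "fibre" positions and
# interpolate it back onto a 16x16 pixel grid.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 15, size=(400, 2))
vals = np.sin(pts[:, 0] / 5) + np.cos(pts[:, 1] / 5)
ys, xs = np.mgrid[0:16, 0:16]
grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
recon = nadaraya_watson(pts, vals, grid, bandwidth=1.5).reshape(16, 16)
truth = np.sin(ys / 5) + np.cos(xs / 5)
err = np.abs(recon - truth).mean()
```

Making the kernel weights (or the bandwidth) learnable, as the thesis does, lets the network adapt this interpolation to the actual fibre-bundle geometry instead of fixing it by hand.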

    Framework for reproducible objective video quality research with case study on PSNR implementations

    Reproducibility is an important and recurrent issue in objective video quality research because the presented algorithms are complex, depend on specific implementations in software packages, or have parameters that need to be trained on a particular, sometimes unpublished, dataset. Textual descriptions often lack the required detail, and even for the simple Peak Signal to Noise Ratio (PSNR) several mutations exist for images and videos, in particular considering the choice of the peak value and the temporal pooling. This work presents results achieved through the analysis of objective video quality measures evaluated on a reproducible large-scale database containing about 60,000 HEVC-coded video sequences. We focus on PSNR, one of the most widespread measures, considering its two most common definitions. The sometimes largely different results achieved by applying the two definitions highlight the importance of strict reproducibility in video quality evaluation in particular. Reproducibility is also often a question of computational power, and PSNR is a computationally inexpensive algorithm running faster than real time. Complex algorithms cannot be reasonably developed and evaluated on the above-mentioned 160 hours of video sequences. Therefore, techniques to select subsets of coding parameters are then introduced. Results show that an accurate selection can preserve the variety of the results seen on the large database but with much lower complexity. Finally, note that our accompanying SoftwareX paper presents the software framework that allows full reproducibility of all the research results presented here, as well as how the same framework can be used to produce derived work for other measures or indexes proposed by other researchers, whose integration into our open framework we strongly encourage.
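The two common PSNR definitions differ in where the temporal pooling happens: averaging per-frame PSNR values versus pooling the MSE over the whole sequence before the logarithm. A minimal sketch, assuming a fixed peak of 255 and synthetic frames, shows that the choice alone changes the reported value:

```python
import numpy as np

def psnr_mean_of_frames(ref, deg, peak=255.0):
    """Definition A: compute PSNR per frame, then average over time."""
    mse = ((ref - deg) ** 2).mean(axis=(1, 2))
    return float(np.mean(10 * np.log10(peak ** 2 / np.maximum(mse, 1e-12))))

def psnr_of_mean_mse(ref, deg, peak=255.0):
    """Definition B: pool the MSE over the whole sequence, then one log."""
    mse = float(((ref - deg) ** 2).mean())
    return 10 * np.log10(peak ** 2 / max(mse, 1e-12))

# Synthetic 10-frame sequence whose quality varies over time: the first
# half is lightly degraded, the second half heavily.
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(10, 32, 32)).astype(float)
deg = ref + np.concatenate([rng.normal(0, 1, (5, 32, 32)),
                            rng.normal(0, 10, (5, 32, 32))])
a = psnr_mean_of_frames(ref, deg)
b = psnr_of_mean_mse(ref, deg)
```

Whenever frame quality varies over time, definition A reports a higher value than definition B (Jensen's inequality applied to the convex -log of the MSE), which is exactly why a paper quoting "PSNR" without stating its pooling is hard to reproduce.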

    Application of deep learning upscaling technologies in cloud gaming solutions

    Throughout its history, the video game industry has seen the hardware requirements of the most popular titles on the market increase with each new release. This, together with the rise in the cost of the hardware components needed to run these video games, caused by the pandemic and by the boom of the cryptocurrency market, has led to the emergence of a new type of service: cloud gaming. Using video streaming technology, cloud gaming is able to offer its users the ability to play any of the latest releases on the market from any piece of hardware.
This emergence, however, has not happened without difficulties, because the high requirements in terms of latency and bandwidth make it hard for the user to have a good experience with the service. To try to improve this experience, this work explores the possible improvements that machine learning techniques could bring when upscaling the video traffic, so as to obtain a higher final resolution from a lower transmission resolution, reducing the required bandwidth and consequently the latency of the service.
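The bandwidth argument can be made concrete with a small sketch: transmit at half resolution in each dimension and upscale on the client. Plain bilinear interpolation stands in here for the learned upscaler, and the resolutions and data are illustrative assumptions:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Plain bilinear upscaling, the non-learned baseline that a trained
    super-resolution network would aim to beat."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)] +
            (1 - wy) * wx       * img[np.ix_(y0, x1)] +
            wy       * (1 - wx) * img[np.ix_(y1, x0)] +
            wy       * wx       * img[np.ix_(y1, x1)])

# Sending a 270x480 frame instead of 540x960 quarters the raw pixel
# budget; the client then upscales by 2x to recover the display size.
low = (np.arange(270 * 480, dtype=float).reshape(270, 480)) % 255.0
high = bilinear_upscale(low, 2)
bandwidth_ratio = low.size / high.size
```

A 2x downscale in both dimensions cuts the raw pixel budget to a quarter; the open question the thesis addresses is how much of the lost detail a learned upscaler can restore compared with this interpolation baseline.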

    06221 Abstracts Collection -- Computational Aesthetics in Graphics, Visualization and Imaging

    From 28.05.06 to 02.06.06, the Dagstuhl Seminar 06221 "Computational Aesthetics in Graphics, Visualization and Imaging" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Network emulation focusing on QoS-Oriented satellite communication

    This chapter presents the basics of network emulation and a complete case study of QoS-oriented satellite communication.

    Design Influencing: A Formulaic Approach to an Alternative Career Path

    The rising popularity of social media over the past two decades has brought many changes to the marketing world. Not only have companies turned to these social platforms for their own marketing efforts, but the surge has given rise to an entirely new industry: influencing. Influencers can be found in any genre, opening up entirely new avenues of income for many professionals. Professional influencers are those who are able to earn enough from their social media following to sustain their way of life. For designers and illustrators, this avenue has created the potential to market oneself in an entirely new way. Design influencers, as this research will refer to them, no longer market themselves in a B2B format; they have created entirely customizable careers based on their social media presence that allow them to market themselves in a manner closer to B2C practices: they have become the brand, and their audience is the consumer. This research will examine the patterns and circumstances that have allowed these designers to build careers in this manner and determine whether this path can be replicated consistently enough to validate its acknowledgment as a viable career path, rooted in strategy, that can be prepared for in post-secondary institutions.