79 research outputs found

    Super-Resolution in Still Images and Videos via Deep Learning

    Ph.D. Thesis. The evolution of multimedia systems and technology in the past decade has enabled production and delivery of visual content in high resolution, and the thirst for achieving higher definition pictures with more detailed visual characteristics continues. This brings attention to a critical computer vision task for spatial up-sampling of still images and videos called super-resolution. Recent advances in machine learning, and the application of deep neural networks, have resulted in major improvements in various computer vision applications. Super-resolution is not an exception, and it is amongst the popular topics that have been affected significantly by the emergence of deep learning. Employing modern machine learning solutions has made it easier to perform super-resolution in both images and videos, and has allowed professionals from different fields to upgrade low resolution content to higher resolutions with visually appealing picture fidelity. In spite of that, there remain many challenges to overcome in adopting deep learning concepts for designing efficient super-resolution models. In this thesis, the current trends in super-resolution, as well as the state of the art, are presented. Moreover, several contributions for improving the performance of deep learning-based super-resolution models are described in detail. The contributions include devising theoretical approaches, as well as proposing design choices that can lead to enhancing the existing art in super-resolution. In particular, an effective approach for training convolutional networks is proposed that can result in optimized and quick training of complex models. In addition, specific deep learning architectures with novel elements are introduced that can reduce the complexity of existing solutions and improve super-resolution models to achieve better picture quality.
Furthermore, the application of super-resolution for handling compressed content, and its functionality as a compression tool, are studied and investigated. COGNITUS media AI software funding.
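The abstract does not fix a particular network design, but one widely used building block in efficient deep SR models is sub-pixel (pixel-shuffle) upsampling, where all convolutions run at low resolution and a final depth-to-space rearrangement produces the high-resolution output. The NumPy sketch below illustrates only that rearrangement step; the shapes and the 2x example are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r).

    This depth-to-space step lets an SR network keep every convolution
    at low resolution and only expand to high resolution at the very
    end, which is what makes sub-pixel upsampling computationally cheap.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    out = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return out.reshape(c, h * r, w * r)

# 2x upscale: four low-res channels interleave into one high-res channel
lr = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
hr = pixel_shuffle(lr, 2)
print(hr.shape)  # (1, 4, 4)
```

Each output 2x2 neighbourhood is filled from the same spatial position of the four input channels, so no interpolation happens here; the preceding convolutions are what learn the upscaling.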

    Runtime methods for energy-efficient, image processing using significance driven learning.

    Ph.D. Thesis. Image and video processing applications are opening up a whole range of opportunities for processing at the "edge" or in IoT applications, as the demand for high-accuracy processing of high-resolution images increases. However, this comes with an increase in the quantity of data to be processed and stored, causing a significant increase in the computational challenges. There is a growing interest in developing hardware systems that provide energy-efficient solutions to this challenge. The challenges in image processing are unique because an increase in resolution not only increases the amount of data to be processed, but also greatly increases the amount of information detail that can be scavenged from the data. This thesis addresses the concept of extracting the significant image information to enable processing the data intelligently within a heterogeneous system. We propose a unique way of defining image significance, based on what causes us to react when something "catches our eye", whether it be static or dynamic, and whether it be in our central field of focus or our peripheral vision. This significance technique proves to be a relatively economical process in terms of energy and computational effort. We investigate opportunities for further computational and energy efficiency that become available through selective use of heterogeneous system elements. We utilise significance to adaptively select regions of interest for selective levels of processing dependent on their relative significance. We further demonstrate that, by exploiting the computational slack time released by this process, we can throttle the processor speed to effect greater energy savings. This demonstrates a reduction in both computational effort and energy consumption, a process that we term adaptive approximate computing.
We demonstrate that our approach reduces energy by 50 to 75%, dependent on user quality demand, for a real-time performance requirement of 10 fps for a WQXGA image, when compared with an existing approach that is agnostic of significance. We further hypothesise that, by use of heterogeneous elements, savings of up to 90% could be achievable in both performance and energy when compared with running OpenCV on the CPU alone.
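The thesis's actual significance metric combines static and dynamic saliency cues; as a minimal sketch of the overall flow, the code below uses plain frame differencing as a stand-in significance measure, maps each image block to a processing level, and measures the slack (blocks not needing full processing) that could be handed to DVFS-style throttling. The block size and thresholds are invented for illustration.

```python
import numpy as np

def block_significance(prev, curr, block=32):
    """Per-block significance as mean absolute frame difference.

    A stand-in for the thesis's richer significance metric; temporal
    change is the simplest proxy for "what catches the eye".
    """
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    h, w = diff.shape
    hb, wb = h // block, w // block
    return diff[:hb*block, :wb*block].reshape(hb, block, wb, block).mean(axis=(1, 3))

def processing_levels(sig, thresholds=(4.0, 16.0)):
    """Map significance to a per-block processing level:
    0 = skip, 1 = approximate, 2 = full precision."""
    return np.digitize(sig, thresholds)

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (128, 128), dtype=np.uint8)
curr = prev.copy()
curr[:32, :32] = rng.integers(0, 256, (32, 32))  # one "significant" moving region

levels = processing_levels(block_significance(prev, curr))
# Slack: fraction of blocks not needing full processing -- computational
# headroom that can be converted into processor throttling.
slack = (levels < 2).mean()
print(levels.shape, slack)
```

Only the top-left block changes between frames, so it alone is assigned full processing and the remaining 15 of 16 blocks contribute slack.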

    The quality of experience of emerging display technologies

    As new display technologies emerge and become part of everyday life, the understanding of the visual experience they provide becomes more relevant. The cognition of perception is the most vital component of visual experience; however, it is not the only cognition that contributes to the complex overall experience of the end-user. Expectations can create significant cognitive bias that may even override what the user genuinely perceives. Even if a visualization technology is somewhat novel, expectations can be fuelled by prior experiences gained from using similar displays and, more importantly, even a single word or an acronym may induce serious preconceptions, especially if such word suggests excellence in quality. In this interdisciplinary Ph.D. thesis, the effect of minimal, one-word labels on the Quality of Experience (QoE) is investigated in a series of subjective tests. In the studies carried out on an ultra-high-definition (UHD) display, UHD video contents were directly compared to their HD counterparts, with and without labels explicitly informing the test participants about the resolution of each stimulus. The experiments on High Dynamic Range (HDR) visualization addressed the effect of the word “premium” on the quality aspects of HDR video, and also how this may affect the perceived duration of stalling events. In order to support the findings, additional tests were carried out comparing the stalling detection thresholds of HDR video with conventional Low Dynamic Range (LDR) video. The third emerging technology addressed by this thesis is light field visualization. Due to its novel nature and the lack of comprehensive, exhaustive research on the QoE of light field displays and content parameters at the time of this thesis, instead of investigating the labeling effect, four phases of subjective studies were performed on light field QoE. 
The first phase started with fundamental research, and the experiments progressed towards the concept and evaluation of the dynamic adaptive streaming of light field video, introduced in the final phase.
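Subjective studies of this kind typically summarise ratings as mean opinion scores (MOS) with confidence intervals, in the style of ITU-R BT.500 analysis; a labeling effect then shows up as a MOS shift for the same stimulus under different labels. The sketch below illustrates that computation; the ratings are entirely made up and are not data from the thesis.

```python
import numpy as np

def mos_ci(scores, z=1.96):
    """Mean opinion score with a normal-approximation 95% CI half-width."""
    s = np.asarray(scores, dtype=np.float64)
    mos = s.mean()
    half = z * s.std(ddof=1) / np.sqrt(len(s))
    return mos, half

# Hypothetical 1-5 ratings of the SAME stimulus, with and without a
# "premium" label shown to participants -- illustrating how a one-word
# label can bias reported quality.
unlabeled = [3, 4, 3, 4, 3, 3, 4, 3, 4, 3]
labeled   = [4, 4, 5, 4, 4, 5, 4, 4, 4, 5]

for name, data in [("unlabeled", unlabeled), ("premium-labeled", labeled)]:
    mos, ci = mos_ci(data)
    print(f"{name}: MOS = {mos:.2f} +/- {ci:.2f}")
```

Non-overlapping confidence intervals between the two conditions would indicate that the label, not the signal, moved the reported quality.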

    Accelerating Foreground Object Detection for UHD Video

    Waseda University degree record number: Shin 7460. Waseda University.

    eCNN: A Block-Based and Highly-Parallel CNN Accelerator for Edge Inference

    Convolutional neural networks (CNNs) have recently demonstrated superior quality for computational imaging applications. Therefore, they have great potential to revolutionize the image pipelines on cameras and displays. However, it is difficult for conventional CNN accelerators to support ultra-high-resolution videos at the edge due to their considerable DRAM bandwidth and power consumption. Therefore, finding a further memory- and computation-efficient microarchitecture is crucial to speed up this coming revolution. In this paper, we approach this goal by considering the inference flow, network model, instruction set, and processor design jointly to optimize hardware performance and image quality. We apply a block-based inference flow which can eliminate all the DRAM bandwidth for feature maps and accordingly propose a hardware-oriented network model, ERNet, to optimize image quality based on hardware constraints. Then we devise a coarse-grained instruction set architecture, FBISA, to support power-hungry convolution by massive parallelism. Finally, we implement an embedded processor---eCNN---which accommodates ERNet and FBISA with a flexible processing architecture. Layout results show that it can support high-quality ERNets for super-resolution and denoising at up to 4K Ultra-HD 30 fps while using only DDR-400 and consuming 6.94W on average. By comparison, the state-of-the-art Diffy uses dual-channel DDR3-2133 and consumes 54.3W to support lower-quality VDSR at Full HD 30 fps. Lastly, we also present application examples of high-performance style transfer and object recognition to demonstrate the flexibility of eCNN. Comment: 14 pages; appearing in IEEE/ACM International Symposium on Microarchitecture (MICRO), 2019
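The core of the block-based inference flow is that each image tile is fetched once, with enough halo pixels to cover the convolution's receptive field, so intermediate feature maps never round-trip through DRAM. The NumPy sketch below demonstrates that idea reduced to a single 3x3 layer and plain Python loops; it does not model ERNet, FBISA, or the eCNN pipeline itself.

```python
import numpy as np

def conv3x3(img, k):
    """Reference 3x3 'same' convolution with zero padding."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            out[i, j] = (p[i:i+3, j:j+3] * k).sum()
    return out

def conv3x3_blocked(img, k, block=8):
    """Block-based version: each tile is read once together with a
    1-pixel halo, so all intermediate values stay 'on chip' (local to
    the tile) -- the principle behind eCNN's block-based flow,
    simplified here to one layer."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros((h, w), dtype=np.float32)
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            th, tw = min(block, h - bi), min(block, w - bj)
            tile = p[bi:bi+th+2, bj:bj+tw+2]  # tile plus halo
            for i in range(th):
                for j in range(tw):
                    out[bi+i, bj+j] = (tile[i:i+3, j:j+3] * k).sum()
    return out

rng = np.random.default_rng(1)
img = rng.random((16, 16)).astype(np.float32)
kernel = np.full((3, 3), 1.0 / 9.0)  # simple blur stands in for a CNN layer
same = np.allclose(conv3x3(img, kernel), conv3x3_blocked(img, kernel, block=5))
print(same)  # tiled result matches the full-frame convolution
```

The block size (5, deliberately not dividing 16) shows the edge-tile handling; for a deeper network the halo grows with the stacked receptive field, which is the main cost the hardware design must budget for.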

    Deep Video Compression
