14 research outputs found

    Implementing Real-Time Video Deblocking in FPGA Hardware

    Video compression techniques are commonly used to meet the increasing demands for the storage and transmission of digital video content. Popular video compression techniques such as MPEG video encoding make use of block-transform coding algorithms, which are susceptible to blocking artifacts. These artifacts can be reduced using a deblocking process, of which there are many. However, the deblocking algorithms that provide noticeable improvements in visual quality also tend to be computationally expensive and unsuitable for real-time video use. This dissertation selects and examines an appropriate algorithm for real-time video deblocking applications, and describes its hardware implementation on an Altera Cyclone II FPGA. The chosen algorithm is based on the concept of shifted thresholding; it reduces computational complexity by several means, such as by using only integer arithmetic and by replacing division operations with bit shifting. The implementation leverages the reduced hardware complexity of the chosen algorithm to cost-effectively implement real-time video deblocking.
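
    As a concrete illustration of the complexity-reduction ideas mentioned above (integer-only arithmetic, division replaced by bit shifting), here is a minimal Python sketch that smooths vertical 8x8 block boundaries only where the step across the boundary is small. It is written under our own assumptions and is not the dissertation's actual shifted-thresholding algorithm.

    import numpy as np

    def deblock_vertical_edges(img, block=8, threshold=16):
        """Integer-only smoothing across vertical block boundaries.
        Small steps (below `threshold`) are treated as blocking
        artifacts and softened; large steps are kept as real edges.
        Division by 4 is replaced with an arithmetic right shift, in
        the spirit of the hardware-friendly approach described above."""
        out = img.astype(np.int32)                # working copy, integer math only
        h, w = out.shape
        for x in range(block, w, block):
            a = out[:, x - 1]                     # last column of the left block
            b = out[:, x]                         # first column of the right block
            diff = b - a
            mask = np.abs(diff) < threshold       # artifact, not a real edge
            # move each side a quarter of the way toward the other (>> 2 == /4)
            out[mask, x - 1] = a[mask] + (diff[mask] >> 2)
            out[mask, x] = b[mask] - (diff[mask] >> 2)
        return out.astype(np.uint8)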

    Reduced reference image and video quality assessments: review of methods

    With the growing demand for image and video-based applications, the requirements for consistent quality assessment metrics of image and video have increased. Different approaches have been proposed in the literature to estimate the perceptual quality of images and videos. These approaches can be divided into three main categories: full reference (FR), reduced reference (RR), and no-reference (NR). In RR methods, instead of providing the original image or video as a reference, we need to provide certain features (e.g., texture, edges) of the original image or video for quality assessment. During the last decade, RR-based quality assessment has been a popular research area for a variety of applications such as social media, online games, and video streaming. In this paper, we present a review and classification of the latest research work on RR-based image and video quality assessment. We have also summarized the different databases used in the field of 2D and 3D image and video quality assessment. This paper should help specialists and researchers stay well-informed about recent progress in RR-based image and video quality assessment. The review and classification presented here will also be useful for gaining an understanding of multimedia quality assessment and the state-of-the-art approaches used for its analysis. In addition, it will help the reader select appropriate quality assessment methods and parameters for their respective applications.
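
    To make the RR idea concrete, here is a small Python sketch in which the sender transmits only a tiny edge-strength histogram of the original, and the receiver compares it against the same statistics of the degraded image. The gradient-magnitude feature is our illustrative assumption, not a method from the paper.

    import numpy as np

    def rr_features(img, bins=8):
        """Sender side: reduce the image to a small edge-strength
        histogram (a few numbers instead of the full reference)."""
        gy, gx = np.gradient(img.astype(np.float64))
        mag = np.hypot(gx, gy)
        hist, _ = np.histogram(mag, bins=bins, range=(0, 255), density=True)
        return hist

    def rr_quality(ref_features, distorted, bins=8):
        """Receiver side: a smaller distance between the feature sets
        suggests quality closer to the original."""
        return float(np.abs(ref_features - rr_features(distorted, bins)).sum())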

    Visual Data Compression for Multimedia Applications

    The compression of visual information in the framework of multimedia applications is discussed. To this end, major approaches to compressing still as well as moving pictures are reviewed. The most important objective in any compression algorithm is compression efficiency. High-compression coding of still pictures can be split into three categories: waveform, second-generation, and fractal coding techniques. Each coding approach introduces a different artifact at the target bit rates. The primary objective of most ongoing research in this field is to mask these artifacts as much as possible from the human visual system. Video-compression techniques have to deal with data enriched by one more component, namely, the temporal coordinate. Either compression techniques developed for still images can be generalized to three-dimensional signals (space and time), or a hybrid approach can be defined based on motion compensation. Video compression techniques can then be classified into the following four classes: waveform, object-based, model-based, and fractal coding techniques. This paper provides the reader with a tutorial on major visual data-compression techniques and a list of references for further information on the details of each method.

    Block-level discrete cosine transform coefficients for autonomic face recognition

    This dissertation presents a novel method of autonomic face recognition based on the recently proposed biologically plausible network of networks (NoN) model of information processing. The NoN model is based on locally parallel and globally coordinated transformations. In the NoN architecture, the neurons or computational units form distributed networks, which themselves link to form larger networks. In the general case, an n-level hierarchy of nested distributed networks is constructed. This models the structures in the cerebral cortex described by Mountcastle and the architecture for information processing proposed by Sutton. In the implementation proposed in the dissertation, the image is processed by a nested family of locally operating networks along with a hierarchically superior network that classifies the information from each of the local networks. This approach helps obtain sensitivity to the contrast sensitivity function (CSF) in the middle of the spectrum, as is true for the human visual system. The input images are divided into blocks to define the local regions of processing. The two-dimensional Discrete Cosine Transform (DCT), a spatial frequency transform, is used to transform the data into the frequency domain. Thereafter, statistical operators that calculate various functions of spatial frequency in the block are used to produce a block-level DCT coefficient. The image is thus transformed into a variable-length feature vector that is trained with respect to the data set. Classification is performed using a backpropagation neural network. The proposed method yields excellent results on a benchmark database: the experiments yielded a maximum of 98.5% recognition accuracy and an average of 97.4% recognition accuracy. An advanced version of the method, in which the local processing is done on offset blocks, has also been developed. This validates the NoN approach, and further research using local processing as well as more advanced global operators is likely to yield even better results.
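
    A minimal Python sketch of the block-level feature idea described above: take each block's 2D DCT and collapse it into one statistic per block, producing a short feature vector for the classifier. The specific statistic used here (AC energy) is an illustrative assumption, not necessarily the operator used in the dissertation.

    import numpy as np
    from scipy.fft import dctn

    def block_dct_features(img, block=8):
        """One DCT-derived coefficient per block; the concatenation is
        the feature vector fed to the backpropagation network."""
        h, w = img.shape
        feats = []
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                c = dctn(img[y:y + block, x:x + block].astype(np.float64),
                         norm='ortho')
                c[0, 0] = 0.0                          # discard the DC term
                feats.append(np.sqrt(np.sum(c ** 2)))  # AC-energy statistic
        return np.asarray(feats)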

    Adaptive Edge-guided Block-matching and 3D filtering (BM3D) Image Denoising Algorithm

    Image denoising is a well-studied field, yet reducing noise in images remains a challenge. The recently proposed Block-matching and 3D filtering (BM3D) algorithm is the current state of the art for denoising images corrupted by additive white Gaussian noise (AWGN). Though BM3D outperforms all existing methods for AWGN denoising, its performance decreases as the noise level increases, since it is harder to find proper matches for reference blocks in the presence of highly corrupted pixel values. It also blurs sharp edges and textures. To overcome these problems, we propose an edge-guided BM3D with selective pixel restoration. At higher noise levels it is possible to detect noisy pixels from the gray-level statistics of their neighborhoods. We exploit this property to reduce noise as much as possible by applying a pre-filter. We also introduce an edge-guided pixel restoration process in the hard-thresholding step of BM3D to restore the sharpness of edges and textures. Experimental results confirm that the proposed method is competitive and outperforms the state-of-the-art BM3D in all considered subjective and objective quality measurements, particularly in preserving edges, textures, and image contrast.
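
    The neighborhood-statistics pre-filter described above can be sketched as follows in Python; the 3x3 median test with a fixed multiple of a robust noise estimate is our simplification, not the paper's exact rule.

    import numpy as np
    from scipy.ndimage import median_filter

    def prefilter_outliers(img, k=3.0):
        """Flag pixels that deviate strongly from their local gray-level
        statistics and restore only those, leaving the remaining pixels
        for the subsequent BM3D-style filtering."""
        img = np.asarray(img, dtype=np.float64)
        med = median_filter(img, size=3)
        resid = img - med
        sigma = 1.4826 * np.median(np.abs(resid))  # MAD-based noise estimate
        noisy = np.abs(resid) > k * sigma          # implausible under local stats
        out = img.copy()
        out[noisy] = med[noisy]                    # selective pixel restoration
        return out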

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities, and show through experiments over two well-known datasets (Weizmann, MuHAVi) that it achieves a remarkable improvement in classification accuracy. © 2011 IEEE
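
    A small numeric illustration of the robustness argument: an outlier's log-likelihood falls off quadratically under a Gaussian but only logarithmically under a Student's t, so a single corrupted feature vector cannot dominate the observation probabilities. The values and degrees of freedom below are arbitrary.

    from scipy.stats import norm, t

    inlier, outlier = 0.5, 8.0     # feature values in standard-deviation units
    for x in (inlier, outlier):
        print(f"x={x}: Gaussian logpdf = {norm.logpdf(x):7.2f}, "
              f"t(df=3) logpdf = {t.logpdf(x, df=3):7.2f}")
    # For the inlier the two densities agree closely (about -1.0 vs -1.2),
    # but for the outlier the Gaussian log-likelihood is roughly -33 versus
    # roughly -7 under the t-distribution, so Gaussian-based estimates are
    # dragged much harder toward outliers.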

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In scenarios with restricted transmission or storage resources, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size, rich in information that can be retrieved for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    Engineering Education and Research Using MATLAB

    MATLAB is a software package used primarily in the field of engineering for signal processing, numerical data analysis, modeling, programming, simulation, and computer graphic visualization. In the last few years, it has become widely accepted as an efficient tool, and its use has therefore significantly increased in scientific communities and academic institutions. This book consists of 20 chapters presenting research work that uses MATLAB tools. Chapters include techniques for programming and developing graphical user interfaces (GUIs), dynamic systems, electric machines, signal and image processing, power electronics, mixed-signal circuits, genetic programming, digital watermarking, control systems, time-series regression modeling, and artificial neural networks.

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of both eyes, allowing for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, and provides better contrast and more natural-looking scenes. The combination of the two technologies, in order to gain the advantages of both, has until now been mostly unexplored due to limitations in the current imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images; the results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing these static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. The results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed, and five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at little more than the size of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representations of captured scenes.
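
    The backward-compatibility idea can be sketched as a tone-mapped 8-bit base layer that any legacy decoder can play, plus a small log-ratio residual that only SHDR-aware decoders apply (run once per eye of the stereo pair). The Python construction below is a common one chosen for illustration, not the specific codec proposed in the thesis.

    import numpy as np

    def simple_tonemap(hdr):
        """Placeholder global operator: log compression to 8 bits."""
        l = np.log1p(hdr)
        return (255.0 * l / l.max()).astype(np.uint8)

    def encode_backward_compatible(hdr, tonemap=simple_tonemap):
        base = tonemap(hdr)                          # layer a legacy player decodes
        ratio = hdr / np.maximum(base.astype(np.float64), 1.0)
        residual = np.log2(np.maximum(ratio, 1e-6))  # side layer for SHDR decoders
        return base, residual

    def decode_hdr(base, residual):
        # SHDR-aware path: reapply the ratio; a legacy player ignores `residual`
        return np.maximum(base.astype(np.float64), 1.0) * np.exp2(residual)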