Survey of Error Concealment techniques: Research directions and open issues
© 2015 IEEE. Error Concealment (EC) techniques use spatial information, temporal information, or a combination of both to recover data lost in transmitted video. In this paper, existing EC techniques are reviewed and divided into three categories: Intra-frame EC, Inter-frame EC, and Hybrid EC techniques. We first focus on the EC techniques developed for the H.264/AVC standard, summarizing their advantages and disadvantages with respect to the features of H.264. We then analyze the EC algorithms recently adopted in the newly introduced H.265/HEVC standard. A performance comparison between the classic EC techniques developed for H.264 and H.265 is carried out in terms of average PSNR. Lastly, open issues in the EC domain are addressed for future research consideration.
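The survey's comparison metric, average PSNR, can be computed directly from a reference frame and its reconstruction. A minimal sketch, assuming 8-bit frames (the function name and peak value are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized frames."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)
```

Averaging this value over all concealed frames of a sequence gives the figure the survey uses to rank EC techniques.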
Error Correction and Concealment of Block Based, Motion-Compensated Temporal Prediction, Transform Coded Video
David L. Robie
133 Pages
Directed by Dr. Russell M. Mersereau
The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption due to errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, automatic repeat request (ARQ) is not practical. Therefore, receivers must be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Motion Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs-of-concept in the area of error concealment and correction in block-based video transmission: a temporal error concealment algorithm which uses motion-compensated macroblocks from previous frames; a spatial error concealment algorithm which uses the Hough transform to detect edges in both foreground and background colors and uses directional interpolation or directional filtering to provide improved edge reproduction; a codec which uses data hiding to transmit error correction information; an enhanced codec which builds upon the last by improving performance in the error-free environment while maintaining excellent error recovery capabilities; and a method to allocate Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (using a PSNR metric) at the receiver compared to standard FEC techniques. Finally, under the constraint of a constant bit rate, the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI) is evaluated.
Each of these developments is compared and contrasted with state-of-the-art techniques and shows improvements under widely accepted metrics. The dissertation concludes with a discussion of future work. Ph.D. Committee Chair: Mersereau, Russell; Committee Members: Altunbasak, Yucel; Fekri, Faramarz; Lanterman, Aaron; Zhou, Haomi
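The first contribution above, temporal concealment with motion-compensated macroblocks, is commonly realized as a boundary-matching search over candidate motion vectors. The sketch below is a generic illustration of that idea, not the dissertation's algorithm; the candidate set, block size, and the requirement that the block not touch the frame's top or left edge are assumptions of this sketch:

```python
import numpy as np

def conceal_macroblock(prev_frame, cur_frame, top, left, size=16,
                       candidate_mvs=((0, 0), (0, 1), (1, 0), (0, -1), (-1, 0))):
    """Conceal a lost block by a motion-compensated copy from the previous frame.

    Each candidate motion vector is scored by a boundary-matching error
    against the correctly decoded pixels just above and to the left of the
    gap; the best-scoring prediction is pasted into the current frame.
    Assumes top >= 1 and left >= 1 so the decoded border rows/columns exist.
    """
    best_err, best_patch = None, None
    for dy, dx in candidate_mvs:
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + size > prev_frame.shape[0] or x + size > prev_frame.shape[1]:
            continue  # candidate falls outside the previous frame
        patch = prev_frame[y:y + size, x:x + size].astype(np.float64)
        # mismatch between the patch border and the decoded neighbourhood
        err = (np.abs(patch[0, :] - cur_frame[top - 1, left:left + size]).sum()
               + np.abs(patch[:, 0] - cur_frame[top:top + size, left - 1]).sum())
        if best_err is None or err < best_err:
            best_err, best_patch = err, patch
    cur_frame[top:top + size, left:left + size] = best_patch
    return cur_frame
```

In practice the candidate list would also include the motion vectors of the surrounding correctly received macroblocks, which is what makes the copy "motion-compensated" rather than simply co-located.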
Error resilient image transmission using T-codes and edge-embedding
Current image communication applications involve image transmission over noisy channels, where the image can be damaged. The loss of synchronization at the decoder due to these errors increases the damage in the reconstructed image. Our main goal in this research is to develop an algorithm capable of detecting errors, achieving synchronization, and concealing errors. In this thesis we study the performance of T-codes in comparison with Huffman codes and develop an algorithm for the selection of the best T-code set. We show that T-codes exhibit better synchronization properties than Huffman codes. We also develop an algorithm that extracts edge patterns from each 8x8 block and classifies them into different classes. In addition, we propose a novel scrambling algorithm that hides the edge pattern of a block in the neighboring 8x8 blocks of the image. This scrambled hidden data is used in the detection and concealment of errors. We also develop an algorithm to protect the hidden data from damage in the course of transmission.
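The per-block edge-pattern extraction step can be illustrated with a simple gradient-orientation classifier. This is a hypothetical stand-in for the thesis's (unspecified) edge classes: central differences approximate the gradient, and the energy-weighted orientation of each 8x8 block is quantised into a small number of bins, with a sentinel value for flat blocks:

```python
import numpy as np

def block_edge_classes(image, block=8, n_classes=4):
    """Classify each block of a grayscale image by dominant gradient orientation.

    Returns a (rows/block, cols/block) array of class indices in
    [0, n_classes), or -1 for blocks with no measurable edge energy.
    """
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)  # derivatives along rows, then columns
    h, w = img.shape
    classes = np.full((h // block, w // block), -1, dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            mag = np.hypot(gx[sl], gy[sl])
            if mag.sum() < 1e-6:
                continue  # flat block: no edge pattern to hide
            ang = np.arctan2(gy[sl], gx[sl]) % np.pi  # orientation in [0, pi)
            mean_ang = np.average(ang, weights=mag + 1e-12)
            classes[by, bx] = int(mean_ang / np.pi * n_classes) % n_classes
    return classes
```

A compact class index per block is exactly the kind of low-rate side information that can then be scrambled into neighboring blocks for error detection and concealment.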
Image analysis using visual saliency with applications in hazmat sign detection and recognition
Visual saliency is the perceptual process that makes attractive objects stand out from their surroundings in the low-level human visual system. Visual saliency has been modeled as a preprocessing step of the human visual system for selecting the important visual information from a scene. We investigate bottom-up visual saliency using spectral analysis approaches. We present separate and composite model families that generalize existing frequency-domain visual saliency models. We propose several frequency-domain visual saliency models that generate saliency maps using new spectrum processing methods and an entropy-based saliency map selection approach: a group of saliency map candidates is obtained by inverse transform, and a final saliency map is selected among the candidates by minimizing their entropy. The proposed models based on the separate and composite model families are also extended to various color spaces. We develop an evaluation tool for benchmarking visual saliency models. Experimental results show that the proposed models are more accurate and efficient than most state-of-the-art visual saliency models in predicting eye fixations. We use the above visual saliency models to detect the location of hazardous material (hazmat) signs in complex scenes. We develop a hazmat sign location detection and content recognition system using visual saliency. Saliency maps are employed to extract salient regions that are likely to contain hazmat sign candidates, and a Fourier descriptor based contour matching method is then used to locate the borders of hazmat signs in these regions. This visual saliency based approach is able to increase the accuracy of sign location detection, reduce the number of false positive objects, and speed up the overall image analysis process. We also propose a color recognition method to interpret the color inside the detected hazmat sign.
Experimental results show that our proposed hazmat sign location detection method is capable of detecting and recognizing projectively distorted, blurred, and shaded hazmat signs at various distances. In other work we investigate error concealment for scalable video coding (SVC). When video compressed with SVC is transmitted over loss-prone networks, the decompressed video can suffer severe visual degradation across multiple frames. In order to enhance the visual quality, we propose an inter-layer error concealment method using motion vector averaging and slice interleaving to deal with burst packet losses and error propagation. Experimental results show that the proposed error concealment methods outperform two existing methods.
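One classic member of the frequency-domain family that this work generalizes is the spectral-residual model of Hou and Zhang (2007): smooth the log-amplitude spectrum, keep only the residual, and transform back with the original phase. A minimal single-channel sketch, not this thesis's model; the 3x3 averaging size is the conventional choice, assumed here:

```python
import numpy as np

def spectral_residual_saliency(image, avg_size=3):
    """Saliency map from the spectral residual of the log-amplitude spectrum."""
    f = np.fft.fft2(image.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    # local average of the log-amplitude spectrum (direct convolution keeps
    # the sketch short; a box filter via cumulative sums would be faster)
    kernel = np.ones((avg_size, avg_size)) / avg_size ** 2
    pad = avg_size // 2
    padded = np.pad(log_amp, pad, mode="edge")
    smooth = np.zeros_like(log_amp)
    for i in range(avg_size):
        for j in range(avg_size):
            smooth += kernel[i, j] * padded[i:i + log_amp.shape[0],
                                            j:j + log_amp.shape[1]]
    residual = log_amp - smooth  # what remains after removing the smooth trend
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()
```

The entropy-based selection step described above would then be applied across several such candidate maps, keeping the one with minimum entropy.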
Multi-scale edge-guided image gap restoration
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. The focus of this research work is the estimation of gaps (missing blocks) in digital images. Two main issues were identified: (1) the appropriate domains for image gap restoration and (2) the methodologies for gap interpolation. Multi-scale transforms provide an appropriate framework for gap restoration. Their main advantages are decomposition into a set of frequencies and scales and the ability to progressively reduce the size of the gap to one sample wide at the transform apex. Two types of multi-scale transform were considered for comparative evaluation: the 2-dimensional (2D) discrete cosine transform (DCT) pyramid and the 2D discrete wavelet transform (DWT). For image gap estimation, a family of conventional weighted interpolators and directional edge-guided interpolators are developed and evaluated. Two types of edges were considered: ‘local’ edges, or textures, and ‘global’ edges such as the boundaries between objects or within/across patterns in the image. For local edge, or texture, modelling, a number of methods were explored which aim to reconstruct a set of gradients across the restored gap matching those computed from the known neighbourhood. These differential gradients are estimated along the vertical, horizontal and cross directions for each pixel of the gap. The edge-guided interpolators aim to operate on distinct regions confined within edge lines. For global edge-guided interpolation, the two main methods explored are the Sobel and Canny detectors, the latter providing improved edge detection. The combination and integration of different multi-scale domains, local edge interpolators, global edge-guided interpolators and iterative estimation of edges provided a variety of configurations that were comparatively explored and evaluated.
For evaluation, a set of images commonly used in the literature was employed, together with simulated regular and random image gaps at a variety of loss rates. The performance measures used are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The results obtained are better than the state of the art reported in the literature.
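The "conventional weighted interpolator" end of the family above can be sketched as inverse-distance weighting of the four known border pixels that share a missing pixel's row and column. This is a generic baseline, not one of the thesis's specific interpolators, and it assumes the gap does not touch the image border:

```python
import numpy as np

def weighted_gap_fill(image, top, left, h, w):
    """Fill an h-by-w rectangular gap by inverse-distance weighting.

    Every missing pixel becomes a convex combination of the four known
    pixels on the gap border along its row and column, weighted by the
    reciprocal of their distance to the pixel.
    """
    out = image.astype(np.float64).copy()
    up, down = out[top - 1, left:left + w], out[top + h, left:left + w]
    lft, rgt = out[top:top + h, left - 1], out[top:top + h, left + w]
    for i in range(h):
        for j in range(w):
            dist = np.array([i + 1, h - i, j + 1, w - j], dtype=np.float64)
            vals = np.array([up[j], down[j], lft[i], rgt[i]])
            wts = 1.0 / dist
            out[top + i, left + j] = (wts * vals).sum() / wts.sum()
    return out
```

The directional edge-guided interpolators described above refine exactly this scheme: instead of always mixing all four directions, they interpolate only along detected edge orientations so that edges are not smeared across the gap.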
Robust density modelling using the student's t-distribution for human action recognition
The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show through experiments over two well-known datasets (Weizmann, MuHAVi) a remarkable improvement in classification accuracy. © 2011 IEEE
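The robustness argument can be made concrete by comparing log-densities: for a point ten scale units from the mean, the Gaussian log-likelihood falls off quadratically while the Student's t falls off only logarithmically, so a single outlier dominates Gaussian-based estimates but barely moves t-based ones. A standalone sketch (nu = 3 degrees of freedom is an illustrative choice, not the paper's setting):

```python
import math

def gaussian_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of a normal distribution: quadratic penalty in x."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

def student_t_logpdf(x, mu=0.0, sigma=1.0, nu=3.0):
    """Log-density of a location-scale Student's t: logarithmic penalty in x."""
    z = (x - mu) / sigma
    const = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
             - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return const - (nu + 1) / 2 * math.log1p(z * z / nu)
```

At x = 10 the Gaussian log-density is below -50 while the t log-density stays near -8; inside an HMM's observation model this bounded influence is what keeps one corrupted feature vector from derailing the state posteriors.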
A Compact Sift-Based Strategy for Visual Information Retrieval in Large Image Databases
This paper applies the Standard Scale Invariant Feature Transform (S-SIFT) algorithm to compute image descriptors of the eye region for a set of human eye images from the UBIRIS database, despite photometric transformations. The core assumption is that textured regions are locally planar and stationary. A descriptor with this type of invariance is sufficient to discern and describe a textured area regardless of the viewpoint and lighting in a perspective image, and it permits the identification of similar types of texture in a figure, such as the iris texture of an eye. It also enables establishing correspondences between texture regions from distinct images acquired from different viewpoints (for example, two views of the front of a house), at different scales, and/or subjected to linear transformations such as translation. Experiments have confirmed that the S-SIFT algorithm is a potent tool for a variety of problems in image identification.
Automated retinal analysis
Diabetes is a chronic disease affecting over 2% of the population in the UK [1]. Long-term complications of diabetes can affect many different systems of the body, including the retina of the eye. In the retina, diabetes can lead to a disease called diabetic retinopathy, one of the leading causes of blindness in the working population of industrialised countries. The risk of visual loss from diabetic retinopathy can be reduced if treatment is given at the onset of sight-threatening retinopathy. To detect early indicators of the disease, the UK National Screening Committee have recommended that diabetic patients should receive annual screening by digital colour fundal photography [2]. Manually grading retinal images is a subjective and costly process requiring highly skilled staff. This thesis describes an automated diagnostic system based on image processing and neural network techniques, which analyses digital fundus images so that early signs of sight-threatening retinopathy can be identified. Within retinal analysis this research has concentrated on the development of four algorithms: optic nerve head segmentation, lesion segmentation, image quality assessment and vessel width measurement. This research amalgamated these four algorithms with two existing techniques to form an integrated diagnostic system. The diagnostic system, when used as a 'pre-filtering' tool, successfully reduced the number of images requiring human grading by 74.3%; this was achieved by identifying and excluding images without sight-threatening maculopathy from manual screening.