1,308 research outputs found
Efficiency of texture image enhancement by DCT-based filtering
Textures and other high-detail structures, as well as image object shapes, carry information that is widely exploited in pattern recognition and image classification. Noise can degrade these features and has to be removed. In this paper, we consider the influence of textural properties on the efficiency of image enhancement by noise suppression prior to such processing. Among possible denoising variants, we consider filters based on the discrete cosine transform (DCT), known to be effective at removing additive white Gaussian noise. It is shown that noise removal in texture images using these techniques can distort fine texture details. To detect such situations and avoid texture degradation due to filtering, we propose filtering-efficiency predictors, including a neural-network-based predictor, applicable to a wide class of images. These predictors use simple statistical parameters to estimate the performance of the considered filters. Image enhancement is analysed in terms of both standard criteria and visual-quality metrics for various scenarios of texture roughness and noise characteristics. The DCT-based filters are compared to several counterparts, and problems of noise removal in texture images are demonstrated for all of them. A special case of spatially correlated noise is considered as well. The potential efficiency of filtering is analysed for both studied noise models, and it is shown that the studied filters are close to the potential limits.
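The abstract does not give the filters' details; as a minimal, hypothetical sketch of the general idea behind DCT-based denoising (block-wise hard thresholding of transform coefficients, assuming additive white Gaussian noise with known standard deviation `sigma`; this is not the authors' actual method, and the function name and threshold factor are illustrative):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(image, sigma, block=8, k=2.7):
    """Denoise via block-wise 2D DCT hard thresholding.

    Coefficients with magnitude below k * sigma are assumed to be
    noise and are zeroed; the DC term (block mean) is always kept.
    """
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block].astype(float)
            coeffs = dctn(patch, norm='ortho')
            dc = coeffs[0, 0]
            coeffs[np.abs(coeffs) < k * sigma] = 0.0
            coeffs[0, 0] = dc  # preserve the block mean
            out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')
    return out
```

On smooth regions this works well, but on fine textures the high-frequency DCT coefficients carrying texture detail fall below the threshold and are zeroed too, which is exactly the degradation the paper's predictors aim to flag in advance.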
MASCOT : metadata for advanced scalable video coding tools : final report
The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Towards that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; new prediction schemes. Although the initial goal was to develop a single codec architecture combining all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.
Pigment Melanin: Pattern for Iris Recognition
Recognition of the iris under Visible Light (VL) imaging is a difficult problem because of light reflection from the cornea. Nonetheless, the pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging. This is due to the biological spectroscopy of eumelanin, a chemical not stimulated in NIR. A plausible way to observe such patterns is an adaptive procedure using a variational technique on the image histogram. To describe the patterns, a shape-analysis method is used to derive a feature code for each subject. An important question is how far the melanin patterns extracted from VL are independent of the iris texture in NIR. With this question in mind, the present investigation proposes fusing features extracted from NIR and VL to boost recognition performance. We have collected our own database (UTIRIS), consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the patterns of chromophores and improves the iris recognition rate.
Comment: To be published in the Special Issue on Biometrics, IEEE Transactions on Instrumentation and Measurement, Volume 59, Issue 4, April 201
Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric
Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least-squares approach that adapts itself to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected to optimally remove coding artifacts and reconstruct object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison with several other deblocking techniques. The proposed method significantly outperforms the others, both objectively and subjectively.
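The paper's trained filters are not specified here; as an illustration of the underlying idea only, training a least-squares filter reduces to a linear regression from flattened degraded patches to clean target pixels (function names and the 3×3 patch layout below are hypothetical, not the authors' configuration):

```python
import numpy as np

def train_ls_filter(noisy_patches, clean_centers):
    """Least-squares filter training: find w minimizing ||A w - b||^2,
    where each row of A is a flattened degraded patch and b holds the
    corresponding clean center-pixel values."""
    w, *_ = np.linalg.lstsq(noisy_patches, clean_centers, rcond=None)
    return w

def apply_ls_filter(patch, w):
    """Filter one patch: inner product of the flattened patch with w."""
    return float(patch.ravel() @ w)
```

The quality-adaptive element of the paper would then amount to training one such coefficient set per quality level and selecting among them at run time using the no-reference blockiness score of the input.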
Graph Spectral Image Processing
The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or an image patch) as a signal on a graph and apply GSP tools to process and analyse it in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
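As a minimal sketch of the graph-spectral filtering idea the survey describes, assuming a 1D signal on a weighted path graph (the function name, weight layout, and cutoff parameter are illustrative, not from the article):

```python
import numpy as np

def graph_lowpass(signal, weights, cutoff):
    """Low-pass filter a signal on a weighted path graph via the graph
    Fourier transform (GFT): eigenvectors of the combinatorial Laplacian
    L = D - W serve as the basis, eigenvalues as graph frequencies."""
    n = len(signal)
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = weights[i]  # edge between pixels i, i+1
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)   # graph frequencies and GFT basis
    gft = evecs.T @ signal             # forward GFT
    gft[evals > cutoff] = 0.0          # drop high graph frequencies
    return evecs @ gft                 # inverse GFT
```

Choosing edge weights that reflect image structure is the key step: a small weight across an intensity edge decouples the two sides, so low-pass filtering smooths within regions while preserving the discontinuity.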
- …