A Neural Network Based Classifier for a Segmented Facial Expression Recognition System Based on Haar Wavelet Transform
Automatic recognition of facial expressions is a vital component of natural human-machine interfaces. Facial expressions convey information about a person's emotional state and help regulate social interaction by making a scene easier to detect and interpret. In this paper, we propose a novel facial expression recognition scheme based on the Haar discrete wavelet transform (DWT) and a neural network classifier. First, the sample image undergoes preprocessing, where noise is removed using binary image processing techniques. Feature vectors are then extracted from the image pixels using the DWT and serve as the input to the neural network. We demonstrate experimentally that when the wavelet coefficients are fed into a back-propagation neural network for classification, a high recognition rate can be achieved using only a very small proportion of the transform coefficients. Based on our experimental results, the proposed scheme gives satisfactory results.
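The abstract does not give implementation details, so the following is only a rough illustration of the general idea: apply a Haar DWT to an image and keep a small fraction of the largest-magnitude coefficients as a feature vector. The number of levels, the 5% retention rate, and all function names here are assumptions, not the authors' choices.

```python
import numpy as np

def haar2d(x):
    """One level of an (unnormalised) 2D Haar transform: averages and
    differences along rows, then along columns."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row differences
    rows = np.hstack([lo, hi])
    lo2 = (rows[0::2, :] + rows[1::2, :]) / 2.0
    hi2 = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([lo2, hi2])

def haar_features(img, levels=2, keep=0.05):
    """Run `levels` Haar decompositions (recursing on the LL band of a
    square image whose side is divisible by 2**levels), then keep the
    `keep` fraction of coefficients with the largest magnitude."""
    coeffs = img.astype(float).copy()
    n = img.shape[0]
    for lvl in range(levels):
        s = n >> lvl
        coeffs[:s, :s] = haar2d(coeffs[:s, :s])
    flat = coeffs.ravel()
    k = max(1, int(keep * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]     # indices of largest coefficients
    return flat[idx]

# A 64x64 image yields a 204-dimensional feature vector at keep=0.05.
feat = haar_features(np.arange(64 * 64, dtype=float).reshape(64, 64))
```

In the paper's pipeline such a vector would then be fed to a back-propagation neural network; any off-the-shelf MLP classifier would play that role in this sketch.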
An adaptive perception-based image preprocessing method
The aim of this paper is to introduce an adaptive preprocessing procedure based on human perception in order to increase the performance of some standard image processing techniques. Specifically, the image frequency content is weighted by the corresponding value of the contrast sensitivity function, in agreement with the sensitivity of the human eye to different image frequencies and contrasts. The 2D rational-dilation wavelet transform has been employed for representing image frequencies: it provides an adaptive and flexible multiresolution framework, enabling easy and straightforward adaptation to the image frequency content. Preliminary experimental results show that the proposed preprocessing increases the performance of some standard image enhancement algorithms in terms of visual quality, and often also in terms of PSNR.
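The core idea of the abstract, weighting frequency content by the contrast sensitivity function (CSF), can be sketched without the rational-dilation wavelet transform the authors actually use. The stand-in below works in the FFT domain with the classic Mannos-Sakrison CSF model; the FFT substitution, the 60 pixels-per-degree viewing assumption, and the normalisation are all assumptions for illustration only.

```python
import numpy as np

def csf_mannos(f):
    """Mannos-Sakrison contrast sensitivity model, f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weight(img, px_per_degree=60.0):
    """Weight each spatial frequency of `img` by its CSF value.
    Stand-in for the paper's rational-dilation wavelet weighting."""
    F = np.fft.fft2(img)
    n, m = img.shape
    fy = np.fft.fftfreq(n)[:, None]          # cycles/pixel, vertical
    fx = np.fft.fftfreq(m)[None, :]          # cycles/pixel, horizontal
    f = np.hypot(fx, fy) * px_per_degree     # radial freq in cycles/degree
    W = csf_mannos(f)
    W /= W.max()                             # leave the peak band unchanged
    return np.real(np.fft.ifft2(F * W))
```

The resulting image emphasises the mid frequencies the eye is most sensitive to, which is the perceptual weighting a subsequent standard enhancement algorithm would then operate on.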
Automated analysis of quantitative image data using isomorphic functional mixed models, with application to proteomics data
Image data are increasingly encountered and are of growing importance in many
areas of science. Many of these data are quantitative image data, which are
characterized by intensities that represent some measurement of interest in the
scanned images. The data typically consist of multiple images on the same
domain and the goal of the research is to combine the quantitative information
across images to make inference about populations or interventions. In this
paper we present a unified analysis framework for the analysis of quantitative
image data using a Bayesian functional mixed model approach. This framework is
flexible enough to handle complex, irregular images with many local features,
and can model the simultaneous effects of multiple factors on the image
intensities and account for the correlation between images induced by the
design. We introduce a general isomorphic modeling approach to fitting the
functional mixed model, of which the wavelet-based functional mixed model is
one special case. With suitable modeling choices, this approach leads to
efficient calculations and can result in flexible modeling and adaptive
smoothing of the salient features in the data. The proposed method has the
following advantages: it can be run automatically, it produces inferential
plots indicating which regions of the image are associated with each factor, it
simultaneously considers the practical and statistical significance of
findings, and it controls the false discovery rate.
Comment: Published at http://dx.doi.org/10.1214/10-AOAS407 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
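The wavelet-based special case mentioned in the abstract projects each image into the wavelet domain, models every coefficient across images, and maps the estimated effects back to image space. The sketch below keeps only that projection-model-reconstruction skeleton, replacing the paper's Bayesian mixed model (with random effects, shrinkage priors, and MCMC) by plain per-coefficient least squares; the Haar transform, design matrix, and function names are all illustrative assumptions.

```python
import numpy as np

def haar2d(x):
    """One level of an (unnormalised) 2D Haar transform."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    rows = np.hstack([lo, hi])
    return np.vstack([(rows[0::2, :] + rows[1::2, :]) / 2.0,
                      (rows[0::2, :] - rows[1::2, :]) / 2.0])

def ihaar2d(c):
    """Exact inverse of haar2d above."""
    n = c.shape[0] // 2
    rows = np.empty_like(c)
    rows[0::2, :] = c[:n, :] + c[n:, :]
    rows[1::2, :] = c[:n, :] - c[n:, :]
    m = c.shape[1] // 2
    out = np.empty_like(c)
    out[:, 0::2] = rows[:, :m] + rows[:, m:]
    out[:, 1::2] = rows[:, :m] - rows[:, m:]
    return out

def fmm_effect_map(images, X):
    """Crude stand-in for a wavelet-space functional regression:
    project images to Haar coefficients, regress each coefficient on the
    design matrix X by OLS, and inverse-transform the slope for the
    first covariate back to an effect image."""
    C = np.stack([haar2d(im).ravel() for im in images])  # (n_images, n_coeffs)
    beta, *_ = np.linalg.lstsq(X, C, rcond=None)         # (n_params, n_coeffs)
    return ihaar2d(beta[1].reshape(images[0].shape))     # covariate-1 effect
```

Because every step here is linear, a noiseless covariate effect is recovered exactly; the real method's contribution is precisely what this sketch omits: borrowing strength across coefficients, adaptive smoothing, and posterior inference on which image regions are significant.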