5 research outputs found

    Categorizing facial expressions: a comparison of computational models

    The original publication is available at www.springerlink.com. Copyright Springer. Recognizing expressions is a key part of human social interaction, and processing of facial expression information is largely automatic for humans, but it is a non-trivial task for a computational system. The purpose of this work is to develop computational models capable of differentiating between a range of human facial expressions. Raw face images are examples of high-dimensional data, so here we use two dimensionality reduction techniques: principal component analysis and curvilinear component analysis. We also preprocess the images with a bank of Gabor filters so that important features in the face images may be identified. Subsequently, the faces are classified using a support vector machine. We show that it is possible to differentiate faces with a prototypical expression from the neutral expression. Moreover, we can achieve this with data that has been massively reduced in size: in the best case, the original images are reduced to just 5 components. We also investigate effect size on face images, a concept that has not previously been reported for faces. This enables us to identify those areas of the face that are involved in the production of a facial expression. Peer reviewed
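    A minimal sketch of the pipeline this abstract describes (Gabor filter bank, dimensionality reduction, SVM classification), assuming scikit-image and scikit-learn. The filter-bank parameters, the number of retained components, and the PCA-plus-SVM pipeline are illustrative assumptions, not the authors' exact implementation (curvilinear component analysis, also used in the paper, is not shown as it has no standard scikit-learn implementation).

    ```python
    # Sketch only: Gabor filtering -> PCA -> SVM for expression classification.
    import numpy as np
    from scipy import ndimage
    from skimage.filters import gabor_kernel
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC


    def gabor_features(image, frequencies=(0.1, 0.2), n_orientations=4):
        """Filter one grayscale face image with a small Gabor bank and
        return the concatenated responses as a 1-D feature vector."""
        responses = []
        for frequency in frequencies:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                kernel = np.real(gabor_kernel(frequency, theta=theta))
                responses.append(ndimage.convolve(image, kernel, mode="wrap").ravel())
        return np.concatenate(responses)


    def train_expression_classifier(images, labels, n_components=5):
        """images: iterable of 2-D grayscale face arrays; labels: expression
        classes (e.g. 'neutral' vs. a prototypical expression)."""
        X = np.vstack([gabor_features(img) for img in images])
        # PCA compresses the high-dimensional Gabor responses to a handful of
        # components; the abstract reports usable results with as few as 5.
        model = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
        model.fit(X, labels)
        return model
    ```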

    Improving the accuracy of convolutional neural networks by identifying and removing outlier images in datasets using t-SNE

    In the field of supervised machine learning, the quality of a classifier model is directly correlated with the quality of the data used to train it. The presence of unwanted outliers in the data can significantly reduce the accuracy of a model or, even worse, result in a biased model and inaccurate classification. Identifying and eliminating outliers is therefore crucial for building good quality training datasets. Pre-processing procedures for dealing with missing and outlier data, commonly known as feature engineering, are standard practice in machine learning problems. They help to make better assumptions about the data and prepare datasets in a way that best exposes the underlying problem to the learning algorithms. In this work, we propose a multistage method for detecting and removing outliers in high-dimensional data. Our method uses t-distributed stochastic neighbour embedding (t-SNE) to reduce a high-dimensional map of features to a lower, two-dimensional probability density distribution, and then applies a simple descriptive statistic, the interquartile range (IQR), to identify outlier values in the density distribution of the features. t-SNE is a machine learning algorithm and a nonlinear dimensionality reduction technique well-suited to embedding high-dimensional data for visualisation in a low-dimensional space of two or three dimensions. We applied this method to a dataset of images used to train a convolutional neural network (ConvNet) for an image classification problem. The dataset contains four classes of images: three classes of construction defects (mould, stain, and paint deterioration) and a no-defect class (normal). We used transfer learning to adapt a pre-trained VGG-16 model, which served both as a feature extractor and as a benchmark for evaluating our method. We show that the method identifies and removes the outlier images in the dataset. After removing the outlier images and re-training the VGG-16 model, classification accuracy improved significantly and the number of misclassified cases dropped. While many feature engineering techniques for handling missing and outlier data are common in predictive machine learning problems involving numerical or categorical data, there is little work on techniques for handling outliers in high-dimensional image data, which could improve the quality of machine learning models such as ConvNets for image classification and object detection.
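    A minimal sketch of the outlier-detection stage described above, assuming the image features (e.g. activations from a pre-trained VGG-16 used as a feature extractor) have already been computed. The 1.5×IQR fences, the perplexity value, and the per-axis outlier rule are illustrative assumptions rather than the paper's exact settings.

    ```python
    # Sketch only: embed features in 2-D with t-SNE, flag IQR outliers.
    import numpy as np
    from sklearn.manifold import TSNE


    def iqr_outlier_mask(values, k=1.5):
        """Boolean mask, True where a 1-D array lies outside the Tukey
        fences [Q1 - k*IQR, Q3 + k*IQR]."""
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        return (values < q1 - k * iqr) | (values > q3 + k * iqr)


    def find_outlier_images(features, perplexity=30, random_state=0):
        """features: (n_images, n_features) array of extracted features.
        Returns the indices of images flagged as outliers."""
        embedded = TSNE(n_components=2, perplexity=perplexity,
                        random_state=random_state).fit_transform(features)
        # Flag an image if it is an outlier along either embedded dimension.
        mask = iqr_outlier_mask(embedded[:, 0]) | iqr_outlier_mask(embedded[:, 1])
        return np.where(mask)[0]
    ```

    The flagged indices can then be removed from the training set before the ConvNet is re-trained.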

    Investigations into the Robustness of Audio-Visual Gender Classification to Background Noise and Illumination Effects


    Analysis of Linear and Nonlinear Dimensionality Reduction Methods for Gender Classification of Face Images

    Original article can be found at: http://www.informaworld.com/smpp/title~content=t713697751 -- Copyright Informa / Taylor and Francis Group. DOI: 10.1080/00207720500381573. Data in many real-world applications are high dimensional, and learning algorithms like neural networks may have problems handling high-dimensional data. However, the Intrinsic Dimension is often much lower than the original dimension of the data. Here, we use fractal-based methods to estimate the Intrinsic Dimension and show that a nonlinear projection method called Curvilinear Component Analysis can effectively reduce the original dimension to the Intrinsic Dimension. We apply this approach to dimensionality reduction of face image data and use neural network classifiers for gender classification. Peer reviewed
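    A rough sketch of one common fractal-based estimate of intrinsic dimension, the correlation dimension, in the spirit of the approach described above. The radius grid and the log-log least-squares fit are illustrative choices; the paper's exact estimator and the Curvilinear Component Analysis projection itself are not shown (CCA has no standard scikit-learn implementation).

    ```python
    # Sketch only: correlation-dimension estimate of intrinsic dimension.
    import numpy as np
    from scipy.spatial.distance import pdist


    def correlation_dimension(X, n_radii=10):
        """Estimate intrinsic dimension as the slope of log C(r) vs log r,
        where C(r) is the fraction of point pairs closer than radius r.
        X: (n_samples, n_features) data matrix, e.g. flattened face images."""
        d = pdist(X)  # all pairwise Euclidean distances
        radii = np.logspace(np.log10(np.percentile(d, 5)),
                            np.log10(np.percentile(d, 50)), n_radii)
        c = np.array([(d < r).mean() for r in radii])
        slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
        return slope
    ```

    The resulting estimate can then be used as the target dimension for a nonlinear projection such as CCA before training the gender classifier.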