Novel Facial Image Recognition Techniques Employing Principal Component Analysis
Pattern recognition and classification have recently received considerable attention in diverse engineering fields such as biomedical imaging, speaker identification, fingerprint recognition, and face recognition. This study contributes novel techniques for facial image recognition based on two-dimensional principal component analysis (2DPCA) in the transform domain. These algorithms reduce the storage requirements by an order of magnitude and the computational complexity by a factor of 2 while maintaining the excellent recognition accuracy of recently reported methods. The proposed recognition systems employ different structures, multicriteria and multitransform. In addition, principal component analysis in the transform domain is combined with vector quantization, which results in further improvement in recognition accuracy and dimensionality reduction. Experimental results confirm the excellent properties of the proposed algorithms.
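The core building block of the abstract above can be sketched as a minimal 2DPCA in NumPy; the transform-domain, multicriteria, and vector-quantization extensions are not shown, and all names and toy data are illustrative:

```python
import numpy as np

def two_d_pca(images, d):
    """2DPCA sketch: project each image onto the top-d eigenvectors of the
    image covariance matrix G = mean((X - Xbar)^T (X - Xbar))."""
    mean = images.mean(axis=0)                       # (m, n) mean image
    G = sum((X - mean).T @ (X - mean) for X in images) / len(images)
    _, vecs = np.linalg.eigh(G)                      # eigenvalues ascending
    W = vecs[:, -d:]                                 # top-d axes, shape (n, d)
    return np.array([X @ W for X in images]), W      # feature matrices (m, d)

rng = np.random.default_rng(0)
imgs = rng.random((10, 8, 8))                        # ten toy 8x8 "faces"
feats, W = two_d_pca(imgs, d=3)                      # each face -> 8x3 features
```

Unlike classical PCA, the images are never flattened, so the eigenproblem is only n x n; this is the source of the reduced computational cost the abstract refers to.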
Evaluation of face recognition algorithms under noise
One of the major applications of computer vision and image processing is face recognition, in which a computerized algorithm automatically identifies a person's face in a large image dataset or even in a live video. This thesis addresses facial recognition, a topic that has been widely studied due to its importance in many applications in both civilian and military domains. The application of face recognition systems has expanded from security purposes to social networking sites, fraud management, and improving user experience. Numerous algorithms have been designed to perform face recognition with good accuracy. The problem remains challenging due to the dynamic nature of the human face and the different poses it can take. Regardless of the algorithm, facial recognition accuracy can be heavily affected by the presence of noise. This thesis presents a comparison of traditional and deep learning face recognition algorithms in the presence of noise. For this purpose, Gaussian and salt-and-pepper noise are applied to face images drawn from the ORL dataset. Recognition is performed using each of the following eight algorithms: principal component analysis (PCA), two-dimensional PCA (2D-PCA), linear discriminant analysis (LDA), independent component analysis (ICA), the discrete cosine transform (DCT), the support vector machine (SVM), a convolutional neural network (CNN), and AlexNet. The ORL dataset was used in the experiments to calculate the recognition accuracy of each investigated algorithm. Each algorithm is evaluated in two experiments: in the first, only one image per person is used for training, whereas in the second, five images per person are used. The investigated traditional algorithms are implemented in MATLAB, and the deep learning algorithms are implemented in Python. The results show that the best traditional performance was obtained using the DCT algorithm with 92% of the dominant eigenvalues, reaching 95.25% accuracy, whereas for deep learning the best performance was achieved by the CNN with an accuracy of 97.95%, making it the best choice under noisy conditions.
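The two noise models used in this comparison can be sketched as follows, assuming images with intensities normalised to [0, 1]; the function names and parameters are illustrative, not those of the thesis code:

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1, rng=None):
    """Additive zero-mean Gaussian noise, clipped back to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.05, rng=None):
    """Flip a fraction `amount` of pixels to pure black or pure white."""
    rng = rng or np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0          # pepper (black)
    noisy[mask > 1 - amount / 2] = 1.0      # salt (white)
    return noisy

img = np.full((4, 4), 0.5)                  # flat toy image
g = add_gaussian_noise(img)
sp = add_salt_and_pepper(img, amount=0.5)
```

Gaussian noise perturbs every pixel slightly, while salt-and-pepper corrupts a few pixels completely; the two stress recognition algorithms in different ways, which is why both are evaluated.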
Face recognition via the overlapping energy histogram
In this paper we investigate the face recognition problem via the overlapping energy histogram of the DCT coefficients. In particular, we investigate important issues relating to recognition performance, such as the selection of the threshold and the number of bins. These selection methods utilise information obtained from the training dataset. Experiments are conducted on the Yale face database, and the results indicate that the proposed parameter selection methods perform well in selecting the threshold and the number of bins. Furthermore, we show that the proposed overlapping energy histogram approach significantly outperforms Eigenfaces, 2DPCA, and the plain energy histogram.
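A minimal sketch of an energy histogram over 2-D DCT coefficients, using a self-contained orthonormal DCT-II matrix; this simplified version uses fixed non-overlapping bins rather than the paper's trained, overlapping bin selection, and all names are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    j = np.arange(n)
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0] = np.sqrt(1 / n)                  # DC row has a different scale
    return C

def dct_energy_histogram(img, n_bins=16):
    """Feature vector: histogram of 2-D DCT coefficient magnitudes."""
    C = dct_matrix(img.shape[0])           # assumes a square image
    coeffs = C @ img @ C.T                 # 2-D DCT-II of the image
    energy = np.abs(coeffs).ravel()
    hist, _ = np.histogram(energy, bins=n_bins)
    return hist / hist.sum()               # normalise so images are comparable

rng = np.random.default_rng(0)
face = rng.random((16, 16))
h = dct_energy_histogram(face)
```

Two faces can then be compared by a distance between their histograms, which is what makes the choice of threshold and bin count discussed above so influential.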
Unsupervised Doppler Radar Based Activity Recognition for e-Healthcare
Passive radio frequency (RF) sensing and monitoring of human daily activities in elderly care homes is an emerging topic. Micro-Doppler radars are an appealing solution considering their non-intrusiveness, deep penetration, and long operating range. Unsupervised activity recognition using Doppler radar data has not received attention, in spite of its importance when activities are unlabelled or poorly labelled in real scenarios. This study proposes two unsupervised feature extraction methods for human activity monitoring using Doppler streams: a local discrete cosine transform (DCT)-based feature extraction method and a local entropy-based feature extraction method. In addition, Convolutional Variational Autoencoder (CVAE) feature extraction is employed for the first time for Doppler radar data. The three feature extraction architectures are compared with the previously used Convolutional Autoencoder (CAE) and with linear feature extraction based on Principal Component Analysis (PCA) and 2DPCA. Unsupervised clustering is performed using K-Means and K-Medoids. The results show the superiority of the DCT-based method, the entropy-based method, and the CVAE features over CAE, PCA, and 2DPCA, with 5%-20% higher average accuracy. Regarding computation time, the two proposed methods are noticeably faster than the existing CVAE. Furthermore, for high-dimensional data visualisation, three manifold learning techniques are considered and compared for the projection of the raw data as well as of the encoded CVAE features. All three methods show improved visualisation ability when applied to the encoded CVAE features.
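The clustering stage can be sketched with a minimal K-Means in NumPy; this is illustrative only, and the study's local DCT, entropy, and CVAE feature extractors are not reproduced here:

```python
import numpy as np

def kmeans(X, k, iters=50, rng=None):
    """Lloyd's K-Means: alternate nearest-centre assignment and
    centre update until the iteration budget is spent."""
    rng = rng or np.random.default_rng(0)
    centres = X[rng.choice(len(X), k, replace=False)]    # random init
    for _ in range(iters):
        dists = ((X[:, None] - centres) ** 2).sum(-1)    # (N, k) squared dists
        labels = np.argmin(dists, axis=1)
        centres = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centres[j] for j in range(k)])
    return labels, centres

# two well-separated toy "activity" clusters in a 4-D feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 4)), rng.normal(5.0, 0.1, (20, 4))])
labels, _ = kmeans(X, k=2)
```

K-Medoids follows the same alternating scheme but restricts each centre to be an actual data point, which makes it more robust to outliers in the Doppler features.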
Heterogeneous Techniques used in Face Recognition: A Survey
Face recognition has become one of the important areas of research in computer vision. Human communication is a combination of verbal and non-verbal signals. In social interaction, the face serves as the primary canvas for expressing distinct emotions non-verbally, and a person's face provides one of the most important natural means of communication. In this paper, we discuss the various works done in the area of face recognition, focusing on intelligent approaches such as PCA, LDA, DFLD, SVD, and GA. In the current trend, combinations of these existing techniques are being considered, and these are also discussed in this paper.

Keywords: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Genetic Algorithm (GA), Direct Fractional LDA (DFLD)
Pattern Recognition
Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and comprehends several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.
Time And Space Efficient Techniques For Facial Recognition
In recent years, there has been increasing interest in face recognition, and many new facial recognition techniques have been introduced. Recent developments in the field have led to an increase in the number of available commercial face recognition products. However, face recognition techniques are currently constrained by three main factors: recognition accuracy, computational complexity, and storage requirements. Most current face recognition techniques improve one or two of these factors at the expense of the others. In this dissertation, four novel face recognition techniques that improve the storage and computational requirements of face recognition systems are presented and analyzed. Three of the four techniques, namely Quantized/Truncated Transform Domain (QTD), Frequency Domain Thresholding and Quantization (FD-TQ), and Normalized Transform Domain (NTD), utilize the two-dimensional Discrete Cosine Transform (DCT-II), which reduces the dimensionality of facial feature images and thereby the computational complexity. The fourth technique, Normalized Histogram Intensity (NHI), is based on the pixel intensity histogram of the poses' subimages, which reduces both the computational complexity and the storage requirements. Various simulation experiments using MATLAB were conducted to test the proposed methods. To benchmark their performance, the experiments were also run with current state-of-the-art face recognition techniques, namely Two-Dimensional Principal Component Analysis (2DPCA), Two-Directional Two-Dimensional Principal Component Analysis ((2D)^2PCA), and Transform Domain Two-Dimensional Principal Component Analysis (TD2DPCA), applied to the ORL, Yale, and FERET databases. The experimental results confirm that each of the four novel techniques examined in this study yields a significant reduction in computational complexity and storage requirements compared to the state-of-the-art techniques without sacrificing recognition accuracy.
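The dimensionality reduction underlying the DCT-based techniques above can be sketched as coefficient truncation: because most face-image energy concentrates in the low-frequency corner of the DCT, keeping only a keep x keep block of an n x n image stores just (keep/n)^2 of the values. This is a generic sketch under that assumption, not the dissertation's QTD/FD-TQ/NTD pipelines:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    j = np.arange(n)
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0] = np.sqrt(1 / n)
    return C

def truncate_dct(img, keep):
    """2-D DCT of a square image, keeping the low-frequency corner."""
    C = dct_matrix(img.shape[0])
    return (C @ img @ C.T)[:keep, :keep]

def reconstruct(block, n):
    """Inverse 2-D DCT from a zero-padded truncated coefficient block."""
    C = dct_matrix(n)
    full = np.zeros((n, n))
    full[:block.shape[0], :block.shape[1]] = block
    return C.T @ full @ C

rng = np.random.default_rng(0)
img = rng.random((64, 64))
block = truncate_dct(img, keep=8)       # stores 64 of 4096 values (1/64)
approx = reconstruct(block, 64)
```

Recognition can then operate on the small coefficient blocks directly, which is where the storage and computation savings come from.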
Improving Human Face Recognition Using Deep Learning Based Image Registration And Multi-Classifier Approaches
Face detection, registration, and recognition have become a fascinating field for researchers. The enormous interest in the topic is motivated by the need to improve the accuracy of many real-time applications. Countless methodologies have been acknowledged and presented in past years. The visual complexity of the human face and the significant changes it undergoes under different conditions make it challenging to design and implement a powerful computational system for object recognition in general and human face recognition in particular. Supervised learning often requires extensive training, which results in high execution times. Applying strong preprocessing approaches such as face registration is an essential step toward a high recognition accuracy rate. Although approaches exist that perform both detection and recognition, we believe the absence of a complete end-to-end system capable of performing recognition from an arbitrary scene is in large part due to the difficulty of alignment. Often, face registration is ignored, with the assumption that the detector will perform a rough alignment, leading to suboptimal recognition performance. In this research, we present an enhanced approach to improve human face recognition using a back-propagation neural network (BPNN) and feature extraction based on the correlation between the training images. A key contribution of this work is the generation of a new set, the T-Dataset, from the original training data set, which is used to train the BPNN. We generate the T-Dataset using the correlation between the training images without using the common technique of image density. The correlated T-Dataset provides a high distinction layer between the training images, which helps the BPNN converge faster and achieve better accuracy.

Data and feature reduction is essential in the face recognition process, and researchers have recently focused on modern neural networks. We therefore used classical Principal Component Analysis (PCA) and Local Binary Patterns (LBP) to show that there is potential for improvement even with traditional methods. We applied five distance measurement algorithms and combined them to obtain the T-Dataset, which we fed into the BPNN. By using reduced image features, we achieved higher face recognition accuracy with less computational cost than the current approaches. We tested the proposed framework on two small data sets, YALE and AT&T, as the ground truth, and achieved very high accuracy. Furthermore, we evaluated our method on one of the state-of-the-art benchmark data sets, Labeled Faces in the Wild (LFW), where we obtained competitive face recognition performance. In addition, we present an enhanced framework to improve face registration using a deep learning model. We used deep architectures such as VGG16 and VGG19 to train our method to learn the transformation parameters (rotation, scaling, and shifting). By learning these parameters, we are able to transform the image back to the frontal domain. We used the LFW dataset to evaluate this method and achieved high accuracy.
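A minimal sketch of the classical PCA feature extraction and distance-based matching referred to above, using a single Euclidean distance in place of the five combined measures and a toy data set; all names and data are illustrative, not the paper's T-Dataset/BPNN pipeline:

```python
import numpy as np

def pca_fit(X, n_components):
    """Eigenfaces-style PCA via SVD of the mean-centred flattened images."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # components: (k, n_pixels)

def pca_project(X, mean, comps):
    return (X - mean) @ comps.T               # (n_images, k) feature vectors

def nearest_neighbour(train_f, train_y, probe_f):
    """One distance measure (Euclidean); the paper combines five."""
    d = np.linalg.norm(train_f - probe_f, axis=1)
    return train_y[np.argmin(d)]

rng = np.random.default_rng(2)
faces = rng.random((6, 64))                   # six flattened toy "faces"
labels = np.array([0, 0, 1, 1, 2, 2])
mean, comps = pca_fit(faces, 4)
feats = pca_project(faces, mean, comps)
probe = pca_project(faces[3:4], mean, comps)  # probe is a known face
pred = nearest_neighbour(feats, labels, probe)
```

The per-image distances computed here are the kind of signal that, across several distance measures, is aggregated into the correlated T-Dataset before BPNN training.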