High Performance Techniques for Face Recognition
The identification of individuals using face recognition techniques is a challenging task. This is due to the variations resulting from facial expressions, makeup, rotations, illumination, gestures, etc. Also, facial images contain a great deal of redundant information, which negatively affects the performance of the recognition system. The dimensionality and the redundancy of the facial features have a direct effect on the face recognition accuracy. Not all the features in the feature vector space are useful. For example, non-discriminating features in the feature vector space not only degrade the recognition accuracy but also increase the computational complexity. In the fields of computer vision, pattern recognition, and image processing, face recognition has become a popular research topic. This is due to its widespread applications in security and control, which allow the identified individual to access secure areas, personal information, etc. The performance of any recognition system depends on three factors: 1) the storage requirements, 2) the computational complexity, and 3) the recognition rates. Two different recognition system families are presented and developed in this dissertation. Each family consists of several face recognition systems. Each system contains three main steps, namely, preprocessing, feature extraction, and classification. Several preprocessing steps, such as cropping, facial detection, and dividing the facial image into sub-images, are applied to the facial images. This reduces the effect of irrelevant information (background) and improves the system performance. In this dissertation, either a Neural Network (NN) based classifier or the Euclidean distance is used for classification purposes. Five widely used databases, namely ORL, YALE, FERET, FEI, and LFW, each containing different facial variations, such as lighting conditions, rotations, facial expressions, and facial details, are used to evaluate the proposed systems.
The experimental results of the proposed systems are analyzed using K-fold Cross Validation (CV). In family-1, several systems are proposed for face recognition. Each system employs different integrated tools in the feature extraction step. These tools, the Two-Dimensional Discrete Multiwavelet Transform (2D DMWT), 2D Radon Transform (2D RT), 2D or 3D DWT, and Fast Independent Component Analysis (FastICA), are applied to the processed facial images to reduce the dimensionality and to obtain discriminating features. Each proposed system produces a unique representation, and achieves lower storage requirements and better performance than the existing methods. For further facial compression, there are three face recognition systems in the second family. Each system uses different integrated tools to obtain a better facial representation. The integrated tools, Vector Quantization (VQ), the Discrete Cosine Transform (DCT), and the 2D DWT, are applied to the facial images for further facial compression and better facial representation. In the systems using the tools VQ/2D DCT and VQ/2D DWT, each pose in the databases is represented by one centroid with 4*4*16 dimensions. In the third system, VQ/Facial Part Detection (FPD), each person in the databases is represented by four centroids with 4*(4*4*16) dimensions. The systems in family-2 are proposed to further reduce the dimensions of the data compared to the systems in family-1 while attaining comparable results. For example, in family-1, the integrated tools FastICA/2D DMWT, applied to different combinations of sub-images in the FERET database with K-fold=5 (9 different poses used in the training mode), reduce the dimensions of the database by 97.22% and achieve 99% accuracy. In contrast, the integrated tools VQ/FPD in family-2 reduce the dimensions of the data by 99.31% and achieve 97.98% accuracy.
In this example, the integrated tools VQ/FPD accomplished greater data compression but lower accuracy compared to those reported by the FastICA/2D DMWT tools. Various experiments and simulations are performed using MATLAB. The experimental results of both families confirm the improvements in the storage requirements, as well as the recognition rates, as compared to some recently reported methods.
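The evaluation protocol above can be illustrated with a short sketch: K-fold cross validation over a Euclidean nearest-centroid classifier on toy feature vectors. This is a minimal, hypothetical Python illustration (the dissertation's own experiments use MATLAB), and all names and data here are invented, not the authors' code.

```python
import numpy as np

def kfold_accuracy(features, labels, k=5, seed=0):
    """Mean accuracy of a Euclidean nearest-centroid classifier under K-fold CV.

    features: (n_samples, n_dims) array; labels: (n_samples,) integer classes.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(features))
    folds = np.array_split(order, k)
    accuracies = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # One centroid per class, computed from the training folds only.
        classes = np.unique(labels[train_idx])
        centroids = np.stack([features[train_idx][labels[train_idx] == c].mean(axis=0)
                              for c in classes])
        # Assign each test sample to the class of its nearest centroid.
        dists = np.linalg.norm(features[test_idx][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(dists, axis=1)]
        accuracies.append(float(np.mean(pred == labels[test_idx])))
    return float(np.mean(accuracies))

# Toy data: two well-separated classes in a 4-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 4)), rng.normal(5.0, 0.1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
acc = kfold_accuracy(X, y, k=5)
```

With cleanly separated toy classes the mean accuracy over the folds is 1.0; the fold-wise averaging is what produces figures like the 99% and 97.98% accuracies quoted above.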
Different Facial Recognition Techniques in Transform Domains
The human face is frequently used as the biometric signal presented to a machine for identification purposes. Several challenges are encountered while designing face identification systems. The challenges are either caused by the process of capturing the face image itself, or occur while processing the face poses. Since the face image does not contain only the face, this adds to the data dimensionality and thus degrades the performance of the recognition system. Face Recognition (FR) has been a major signal processing topic of interest in the last few decades. The most common applications of FR include forensics, access authorization to facilities, or simply unlocking a smartphone. The three factors governing the performance of an FR system are: the storage requirements, the computational complexity, and the recognition accuracy. The typical FR system consists of the following main modules in each of the Training and Testing phases: Preprocessing, Feature Extraction, and Classification. The ORL, YALE, FERET, FEI, Cropped AR, and Georgia Tech datasets are used to evaluate the performance of the proposed systems. The proposed systems are categorized into Single-Transform and Two-Transform systems. In the first category, the features are extracted from a single domain, that of the Two-Dimensional Discrete Cosine Transform (2D DCT). In the latter category, the Two-Dimensional Discrete Wavelet Transform (2D DWT) coefficients are combined with those of the 2D DCT to form one feature vector. The feature vectors are either used directly or further processed to obtain the persons' final models. Principal Component Analysis (PCA), Sparse Representation, and Vector Quantization (VQ) are employed as a second step in the Feature Extraction module. Additionally, a technique is proposed in which the feature vector is composed of appropriately selected 2D DCT and 2D DWT coefficients based on a residual minimization algorithm.
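The Two-Transform idea of concatenating 2D DCT and 2D DWT coefficients into one feature vector can be sketched as follows. This is an illustrative assumption, not the thesis implementation: the 4x4 low-frequency DCT patch and the single-level Haar approximation band are arbitrary choices made here for brevity.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = 1.0 / np.sqrt(n)
    return C

def dct2(img):
    """2D DCT: apply the 1D transform along rows and along columns."""
    C = dct_matrix(img.shape[0])
    R = dct_matrix(img.shape[1])
    return C @ img @ R.T

def haar_approx(img):
    """One-level 2D Haar DWT approximation (LL) band via 2x2 sums / 2."""
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 2.0

def combined_feature(img):
    """Concatenate low-frequency 2D DCT coefficients with the DWT LL band."""
    d = dct2(img)[:4, :4].ravel()   # 16 low-frequency DCT coefficients
    w = haar_approx(img).ravel()    # LL band of the wavelet decomposition
    return np.concatenate([d, w])

img = np.arange(64, dtype=float).reshape(8, 8)
feat = combined_feature(img)
```

Because the DCT matrix is orthonormal, the transform is lossless before truncation; the feature step then keeps only the coefficients that carry most of the facial energy.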
Increasing Accuracy Performance through Optimal Feature Extraction Algorithms
This research developed models and techniques to improve the three key modules of popular recognition systems: preprocessing, feature extraction, and classification. Improvements were made in four key areas: processing speed, algorithm complexity, storage space, and accuracy. The focus was on the application areas of face, traffic sign, and speaker recognition. In the preprocessing module of facial and traffic sign recognition, improvements were made through the utilization of grayscaling and anisotropic diffusion. In the feature extraction module, improvements were made in two different ways: first, through the use of mixed transforms, and second, through a convolutional neural network (CNN) that best fits specific datasets. The mixed-transform system consists of various combinations of the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT), which have a reliable track record for image feature extraction. In terms of the proposed CNN, a neuroevolution system was used to determine the characteristics and layout of a CNN that best extracts image features for particular datasets. In the speaker recognition system, the improvement to the feature extraction module comprises a quantized spectral covariance matrix and a two-dimensional Principal Component Analysis (2DPCA) function. In the classification module, enhancements were made in visual recognition through the use of two neural networks: the multilayer sigmoid and the convolutional neural network. Results show that the proposed improvements in the three modules led to an increase in accuracy as well as reduced algorithmic complexity, with corresponding reductions in storage space and processing time.
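The anisotropic diffusion used in such preprocessing is typically the Perona-Malik scheme: smoothing that is suppressed across strong gradients so edges survive. The sketch below assumes the exponential conduction function and, for brevity, periodic borders; both are choices made here, not details taken from the abstract.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: smooth noise while preserving edges.

    Conduction g(d) = exp(-(d/kappa)^2) shrinks toward 0 across strong
    gradients, so diffusion acts mainly inside smooth regions.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic border).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

rng = np.random.default_rng(2)
noisy = 100.0 + rng.normal(0.0, 5.0, (16, 16))
smoothed = anisotropic_diffusion(noisy, n_iter=5)
```

Each iteration conserves the image mean (the per-edge fluxes are antisymmetric) while reducing the variance contributed by noise.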
From content-based to semantic image retrieval. Low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain.
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval because of the increasing amount of digital images. Although JPEG supplies systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles to further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize the low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning or detection of semantic objects generates an urgent need to link the low-level features with semantic understanding of the observed visual information. To solve this "semantic gap" problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images for semantic descriptors. In this thesis, a paradigm to process, analyze, and interpret digital images is proposed. In order to speed up access to desired images, after accessing image data, image features are presented for analysis. This analysis gives not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation. Finally, images are interpreted and classified into some semantic categories by a semantic object detection and categorization algorithm.
Biometric Applications Based on Multiresolution Analysis Tools
This dissertation is dedicated to the development of new algorithms for biometric applications based on multiresolution analysis tools. A biometric is a unique, measurable characteristic of a human being that can be used to automatically recognize an individual or verify an individual's identity. Biometrics can measure physiological, behavioral, physical, and chemical characteristics of an individual. Physiological characteristics are based on measurements derived directly from a part of the human body, such as the face, fingerprint, iris, retina, etc. We focused our investigations on fingerprint and face recognition, since these two biometric modalities are used in conjunction to obtain reliable identification by various border security and law enforcement agencies. We developed an efficient and robust human face recognition algorithm for potential law enforcement applications. A generic fingerprint compression algorithm based on a state-of-the-art multiresolution analysis tool to speed up data archiving and recognition was also proposed. Finally, we put forth a new fingerprint matching algorithm by generating an efficient set of fingerprint features to minimize false matches and improve identification accuracy. Face recognition algorithms were proposed based on the curvelet transform using kernel-based principal component analysis and bidirectional two-dimensional principal component analysis, and numerous experiments were performed using popular human face databases. Significant improvements in recognition accuracy were achieved, and the proposed methods drastically outperformed conventional face recognition systems that employed linear one-dimensional principal component analysis. Compression schemes based on wave atoms decomposition were proposed, and major improvements in peak signal-to-noise ratio were obtained in comparison to the Federal Bureau of Investigation's wavelet scalar quantization scheme.
The improved performance was more pronounced and distinct at higher compression ratios. Finally, a fingerprint matching algorithm based on wave atoms decomposition, bidirectional two-dimensional principal component analysis, and an extreme learning machine was proposed, and noteworthy improvements in accuracy were realized.
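The linear one-dimensional PCA that the proposed curvelet-domain methods are compared against is the classical eigenfaces baseline, which can be sketched via an SVD. The toy data and every name below are illustrative assumptions, not the dissertation's code.

```python
import numpy as np

def fit_eigenfaces(faces, n_components):
    """Learn an eigenface basis from row-stacked, flattened face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are the principal directions (the "eigenfaces").
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def project(faces, mean, basis):
    """Low-dimensional codes: coordinates in the eigenface subspace."""
    return (faces - mean) @ basis.T

def reconstruct(codes, mean, basis):
    """Approximate faces from their subspace coordinates."""
    return codes @ basis + mean

rng = np.random.default_rng(0)
# Toy "faces": 20 samples of 64-dim vectors lying near a 3-dim subspace.
latent = rng.normal(size=(20, 3))
mixing = rng.normal(size=(3, 64))
faces = latent @ mixing + 0.01 * rng.normal(size=(20, 64))

mean, basis = fit_eigenfaces(faces, n_components=3)
codes = project(faces, mean, basis)
approx = reconstruct(codes, mean, basis)
```

Because the toy data are nearly rank-3, three components reconstruct them almost exactly; real faces need many more components, which is where kernel and bidirectional 2D variants improve on this baseline.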
Sparse representation based hyperspectral image compression and classification
Abstract
This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, termed the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit the spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics, and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches.
For the proposed hyperspectral image classification method, we utilize the sparse coefficients to train support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods. The results show that our approach could outperform other approaches based on SVM or sparse representation.

This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for the exploitation of spectral correlations in hyperspectral images. In addition, we have shown that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
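The sparse coding step underlying both the compression and classification pipelines is commonly computed with a greedy pursuit. The thesis abstract does not name its solver, so the Orthogonal Matching Pursuit sketch below is an assumption for illustration, shown on an orthonormal toy dictionary where recovery is exact.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedy sparse coding of y over D.

    D: (n_dims, n_atoms) with unit-norm columns. Returns a coefficient
    vector x with at most n_nonzero entries such that D @ x approximates y.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Orthonormal toy dictionary: recovery of a 2-sparse code is exact.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(16, 16)))
x_true = np.zeros(16)
x_true[3], x_true[7] = 2.0, -1.5
y = D @ x_true
x_hat = omp(D, y, n_nonzero=2)
```

The sparse codes produced this way are what get quantized for compression or fed (after local mean filtering) to the SVM and kNN classifiers.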
Multi-modal association learning using spike-timing dependent plasticity (STDP)
We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. Through this approach, the rules of learning involve associating paired stimuli (stimulus-stimulus, i.e., face-speech), also known as predictor-choice pairs.

Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model by using the Spike-Timing-Dependent Plasticity (STDP) algorithm, which depends on the time and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. In our learning, we implement response group association by following reward-modulated STDP in terms of RL, wherein the firing rate of the response groups determines the reward that will be given. We perform a number of experiments that use existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations are performed to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning by using real data; that is, an experiment is conducted on a sample of face-speech data collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance in terms of combining heterogeneous data (face-speech). This finding opens possibilities to expand RL in the field of biometric authentication.
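The pairwise STDP rule referred to above is conventionally written as an exponentially decaying weight change in the pre/post spike-time difference; a minimal sketch follows, with parameter values chosen here for illustration rather than taken from the paper.

```python
import numpy as np

def stdp_delta(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pairwise STDP weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt < 0) depresses it, both decaying exponentially in |dt|.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau)
    return 0.0
```

In a reward-modulated variant like the one the paper describes, this raw change would additionally be gated by a reward signal derived from the response groups' firing rates.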
Motion compensation and very low bit rate video coding
Recently, many activities of the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO) have been leading to new standards for very low bit-rate video coding, such as H.263 and MPEG-4, after the successful application of the international standards H.261 and MPEG-1/2 for video coding above 64 kbps. However, at very low bit rates the classic block-matching-based DCT video coding scheme suffers seriously from blocking artifacts, which degrade the quality of reconstructed video frames considerably. To solve this problem, a new technique in which motion compensation is based on a dense motion field is presented in this dissertation.
Four efficient new video coding algorithms based on this new technique for very low bit rates are proposed. (1) After studying model-based video coding algorithms, we propose an optical flow based video coding algorithm with thresholding techniques. A statistical model is established for the distribution of intensity differences between two successive frames, and four thresholds are used to control the bit rate and the quality of reconstructed frames. It outperforms the typical model-based techniques in terms of complexity and quality of reconstructed frames. (2) An efficient algorithm using DCT-coded optical flow is proposed. It is found that dense motion fields can be modeled as a first-order auto-regressive model and efficiently compressed with the DCT technique, hence achieving a very low bit rate and higher visual quality than the H.263/TMN5. (3) A region-based discrete wavelet transform video coding algorithm is presented. This algorithm implements a dense motion field, and regions are segmented according to their content significance. The DWT is applied to residual images region by region, and bits are adaptively allocated to regions. It improves the visual quality and PSNR of significant regions while maintaining a low bit rate. (4) A segmentation-based video coding algorithm for stereo sequences is proposed. A correlation-feedback algorithm with a Kalman filter is utilized to improve the accuracy of the optical flow fields. Three criteria, which are associated with 3-D information, 2-D connectivity, and motion vector fields, respectively, are defined for object segmentation. A chain code is utilized to code the shapes of the segmented objects. It can achieve very high compression ratios, up to several thousand.
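For contrast with the dense-motion-field approach proposed here, the classic block-matching step that causes the blocking artifacts can be sketched as a full-search SAD minimization. The block size, search window, and toy frames below are illustrative assumptions, not details from the dissertation.

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=8, search=4):
    """Full-search block matching: find the motion vector minimizing SAD.

    Compares the block of `cur` at (top, left) against displaced blocks in
    `ref` within a +/- `search` pixel window.
    """
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize]
            sad = float(np.abs(block - cand).sum())  # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Toy frames: `cur` is `ref` shifted down 2 pixels and right 1 pixel.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32)).astype(float)
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
mv, sad = block_match(ref, cur, top=12, left=12)
```

The recovered vector points back to where the block came from in the reference frame; a dense motion field instead assigns such a vector per pixel, which avoids the block-boundary discontinuities.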
Forensic Face Sketch Recognition Using Computer Vision
Nowadays, the need for technologies for identification, detection, and recognition of suspects has increased. One of the most common biometric techniques is face recognition, since the face is the convenient means used by people to identify each other. Understanding how humans recognize face sketches drawn by artists is of significant value to both criminal investigators and forensic researchers in Computer Vision. However, studies say that hand-drawn face sketches are still very limited in terms of artists and number of sketches, because after any incident a forensic artist prepares a victim's sketches based on the description provided by an eyewitness. Sometimes suspects use special masks to hide some common features of the face, like the nose, eyes, lips, face color, etc., but the outline features of the face biometrics one could never hide. In this work, I concentrated on some specific facial geometric features which could be used to calculate ratios of similarity from the template photograph database against the forensic sketches. This paper describes the design of a system for forensic face sketch recognition by a computer vision approach using the Two-Dimensional Discrete Cosine Transform (2D-DCT) and the Self-Organizing Map (SOM) Neural Network, simulated in MATLAB.
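The SOM stage of such a system can be sketched as a 1-D Self-Organizing Map trained on compact face descriptors (for instance, low-frequency 2D-DCT coefficients). The grid size, decay schedules, and toy data below are illustrative assumptions, not the paper's configuration (which is simulated in MATLAB).

```python
import numpy as np

def train_som(data, n_units=8, n_iter=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D Self-Organizing Map on row-stacked feature vectors.

    Returns the (n_units, n_dims) weight grid; the learning rate and the
    neighbourhood width decay linearly over the iterations.
    """
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    w = rng.uniform(lo, hi, size=(n_units, data.shape[1]))
    for t in range(n_iter):
        frac = 1.0 - t / n_iter
        lr, sigma = lr0 * frac, max(sigma0 * frac, 0.5)
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
        # Pull the BMU and its grid neighbours toward the sample.
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def bmu_index(w, x):
    """Index of the map unit closest to feature vector x."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))

rng = np.random.default_rng(1)
# Two toy clusters of 4-D descriptors standing in for sketch/photo features.
data = np.vstack([rng.normal(0.0, 0.05, (30, 4)), rng.normal(1.0, 0.05, (30, 4))])
som = train_som(data)
```

After training, matching a sketch descriptor to the map unit of a template photograph gives the kind of nearest-prototype comparison the system relies on.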