Exemplar-based image fusion features for face recognition
Exemplars of a face are formed from multiple gallery images of a person and
are used in the classification of a test image. We incorporate such exemplars
into a biologically inspired face recognition method based on local binary
decisions on similarity. As opposed to single-model approaches such as face
averages, the exemplar-based approach yields higher recognition accuracies
and stability. Using multiple training samples per person, the method achieves
the following recognition accuracies: 99.0% on AR, 99.5% on FERET,
99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face
databases. In addition to face recognition, the method also detects the natural
variability in face images, which can find application in automatic tagging
of face images.
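The abstract above contrasts exemplar-based matching with single-model approaches. A minimal sketch of the idea, with an assumed cosine similarity measure and a hypothetical binary decision threshold (neither is specified by the paper):

```python
# Hypothetical sketch of exemplar-based matching: each person keeps several
# gallery feature vectors ("exemplars") instead of a single averaged model.
# The similarity measure and threshold are illustrative assumptions.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def classify(test_vec, gallery, threshold=0.5):
    """gallery: dict mapping person_id -> list of exemplar vectors.
    Returns the person whose best exemplar passes a binary similarity
    decision, or None if no exemplar is similar enough."""
    best_id, best_sim = None, threshold
    for person, exemplars in gallery.items():
        for ex in exemplars:
            sim = cosine_similarity(test_vec, ex)
            if sim > best_sim:
                best_id, best_sim = person, sim
    return best_id

gallery = {
    "alice": [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3]],
    "bob":   [[0.0, 1.0, 0.5]],
}
print(classify([0.95, 0.05, 0.25], gallery))  # best match among alice's exemplars
```

Keeping several exemplars per person lets the gallery cover natural variability (pose, expression) that a single average would wash out.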
Face Recognition: A Novel Multi-Level Taxonomy based Survey
In a world where security issues have been gaining growing importance, face
recognition systems have attracted increasing attention in multiple application
areas, ranging from forensics and surveillance to commerce and entertainment.
To aid understanding of the landscape and abstraction levels relevant to face
recognition systems, face recognition taxonomies allow a deeper dissection and
comparison of the existing solutions. This paper proposes a new, more
encompassing and richer multi-level face recognition taxonomy, facilitating the
organization and categorization of available and emerging face recognition
solutions; this taxonomy may also guide researchers in the development of more
efficient face recognition solutions. The proposed multi-level taxonomy
considers levels related to the face structure, feature support and feature
extraction approach. Following the proposed taxonomy, a comprehensive survey of
representative face recognition solutions is presented. The paper concludes
with a discussion on current algorithmic and application-related challenges
which may define future research directions for face recognition.
Comment: This paper is a preprint of a paper submitted to IET Biometrics. If
accepted, the copy of record will be available at the IET Digital Library.
Texture image analysis and texture classification methods - A review
Tactile texture refers to the tangible feel of a surface, while visual texture
refers to the perceived appearance of an image's shape or contents. In image
processing, texture can be defined as a function of the spatial variation of
the brightness intensity of the pixels. Texture is the main term used to define
objects or concepts in a given image. Texture analysis plays an important role
in computer vision tasks such as object recognition, surface defect detection,
pattern recognition, medical image analysis, etc. To date, many approaches have
been proposed to describe texture images accurately. Texture analysis methods
are usually classified into four categories: statistical, structural,
model-based and transform-based methods. This paper discusses the various
methods used for texture analysis in detail. Recent research shows the power
of combinational methods for texture analysis, which do not fit into any single
category. This paper provides a detailed review of well-known combinational
methods in a dedicated section. It enumerates the advantages and disadvantages
of well-known texture image descriptors in the results part. The main focus
across all of the surveyed methods is on discrimination performance,
computational complexity and resistance to challenges such as noise, rotation,
etc. A brief review is also made of the common classifiers used for texture
image classification, and a survey of texture image benchmark datasets is
included.
Comment: 29 pages. Keywords: Texture Image, Texture Analysis, Texture
classification, Feature extraction, Image processing, Local Binary Patterns,
Benchmark texture image dataset
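Of the four method categories the survey names, the statistical family is the simplest to illustrate. A minimal sketch of first-order statistical texture features (the specific feature set here is an illustrative assumption, not one drawn from the survey):

```python
# A minimal sketch of one "statistical" texture descriptor: first-order
# statistics of pixel intensities. The exact feature set is an illustrative
# assumption; real statistical methods (e.g., GLCM) go much further.

def first_order_texture_features(image):
    """image: 2D list of grayscale intensities in [0, 255]."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    # Smoothness: 0 for constant regions, approaching 1 as variance grows
    # (variance normalized by the squared intensity range).
    smoothness = 1 - 1 / (1 + variance / 255.0 ** 2)
    return {"mean": mean, "variance": variance, "smoothness": smoothness}

flat = [[128] * 4 for _ in range(4)]
print(first_order_texture_features(flat)["variance"])  # 0.0 for a flat patch
```

Structural, model-based and transform-based methods replace these global statistics with primitives, generative models and frequency-domain coefficients, respectively.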
Fractional Local Neighborhood Intensity Pattern for Image Retrieval using Genetic Algorithm
In this paper, a new texture descriptor named "Fractional Local Neighborhood
Intensity Pattern" (FLNIP) has been proposed for content based image retrieval
(CBIR). It is an extension of the Local Neighborhood Intensity Pattern
(LNIP)[1]. FLNIP calculates the relative intensity difference between a
particular pixel and the center pixel of a 3x3 window by considering the
relationship with adjacent neighbors. In this work, the fractional change in
the local neighborhood involving the adjacent neighbors has been calculated
first with respect to one of the eight neighbors of the center pixel of a 3x3
window. Next, the fractional change has been calculated with respect to the
center itself. The two values of fractional change are next compared to
generate a binary bit pattern. Both sign and magnitude information are encoded
in a single descriptor as it deals with the relative change in magnitude in the
adjacent neighborhood i.e., the comparison of the fractional change. The
descriptor is applied on four multi-resolution images -- one being the raw
image and the other three being filtered Gaussian images obtained by applying
Gaussian filters of different standard deviations on the raw image, to signify
the importance of exploring texture information at different resolutions in an
image. The four sets of distances obtained between the query and the target
image are then combined with a genetic algorithm based approach to improve the
retrieval performance by minimizing the distance between similar class images.
The performance of the method has been tested for image retrieval on four
popular databases. The precision and recall values observed on these databases
have been compared with those of recent state-of-the-art local patterns. The
proposed method has shown a significant improvement over many other existing
methods.
Comment: MTAP, Springer (Minor Revision)
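The bit-generation step the abstract describes compares two fractional changes: one measured against a neighbor and one measured against the window's center. A hedged reduction of that comparison to a single bit (FLNIP's actual neighborhood handling is richer; this simplification is an assumption):

```python
# Illustrative sketch of the comparison described above: a bit is produced by
# comparing the fractional intensity change relative to a neighbor with the
# fractional change relative to the 3x3 window's center pixel. This is a
# simplified reading of FLNIP, not the paper's full definition.

def fractional_change(a, b):
    """Relative change of a with respect to b (guarding division by zero)."""
    return (a - b) / b if b != 0 else 0.0

def flnip_bit(pixel, neighbor, center):
    # Set the bit when the change relative to the neighbor exceeds the
    # change relative to the center.
    change_vs_neighbor = fractional_change(pixel, neighbor)
    change_vs_center = fractional_change(pixel, center)
    return 1 if change_vs_neighbor > change_vs_center else 0

print(flnip_bit(pixel=120, neighbor=100, center=150))  # 1: larger change vs. neighbor
```

Because the comparison is between two relative magnitudes, a single bit carries both sign and magnitude information, which is the property the abstract highlights.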
Evaluation of the Spatio-Temporal features and GAN for Micro-expression Recognition System
Owing to the development and advancement of artificial intelligence, numerous
works have been established on human facial expression recognition systems.
Meanwhile, the detection and classification of micro-expressions have attracted
increasing attention from various research communities in recent years. In this
paper, we first review the processes of a conventional optical-flow-based
recognition system, which comprises facial landmark annotation, computation of
optical-flow-guided images, feature extraction and emotion class
categorization. Secondly, a few approaches have been proposed to improve the
feature extraction part, such as exploiting GAN to generate more image samples.
Particularly, several variations of optical flow are computed in order to
generate optimal images to lead to high recognition accuracy. Next, GAN, a
combination of Generator and Discriminator, is utilized to generate new "fake"
images to increase the sample size. Thirdly, a modified state-of-the-art
convolutional neural network is proposed. To verify the effectiveness of the
proposed method, the results are evaluated on spontaneous micro-expression
databases, namely SMIC, CASME II and SAMM. Both the F1-score and accuracy
performance metrics are reported in this paper.
Comment: 15 pages, 16 figures, 6 tables
Local Neighborhood Intensity Pattern: A new texture feature descriptor for image retrieval
In this paper, a new texture descriptor based on the local neighborhood
intensity difference is proposed for content based image retrieval (CBIR). For
computation of texture features like the Local Binary Pattern (LBP), the center
pixel in a 3x3 window of an image is compared with all the remaining neighbors,
one pixel at a time, to generate a binary bit pattern. This ignores the effect of
the adjacent neighbors of a particular pixel for its binary encoding and also
for texture description. The proposed method is based on the concept that
neighbors of a particular pixel hold a significant amount of texture
information that can be considered for efficient texture representation for
CBIR. Taking this into account, we develop a new texture descriptor, named
Local Neighborhood Intensity Pattern (LNIP), which considers the relative
intensity difference between a particular pixel and the center pixel by
considering its adjacent neighbors, and generates a sign and a magnitude pattern.
Since sign and magnitude patterns hold complementary information to each other,
these two patterns are concatenated into a single feature descriptor to
generate a more concrete and useful feature descriptor. The proposed descriptor
has been tested for image retrieval on four databases, including three texture
image databases - Brodatz texture image database, MIT VisTex database and
Salzburg texture database and one face database AT&T face database. The
precision and recall values observed on these databases are compared with some
state-of-the-art local patterns. The proposed method showed a significant
improvement over many other existing methods.
Comment: Expert Systems with Applications (Elsevier)
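The baseline LBP comparison the abstract starts from (center of a 3x3 window versus each of its eight neighbors, one bit per comparison) can be sketched directly. The clockwise neighbor ordering below is a common convention, not something the paper mandates:

```python
# Minimal sketch of the classic LBP comparison described above: the center of
# a 3x3 window is compared with each of its eight neighbors to produce an
# 8-bit code. The neighbor ordering (clockwise from top-left) is an assumed,
# commonly used convention.

def lbp_code(window):
    """window: 3x3 list of grayscale intensities. Returns the 8-bit LBP code."""
    c = window[1][1]
    # Clockwise neighbor coordinates starting at the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(coords):
        if window[r][col] >= c:        # binary decision against the center
            code |= 1 << bit
    return code

window = [[90, 200, 90],
          [90, 100, 90],
          [90, 200, 90]]
print(lbp_code(window))  # only the two neighbors >= 100 set their bits
```

LNIP's critique is visible here: each neighbor is thresholded against the center in isolation, so the mutual relationships among adjacent neighbors never enter the code.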
From BoW to CNN: Two Decades of Texture Representation for Texture Classification
Texture is a fundamental characteristic of many types of images, and texture
representation is one of the essential and challenging problems in computer
vision and pattern recognition which has attracted extensive research
attention. Since 2000, texture representations based on Bag of Words (BoW) and
on Convolutional Neural Networks (CNNs) have been extensively studied with
impressive performance. Given this period of remarkable evolution, this paper
aims to present a comprehensive survey of advances in texture representation
over the last two decades. More than 200 major publications are cited in this
survey covering different aspects of the research, which includes (i) problem
description; (ii) recent advances in the broad categories of BoW-based,
CNN-based and attribute-based methods; and (iii) evaluation issues,
specifically benchmark datasets and state-of-the-art results. Looking back at
what has been achieved so far, the survey discusses open challenges and
directions for future research.
Comment: Accepted by IJC
Facial Expression Recognition Based on Complexity Perception Classification Algorithm
Facial expression recognition (FER) has always been a challenging issue in
computer vision. The different expressions of emotion and uncontrolled
environmental factors lead to inconsistency in the complexity of FER and
variability between expression categories, which is often overlooked in most
facial expression recognition systems. To solve this problem effectively, we
present a simple and efficient CNN model to extract facial features, and
propose a complexity perception classification (CPC) algorithm for FER. The
CPC algorithm divides the dataset into an easy classification sample subspace
and a complex classification sample subspace by evaluating the complexity of
facial features that are suitable for classification. The experimental results
of our proposed algorithm on the Fer2013 and CK-plus datasets demonstrate the
algorithm's effectiveness and superiority over other state-of-the-art
approaches.
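The partitioning idea behind CPC can be sketched as routing samples by a complexity score. The confidence-based score and threshold below are assumptions for illustration; the paper's complexity evaluation is more involved:

```python
# Hedged sketch of CPC-style partitioning: samples whose classification looks
# easy (high confidence) go to an "easy" subspace, the rest to a "complex"
# subspace. The confidence measure and threshold are illustrative assumptions.

def partition_by_complexity(samples, threshold=0.8):
    """samples: list of (sample_id, confidence) pairs.
    Returns (easy, complex_) lists of sample ids."""
    easy, complex_ = [], []
    for sample_id, confidence in samples:
        (easy if confidence >= threshold else complex_).append(sample_id)
    return easy, complex_

scored = [("img1", 0.95), ("img2", 0.40), ("img3", 0.85)]
easy, hard = partition_by_complexity(scored)
print(easy, hard)  # ['img1', 'img3'] ['img2']
```

Splitting the data this way lets the hard subspace receive dedicated handling instead of a single one-size-fits-all classifier.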
Local Jet Pattern: A Robust Descriptor for Texture Classification
Methods based on local image features have recently shown promise for texture
classification tasks, especially in the presence of large intra-class variation
due to illumination, scale, and viewpoint changes. Inspired by the theories of
image structure analysis, this paper presents a simple, efficient, yet robust
descriptor namely local jet pattern (LJP) for texture classification. In this
approach, a jet space representation of a texture image is derived from a set
of derivatives-of-Gaussian (DtG) filter responses up to second order, the
so-called local jet vectors (LJV), which also satisfy the scale-space properties.
The LJP is obtained by utilizing the relationship of center pixel with the
local neighborhood information in jet space. Finally, the feature vector of a
texture region is formed by concatenating the histogram of LJP for all elements
of LJV. All DtG responses up to second order together preserve the intrinsic
local image structure and achieve invariance to scale, rotation, and
reflection. This allows us to develop a texture classification framework which
is discriminative and robust. In extensive experiments on five standard texture
image databases, employing the nearest subspace classifier (NSC), the proposed
descriptor achieves 100%, 99.92%, 99.75%, 99.16%, and 99.65% accuracy on
Outex_TC-00010 (Outex_TC10), Outex_TC-00012 (Outex_TC12), KTH-TIPS,
Brodatz, and CUReT, respectively, outperforming the state-of-the-art
methods.
Comment: Accepted in Multimedia Tools and Applications, Springer
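The jet-space representation above rests on a bank of derivatives-of-Gaussian filters. A minimal sketch that builds the six separable DtG kernels up to second order (G, Gx, Gy, Gxx, Gxy, Gyy); the kernel half-width and sigma are illustrative assumptions, not values from the paper:

```python
# Sketch of a derivatives-of-Gaussian (DtG) filter bank: the six responses up
# to second order form the local jet vector at each pixel. Kernel size and
# sigma are assumed values for illustration.
import math

def gaussian_1d(sigma, order, half_width):
    """Sampled 1D Gaussian (order 0), or its first/second derivative."""
    xs = range(-half_width, half_width + 1)
    g = [math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
         for x in xs]
    if order == 0:
        return g
    if order == 1:
        return [(-x / (sigma * sigma)) * v for x, v in zip(xs, g)]
    # order == 2
    return [((x * x - sigma * sigma) / sigma ** 4) * v for x, v in zip(xs, g)]

def dtg_kernel(sigma, dx, dy, half_width=3):
    """Separable 2D DtG kernel: outer product of 1D factors along x and y."""
    kx = gaussian_1d(sigma, dx, half_width)
    ky = gaussian_1d(sigma, dy, half_width)
    return [[vy * vx for vx in kx] for vy in ky]

jet_orders = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]  # G .. Gyy
kernels = [dtg_kernel(1.0, dx, dy) for dx, dy in jet_orders]
print(len(kernels), len(kernels[0]), len(kernels[0][0]))  # 6 kernels of 7x7
```

Convolving an image with all six kernels yields, at each pixel, the local jet vector from which the LJP codes are then derived.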
A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics
Facial micro-expressions are very brief, spontaneous facial expressions that
appear on the face of humans when they either deliberately or unconsciously
conceal an emotion. Micro-expressions have a shorter duration than
macro-expressions, which makes them more challenging for both humans and
machines to detect. Over the past ten years, automatic micro-expression
recognition has attracted
increasing attention from researchers in psychology, computer science,
security, neuroscience and other related disciplines. The aim of this paper is
to provide insights into automatic micro-expression recognition and
recommendations for future research. Many datasets have been released over the
last decade, facilitating rapid growth in this field. However, comparison across
different datasets is difficult due to the inconsistency in experiment
protocol, features used and evaluation methods. To address these issues, we
review the datasets, features and the performance metrics deployed in the
literature. Relevant challenges such as the spatial temporal settings during
data collection, emotional classes versus objective classes in data labelling,
face regions in data analysis, standardisation of metrics and the requirements
for real-world implementation are discussed. We conclude by proposing some
promising future directions to advance micro-expression research.
Comment: Preprint submitted to IEEE Transaction