Improving Texture Categorization with Biologically Inspired Filtering
Within the domain of texture classification, considerable effort has been
devoted to local descriptors, leading to many powerful algorithms. However,
preprocessing techniques have received much less attention despite their
important potential for improving overall classification performance. We
address this question by proposing a novel, simple, yet very powerful
biologically-inspired filtering (BF) scheme that simulates the processing
performed by the human retina. In the proposed approach, given a texture
image, after applying a DoG filter to detect the "edges", we first split the
filtered image into two "maps" along the two sides of those edges. The
feature extraction step is then carried out on the two "maps" instead of the
input image. Our algorithm has several advantages, such as simplicity,
robustness to illumination and noise, and discriminative power. Experimental
results on three large texture databases show that, at an extremely low
computational cost, the proposed method significantly improves the
performance of many texture classification systems, notably in noisy
environments. The source code of the proposed algorithm can be downloaded
from https://sites.google.com/site/nsonvu/code.
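One plausible reading of the BF step described above can be sketched as follows. The abstract says a DoG filter detects the "edges" and the filtered image is then split into two "maps" on either side of those edges; here I assume the two maps are simply the positive (ON) and negative (OFF) parts of the DoG response, mimicking retinal ON/OFF channels. The sigma values are illustrative, not the paper's:

```python
# Hedged sketch of the biologically-inspired filtering (BF) preprocessing:
# DoG filter, then split the response into two "maps" (assumed here to be
# the ON and OFF half-wave-rectified responses).
import numpy as np
from scipy.ndimage import gaussian_filter

def bf_maps(image, sigma_inner=1.0, sigma_outer=2.0):
    """Return two BF 'maps': positive and negative parts of a DoG response."""
    img = image.astype(np.float64)
    dog = gaussian_filter(img, sigma_inner) - gaussian_filter(img, sigma_outer)
    on_map = np.maximum(dog, 0.0)    # responses on the bright side of edges
    off_map = np.maximum(-dog, 0.0)  # responses on the dark side of edges
    return on_map, off_map

# Toy texture: a vertical step edge yields complementary ON/OFF responses.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
on_map, off_map = bf_maps(img)
```

Feature extraction (e.g., local descriptors) would then run on `on_map` and `off_map` rather than on the raw image.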
Automatic Palaeographic Exploration of Genizah Manuscripts
The Cairo Genizah is a collection of hand-written documents containing
approximately 350,000 fragments of mainly Jewish texts discovered in the late
19th century. The fragments are today spread across some 75 libraries and
private collections worldwide, but there is an ongoing effort to document and
catalogue all extant fragments. Palaeographic information plays a key role in
the study of the Genizah collection. Script style, and, more specifically,
handwriting can be used to identify fragments that might originate from the
same original work. Such matched fragments, commonly referred to as "joins",
are currently identified manually by experts, and presumably only a small
fraction of existing joins have been discovered to date. In this work, we show
that automatic handwriting matching functions, obtained from non-specific
features using a corpus of writing samples, can perform this task quite
reliably. In addition, we explore the problem of grouping various Genizah
documents by script style, without being provided any prior information about
the relevant styles. The automatically obtained grouping agrees, for the most
part, with the palaeographic taxonomy. In cases where the method fails, it is
due to apparent similarities between related scripts.
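The join-finding task described above can be sketched at a high level: each fragment is reduced to a fixed-length feature vector (the paper learns a matching function from a corpus of writing samples; the random vectors and the cosine-similarity scoring below are stand-ins, not the authors' method), and candidate joins are ranked by similarity to a query fragment:

```python
# Hedged sketch of ranking candidate "joins" by handwriting similarity.
# Feature vectors here are random stand-ins for learned handwriting features.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
fragments = {f"frag_{i}": rng.normal(size=64) for i in range(5)}
# Make one fragment deliberately similar to frag_0, standing in for a true join.
fragments["frag_4"] = fragments["frag_0"] + 0.05 * rng.normal(size=64)

query = "frag_0"
ranked = sorted(
    (name for name in fragments if name != query),
    key=lambda name: cosine_similarity(fragments[query], fragments[name]),
    reverse=True,
)
# The near-duplicate fragment ranks first as the most likely join candidate.
```

The unsupervised grouping by script style mentioned in the abstract would correspond to clustering these same feature vectors without labels.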
Kinship Verification from Videos using Spatio-Temporal Texture Features and Deep Learning
Automatic kinship verification using facial images is a relatively new and
challenging research problem in computer vision. It consists of automatically
predicting whether two persons have a biological kin relation by examining
their facial attributes. While most existing works extract shallow
handcrafted features from still face images, we approach this problem from a
spatio-temporal point of view and explore the use of both shallow texture
features and deep features for characterizing faces. Promising results,
especially those of deep features, are obtained on the benchmark UvA-NEMO
Smile database. Our extensive experiments also show the superiority of using
videos over still images, pointing out the important role of facial dynamics
in kinship verification. Furthermore, the fusion of the two types of features
(i.e., shallow spatio-temporal texture features and deep features) yields
significant performance improvements compared to state-of-the-art methods.
One-shot learning of object categories
Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
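The prior-to-posterior update described in this abstract can be shown in a toy 1-D Gaussian setting (the paper uses probabilistic constellation models over many parameters; this scalar example is only a sketch of the principle). A prior over a model parameter, pooled from previously learned categories, is updated by a single training example, whereas maximum likelihood from one example simply returns that example:

```python
# Toy illustration of one-shot Bayesian learning with a conjugate Gaussian
# prior: the posterior mean shrinks the single observation toward prior
# knowledge from previously learned categories. All numbers are illustrative.

def posterior_mean(prior_mu, prior_var, x, noise_var):
    """Posterior mean of a Gaussian mean parameter after one observation x."""
    precision = 1.0 / prior_var + 1.0 / noise_var
    return (prior_mu / prior_var + x / noise_var) / precision

prior_mu, prior_var = 0.0, 1.0   # prior pooled from previously learned categories
noise_var = 1.0                  # assumed observation noise
x = 4.0                          # the single training example

ml_estimate = x                  # ML from one example: the example itself
bayes_estimate = posterior_mean(prior_mu, prior_var, x, noise_var)
# The Bayesian estimate lies between the prior mean and the lone observation,
# which is why it stays informative when ML would overfit one sample.
```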