Robust Feature Detection and Local Classification for Surfaces Based on Moment Analysis
The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions is quite difficult, yet indispensable for many surface processing applications. Usually, feature detection is done via a local curvature analysis. When dealing with large, irregular triangular grids, e.g., those generated via a marching cubes algorithm, such detectors are tedious to apply and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zeroth and first moments on the discrete surface. The corresponding integral quantities are stable to compute and give less noisy results than discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications include segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.
Morphological granulometry for classification of evolving and ordered texture images.
In this work we investigate the use of morphological granulometric moments as texture descriptors to predict the time or class of texture images which evolve over time or follow an intrinsic ordering. A cubic polynomial regression was used to model each of several granulometric moments as a function of time or class. These models are then combined and used to predict the time or class of a new image. The methodology was developed on synthetic images of evolving textures and then successfully applied to place a sequence of corrosion images on an evolution time scale. The classification performance of the new regression approach is compared to that of linear discriminant analysis, neural networks, and support vector machines. We also applied our method to images of black tea leaves, which are ordered according to granule size, attaining very high classification accuracy compared to existing published results for these images. We also found that granulometric moments provide much better classification than grey-level co-occurrence features for shape-based texture images.
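The granulometric moments underlying this approach can be sketched as follows. This is a minimal illustration, not the paper's implementation: openings by flat square structuring elements of increasing size yield a pattern spectrum (mass removed at each size step), whose moments describe granule size; `sliding_window_view` (NumPy >= 1.20) implements the min/max filters:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def opening(img, k):
    """Grey-scale opening by a k-by-k flat square: erosion (local min)
    followed by dilation (local max) with the same structuring element."""
    pad = k - 1
    padded = np.pad(img, pad, mode='constant')
    eroded = sliding_window_view(padded, (k, k)).min(axis=(2, 3))
    return sliding_window_view(eroded, (k, k)).max(axis=(2, 3))

def granulometric_moments(img, sizes, n_moments=3):
    """Pattern spectrum: mass removed by openings of increasing size,
    normalised to a distribution over size; return its first moments."""
    volumes = np.array([opening(img, k).sum() for k in sizes], dtype=float)
    spectrum = -np.diff(volumes)            # mass removed at each size step
    spectrum /= spectrum.sum()
    s = np.asarray(sizes[1:], dtype=float)
    return np.array([(spectrum * s**m).sum() for m in range(1, n_moments + 1)])

# Toy texture: one 3x3 and one 6x6 bright block on a dark background.
img = np.zeros((32, 32))
img[2:5, 2:5] = 1.0
img[12:18, 12:18] = 1.0
moments = granulometric_moments(img, sizes=[1, 2, 3, 4, 5, 6, 7])
# The first moment is the mean granule size: 0.2 * 4 + 0.8 * 7 = 6.4 here,
# since the 3x3 block (9 px) vanishes at size 4 and the 6x6 (36 px) at size 7.
```

The resulting moments are the per-image descriptors that the cubic regression models as a function of time or class.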
From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
Visual multimedia have become an inseparable part of our digital social
lives, and they often capture moments tied with deep affections. Automated
visual sentiment analysis tools can provide a means of extracting the rich
feelings and latent dispositions embedded in these media. In this work, we
explore how Convolutional Neural Networks (CNNs), a now de facto computational
machine learning tool particularly in the area of Computer Vision, can be
specifically applied to the task of visual sentiment prediction. We accomplish
this through fine-tuning experiments using a state-of-the-art CNN and via
rigorous architecture analysis, we present several modifications that lead to
accuracy improvements over prior art on a dataset of images from a popular
social media platform. We additionally present visualizations of local patterns
that the network learned to associate with image sentiment for insight into how
visual positivity (or negativity) is perceived by the model. Comment: Accepted for publication in Image and Vision Computing. Models and
source code available at https://github.com/imatge-upc/sentiment-201
FAME: Face Association through Model Evolution
We attack the problem of learning face models for public figures from
weakly-labelled images collected from the web by querying a name. The data is
very noisy even after face detection, with several irrelevant faces
corresponding to other people. We propose a novel method, Face Association
through Model Evolution (FAME), that is able to prune the data in an iterative
way, allowing the face models associated with a name to evolve. The idea is based on
capturing discriminativeness and representativeness of each instance and
eliminating the outliers. The final models are used to classify faces on novel
datasets with possibly different characteristics. On benchmark datasets, our
results are comparable to or better than state-of-the-art studies for the task
of face identification. Comment: Draft version of the study
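The iterate-and-prune idea can be caricatured in a few lines. This is loudly not the authors' FAME algorithm (which uses discriminative and representative scores); here the mean feature vector stands in for the face model, and the least representative instances are discarded each round so the model drifts toward the dominant identity:

```python
import numpy as np

def prune_iteratively(feats, n_rounds=3, drop_frac=0.2):
    """Illustrative model-evolution loop: alternately fit a simple model
    (the mean feature, a stand-in for a learned face model) and drop the
    fraction of instances farthest from it, i.e. the likely outliers."""
    keep = np.arange(len(feats))
    for _ in range(n_rounds):
        model = feats[keep].mean(axis=0)
        dists = np.linalg.norm(feats[keep] - model, axis=1)
        n_keep = max(1, int(len(keep) * (1 - drop_frac)))
        keep = keep[np.argsort(dists)[:n_keep]]   # keep the closest instances
    return keep

# Synthetic weakly-labelled pool: 40 features of the queried identity,
# 10 outliers belonging to other people.
rng = np.random.default_rng(2)
true = rng.standard_normal((40, 16)) * 0.3 + 1.0   # target identity cluster
noise = rng.standard_normal((10, 16)) * 0.3 - 1.0  # irrelevant faces
feats = np.vstack([true, noise])
kept = prune_iteratively(feats)                    # outlier indices (>= 40) are pruned
```

After a few rounds the surviving set contains only the dominant cluster, which is the behaviour the abstract describes at a much coarser level.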
History of art paintings through the lens of entropy and complexity
Art is the ultimate expression of human creativity that is deeply influenced
by the philosophy and culture of the corresponding historical epoch. The
quantitative analysis of art is therefore essential for better understanding
human cultural evolution. Here we present a large-scale quantitative analysis
of almost 140 thousand paintings, spanning nearly a millennium of art history.
Based on the local spatial patterns in the images of these paintings, we
estimate the permutation entropy and the statistical complexity of each
painting. These measures map the degree of visual order of artworks into a
scale of order-disorder and simplicity-complexity that locally reflects
qualitative categories proposed by art historians. The dynamical behavior of
these measures reveals a clear temporal evolution of art, marked by transitions
that agree with the main historical periods of art. Our research shows that
different artistic styles have a distinct average degree of entropy and
complexity, thus allowing a hierarchical organization and clustering of styles
according to these metrics. We have further verified that the identified groups
correspond well with the textual content used to qualitatively describe the
styles, and that the employed complexity-entropy measures can be used for an
effective classification of artworks. Comment: 10 two-column pages, 5 figures; accepted for publication in PNAS
[supplementary information available at
http://www.pnas.org/highwire/filestream/824089/field_highwire_adjunct_files/0/pnas.1800083115.sapp.pdf]
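The complexity-entropy plane described in this abstract can be sketched as follows. This assumes Bandt-Pompe ordinal patterns over 2x2 pixel patches (one common choice for images; the paper's exact patch shape may differ), normalised permutation entropy H, and statistical complexity C = H * D_JS(P, uniform) / D_max:

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy_complexity(img):
    """Ordinal statistics of 2x2 image patches (24 possible rank patterns).
    Returns (H, C): normalised permutation entropy and statistical
    complexity C = H * JS(p, uniform) / JS_max."""
    patterns = list(permutations(range(4)))
    counts = dict.fromkeys(patterns, 0)
    rows, cols = img.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            patch = (img[i, j], img[i, j + 1], img[i + 1, j], img[i + 1, j + 1])
            counts[tuple(np.argsort(patch))] += 1   # rank pattern of the patch
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    n = len(patterns)                               # 4! = 24

    def shannon(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    H = shannon(p) / math.log(n)                    # entropy, normalised to [0, 1]
    u = np.full(n, 1.0 / n)
    # Jensen-Shannon disequilibrium between p and the uniform distribution,
    # normalised by its maximum (attained by a delta distribution).
    js = shannon((p + u) / 2) - shannon(p) / 2 - shannon(u) / 2
    js_max = -0.5 * ((n + 1) / n * math.log(n + 1)
                     - 2 * math.log(2 * n) + math.log(n))
    return H, H * js / js_max

# Pure noise sits at the ordered-disordered extreme: H near 1, C near 0.
rng = np.random.default_rng(0)
H_noise, C_noise = permutation_entropy_complexity(rng.random((64, 64)))
```

Each painting is mapped to one (H, C) point; styles then cluster by their average position in this plane, which is what enables the temporal and hierarchical analysis the abstract reports.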
Classification of ordered texture images using regression modelling and granulometric features
Structural information available from the granulometry of an image has been used widely in image texture analysis and classification. In this paper we present a method for classifying texture images which follow an intrinsic ordering, using polynomial regression to express each granulometric moment as a function of the class label. Separate models are built for each individual moment and combined for back-prediction of the class label of a new image. The methodology was developed on synthetic images of evolving textures and tested on real images of 8 different grades of cut-tear-curl black tea leaves. For comparison, grey-level co-occurrence matrix (GLCM) based features were also computed, and both feature types were used in a range of classifiers including the regression approach. Experimental results demonstrate the superiority of the granulometric moments over GLCM-based features for classifying these tea images.
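The back-prediction step can be sketched directly. This is a hedged illustration with synthetic numbers, not the paper's data: each moment is modelled as a cubic polynomial in the class label, and a new image is assigned the label whose modelled moments best match its observed moments:

```python
import numpy as np

# Hypothetical training data: two granulometric moments measured for
# texture classes 1..8, generated here from smooth synthetic trends.
classes = np.arange(1, 9, dtype=float)
moment1 = 2.0 + 0.9 * classes + 0.02 * classes**2
moment2 = 5.0 - 0.3 * classes + 0.0005 * classes**3

# One cubic polynomial per moment, as a function of the class label.
coeffs = [np.polyfit(classes, m, deg=3) for m in (moment1, moment2)]

def predict_class(observed, coeffs, grid=np.linspace(1, 8, 701)):
    """Back-predict the class label: combine the per-moment models by
    choosing the label whose predicted moments minimise the total
    squared mismatch with the observed moments."""
    residual = sum((np.polyval(c, grid) - o) ** 2
                   for c, o in zip(coeffs, observed))
    return grid[np.argmin(residual)]

# A new image whose moments match class 5 should back-predict ~5.
obs = (np.polyval(coeffs[0], 5.0), np.polyval(coeffs[1], 5.0))
pred = predict_class(obs, coeffs)
```

Combining several moment models this way is what lets a small set of low-order descriptors recover a continuous position along the texture ordering.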
Deep Architectures and Ensembles for Semantic Video Classification
This work addresses the problem of accurate semantic labelling of short
videos. To this end, we evaluate a multitude of different deep nets, ranging from
traditional recurrent neural networks (LSTM, GRU) and temporally agnostic networks
(FV, VLAD, BoW) to fully connected neural networks with mid-stage AV fusion, among others.
Additionally, we propose a residual architecture-based DNN for video
classification, with state-of-the-art classification performance at
significantly reduced complexity. Furthermore, we propose four new approaches
to diversity-driven multi-net ensembling, one based on fast correlation measure
and three incorporating a DNN-based combiner. We show that significant
performance gains can be achieved by ensembling diverse nets and we investigate
factors contributing to high diversity. Based on the extensive YouTube-8M
dataset, we provide an in-depth evaluation and analysis of the ensembles'
behaviour. We
show that the performance of the ensemble is state-of-the-art achieving the
highest accuracy on the YouTube-8M Kaggle test data. We also evaluated the
ensemble of classifiers on the HMDB51 and UCF101 datasets, showing that the
resulting method achieves accuracy comparable to that of state-of-the-art
methods using similar input features.
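The abstract does not specify its "fast correlation measure", so the following is only an illustrative sketch of diversity-driven selection, not the authors' algorithm: greedily add the model whose predictions correlate least with those already chosen, then ensemble the selected subset:

```python
import numpy as np

def select_diverse(preds, n_pick):
    """Greedy diversity-driven selection: start from the first model, then
    repeatedly add the model whose predictions have the lowest mean absolute
    correlation with the already-chosen set. preds is (n_models, n_samples)."""
    corr = np.abs(np.corrcoef(preds))
    chosen = [0]
    while len(chosen) < n_pick:
        remaining = [m for m in range(len(preds)) if m not in chosen]
        scores = [corr[m, chosen].mean() for m in remaining]
        chosen.append(remaining[int(np.argmin(scores))])
    return chosen

# Synthetic prediction scores: models 0 and 1 are near-duplicates,
# model 2 is independent and therefore the diverse pick.
rng = np.random.default_rng(1)
base = rng.standard_normal(200)
preds = np.stack([
    base + 0.1 * rng.standard_normal(200),   # model 0
    base + 0.1 * rng.standard_normal(200),   # model 1: redundant with 0
    rng.standard_normal(200),                # model 2: uncorrelated
])
picked = select_diverse(preds, n_pick=2)     # prefers the uncorrelated model
```

The intuition matches the abstract's finding: ensembling redundant nets adds little, while low mutual correlation among members is what drives the gains.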
The Morphological Content of Ten EDisCS Clusters at 0.5 < z < 0.8
We describe Hubble Space Telescope (HST) imaging of 10 of the 20 ESO Distant
Cluster Survey (EDisCS) fields. Each ~40 square arcminute field was imaged in
the F814W filter with the Advanced Camera for Surveys Wide Field Camera. Based
on these data, we present visual morphological classifications for the ~920
sources per field that are brighter than I_auto=23 mag. We use these
classifications to quantify the morphological content of 10
intermediate-redshift (0.5 < z < 0.8) galaxy clusters within the HST survey
region. The EDisCS results, combined with previously published data from seven
higher redshift clusters, show no statistically significant evidence for
evolution in the mean fractions of elliptical, S0, and late-type (Sp+Irr)
galaxies in clusters over the redshift range 0.5 < z < 1.2. In contrast,
existing studies of lower redshift clusters have revealed a factor of ~2
increase in the typical S0 fraction between z=0.4 and z=0, accompanied by a
commensurate decrease in the Sp+Irr fraction and no evolution in the elliptical
fraction. The EDisCS clusters demonstrate that cluster morphological fractions
plateau beyond z ~ 0.4. They also exhibit a mild correlation between
morphological content and cluster velocity dispersion, highlighting the
importance of careful sample selection in evaluating evolution. We discuss
these findings in the context of a recently proposed scenario in which the
fractions of passive (E,S0) and star-forming (Sp,Irr) galaxies are determined
primarily by the growth history of clusters. Comment: 18 pages, 7 figures; to be published in ApJ; minor changes made to
table label