20,493 research outputs found
Why my photos look sideways or upside down? Detecting Canonical Orientation of Images using Convolutional Neural Networks
Image orientation detection requires high-level scene understanding. Humans
use object recognition and contextual scene information to correctly orient
images. In the literature, the problem of image orientation detection is mostly
addressed with low-level vision features, while some approaches incorporate a
few easily detectable semantic cues to gain minor improvements. The vast amount
of semantic content in images makes orientation detection challenging, and
there is therefore a large semantic gap between existing methods and human
performance. Moreover, existing methods report highly discrepant detection
rates, mainly due to large differences in datasets and the limited variety of
test images used for evaluation. In this work, for the first time, we leverage
the power of deep learning and adapt pre-trained convolutional neural networks,
using the largest training dataset to date, for the image orientation detection
task. An extensive evaluation of our model on different public datasets shows
that it generalizes remarkably well to correctly orient a large set of
unconstrained images; it also significantly outperforms the state of the art
and achieves accuracy very close to that of humans.
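The four-way orientation task described above can be sketched as follows. This is an illustrative outline, not the paper's implementation: a classifier predicts how far a photo has been rotated clockwise (0°, 90°, 180°, or 270°), and the correction step undoes that rotation. The function names `predict_orientation` and `correct_orientation` are hypothetical.

```python
import numpy as np

# Possible rotation classes (clockwise degrees) a 4-way CNN head would predict.
ORIENTATIONS = [0, 90, 180, 270]

def predict_orientation(logits):
    """Map 4-way softmax logits to a rotation angle in degrees (illustrative)."""
    return ORIENTATIONS[int(np.argmax(logits))]

def correct_orientation(image, angle):
    """Undo a clockwise rotation of `angle` degrees on an HxW[xC] array."""
    # np.rot90 rotates counter-clockwise, so k = angle // 90 quarter-turns
    # exactly cancel a clockwise rotation of the same angle.
    return np.rot90(image, k=angle // 90)

# A toy image rotated 90 degrees clockwise is restored exactly.
img = np.arange(6).reshape(2, 3)
rotated = np.rot90(img, k=-1)             # simulate a sideways photo
fixed = correct_orientation(rotated, 90)
assert np.array_equal(fixed, img)
```

In the paper's setting the logits would come from a fine-tuned pre-trained CNN; here they are just a placeholder input.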
Hybrid image representation methods for automatic image annotation: a survey
In most automatic image annotation systems, images are represented with
low-level features using either global methods or local methods. In global
methods, the entire image is used as a unit. Local methods divide images into
blocks, adopting fixed-size sub-image blocks as sub-units, or into regions,
using segmented regions as sub-units. In contrast to typical automatic image
annotation methods that use either global or local features exclusively,
several recent methods incorporate both kinds of information, on the premise
that combining the two levels of features is beneficial in annotating images.
In this paper, we provide a survey of automatic image annotation techniques
from the perspective of feature extraction and, to complement existing surveys
in the literature, focus on an emerging class of image annotation methods:
hybrid methods that combine global and local features for image representation.
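The global/local/hybrid distinction above can be made concrete with a minimal sketch. This is not a method from the survey; it simply concatenates a global intensity histogram (whole image as one unit) with per-block histograms (fixed-size sub-image blocks), illustrating how a hybrid representation carries both levels of information.

```python
import numpy as np

def global_feature(image, bins=8):
    """Global method: one normalized intensity histogram over the entire image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def local_features(image, grid=2, bins=8):
    """Local method: per-block histograms on a grid x grid block partition."""
    h, w = image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

def hybrid_feature(image):
    """Hybrid method: concatenation of the global and local descriptors."""
    return np.concatenate([global_feature(image), local_features(image)])

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
vec = hybrid_feature(img)
assert vec.shape == (8 + 4 * 8,)   # 8 global bins + 4 blocks x 8 bins each
```

Real hybrid annotation systems use far richer descriptors (color, texture, segmented regions), but the combination step is the same idea: one feature vector drawing on both scopes.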
Geometric results on linear actions of reductive Lie groups for applications to homogeneous dynamics
Several problems in number theory, when reformulated in terms of homogeneous
dynamics, involve the study of limiting distributions of translates of
algebraically defined measures on orbits of reductive groups. The general
non-divergence and linearization techniques, in view of Ratner's measure
classification for unipotent flows, reduce such problems to dynamical questions
about linear actions of reductive groups on finite-dimensional vector spaces.
This article provides general results which resolve these linear dynamical
questions in terms of natural group-theoretic or geometric conditions.
Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification
This paper proposes a novel deep learning framework named the
bidirectional-convolutional long short-term memory (Bi-CLSTM) network to
automatically learn spectral-spatial features from hyperspectral images (HSIs).
In the network, spectral feature extraction is treated as a sequence learning
problem, and a recurrent connection operator across the spectral domain is
used to address it. Meanwhile, inspired by the widely used convolutional
neural network (CNN), a convolution operator across the spatial domain is
incorporated into the network to extract spatial features. In addition, a
bidirectional recurrent connection is proposed to capture the spectral
information more fully. In the classification phase, the learned features are
concatenated into a vector and fed to a softmax classifier via a
fully-connected operator. To validate the effectiveness of the proposed
Bi-CLSTM framework, we compare it with several state-of-the-art methods,
including the CNN framework, on three widely used HSIs. The results show that
Bi-CLSTM improves classification performance compared with the other methods.
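The core idea above, treating spectral bands as a sequence scanned recurrently while a spatial convolution handles each band, can be sketched minimally. This is not the paper's Bi-CLSTM (a real ConvLSTM has gated updates); it is a simplified tanh convolutional recurrence run in both spectral directions, with forward and backward states concatenated for a downstream classifier. All function names and kernels here are illustrative.

```python
import numpy as np

def conv2d(x, k):
    """'Same' 3x3 cross-correlation with zero padding on an H x W array."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def conv_rnn_scan(bands, k_in, k_h):
    """Simplified convolutional recurrence over the spectral axis:
    each band updates a hidden map via input and recurrent 3x3 kernels."""
    h = np.zeros(bands[0].shape)
    states = []
    for b in bands:
        h = np.tanh(conv2d(b, k_in) + conv2d(h, k_h))
        states.append(h)
    return states

def bidirectional_features(bands, k_in, k_h):
    """Scan the band sequence in both directions and concatenate the states,
    mirroring the bidirectional design; flatten for a softmax classifier."""
    fwd = conv_rnn_scan(bands, k_in, k_h)
    bwd = conv_rnn_scan(bands[::-1], k_in, k_h)[::-1]
    return np.concatenate([np.stack(fwd), np.stack(bwd)], axis=0).ravel()

rng = np.random.default_rng(0)
bands = [rng.normal(size=(5, 5)) for _ in range(10)]     # 10 spectral bands
k_in = rng.normal(size=(3, 3))
k_h = rng.normal(size=(3, 3)) * 0.1                      # damp the recurrence
feat = bidirectional_features(bands, k_in, k_h)
assert feat.shape == (2 * 10 * 5 * 5,)                   # fwd + bwd states
```

The backward scan gives each band access to bands later in the spectrum, which is what the bidirectional connection in the abstract is for; the actual network replaces the plain tanh update with LSTM gates.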