Why my photos look sideways or upside down? Detecting Canonical Orientation of Images using Convolutional Neural Networks
Image orientation detection requires high-level scene understanding. Humans
use object recognition and contextual scene information to correctly orient
images. In the literature, the problem of image orientation detection is mostly
addressed using low-level vision features, while some approaches
incorporate a few easily detectable semantic cues to gain minor improvements. The
vast amount of semantic content in images makes orientation detection
challenging, and therefore there is a large semantic gap between existing
methods and human behavior. Existing methods in the literature also report highly
discrepant detection rates, mainly due to large differences in the
datasets and the limited variety of test images used for evaluation. In this work,
for the first time, we leverage the power of deep learning and adapt
pre-trained convolutional neural networks, using the largest training dataset
to date, for the image orientation detection task. An extensive evaluation of
our model on different public datasets shows that it remarkably generalizes to
correctly orient a large set of unconstrained images; it also significantly
outperforms the state-of-the-art and achieves accuracy very close to that of
humans.
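The abstract gives no implementation details, but the common setup it alludes to is a four-way classifier over the canonical rotations {0°, 90°, 180°, 270°}, whose prediction determines the counter-rotation that restores the photo. A minimal sketch of that inference step (the logits and their values below are invented for illustration, not taken from the paper):

```python
import math

# The four canonical classes: the clockwise rotation (in degrees)
# assumed to have been applied to the photo.
CLASSES = (0, 90, 180, 270)

def softmax(logits):
    """Convert raw classifier scores to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def corrective_rotation(logits):
    """Pick the most likely applied rotation and return the clockwise
    counter-rotation needed to restore canonical orientation."""
    probs = softmax(logits)
    applied = CLASSES[probs.index(max(probs))]
    return (360 - applied) % 360

# Hypothetical logits from a fine-tuned CNN for one photo:
logits = [0.2, 3.1, 0.4, -0.5]          # strongly suggests a 90° rotation
print(corrective_rotation(logits))       # 270: rotate 270° CW (i.e. 90° CCW)
```

The CNN itself would be a standard pre-trained backbone with its final layer replaced by this 4-way head; only the mapping from class to corrective rotation is shown here.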
A Joint 3D-2D based Method for Free Space Detection on Roads
In this paper, we address the problem of road segmentation and free space
detection in the context of autonomous driving. Traditional methods either use
3-dimensional (3D) cues such as point clouds obtained from LIDAR, RADAR or
stereo cameras or 2-dimensional (2D) cues such as lane markings, road
boundaries and object detection. Typical 3D point clouds do not have enough
resolution to detect fine differences in heights such as between road and
pavement. Image-based 2D cues fail on uneven road textures caused by
shadows, potholes, lane markings or road restoration. We propose a
novel free road space detection technique combining both 2D and 3D cues. In
particular, we use CNN based road segmentation from 2D images and plane/box
fitting on sparse depth data obtained from SLAM as priors to formulate an
energy minimization using a conditional random field (CRF) for road-pixel
classification. While the CNN learns the road texture and is unaffected by
depth boundaries, the 3D information helps in overcoming texture based
classification failures. Finally, we use the obtained road segmentation with
the 3D depth data from monocular SLAM to detect the free space for
navigation purposes. Our experiments on the KITTI odometry dataset, the CamVid dataset,
as well as videos captured by us, validate the superiority of the proposed
approach over the state of the art. Comment: Accepted for publication at IEEE WACV 201
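The abstract describes fusing a 2D appearance cue (CNN road probability) with a 3D geometric cue (deviation from a fitted ground plane) inside a CRF energy. The sketch below shows only the per-pixel unary term of such a fusion; the pairwise smoothness terms of a real CRF, and all weights and thresholds, are invented for illustration:

```python
import math

def unary_energy(p_road, height, plane_height, lam=5.0):
    """Energy for labelling one pixel 'road': a low CNN road probability
    or a large deviation from the fitted ground plane both raise the cost."""
    eps = 1e-9
    appearance = -math.log(max(p_road, eps))     # 2D cue (CNN segmentation)
    geometry = lam * abs(height - plane_height)  # 3D cue (plane fit on depth)
    return appearance + geometry

def classify(p_road, height, plane_height, threshold=1.0):
    """Label a pixel road if the fused energy is below a threshold.
    (A full CRF would add pairwise terms between neighbouring pixels.)"""
    return unary_energy(p_road, height, plane_height) < threshold

# Shadowed road pixel: the CNN is unsure (0.45) but the height matches the plane.
print(classify(0.45, height=0.02, plane_height=0.0))   # True
# Pavement pixel: the CNN likes the texture (0.8) but it sits 0.3 m above road.
print(classify(0.80, height=0.30, plane_height=0.0))   # False
```

This illustrates the complementarity the abstract claims: the 3D term overrides texture-based failures, and the 2D term survives where sparse depth is uninformative.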
The role of HG in the analysis of temporal iteration and interaural correlation
Image orientation detection using LBP-based features and logistic regression
Gianluigi Ciocca; Claudio Cusano; Raimondo Schettini
Change blindness: eradication of gestalt strategies
Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference seen in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored and retrieved from a pre-attentional store during this task.
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research. Comment: Submitted to the International Journal of Robotics Research (IJRR),
37 pages
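Surveys in this area typically group methods by motion model, and the simplest physics-based baseline they discuss is constant-velocity extrapolation. As a hedged illustration of that baseline (not an algorithm from the survey itself; the track and horizon below are invented):

```python
def predict_constant_velocity(track, horizon, dt=1.0):
    """Extrapolate the last observed velocity forward `horizon` steps.
    `track` is a list of (x, y) positions sampled every `dt` seconds."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# A pedestrian walking steadily along x:
observed = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(predict_constant_velocity(observed, horizon=3))
# [(3.0, 0.0), (4.0, 0.0), (5.0, 0.0)]
```

Learning-based and context-aware predictors in the taxonomy are usually evaluated against exactly this kind of baseline.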
Automatic object classification for surveillance videos.
PhD
The recent popularity of surveillance video systems, especially in urban
scenarios, demands the development of visual techniques for monitoring purposes.
A primary step towards intelligent surveillance video systems consists of automatic
object classification, which still remains an open research problem and the keystone
for the development of more specific applications.
Typically, object representation is based on the inherent visual features. However,
psychological studies have demonstrated that human beings can routinely categorise
objects according to their behaviour. The gap between the features a
computer can extract automatically, such as appearance-based features, and the
concepts that human beings perceive effortlessly but that remain unattainable
for machines, such as behaviour, is commonly known as the
semantic gap. Consequently, this thesis proposes to narrow the semantic gap
and bring machine and human understanding together for object classification.
Thus, a Surveillance Media Management framework is proposed to automatically detect and
classify objects by analysing the physical properties inherent in their appearance
(machine understanding) and the behaviour patterns which require a higher level of
understanding (human understanding). Finally, a probabilistic multimodal fusion
algorithm bridges the gap performing an automatic classification considering both
machine and human understanding.
The performance of the proposed Surveillance Media Management framework
has been thoroughly evaluated on outdoor surveillance datasets. The experiments
conducted demonstrated that the combination of machine and human understanding
substantially enhanced the object classification performance. Finally, the inclusion
of human reasoning and understanding provides the essential information to bridge
the semantic gap towards smart surveillance video systems.
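The abstract's "probabilistic multimodal fusion" of appearance (machine) and behaviour (human-level) cues is not specified further; one standard way to realise it is log-linear (weighted-product) fusion of the two class posteriors. A minimal sketch under that assumption, with all class names and probabilities invented:

```python
def fuse_posteriors(p_appearance, p_behaviour, w=0.5):
    """Log-linear (weighted product) fusion of two class posteriors.
    `w` trades off the appearance model against the behaviour model."""
    scores = {c: (p_appearance[c] ** w) * (p_behaviour[c] ** (1 - w))
              for c in p_appearance}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# Appearance alone confuses a cyclist with a pedestrian; the behaviour
# model (speed, trajectory) is confident it moves like a cyclist.
appearance = {"pedestrian": 0.55, "cyclist": 0.40, "car": 0.05}
behaviour  = {"pedestrian": 0.15, "cyclist": 0.80, "car": 0.05}
fused = fuse_posteriors(appearance, behaviour)
print(max(fused, key=fused.get))  # cyclist
```

The weighted product rewards classes both modalities agree on, which is the behaviour the thesis reports: combining machine and human understanding corrects appearance-only mistakes.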
Audio-coupled video content understanding of unconstrained video sequences
Unconstrained video understanding is a difficult task. The main aim of this thesis is to
recognise the nature of objects, activities and environment in a given video clip using
both audio and video information. Traditionally, audio and video information has not
been applied together to solve such a complex task, and for the first time we propose,
develop, implement and test a new framework of multi-modal (audio and video) data
analysis for context understanding and labelling of unconstrained videos.
The framework relies on feature selection techniques and introduces a novel algorithm
(PCFS) that is faster than the well-established SFFS algorithm. We use the framework for
studying the benefits of combining audio and video information in a number of different
problems. We begin by developing two independent content recognition modules. The
first one is based on image sequence analysis alone, and uses a range of colour, shape,
texture and statistical features from image regions with a trained classifier to recognise
the identity of objects, activities and environment present. The second module uses audio
information only, and recognises activities and environment. Both of these approaches
are preceded by detailed pre-processing to ensure that correct video segments containing
both audio and video content are present, and that the developed system can be made
robust to changes in camera movement, illumination, random object behaviour etc. For
both audio and video analysis, we use a hierarchical approach of multi-stage
classification such that difficult classification tasks can be decomposed into simpler and
smaller tasks.
When combining both modalities, we compare fusion techniques at different levels of
integration and propose a novel algorithm that combines advantages of both feature and
decision-level fusion. The analysis is evaluated on a large amount of test data comprising
unconstrained videos collected for this work. Finally, we propose a decision correction
algorithm which shows that further steps towards combining multi-modal classification
information effectively with semantic knowledge generate the best possible results.
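The thesis's PCFS algorithm is not publicly specified, but it is positioned against SFFS, whose skeleton is plain greedy forward selection plus conditional backward steps. As a hedged sketch, here is only the common forward-selection core (the toy utility scores are invented):

```python
def sequential_forward_selection(features, score, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves `score(subset)` until `k` features are chosen. SFFS extends
    this with conditional backward (floating) steps; PCFS is the thesis's
    faster variant and is not reproduced here."""
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: 'colour' and 'texture' are useful, 'noise' adds nothing.
utility = {"colour": 0.6, "texture": 0.3, "noise": 0.0}
score = lambda subset: sum(utility[f] for f in subset)
print(sequential_forward_selection(utility, score, k=2))
# ['colour', 'texture']
```

In practice `score` would be cross-validated classifier accuracy on the audio/video feature subset, which is what makes speedups like the one claimed for PCFS matter.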
Automatic Image Orientation Determination with Natural Image Statistics
In this paper, we propose a new method for automatically determining image orientations. This method is based on a set of natural image statistics collected from a multi-scale multi-orientation image decomposition (e.g., wavelets). From these statistics, a two-stage hierarchical classification with multiple binary SVM classifiers is employed to determine image orientation. The proposed method is evaluated and compared to existing methods with experiments performed on 18,040 natural images, where it showed promising performance.
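The key intuition behind such statistics is that natural scenes are anisotropic: rotating an image swaps the energy between its horizontal and vertical gradient distributions. A toy illustration of an orientation-sensitive statistic (mean squared pixel differences stand in for the paper's multi-scale wavelet statistics; the image and values are invented):

```python
def gradient_energies(img):
    """Mean squared horizontal and vertical pixel differences --
    crude stand-ins for multi-scale wavelet subband statistics."""
    h = [(row[j + 1] - row[j]) ** 2
         for row in img for j in range(len(row) - 1)]
    v = [(img[i + 1][j] - img[i][j]) ** 2
         for i in range(len(img) - 1) for j in range(len(img[0]))]
    return sum(h) / len(h), sum(v) / len(v)

def rotate90(img):
    """Rotate an image (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

# Toy 'landscape': horizontal stripes (sky/ground) -> strong vertical gradients.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [9, 9, 9, 9],
       [9, 9, 9, 9]]
print(gradient_energies(img))            # (0.0, 27.0)
print(gradient_energies(rotate90(img)))  # (27.0, 0.0)
```

A classifier fed such statistics can separate upright from rotated images precisely because rotation permutes these energy features, which is what the hierarchical SVM stage exploits.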