Compensating for Large In-Plane Rotations in Natural Images
Rotation invariance has been studied in the computer vision community
primarily in the context of small in-plane rotations. This is usually achieved
by building invariant image features. However, the problem of achieving
invariance for large rotation angles remains largely unexplored. In this work,
we tackle this problem by directly compensating for large rotations, as opposed
to building invariant features. This is inspired by the neuro-scientific
concept of mental rotation, which humans use to compare pairs of rotated
objects. Our contributions here are three-fold. First, we train a Convolutional
Neural Network (CNN) to detect image rotations. We find that generic CNN
architectures are not suitable for this purpose. To address this, we introduce a
convolutional template layer, which learns representations for canonical
'unrotated' images. Second, we use Bayesian Optimization to quickly sift
through a large number of candidate images to find the canonical 'unrotated'
image. Third, we use this method to achieve robustness to large angles in an
image retrieval scenario. Our method is task-agnostic, and can be used as a
pre-processing step in any computer vision system.
Comment: Accepted at Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP) 201
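As a rough illustration of the compensation idea only (not the paper's actual architecture), the canonical image can be recovered by scoring rotated candidates. In this sketch a plain grid search and an injected scoring function stand in for the paper's Bayesian optimisation and template-layer CNN, respectively; all names and the nearest-neighbour rotation are illustrative assumptions:

```python
import numpy as np

def rotate(img, angle_deg):
    """Nearest-neighbour rotation of a 2-D array about its centre
    (a toy stand-in for a proper image-rotation routine)."""
    h, w = img.shape
    t = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map each output pixel back to its source location
    sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)
    return img[sy, sx]

def compensate(img, score, candidates=range(0, 360, 30)):
    """Try each candidate rotation and keep the one the scorer rates
    as most 'canonical'. The scorer stands in for the template-layer
    CNN, and the grid search for the Bayesian optimisation."""
    best = max(candidates, key=lambda a: score(rotate(img, a)))
    return rotate(img, best), best
```

Because compensation happens on the image itself, the corrected output can feed any downstream system unchanged, which is what makes the approach task-agnostic.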
Symmetric hyperbolic monopoles
Hyperbolic monopole solutions can be obtained from circle-invariant ADHM data
if the curvature of hyperbolic space is suitably tuned. Here we give explicit
ADHM data corresponding to axial hyperbolic monopoles in a simple, tractable
form, as well as expressions for the axial monopole fields. The data is
deformed into new 1-parameter families preserving dihedral and twisted-line
symmetries. In many cases explicit expressions are presented for their spectral
curves and rational maps of both Donaldson and Jarvis type.
Comment: 20 pages, 1 figure
Topological Defects and Interactions in Nematic Emulsions
Inverse nematic emulsions in which surfactant-coated water droplets are
dispersed in a nematic host fluid have distinctive properties that set them
apart from dispersions of two isotropic fluids or of nematic droplets in an
isotropic fluid. We present a comprehensive theoretical study of the
distortions produced in the nematic host by the dispersed droplets and of
solvent mediated dipolar interactions between droplets that lead to their
experimentally observed chaining. A single droplet in a nematic host acts like
a macroscopic hedgehog defect. Global boundary conditions force the nucleation
of compensating topological defects in the nematic host. Using variational
techniques, we show that in the lowest energy configuration, a single water
droplet draws a single hedgehog out of the nematic host to form a tightly bound
dipole. Configurations in which the water droplet is encircled by a
disclination ring have higher energy. The droplet-dipole induces distortions in
the nematic host that lead to an effective dipole-dipole interaction between
droplets and hence to chaining.
Comment: 17 double-column pages prepared by RevTeX, 15 eps figures included in text, 2 gif figures for Fig. 1
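For context, the effective droplet interaction invoked above has the familiar dipolar angular dependence (shown schematically; the prefactor, which involves the Frank elastic constant and the droplet dipole moments, is part of the paper's detailed result and is not reproduced here):

```latex
U(\mathbf{r}) \;\propto\; \frac{p_1 p_2}{r^3}\left(1 - 3\cos^2\theta\right)
```

where $\theta$ is the angle between the droplet separation vector and the far-field director. The energy is lowest at $\theta = 0$, i.e. droplets attract along the director, which is consistent with the chaining described in the abstract.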
Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement
Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the differing motion of objects located at various distances. While motion parallax is evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is present in the human eye. During a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement, but also on the distance of the object with respect to the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax present in the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that the oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation benefit greatly from this cue.
National Science Foundation (BIC-0432104, CCF-0130851)
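The geometry behind this cue can be sketched in a simplified 2-D model, assuming the camera's nodal point sits a distance R in front of the rotation centre; the symbols (R, Z, theta) and both function names are illustrative assumptions, not taken from the paper. A point at depth Z on the initial optical axis lands, after a gaze rotation theta, at a retinal angle that depends on Z, and that dependence can be inverted:

```python
import numpy as np

def retinal_angle(Z, theta, R):
    """Bearing of a point at depth Z (initially on the optical axis),
    measured from the optical axis after the camera rotates by theta
    about a centre R behind its nodal point."""
    x_cam = -Z * np.sin(theta)       # lateral offset in the camera frame
    z_cam = Z * np.cos(theta) - R    # depth in the camera frame
    return np.arctan2(x_cam, z_cam)

def estimate_depth(alpha, theta, R):
    """Invert the projection: recover Z from the measured retinal
    angle alpha after a known gaze rotation theta."""
    t = np.tan(alpha)
    return R * t / (np.sin(theta) + np.cos(theta) * t)
```

Note that with R = 0 (rotation exactly about the nodal point) the retinal angle reduces to -theta regardless of Z, so no depth information survives; the cue exists only because the rotation centre and the nodal point are offset.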
Why my photos look sideways or upside down? Detecting Canonical Orientation of Images using Convolutional Neural Networks
Image orientation detection requires high-level scene understanding. Humans
use object recognition and contextual scene information to correctly orient
images. In the literature, the problem of image orientation detection is mostly
confronted using low-level vision features, while some approaches
incorporate a few easily detectable semantic cues to gain minor improvements. The
vast amount of semantic content in images makes orientation detection
challenging, and therefore there is a large semantic gap between existing
methods and human behavior. Also, existing methods in the literature report highly
discrepant detection rates, mainly due to large differences in
datasets and the limited variety of test images used for evaluation. In this work,
for the first time, we leverage the power of deep learning and adapt
pre-trained convolutional neural networks using the largest training dataset
to date for the image orientation detection task. An extensive evaluation of
our model on different public datasets shows that it remarkably generalizes to
correctly orient a large set of unconstrained images; it also significantly
outperforms the state-of-the-art and achieves accuracy very close to that of
humans.
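The detect-then-undo pipeline this abstract describes can be sketched in the common four-class setting (0°/90°/180°/270° rotations); the classifier is injected as a callable standing in for the fine-tuned CNN, and the function name is illustrative:

```python
import numpy as np

def correct_orientation(img, predict_rotation):
    """Detect how many 90-degree steps the image has been rotated,
    then apply the inverse rotation to restore the canonical
    orientation. `predict_rotation` returns k in {0, 1, 2, 3}."""
    k = predict_rotation(img)
    return np.rot90(img, k=-k)   # undo the detected rotation
```

Framing orientation detection as a four-way classification is what lets a pre-trained recognition CNN be adapted directly: the network's high-level scene understanding becomes the feature extractor, and only the small rotation head is new.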
Unobtrusive and pervasive video-based eye-gaze tracking
Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
peer-reviewed