Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance
Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT interacts with MSTv via an attentive feedback loop to compute accurate estimates of speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial Intelligence Agency (NMA201-01-1-2016)
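The attractor/repeller steering idea described above can be illustrated with a minimal sketch. The functional form and all gains below are illustrative stand-ins in the spirit of Fajen-Warren-style steering dynamics, not the ViSTARS circuitry:

```python
import numpy as np

def steering_rate(heading, goal_bearing, obstacle_bearings,
                  k_goal=2.0, k_obs=1.5, sigma=0.4):
    """Turn-rate command (rad/s) with the goal acting as an attractor
    and each obstacle as a repeller. All gains are illustrative."""
    rate = k_goal * (goal_bearing - heading)   # attractor: pull toward the goal
    for ob in obstacle_bearings:
        e = heading - ob
        # Repeller: push the heading away from the obstacle bearing,
        # with influence decaying as the angular offset grows.
        rate += k_obs * e * np.exp(-abs(e) / sigma)
    return rate

# Goal dead ahead, obstacle 0.2 rad to the left: the command steers right.
print(steering_rate(heading=0.0, goal_bearing=0.0, obstacle_bearings=[0.2]))
```

Integrating this rate over time yields trajectories in which goals attract the heading and obstacles deflect it, the qualitative behavior the abstract describes.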
Deep Lidar CNN to Understand the Dynamics of Moving Vehicles
Perception technologies in Autonomous Driving are experiencing their golden age due to the advances in Deep Learning. Yet most of these systems rely on the semantically rich information of RGB images, and Deep Learning solutions applied to data from the other sensors typically mounted on autonomous cars (e.g. lidars or radars) remain much less explored. In this paper we propose a novel solution for understanding the dynamics of moving vehicles in a scene from lidar information alone. The main challenge of this problem stems from the need to disambiguate the proprio-motion of the 'observer' vehicle from that of the external 'observed' vehicles. For this purpose, we devise a CNN architecture which at test time is fed with pairs of consecutive lidar scans. However, in order to properly learn the parameters of this network, during training we introduce a series of so-called pretext tasks which also leverage image data. These tasks include semantic information about vehicleness and a novel lidar-flow feature which combines standard image-based optical flow with lidar scans. We obtain very promising results and show that including distilled image information only during training improves the inference results of the network at test time, even when image data is no longer used.
Comment: Presented at IEEE ICRA 2018. IEEE Copyrights: Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses. (V2 just corrected comments on arXiv submission.)
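A minimal sketch of the test-time setup the abstract describes: a CNN fed with a pair of consecutive lidar scans (as range images) that predicts a per-cell motion field. The layer sizes and architecture below are invented for illustration and are not the paper's network:

```python
import torch
import torch.nn as nn

class LidarMotionNet(nn.Module):
    """Toy stand-in: two consecutive lidar range images stacked as
    channels, a small encoder-decoder predicting a 2D motion field."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),  # (dx, dy)
        )

    def forward(self, scan_t0, scan_t1):
        x = torch.cat([scan_t0, scan_t1], dim=1)  # pair of consecutive scans
        return self.decoder(self.encoder(x))

# At test time only the two lidar scans are needed; the image-based
# pretext tasks would add extra training losses on intermediate features.
net = LidarMotionNet()
motion = net(torch.rand(1, 1, 64, 512), torch.rand(1, 1, 64, 512))
print(motion.shape)  # torch.Size([1, 2, 64, 512])
```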
SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences
While most scene flow methods use either variational optimization or a strong
rigid motion assumption, we show for the first time that scene flow can also be
estimated by dense interpolation of sparse matches. To this end, we find sparse
matches across two stereo image pairs that are detected without any prior
regularization and perform dense interpolation preserving geometric and motion
boundaries by using edge information. A few iterations of variational energy
minimization are performed to refine our results, which are thoroughly
evaluated on the KITTI benchmark and additionally compared to state-of-the-art
on MPI Sintel. For application in an automotive context, we further show that
an optional ego-motion model helps to boost performance and blends smoothly
into our approach to produce a segmentation of the scene into static and
dynamic parts.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 2018.
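A rough sketch of the core idea, densifying sparse matches with edge-aware interpolation. Here a color difference stands in for "crossing an image boundary"; this weighting scheme and all parameters are illustrative simplifications of the paper's boundary-preserving interpolation:

```python
import numpy as np
from scipy.spatial import cKDTree

def densify(seeds_xy, seeds_flow, image, k=8, sigma_xy=20.0, sigma_col=10.0):
    """Interpolate sparse flow seeds to a dense field with edge-aware
    weights. seeds_xy: (n, 2) integer pixel coordinates (x, y);
    seeds_flow: (n, f) flow vectors; image: (h, w, 3) reference frame."""
    h, w = image.shape[:2]
    seed_col = image[seeds_xy[:, 1], seeds_xy[:, 0]].astype(float)  # (n, 3)
    tree = cKDTree(seeds_xy)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    dist, idx = tree.query(pix, k=k)                    # k nearest seeds
    # Weight decays with spatial distance and with color difference,
    # so seeds across an image edge contribute little.
    dcol = np.linalg.norm(image.reshape(-1, 1, 3) - seed_col[idx], axis=-1)
    wgt = np.exp(-dist / sigma_xy - dcol / sigma_col)
    dense = (wgt[..., None] * seeds_flow[idx]).sum(1) / wgt.sum(1, keepdims=True)
    return dense.reshape(h, w, -1)
```

A few iterations of variational refinement, as in the paper, would then smooth this initial dense field while respecting the recovered boundaries.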
A Generic Framework for Tracking Using Particle Filter With Dynamic Shape Prior
DOI: 10.1109/TIP.2007.894244
Tracking deforming objects involves estimating the global motion of the object and its local deformations as functions of time. Tracking algorithms using Kalman filters or particle filters (PFs) have been proposed for tracking such objects, but these have limitations due to the lack of dynamic shape information. In this paper, we propose a novel method based on employing a locally linear embedding in order to incorporate dynamic shape information into the particle filtering framework for tracking highly deformable objects in the presence of noise and clutter. The PF also models image statistics such as the mean and variance of the given data, which can be useful in obtaining proper separation of object and background.
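One predict/weight/resample cycle of the kind of filter the abstract describes might look as follows. The linear shape subspace here is a simplified stand-in for the paper's locally linear embedding, and `likelihood` is a user-supplied image-matching score, both assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, likelihood, shape_basis, mean_shape, noise=1.0):
    """One predict/weight/resample cycle. Each particle is
    [x, y, c_1..c_k]: a translation plus coefficients in a learned
    low-dimensional shape subspace. shape_basis: (2m, k);
    mean_shape: (2m,); likelihood(contour) scores an (m, 2) contour."""
    n = len(particles)
    particles = particles + rng.normal(0.0, noise, particles.shape)  # predict
    weights = np.empty(n)
    for i, p in enumerate(particles):
        # Reconstruct the contour from shape coefficients, then translate it.
        contour = (mean_shape + shape_basis @ p[2:]).reshape(-1, 2) + p[:2]
        weights[i] = likelihood(contour)
    weights /= weights.sum()
    resampled = rng.choice(n, size=n, p=weights)  # resample by weight
    return particles[resampled]
```

Constraining the shape coefficients to a learned subspace is what injects the dynamic shape prior: particles can only propose deformations the training shapes support.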
Online Visual Robot Tracking and Identification using Deep LSTM Networks
Collaborative robots working on a common task are necessary for many applications. One of the challenges for achieving collaboration in a team of robots is mutual tracking and identification. We present a novel pipeline for online vision-based detection, tracking and identification of robots with a known and identical appearance. Our method runs in real time on the limited hardware of the observer robot. Unlike previous works addressing robot tracking and identification, we use a data-driven approach based on recurrent neural networks to learn relations between sequential inputs and outputs. We formulate the data association problem as multiple classification problems. A deep LSTM network was trained on a simulated dataset and fine-tuned on a small set of real data. Experiments on two challenging datasets, one synthetic and one real, which include long-term occlusions, show promising results.
Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017. IROS RoboCup Best Paper Award.
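The formulation of data association as classification can be sketched with a toy recurrent model: an LSTM reads a sequence of per-frame detection features and emits logits over robot identities. The feature dimension, hidden size, and number of robots below are arbitrary assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class RobotID(nn.Module):
    """Toy version of the data-association idea: classify which robot
    a track of detections belongs to from its temporal behavior."""
    def __init__(self, feat_dim=8, hidden=64, n_robots=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_robots)

    def forward(self, tracks):        # tracks: (batch, time, feat_dim)
        out, _ = self.lstm(tracks)
        return self.head(out[:, -1])  # logits over robot identities

model = RobotID()
logits = model(torch.rand(2, 30, 8))  # two 30-frame tracks
print(logits.shape)                   # torch.Size([2, 4])
```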
Video foreground detection based on symmetric alpha-stable mixture models.
Background subtraction (BS) is an efficient technique for detecting moving objects in video sequences. A simple BS process involves building a model of the background and extracting regions of the foreground (moving objects) under the assumptions that the camera remains stationary and that there is no movement in the background. These assumptions restrict the applicability of BS methods to real-time object detection in video. In this paper, we propose an extended cluster BS technique with a mixture of symmetric alpha-stable (SαS) distributions. An on-line self-adaptive mechanism is presented that allows automated estimation of the model parameters using the log-moment method. Results over real video sequences from indoor and outdoor environments, with data from static and moving video cameras, are presented. The SαS mixture model is shown to improve the detection performance compared with a cluster BS method using a Gaussian mixture model and the method of Li et al. [11].
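The log-moment estimation the abstract mentions has a simple closed form for SαS data. Below is a sketch of a single-component, per-pixel variant; the real method maintains an adaptive mixture of SαS components with online updates, and the median centering and threshold factor here are illustrative assumptions:

```python
import numpy as np

EULER = 0.5772156649  # Euler-Mascheroni constant

def fit_sas(samples):
    """Log-moment estimates for a zero-centred SaS law: with Y = log|X|,
    Var(Y) = (pi^2/6)(1/alpha^2 + 1/2) and
    E(Y)   = EULER*(1/alpha - 1) + (1/alpha)*log(gamma)."""
    y = np.log(np.abs(samples) + 1e-12)
    inv_a2 = max(6.0 * y.var() / np.pi**2 - 0.5, 1e-6)
    alpha = min(1.0 / np.sqrt(inv_a2), 2.0)  # valid range (0, 2]
    gamma = np.exp(alpha * y.mean() + EULER * (alpha - 1.0))
    return alpha, gamma

def is_foreground(pixel, history, k=4.0):
    """Flag a pixel whose residual is improbably large under the SaS
    model fitted to its background history (k is an illustrative choice)."""
    med = np.median(history)
    alpha, gamma = fit_sas(history - med)
    scale = gamma ** (1.0 / alpha)  # rough scale, comparable to a std-dev
    return abs(pixel - med) > k * scale
```

The heavy tails of low-alpha SαS components are what let such a model absorb impulsive background variation that a Gaussian mixture would misclassify as foreground.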