Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems
Predicting the future location of vehicles is essential for safety-critical
applications such as advanced driver assistance systems (ADAS) and autonomous
driving. This paper introduces a novel approach to simultaneously predict both
the location and scale of target vehicles in the first-person (egocentric) view
of an ego-vehicle. We present a multi-stream recurrent neural network (RNN)
encoder-decoder model that captures object location and scale and pixel-level
observations in separate streams for future vehicle localization. We show that
incorporating dense optical flow improves prediction results significantly
since it captures information about motion as well as appearance change. We
also find that explicitly modeling future motion of the ego-vehicle improves
the prediction accuracy, which could be especially beneficial in intelligent
and automated vehicles that have motion planning capability. To evaluate the
performance of our approach, we present a new dataset of first-person videos
collected from a variety of scenarios at road intersections, which are
particularly challenging moments for prediction because vehicle trajectories
are diverse and dynamic.
Comment: To appear in ICRA 201
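The multi-stream encoder-decoder idea above can be sketched in a few lines. This is a minimal NumPy toy, not the paper's model: the weights are random and untrained, the stream dimensions (`d_box`, `d_flow`, `d_h`) are hypothetical, and pooled optical-flow features are stood in for by random vectors. Each past stream (bounding-box trajectory and flow features) is encoded by its own simple tanh recurrence, the final hidden states are fused, and a decoder unrolls future box predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encode(seq, W_x, W_h):
    # simple tanh (Elman-style) recurrence over a (T, d_in) sequence,
    # returning the final hidden state
    h = np.zeros(W_h.shape[0])
    for x in seq:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

T_past, T_future = 10, 5
d_box, d_flow, d_h = 4, 16, 32   # hypothetical sizes

# past observations: boxes (cx, cy, w, h) and pooled optical-flow features
past_boxes = rng.normal(size=(T_past, d_box))
past_flow  = rng.normal(size=(T_past, d_flow))

# one encoder per stream (random weights; learned jointly in practice)
Wx_box,  Wh_box  = 0.1 * rng.normal(size=(d_h, d_box)),  0.1 * rng.normal(size=(d_h, d_h))
Wx_flow, Wh_flow = 0.1 * rng.normal(size=(d_h, d_flow)), 0.1 * rng.normal(size=(d_h, d_h))

# fuse the two stream encodings into one hidden state
h = np.concatenate([rnn_encode(past_boxes, Wx_box, Wh_box),
                    rnn_encode(past_flow,  Wx_flow, Wh_flow)])

# decoder unrolls future bounding-box predictions from the fused state
Wh_dec = 0.1 * rng.normal(size=(2 * d_h, 2 * d_h))
W_out  = 0.1 * rng.normal(size=(d_box, 2 * d_h))

future = []
for _ in range(T_future):
    h = np.tanh(Wh_dec @ h)
    future.append(W_out @ h)   # predicted (cx, cy, w, h)
future = np.stack(future)
print(future.shape)            # (5, 4): T_future boxes
```

In the paper's setting the flow stream would come from dense optical flow and a third input would carry the planned ego-motion; here the point is only the two-encoder/one-decoder wiring.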
Index to 1984 NASA Tech Briefs, volume 9, numbers 1-4
Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for 1984 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief Number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.
Pyramidal Fisher Motion for Multiview Gait Recognition
The goal of this paper is to identify individuals by analyzing their gait.
Instead of using binary silhouettes as input data (as done in many previous
works) we propose and evaluate the use of motion descriptors based on densely
sampled short-term trajectories. We take advantage of state-of-the-art people
detectors to define custom spatial configurations of the descriptors around the
target person, thus obtaining a pyramidal representation of the gait motion.
The local motion features (described by the Divergence-Curl-Shear descriptor)
extracted on the different spatial areas of the person are combined into a
single high-level gait descriptor by using the Fisher Vector encoding. The
proposed approach, coined Pyramidal Fisher Motion, is experimentally validated
on the recent `AVA Multiview Gait' dataset. The results show that this new
approach achieves promising results in the problem of gait recognition.
Comment: Submitted to the International Conference on Pattern Recognition, ICPR, 201
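The Fisher Vector step described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the GMM parameters are random stand-ins (in practice the GMM is fitted on training descriptors), and the 6-dimensional local descriptors are random placeholders for the divergence-curl-shear features of one spatial cell. The encoding computes gradients of the descriptor log-likelihood with respect to the GMM means and variances, followed by the usual power and L2 normalization:

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_vector(X, w, mu, sigma):
    """Fisher Vector of local descriptors X (N, d) under a diagonal GMM
    with weights w (K,), means mu (K, d), std devs sigma (K, d)."""
    N, d = X.shape
    # posterior responsibilities gamma (N, K) under the diagonal Gaussians
    logp = (-0.5 * (((X[:, None, :] - mu) / sigma) ** 2
                    + 2.0 * np.log(sigma)).sum(-1)
            + np.log(w))
    logp -= logp.max(axis=1, keepdims=True)        # numerical stability
    gamma = np.exp(logp)
    gamma /= gamma.sum(axis=1, keepdims=True)
    diff = (X[:, None, :] - mu) / sigma            # (N, K, d) whitened residuals
    # gradients w.r.t. means and variances, one (K, d) block each
    g_mu  = (gamma[..., None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_sig = (gamma[..., None] * (diff ** 2 - 1.0)).sum(0) / (N * np.sqrt(2.0 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))         # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)       # L2 normalization

# hypothetical divergence-curl-shear descriptors from one spatial cell
X = rng.normal(size=(200, 6))
K, d = 8, X.shape[1]
w = np.full(K, 1.0 / K)                            # random stand-in GMM
mu = rng.normal(size=(K, d))
sigma = np.ones((K, d))

fv = fisher_vector(X, w, mu, sigma)
print(fv.shape)                                    # (2 * K * d,) = (96,)
```

In the pyramidal scheme, one such vector per spatial cell would be concatenated into the final high-level gait descriptor.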