LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning
We present a novel procedural framework to generate an arbitrary number of
labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to
design accurate algorithms or training models for crowded scene understanding.
Our overall approach is composed of two components: a procedural simulation
framework for generating crowd movements and behaviors, and a procedural
rendering framework to generate different videos or images. Each video or image
is automatically labeled based on the environment, number of pedestrians,
density, behavior, flow, lighting conditions, viewpoint, noise, etc.
Furthermore, we can increase the realism by combining synthetically-generated
behaviors with real-world background videos. We demonstrate the benefits of
LCrowdV over prior labeled crowd datasets by improving the accuracy of
pedestrian detection and crowd behavior classification algorithms. LCrowdV
will be released on the WWW.
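The parameter-combination labeling described above can be sketched as follows; all parameter names, value ranges, and pedestrian counts here are illustrative assumptions, not LCrowdV's actual generator or label set.

```python
import itertools
import random

# Hypothetical parameter space; the real framework also varies behavior,
# flow, noise, and other factors.
ENVIRONMENTS = ["street", "mall", "stadium"]
DENSITIES = ["low", "medium", "high"]
LIGHTING = ["day", "dusk", "night"]
VIEWPOINTS = ["overhead", "eye-level"]

def generate_labeled_samples(n, seed=0):
    """Sample n crowd-video configurations, each carrying its labels."""
    rng = random.Random(seed)
    space = list(itertools.product(ENVIRONMENTS, DENSITIES, LIGHTING, VIEWPOINTS))
    samples = []
    for i in range(n):
        env, density, light, view = rng.choice(space)
        samples.append({
            "video_id": i,
            "environment": env,
            "density": density,
            "lighting": light,
            "viewpoint": view,
            # Pedestrian count drawn consistently with the density label.
            "num_pedestrians": {"low": rng.randint(1, 20),
                                "medium": rng.randint(20, 100),
                                "high": rng.randint(100, 500)}[density],
        })
    return samples

samples = generate_labeled_samples(3)
```

Because every sample is produced from known simulation parameters, its labels come for free, which is the key advantage of procedural generation over manual annotation.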
Role Playing Learning for Socially Concomitant Mobile Robot Navigation
In this paper, we present the Role Playing Learning (RPL) scheme for a mobile
robot to navigate socially with its human companion in populated environments.
Neural networks (NN) are constructed to parameterize a stochastic policy that
directly maps sensory data collected by the robot to its velocity outputs,
while respecting a set of social norms. An efficient simulative learning
environment is built with maps and pedestrian trajectories collected from a
number of real-world crowd data sets. In each learning iteration, a robot
equipped with the NN policy is created virtually in the learning environment to
play the role of an accompanying pedestrian and navigate towards a goal in a
socially concomitant manner. Thus, we call this process Role Playing Learning,
which is
formulated under a reinforcement learning (RL) framework. The NN policy is
optimized end-to-end using Trust Region Policy Optimization (TRPO), with
consideration of the imperfections of the robot's sensor measurements.
Simulative and experimental results are provided to demonstrate the efficacy
and superiority of our method.
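The abstract's stochastic policy, which maps sensory data directly to velocity outputs, can be sketched as a Gaussian policy over a small neural network. The dimensions, architecture, and observation layout below are illustrative assumptions; the paper optimizes its policy with TRPO, which is not reproduced here.

```python
import numpy as np

class GaussianPolicy:
    """Minimal stochastic policy: sensor vector -> velocity command."""

    def __init__(self, obs_dim, act_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, act_dim))
        self.log_std = np.full(act_dim, -0.5)  # exploration noise scale

    def mean(self, obs):
        h = np.tanh(obs @ self.W1)   # one hidden layer
        return h @ self.W2           # mean velocity, e.g. [linear, angular]

    def sample(self, obs, rng):
        """Draw a stochastic action around the mean velocity."""
        mu = self.mean(obs)
        return mu + np.exp(self.log_std) * rng.standard_normal(mu.shape)

policy = GaussianPolicy(obs_dim=8, act_dim=2)
rng = np.random.default_rng(1)
obs = rng.standard_normal(8)          # e.g. range readings + goal direction
velocity = policy.sample(obs, rng)    # stochastic [v, omega] command
```

The stochasticity is what makes policy-gradient methods such as TRPO applicable: the gradient of expected return is taken through the sampling distribution rather than the deterministic output.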
leave a trace - A People Tracking System Meets Anomaly Detection
Video surveillance has always had a negative connotation, among other reasons
because of the loss of privacy and because it may not automatically increase
public safety. If it were able to detect atypical (i.e. dangerous) situations
in real time, autonomously and anonymously, this could change. A prerequisite
for this
is a reliable automatic detection of possibly dangerous situations from video
data. This is done classically by object extraction and tracking. From the
derived trajectories, we then want to determine dangerous situations by
detecting atypical trajectories. However, for ethical reasons it is better to
develop such a system on data in which no people are threatened or harmed, and
in which they know that such a tracking system is installed. Another important
point is that these situations do not occur very often in real, public CCTV
areas, and are properly captured on video even less often. In the artistic
project leave a trace, the tracked objects, people in the atrium of an
institutional building, become actors and thus part of the installation.
Visualisation in real-time allows interaction by these actors, which in turn
creates many atypical interaction situations on which we can develop our
situation detection. The data set has grown over three years and is therefore
large. In this article we describe the tracking system and several approaches
for the detection of atypical trajectories.
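One classical way to flag atypical trajectories, sketched here as a minimal illustration rather than the system's actual detector, is a nearest-neighbor distance score: a trajectory far from every other trajectory in the data set is a candidate anomaly. The distance measure and the example data are illustrative choices.

```python
import math

def traj_distance(a, b):
    """Mean distance from each point of trajectory a to its closest point on b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def atypical_scores(trajectories):
    """Score each trajectory by its distance to its nearest neighbor."""
    scores = []
    for i, t in enumerate(trajectories):
        others = [u for j, u in enumerate(trajectories) if j != i]
        scores.append(min(traj_distance(t, u) for u in others))
    return scores  # a high score means: far from every other trajectory

# Three similar straight walks and one erratic circular path.
normal = [[(float(x), 0.0) for x in range(10)],
          [(float(x), 0.2) for x in range(10)],
          [(float(x), -0.1) for x in range(10)]]
odd = [(5 + 3 * math.cos(k), 5 + 3 * math.sin(k)) for k in range(10)]
scores = atypical_scores(normal + [odd])
```

In practice a threshold on such a score (or a density-based variant) separates typical from atypical motion; the circular path above receives the largest score because no similar trajectory exists in the set.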
It's the Human that Matters: Accurate User Orientation Estimation for Mobile Computing Applications
Ubiquity of Internet-connected and sensor-equipped portable devices sparked a
new set of mobile computing applications that leverage the proliferating
sensing capabilities of smart-phones. For many of these applications, accurate
estimation of the user heading, as compared to the phone heading, is of
paramount importance. This is of special importance for many crowd-sensing
applications, where the phone can be carried in arbitrary positions and
orientations relative to the user body. Current state-of-the-art approaches
focus mainly on estimating the phone orientation, require the phone to be
placed in a particular position, require user intervention, and/or do not work
accurately indoors, which limits their ubiquitous usability in different
applications. In
this paper we present Humaine, a novel system to reliably and accurately
estimate the user orientation relative to the Earth coordinate system.
Humaine requires neither prior configuration nor user intervention and works
accurately indoors and outdoors for arbitrary cell phone positions and
orientations relative to the user body. The system applies statistical analysis
techniques to the inertial sensors widely available on today's cell phones to
estimate both the phone and user orientation. Implementation of the system on
different Android devices with 170 experiments performed at different indoor
and outdoor testbeds shows that Humaine significantly outperforms the
state-of-the-art in diverse scenarios, achieving a median accuracy, averaged
over a wide variety of phone positions, that is better than the
state-of-the-art. The accuracy is bounded by the error in the inertial sensor
readings and can be enhanced with more accurate sensors and sensor fusion.

Comment: Accepted for publication in the 11th International Conference on
Mobile and Ubiquitous Systems: Computing, Networking and Services
(Mobiquitous 2014).
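As background for the inertial-sensor processing such systems build on, the classical tilt-compensated heading computation from accelerometer and magnetometer readings can be sketched as follows. This illustrates only the phone-heading building block, not Humaine's statistical user-orientation estimation; axis conventions vary between sensor frames, and the sample readings are hypothetical.

```python
import math

def tilt_compensated_heading(accel, mag):
    """Heading in degrees from 3-axis accelerometer and magnetometer readings
    (one common phone-frame convention; signs differ between devices)."""
    ax, ay, az = accel
    # Estimate tilt from the gravity direction.
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    mx, my, mz = mag
    # Rotate the magnetic field vector into the horizontal plane.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-yh, xh)) % 360.0

# Hypothetical flat-phone reading: gravity along +z, magnetic north along +x.
heading = tilt_compensated_heading((0.0, 0.0, 9.81), (30.0, 0.0, -40.0))
```

Because accelerometer readings mix gravity with motion and magnetometers suffer indoor distortion, a raw computation like this is noisy, which is why statistical filtering over many samples, as the abstract describes, is needed for reliable estimates.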