How simple rules determine pedestrian behavior and crowd disasters
With the increasing size and frequency of mass events, the study of crowd
disasters and the simulation of pedestrian flows have become important research
areas. Yet, even successful modeling approaches such as those inspired by
Newtonian force models are still not fully consistent with empirical
observations and are sometimes hard to calibrate. Here, a novel cognitive
science approach is proposed, which is based on behavioral heuristics. We
suggest that, guided by visual information, namely the distance of obstructions
in candidate lines of sight, pedestrians apply two simple cognitive procedures
to adapt their walking speeds and directions. While simpler than previous
approaches, this model predicts individual trajectories and collective patterns
of motion in good quantitative agreement with a large variety of empirical and
experimental data. This includes the emergence of self-organization phenomena,
such as the spontaneous formation of unidirectional lanes or stop-and-go waves.
Moreover, the combination of pedestrian heuristics with body collisions
generates crowd turbulence at extreme densities, a phenomenon that has been
observed during recent crowd disasters. By proposing an integrated treatment of
simultaneous interactions between multiple individuals, our approach overcomes
limitations of current physics-inspired pair interaction models. Understanding
crowd dynamics through cognitive heuristics is therefore not only crucial for a
better preparation of safe mass events. It also clears the way for a more
realistic modeling of collective social behaviors, in particular of human
crowds and biological swarms. Furthermore, our behavioral heuristics may serve
to improve the navigation of autonomous robots.
Comment: Article accepted for publication in PNAS
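To make the two heuristics concrete, here is a minimal Python sketch under simplifying assumptions: a flat grid of candidate gaze directions, obstruction distances capped at a horizon d_max, and illustrative constants (the function names, the angle grid, and the values of d_max, tau, and v_comfort are assumptions for illustration, not the authors' published calibration).

```python
import numpy as np

def choose_direction(f, alpha0, alphas, d_max=10.0):
    """Heuristic 1: pick the candidate line of sight minimizing the expected
    distance to the destination, given the distance f(alpha) to the first
    obstruction in each candidate direction alpha (capped at d_max).
    alpha0 is the direction of the destination."""
    # Law of cosines: distance left to the goal if the walker heads toward
    # alpha and is stopped at the first obstruction.
    d = np.sqrt(np.maximum(0.0, d_max**2 + f**2
                           - 2.0 * d_max * f * np.cos(alpha0 - alphas)))
    return alphas[np.argmin(d)]

def choose_speed(dist_ahead, v_comfort=1.3, tau=0.5):
    """Heuristic 2: keep the time to collision at tau or more by capping the
    speed at dist_ahead / tau, never exceeding the comfortable speed."""
    return min(v_comfort, dist_ahead / tau)

# A free field of view except for an obstruction 2 m straight ahead: the
# chosen direction sidesteps the obstruction, and the comfortable speed is
# kept because 2 m / 0.5 s = 4 m/s exceeds 1.3 m/s.
alphas = np.linspace(-np.pi / 2, np.pi / 2, 91)
f = np.full_like(alphas, 10.0)
f[45] = 2.0  # obstruction 2 m away at alpha = 0
print(choose_direction(f, alpha0=0.0, alphas=alphas), choose_speed(2.0))
```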
A Visual Formalism for Interacting Systems
Interacting systems are increasingly common. Many examples pervade our
everyday lives: automobiles, aircraft, defense systems, telephone switching
systems, financial systems, national governments, and so on. Closer to computer
science, embedded systems and Systems of Systems are further examples of
interacting systems. Common to all of these is that some "whole" is made up of
constituent parts, and these parts interact with each other. By design, these
interactions are intentional, but it is the unintended interactions that are
problematic. The Systems of Systems literature uses the terms "constituent
systems" and "constituents" to refer to systems that interact with each other.
That practice is followed here. This paper presents a visual formalism, Swim
Lane Event-Driven Petri Nets, proposed as a basis for Model-Based Testing
(MBT) of interacting systems. In the absence of available tools, this
model can only support the offline form of Model-Based Testing.
Comment: In Proceedings MBT 2015, arXiv:1504.0192
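As a rough illustration of the kind of structure such a formalism could capture, the sketch below implements a token-based Petri net whose transitions are tagged with the constituent ("swim lane") that owns them; firing enabled transitions in sequence yields the kind of event trace an offline Model-Based Testing run would turn into a test case. All class and field names here are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    name: str
    lane: str       # constituent system (swim lane) that owns this transition
    inputs: list    # places that must each hold a token for the transition to fire
    outputs: list   # places that receive a token when the transition fires

@dataclass
class SwimLaneNet:
    transitions: list
    marking: dict = field(default_factory=dict)  # place name -> token count

    def enabled(self):
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) > 0 for p in t.inputs)]

    def fire(self, t):
        for p in t.inputs:
            self.marking[p] -= 1
        for p in t.outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two constituents interacting through a shared place: the intended
# interaction is explicit, and unintended ones would show up as extra arcs.
net = SwimLaneNet(
    transitions=[
        Transition("send", "ClientSystem", inputs=["idle"], outputs=["request"]),
        Transition("serve", "ServerSystem", inputs=["request"], outputs=["done"]),
    ],
    marking={"idle": 1},
)
while net.enabled():
    t = net.enabled()[0]
    print(f"[{t.lane}] {t.name}")  # the trace doubles as an offline test sequence
    net.fire(t)
```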
Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework that highlights the evolution of the area, with techniques
moving from heavily constrained motion capture scenarios towards more
challenging, realistic, "in the wild" videos. The proposed organization is
based on the representation used as input for the recognition task, emphasizing
the hypotheses assumed and, thus, the constraints imposed on the type of video
that each technique is able to address. Making these hypotheses and
constraints explicit renders the framework particularly useful for selecting a
method for a given application. Another advantage of the proposed organization is that it
allows categorizing the newest approaches seamlessly alongside traditional ones, while
providing an insightful perspective on the evolution of the action recognition
task up to now. That perspective is the basis for the discussion at the end of
the paper, where we also present the main open issues in the area.
Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables
Visual Model-Driven Design, Verification and Implementation of Security Protocols
A novel visual model-driven approach to security protocol design, verification, and implementation is presented in this paper. User-friendly graphical models are combined with rigorous formal methods to enable protocol verification and sound automatic code generation. Domain-specific abstractions keep the graphical models simple, yet powerful enough to represent complex, realistic protocols such as SSH. The main contribution is to bring together aspects that were only partially available or not available at all in previous proposals.
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection.
Funding: Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
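A minimal numerical sketch of the attractor/repeller idea (not the full neural circuitry): the second-order heading dynamics and the constants below follow steering-dynamics fits of the Fajen and Warren type and are used here only as plausible defaults; the function name and the Euler integration are assumptions for illustration.

```python
import numpy as np

def heading_step(phi, dphi, goal_angle, goal_dist, obstacles, dt=0.01,
                 b=3.25, k_g=7.5, c1=0.40, c2=0.40, k_o=198.0, c3=6.5, c4=0.8):
    """One Euler step of heading dynamics: the goal attracts the heading phi,
    obstacles repel it, with strengths that decay with distance and with
    angular offset from the current heading."""
    ddphi = -b * dphi  # damping
    # Goal acts as an attractor of heading; its pull weakens with distance.
    ddphi -= k_g * (phi - goal_angle) * (np.exp(-c1 * goal_dist) + c2)
    # Each obstacle acts as a repeller; its push decays with angular offset
    # and with distance to the obstacle.
    for obs_angle, obs_dist in obstacles:
        ddphi += (k_o * (phi - obs_angle)
                  * np.exp(-c3 * abs(phi - obs_angle))
                  * np.exp(-c4 * obs_dist))
    dphi += ddphi * dt
    phi += dphi * dt
    return phi, dphi

# Goal 8 m ahead at 0 rad, obstacle 3 m away slightly to the right (0.1 rad):
phi, dphi = 0.0, 0.0
for _ in range(500):
    phi, dphi = heading_step(phi, dphi, goal_angle=0.0, goal_dist=8.0,
                             obstacles=[(0.1, 3.0)])
print(round(phi, 3))  # heading deflects to the side away from the obstacle
```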
Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data
Object manipulation actions represent an important share of the Activities of
Daily Living (ADLs). In this work, we study how to enable service robots to use
human multi-modal data to understand object manipulation actions, and how they
can recognize such actions when humans perform them during human-robot
collaboration tasks. The multi-modal data in this study consists of videos,
hand motion data, applied forces as represented by the pressure patterns on the
hand, and measurements of the bending of the fingers, collected as human
subjects performed manipulation actions. We investigate two different
approaches. In the first one, we show that the multi-modal signal (motion, finger
bending and hand pressure) generated by the action can be decomposed into a set
of primitives that can be seen as its building blocks. These primitives are
used to define 24 multi-modal primitive features. The primitive features can in
turn be used as an abstract representation of the multi-modal signal and
employed for action recognition. In the second approach, the visual features
are extracted from the data using a pre-trained image classification deep
convolutional neural network. The visual features are subsequently used to
train the classifier. We also investigate whether adding data from other
modalities produces a statistically significant improvement in the classifier
performance. We show that both approaches produce comparable performance.
This implies that image-based methods can successfully recognize human actions
during human-robot collaboration. On the other hand, in order to provide
training data for the robot so it can learn how to perform object manipulation
actions, multi-modal data provides a better alternative.
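A minimal sketch of the second approach and the significance check, on random stand-in data (the SVC classifier, the feature dimensions, and the paired t-test over cross-validation folds are illustrative assumptions; the abstract does not name the exact classifier or statistical test used).

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for the real data: per-clip visual features from a pre-trained
# CNN (e.g., penultimate-layer activations) and per-clip summaries of the
# other modalities (hand motion, finger bending, pressure patterns).
n_clips, n_classes = 120, 6
visual = rng.normal(size=(n_clips, 512))   # hypothetical CNN features
other = rng.normal(size=(n_clips, 64))     # hypothetical sensor features
labels = rng.integers(0, n_classes, size=n_clips)

# Accuracy of a classifier on visual features alone vs. visual + other modalities.
acc_visual = cross_val_score(SVC(), visual, labels, cv=5)
acc_fused = cross_val_score(SVC(), np.hstack([visual, other]), labels, cv=5)

# With real features one would then ask whether the fused model's gain is
# statistically significant, e.g. via a paired test over the folds.
print(acc_visual.mean(), acc_fused.mean(), ttest_rel(acc_visual, acc_fused).pvalue)
```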