How simple rules determine pedestrian behavior and crowd disasters
With the increasing size and frequency of mass events, the study of crowd
disasters and the simulation of pedestrian flows have become important research
areas. Yet, even successful modeling approaches such as those inspired by
Newtonian force models are still not fully consistent with empirical
observations and are sometimes hard to calibrate. Here, a novel cognitive
science approach is proposed, which is based on behavioral heuristics. We
suggest that, guided by visual information, namely the distance of obstructions
in candidate lines of sight, pedestrians apply two simple cognitive procedures
to adapt their walking speeds and directions. While simpler than previous
approaches, this model predicts individual trajectories and collective patterns
of motion in good quantitative agreement with a large variety of empirical and
experimental data. This includes the emergence of self-organization phenomena,
such as the spontaneous formation of unidirectional lanes or stop-and-go waves.
Moreover, the combination of pedestrian heuristics with body collisions
generates crowd turbulence at extreme densities, a phenomenon that has been
observed during recent crowd disasters. By proposing an integrated treatment of
simultaneous interactions between multiple individuals, our approach overcomes
limitations of current physics-inspired pair interaction models. Understanding
crowd dynamics through cognitive heuristics is therefore not only crucial for
better preparation of safe mass events; it also clears the way for more
realistic modeling of collective social behaviors, in particular of human
crowds and biological swarms. Furthermore, our behavioral heuristics may serve
to improve the navigation of autonomous robots.
Comment: Article accepted for publication in PNA
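The two heuristics described above can be sketched in code. This is an illustrative reading, not the paper's exact formulation: the cost function, the `sight` map (candidate angle to free distance along that line of sight), and the parameters `d_max` (vision horizon) and `tau` (relaxation time) are assumptions for the sketch.

```python
import math

def choose_direction(goal_angle, sight, d_max):
    """Heuristic 1 (sketch): among candidate lines of sight, pick the angle
    alpha that minimizes a distance-to-destination proxy, where f(alpha) is
    the distance to the first obstruction in direction alpha (capped at the
    vision horizon d_max).  `sight` maps candidate angle -> free distance."""
    best_alpha, best_cost = None, float("inf")
    for alpha, free in sight.items():
        f = min(free, d_max)
        # squared remaining distance if the pedestrian walks distance f
        # toward alpha while the goal lies at angle goal_angle, d_max away
        cost = d_max**2 + f**2 - 2.0 * d_max * f * math.cos(goal_angle - alpha)
        if cost < best_cost:
            best_alpha, best_cost = alpha, cost
    return best_alpha

def choose_speed(v_desired, d_ahead, tau=0.5):
    """Heuristic 2 (sketch): keep the desired speed unless the first
    obstruction ahead would be reached within the relaxation time tau."""
    return min(v_desired, max(d_ahead, 0.0) / tau)
```

With a blocked line of sight straight ahead, the detour direction wins despite pointing away from the goal, which is the qualitative behavior the model relies on.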
Investigating pedestrians’ obstacle avoidance behaviour
Modelling and simulating pedestrian motion are standard ways to investigate crowd dynamics with the aim of enhancing pedestrian safety. Since the movement of people is affected by interactions with one another and with the physical environment, it is a worthy line of research. This paper studies the impact of speed on how pedestrians respond to obstacles (i.e. obstacle avoidance behaviour). A field experiment was performed in which a group of people were instructed to perform obstacle avoidance tasks at two speed levels, normal and high. Trajectories of the participants were extracted from the video recordings with three aims: (i) to examine the impact of total speed along the x- and y-axes, (ii) to observe the impact of speed on the movement direction (x-axis), and (iii) to determine the impact of speed on the lateral direction (y-axis). The results of the experiments could be used to enhance current pedestrian simulation models.
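The speed decomposition the experiment relies on can be sketched as follows; the function name and the fixed sampling interval `dt` are assumptions for illustration, not the authors' actual extraction pipeline.

```python
def speed_components(xs, ys, dt):
    """Finite-difference speeds from a sampled trajectory: for each step,
    return (total speed, x-axis component, y-axis component), where the
    x-axis is taken as the movement direction and the y-axis as lateral."""
    out = []
    for i in range(1, len(xs)):
        vx = (xs[i] - xs[i - 1]) / dt   # movement-direction component
        vy = (ys[i] - ys[i - 1]) / dt   # lateral component
        out.append(((vx**2 + vy**2) ** 0.5, abs(vx), abs(vy)))
    return out
```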
SANTO: Social Aerial NavigaTion in Outdoors
In recent years, advances in remote connectivity, the miniaturization of electronic components and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone, or unmanned aerial vehicle, namely the quadrotor. Although until recently they have not been used for commercial applications, their inherent potential for tasks where small and intelligent devices are needed is huge. However, while the integrated hardware has advanced exponentially, the software used for these applications has not yet been exploited enough. Recently, this shift is visible in the improvement of common tasks in the field of robotics, such as object tracking and autonomous navigation. These challenges become harder when taking into account the dynamic nature of the real world, where knowledge about the current environment is constantly changing. Such settings are considered in the improvement of human-robot interaction, where the potential use of these devices is clear and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, human brain behavior is simulated by so-called neural networks, so that the computing system performs as similarly as possible to human behavior. To this end, the system learns from error, which, akin to human learning, requires a considerable set of previous experiences for the algorithm to retain the desired behavior. Applying these technologies to human-robot interaction narrows the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes into curating a high-quality dataset, to ensure that the learning process is optimal and no wrong actions are retained.
Therefore, it is essential to have a development platform in place to ensure these principles are enforced throughout the whole process of creating and optimizing the algorithm. In this work, multiple existing handicaps found in pipelines of this computational scale are exposed, and each is approached in an independent and simple manner, so that the proposed solutions can be leveraged by the maximum number of workflows. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, the shortage of integrated development systems for this kind of pipeline is addressed, with special care for those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines. Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
Social-aware drone navigation using social force model
Robot navigation is one of the hardest challenges to deal with, because
real environments involve highly dynamic objects moving in all directions.
The ideal goal is to navigate safely within the environment, avoiding
obstacles and reaching the proposed final goal. Nowadays, with the latest
advances in technology, we are able to see robots almost everywhere, and
this can lead us to think about the robot's role in the future and where we
would find them. It is not an exaggeration to say that, practically, flying
and land-based robots are going to live together with people, interacting
in our houses, streets and shopping centers. Moreover, we will notice their
presence, gradually inserted into our human societies, performing more and
more human tasks which in past years were unthinkable.
Therefore, if we think about robots moving or flying around us, we must
consider safety, the distance the robot should keep to make the human feel
comfortable, and the different reactions people would have. The main goal
of this work is to accompany people using a flying robot. The term social
navigation gives us the path to follow when we talk about a social
environment. Robots must be able to navigate among humans, giving a sense
of security to those walking close to them. In this work, we present the
Social Force Model, which states that human social interaction between
persons and objects can be described, by analogy with fluid dynamics,
through Newtonian equations; we also introduce the extended version, which
complements the initial method with a human-robot interaction force.
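The core of a Helbing-style social force model can be sketched as below. This is a simplified illustration, not the thesis's 3D Aerial Social Force Model: the parameter values (`v_des`, `tau`, `A`, `B`) and the exponential repulsion form are common textbook choices assumed here for the sketch.

```python
import numpy as np

def social_force(pos, vel, goal, others, v_des=1.3, tau=0.5, A=2.1, B=0.3):
    """Social force (sketch): a driving term that relaxes the current
    velocity toward the desired velocity (v_des toward the goal) within
    relaxation time tau, plus exponential repulsion from other agents."""
    e = goal - pos
    e = e / np.linalg.norm(e)                     # unit vector toward goal
    force = (v_des * e - vel) / tau               # driving (relaxation) force
    for p in others:
        d_vec = pos - p
        d = np.linalg.norm(d_vec)
        force += A * np.exp(-d / B) * (d_vec / d)  # push away from agent at p
    return force
```

With no neighbours the force is purely the driving term; an agent standing between the robot and the goal reduces the forward push, which is the behaviour the model uses to keep socially comfortable distances.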
In the robotics field, tools that help with development and implementation
are crucial. The fast advances in technology allow the international
community to access cheaper and more compact hardware and software than a
decade ago. It is becoming more and more usual to have access to more
powerful technology that helps us run complex algorithms; because of that,
we can run bigger systems in a reduced space, making robots more
intelligent, more compact and more robust against failures. Our case was
not an exception: in the next chapters we present the procedure we followed
to implement the approaches, supported by different simulation tools and
software. Because of the nature of the problem we were facing, we used the
Robot Operating System along with Gazebo, which helped us get a good
outlook of how the code would work in real-life experiments.
In this work, both real and simulated experiments are presented, in which
we show the interaction produced by the 3D Aerial Social Force Model
between humans, objects and, in this case, the AR.Drone, a flying drone
owned by the Instituto de Robótica e Informática Industrial. We focus on
making the drone's navigation more socially acceptable to the humans
around it; the main purpose of the drone is to accompany a person, whom we
will call the "main" person in this work, navigating side-by-side with a
behavior dictated by the forces exerted by the environment, while remaining
as socially acceptable as possible to the remaining humans around. We also
present a comparison between the 3D Aerial Social Force Model and the
Artificial Potential Fields method, a well-known method widely used in
robot navigation. We present both methods and a description of the forces
each one involves.
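For contrast with the social force sketch, the Artificial Potential Fields baseline can be sketched as follows. The gains `k_att`, `k_rep` and the influence radius `rho0` are illustrative assumptions; the repulsion term follows the classic Khatib formulation.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=1.0):
    """Artificial Potential Fields (sketch): attraction is the gradient of
    a quadratic well centered on the goal; each obstacle closer than the
    influence radius rho0 adds a repulsive force pushing the robot away."""
    force = k_att * (goal - pos)                  # attractive component
    for obs in obstacles:
        d_vec = pos - obs
        rho = np.linalg.norm(d_vec)
        if 0.0 < rho < rho0:
            # repulsion grows without bound as rho -> 0
            force += k_rep * (1.0 / rho - 1.0 / rho0) * (1.0 / rho**2) * (d_vec / rho)
    return force
```

Unlike the social force model, repulsion here switches off sharply outside `rho0`, one of the behavioural differences such a comparison would surface.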
Along with these two models, there is another important topic to
introduce. As we said, the robot must be able to accompany a pedestrian on
his way, and for that reason forecasting capacity is an important feature,
since the robot does not know the final destination of the human it
accompanies. It is essential to give it the ability to predict human
movements. In this work, we use the differences between past position
values to measure how much the position changes over time. This gives us
an accurate idea of how the human will behave and which direction he or
she will take next.
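The differencing idea above amounts to a constant-velocity extrapolation, which can be sketched in a few lines; the function name is assumed for illustration.

```python
def predict_next(positions):
    """Constant-velocity prediction (sketch): the difference between the
    last two observed positions estimates the velocity, which is then
    applied once more to extrapolate the next position."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))
```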
Furthermore, we present a description of the human motion prediction model
based on linear regression. The motivation behind building a regression
model was the simplicity of its implementation, its robustness and the
very accurate results of the approach. The previous main-person positions
are taken in order to forecast the human's position over the next seconds.
This is done so that the drone knows the direction the human is taking and
can move forward beside the human, as if it were accompanying him. The
optimization of the linear regression model, to find the right weights,
was carried out by gradient descent, also implementing the RMSprop variant
in order to reach convergence faster. The strategy followed to build the
prediction model is explained in detail later in this work.
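Linear regression fitted by gradient descent with the RMSprop update can be sketched as below. This is a generic illustration, not the thesis's implementation; the learning rate, decay and step count are assumed values.

```python
import numpy as np

def fit_rmsprop(X, y, lr=0.01, decay=0.9, eps=1e-8, steps=2000):
    """Linear regression via gradient descent with RMSprop: each weight's
    step is scaled by a running average of its squared gradient, which
    speeds up convergence compared to a single global learning rate."""
    X = np.c_[np.ones(len(X)), X]                 # prepend a bias column
    w = np.zeros(X.shape[1])
    cache = np.zeros_like(w)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        cache = decay * cache + (1.0 - decay) * grad**2
        w -= lr * grad / (np.sqrt(cache) + eps)   # RMSprop update
    return w                                      # [intercept, slope, ...]
```

Fitting, say, past x-positions against time indices yields an intercept and slope from which the position a few steps ahead can be extrapolated.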
The presence of social robots has grown in recent years; many researchers
have contributed, and many techniques are being used to give robots the
capacity to interact safely and effectively with people. It is a hot topic
that has matured a lot, but much research remains to be done.