Mobile and Accessible Learning for MOOCs
Many modern web-based systems provide a ‘responsive’ design that allows material and services to be accessed on mobile and desktop devices, with the aim of providing ubiquitous access. Besides offering access to learning materials such as podcasts and videos across multiple locations, mobile, wearable and ubiquitous technologies have additional affordances that may enable new forms of learning on MOOCs. We can divide these into two categories: first, context-sensitive features, including delivery of content for a specific location, seamless continuity of learning across settings, and linking people in a location with others in a virtual representation of that place; second, social learning opportunities that connect people as they move within and across locations, enabling crowd-sourced learning. In this paper we explore these aspects of mobile and accessible learning for MOOCs, drawing on examples from MOOC courses, mobile toolkits, and crowd-sourced learning sites.
Reasoning about crowd evacuations as emergent phenomena when using participatory computational models
How do students apply systems thinking to make sense of a computational model of crowd evacuation? We developed a participatory simulation in which users play the role of evacuees moving through a narrow passageway. The simulation demonstrates that, beyond a certain speed, moving through narrow bottlenecks is more likely to create clogs, leading to a slower passing rate. The participatory simulation was introduced in a lesson about school evacuation in a group of 9th graders. Their explanations of crowd evacuation were compared to those of a similar group of 9th graders who learned the same ideas in a lecture without the simulation. We found that using the simulation did not improve students’ systems thinking about crowd evacuation compared to lecture-based instruction. About 80% of the students in both groups offered partial or incomplete explanations of the inverse relationship between individuals’ desire to move faster and the resulting slower evacuation. Interviews revealed that some students perceived the simulation scenario as different from the organized, coordinated evacuation drills in which they had taken part. Others were so engrossed in their own experiences as evacuees that they struggled to relate the motion of individual evacuees to the overall evacuation rate of the crowd. In a second study, we examined whether prior learning of a different emergent process (the spread of a disease) with a computational model can prepare students for learning the counterintuitive phenomenon of crowd evacuation. We found that introducing a participatory simulation of the spread of a disease to a different group of 9th graders increased their appreciation of the evacuation simulation as a learning tool and, consequently, improved their explanations. We conclude that computational models have the potential to enhance systems thinking, but their affordances depend on prior preparation for learning with other complex systems models.
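The inverse relationship described above (the "faster-is-slower" effect) can be illustrated with a toy bottleneck model. This is not the authors' participatory simulation; the function name, door capacity, and clog probability are illustrative assumptions, but the sketch reproduces the qualitative effect: pushing faster raises the chance of a clog and lengthens the evacuation.

```python
import random

def evacuate(n_agents, desired_speed, door_capacity=1, clog_coeff=0.25, seed=0):
    """Toy bottleneck model. Each step, waiting agents compete for a narrow
    door; a higher desired speed raises the chance that they wedge together
    and clog it, so nobody passes that step ('faster is slower')."""
    rng = random.Random(seed)
    remaining, steps = n_agents, 0
    while remaining > 0:
        steps += 1
        if remaining > door_capacity and rng.random() < clog_coeff * desired_speed:
            continue  # clog: competing agents block the doorway this step
        remaining -= min(door_capacity, remaining)
    return steps

# Higher desired speed produces more clogs and a longer evacuation:
slow = evacuate(50, desired_speed=1.0)
fast = evacuate(50, desired_speed=3.0)
```

Because the same random stream is used for both runs, every step that clogs at low speed also clogs at high speed, so the faster crowd deterministically takes at least as many steps.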
Time-continuous microscopic pedestrian models: an overview
We give an overview of time-continuous pedestrian models with a focus on
data-driven modelling. Starting from pioneer, reactive force-based models we
move forward to modern, active pedestrian models with sophisticated
collision-avoidance and anticipation techniques through optimisation problems.
The overview focuses on the mathematical aspects of the models and their
different components. We include methods used for data-based calibration of
model parameters, hybrid approaches incorporating neural networks, and purely
data-based models fitted by deep learning. Some development perspectives of
modelling paradigms we expect to grow in the coming years are outlined in the
conclusion.

Comment: 26 pages; chapter accepted for publication in Crowd Dynamics (vol. 4)
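As an illustration of the reactive force-based family this overview starts from, here is a minimal sketch of a social-force-style update: a driving term that relaxes velocity toward the goal plus exponential repulsion from nearby pedestrians. The parameter values and the explicit Euler discretisation are illustrative assumptions, not a data-calibrated model from the chapter.

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.4):
    """One Euler step of a minimal force-based pedestrian model.
    Driving term relaxes velocity toward speed v0 along the goal direction;
    each nearby pedestrian adds an exponential repulsive force."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    gnorm = math.hypot(gx, gy) or 1.0
    # driving force: (desired velocity - current velocity) / relaxation time
    fx = (v0 * gx / gnorm - vel[0]) / tau
    fy = (v0 * gy / gnorm - vel[1]) / tau
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        # exponential repulsion, strongest when closer than 2 * radius
        mag = A * math.exp((2 * radius - d) / B)
        fx += mag * dx / d
        fy += mag * dy / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

Active models with collision avoidance and anticipation replace the purely reactive repulsion term here with velocities chosen by solving an optimisation problem at each step.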
DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles
This paper proposes a novel learning-based control policy with strong
generalizability to new environments that enables a mobile robot to navigate
autonomously through spaces filled with both static obstacles and dense crowds
of pedestrians. The policy uses a unique combination of input data to generate
the desired steering angle and forward velocity: a short history of lidar data,
kinematic data about nearby pedestrians, and a sub-goal point. The policy is
trained in a reinforcement learning setting using a reward function that
contains a novel term based on velocity obstacles to guide the robot to
actively avoid pedestrians and move towards the goal. Through a series of 3D
simulated experiments with up to 55 pedestrians, this control policy is able to
achieve a better balance between collision avoidance and speed (i.e., higher
success rate and faster average speed) than state-of-the-art model-based and
learning-based policies, and it also generalizes better to different crowd
sizes and unseen environments. An extensive series of hardware experiments
demonstrates the ability of this policy to work directly in different real-world
environments with different crowd sizes with zero retraining. Furthermore, a
series of simulated and hardware experiments show that the control policy also
works in highly constrained static environments on a different robot platform
without any additional training. Lastly, several important lessons that can be
applied to other robot learning systems are summarized. Multimedia
demonstrations are available at
https://www.youtube.com/watch?v=KneELRT8GzU&list=PLouWbAcP4zIvPgaARrV223lf2eiSR-eSS.

Comment: Accepted by IEEE Transactions on Robotics (T-RO), 202
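A velocity-obstacle reward term of the kind the abstract describes can be sketched as follows. This is a hypothetical simplification, not the exact reward in DRL-VO: it penalizes a candidate robot velocity whose motion relative to a pedestrian leads to collision with a combined-radius disc within a time horizon. The function names and the `combined_radius`, `horizon`, and `penalty` values are all assumptions.

```python
import math

def in_velocity_obstacle(rel_pos, rel_vel, combined_radius, horizon=5.0):
    """True if the current relative velocity leads to a collision with the
    disc of `combined_radius` around the pedestrian within `horizon` seconds.
    rel_pos: pedestrian position minus robot position.
    rel_vel: robot velocity minus pedestrian velocity."""
    px, py = rel_pos
    vx, vy = rel_vel
    # separation over time: d(t) = rel_pos - t * rel_vel; solve |d(t)| = r
    a = vx * vx + vy * vy
    b = -2.0 * (px * vx + py * vy)
    c = px * px + py * py - combined_radius ** 2
    if c <= 0.0:
        return True   # already overlapping
    if a == 0.0:
        return False  # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False  # relative motion misses the disc entirely
    t_hit = (-b - math.sqrt(disc)) / (2.0 * a)  # earliest crossing time
    return 0.0 <= t_hit <= horizon

def vo_reward(rel_pos, robot_vel, ped_vel, combined_radius=0.8, penalty=-1.0):
    """Hypothetical shaping term: penalize actions whose velocity falls
    inside the pedestrian's velocity obstacle, otherwise return 0."""
    rel_vel = (robot_vel[0] - ped_vel[0], robot_vel[1] - ped_vel[1])
    if in_velocity_obstacle(rel_pos, rel_vel, combined_radius):
        return penalty
    return 0.0
```

Adding such a term to the usual goal-progress and collision rewards gives the policy a dense signal for steering around pedestrians before a collision is imminent, which is the intuition behind shaping the reward with velocity obstacles.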