12,716 research outputs found

    LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning

    We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or train models for crowded-scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework for generating different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the Web.
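
    The label vocabulary described above lends itself to a simple per-frame metadata record. The Python sketch below shows one plausible way to represent such ground-truth labels; the field names and example values are assumptions made for illustration, not the actual LCrowdV schema.

# Hypothetical per-frame ground-truth record mirroring the label categories
# listed in the abstract (environment, pedestrian count, density, behavior,
# flow, lighting, viewpoint, noise). Field names are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class CrowdFrameLabel:
    frame_id: int
    environment: str       # e.g. "street", "shopping mall"
    num_pedestrians: int
    density: float         # pedestrians per square metre
    behavior: str          # e.g. "commuting", "evacuating"
    flow: str              # e.g. "unidirectional", "crossing"
    lighting: str          # e.g. "daylight", "night"
    viewpoint: str         # e.g. "overhead", "eye-level"
    noise_level: float     # synthetic sensor noise, 0..1

label = CrowdFrameLabel(frame_id=0, environment="street", num_pedestrians=85,
                        density=1.7, behavior="commuting", flow="bidirectional",
                        lighting="daylight", viewpoint="overhead", noise_level=0.05)
print(json.dumps(asdict(label), indent=2))   # sidecar metadata for one rendered frame

    Storing the labels as structured sidecar records of this kind is what makes procedurally rendered frames directly usable as supervised training data.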

    Analysing Pedestrian Traffic Around Public Displays

    This paper presents a powerful approach to evaluating public technologies by capturing and analysing pedestrian traffic using computer vision. This approach is highly flexible and scales better than the traditional ethnographic techniques often used to evaluate technology in public spaces. It can be applied to a wide variety of public installations, and the data collected complements existing approaches. Our technique allows behavioural analysis of both interacting users and non-interacting passers-by. This gives us the tools to understand how technology changes public spaces, how passers-by approach or avoid public technologies, and how different interaction styles work in public settings. In the paper, we apply this technique to two large public displays and a street performance. The results demonstrate how metrics such as walking speed and proximity can be used for analysis, and how these can capture disruption to pedestrian traffic and passer-by approach patterns.
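
    As a concrete illustration of the kind of metrics mentioned above, the sketch below derives walking speed and closest approach to a display from per-frame tracker centroids. The track format, frame rate, calibration constant, and display position are assumptions made for illustration, not details taken from the paper.

# Hedged sketch: computing walking speed and proximity to a display from a
# vision tracker's per-frame centroids. All constants below are assumptions.
import math

FPS = 25.0                   # assumed camera frame rate
METRES_PER_PIXEL = 0.02      # assumed ground-plane calibration
DISPLAY_POS = (400.0, 50.0)  # assumed display location in pixel coordinates

def walking_speed(track):
    """Mean speed (m/s) over a track given as [(x_px, y_px), ...] per frame."""
    dist_px = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    duration_s = (len(track) - 1) / FPS
    return dist_px * METRES_PER_PIXEL / duration_s if duration_s > 0 else 0.0

def min_proximity(track):
    """Closest approach (m) of the track to the display."""
    return min(math.dist(p, DISPLAY_POS) for p in track) * METRES_PER_PIXEL

track = [(100 + 4 * i, 300) for i in range(50)]   # a passer-by walking past the display
print(walking_speed(track), min_proximity(track))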

    A large-scale real-life crowd steering experiment via arrow-like stimuli

    We introduce "Moving Light": an unprecedented real-life crowd steering experiment that involved about 140,000 participants among the visitors of the Glow 2017 Light Festival (Eindhoven, NL). Moving Light targets one outstanding question of paramount societal and technological importance: "can we seamlessly and systematically influence routing decisions in pedestrian crowds?" Establishing effective crowd steering methods is extremely relevant in the context of crowd management, e.g. when it comes to keeping floor usage within safety limits (e.g. during public events with high attendance) or at designated comfort levels (e.g. in leisure areas). In the Moving Light setup, visitors walking in a corridor face a choice between two symmetric exits defined by a large central obstacle. Stimuli, such as arrows, alternate at random and perturb the symmetry of the environment to bias choices. While visitors move through the experiment, they are tracked with high spatial and temporal resolution, such that the efficiency of each stimulus at steering individual routing decisions can be accurately evaluated a posteriori. In this contribution, we first describe the measurement concept of the Moving Light experiment and then quantitatively investigate the steering capability of arrow indications.
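
    The a-posteriori evaluation described above amounts to comparing exit-choice statistics under each stimulus. The sketch below illustrates the idea on made-up counts; the data layout, stimulus names, and numbers are assumptions for illustration only, not measurements from the experiment.

# For each tracked visitor we assume a (stimulus, exit_taken) pair recovered
# from the trajectory data; the deviation of P(left exit) from 0.5 then
# quantifies how strongly that stimulus biases the symmetric choice.
from collections import Counter

observations = ([("arrow_left", "left")] * 70 + [("arrow_left", "right")] * 30 +
                [("arrow_right", "left")] * 35 + [("arrow_right", "right")] * 65)

counts = Counter(observations)
for stimulus in ("arrow_left", "arrow_right"):
    total = counts[(stimulus, "left")] + counts[(stimulus, "right")]
    p_left = counts[(stimulus, "left")] / total
    print(f"{stimulus}: P(left exit) = {p_left:.2f}, bias = {p_left - 0.5:+.2f}")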

    Towards a Scalable Hardware/Software Co-Design Platform for Real-time Pedestrian Tracking Based on a ZYNQ-7000 Device

    Currently, most designers in hardware/software co-design face the daunting task of researching different design flows and learning the intricacies of specific software from various manufacturers. Creating a scalable hardware/software co-design platform has therefore become a key strategic element in developing hardware/software integrated systems. In this paper, we propose a new design flow for building a scalable co-design platform on an FPGA-based system-on-chip. We employ an integrated approach to implement histogram of oriented gradients (HOG) features and a support vector machine (SVM) classifier on a programmable device for pedestrian tracking. We report not only a hardware resource analysis but also the precision and success rates of pedestrian tracking on nine open-access image data sets. Finally, our proposed design flow can be used for any real-time image-processing-related product on programmable ZYNQ-based embedded systems, benefiting from a reduced design time and providing a scalable solution for embedded image processing products.
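
    The detection stage at the heart of the pipeline above, HOG features fed to a linear SVM, has a well-known software reference in OpenCV. The sketch below uses OpenCV's stock CPU implementation and pretrained people detector purely to make the algorithmic stage concrete; it is not the authors' hardware/software partitioning on the ZYNQ device, and the input file name is a placeholder.

# Software reference for the HOG + linear-SVM pedestrian detector using the
# standard OpenCV CPU pipeline (not the paper's FPGA implementation).
import cv2

hog = cv2.HOGDescriptor()                                          # 64x128 window, 9-bin HOG
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())   # pretrained linear SVM

frame = cv2.imread("frame.png")                                    # placeholder input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, weights):
    print(f"pedestrian at ({x}, {y}), size {w}x{h}, SVM score {float(score):.2f}")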