
    LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning

    Full text link
    We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets can be used to design accurate algorithms or train models for crowded-scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework for generating different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase realism by combining synthetically generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the WWW.
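    The automatic labeling described above can be sketched as deriving every annotation directly from the simulation state at render time. This is a minimal illustration; the field names and state layout are assumptions, not the actual LCrowdV label schema.

```python
from dataclasses import dataclass

# Illustrative per-frame label record; the real LCrowdV schema is not specified here.
@dataclass
class FrameLabels:
    environment: str
    num_pedestrians: int
    density: float        # pedestrians per square metre
    behavior: str
    lighting: str
    viewpoint_deg: float
    noise_level: float

def label_frame(sim_state):
    """Derive labels directly from the simulation state, so every
    rendered frame is annotated automatically with no manual effort."""
    agents = sim_state["agents"]
    return FrameLabels(
        environment=sim_state["environment"],
        num_pedestrians=len(agents),
        density=len(agents) / sim_state["area_m2"],
        behavior=sim_state["behavior"],
        lighting=sim_state["lighting"],
        viewpoint_deg=sim_state["camera_yaw"],
        noise_level=sim_state["noise"],
    )

state = {"environment": "plaza", "agents": [0] * 40, "area_m2": 100.0,
         "behavior": "bidirectional", "lighting": "overcast",
         "camera_yaw": 35.0, "noise": 0.02}
labels = label_frame(state)
```

    Because the labels are computed rather than drawn by annotators, they stay exact at any dataset size, which is the key advantage the abstract claims over manually labeled crowd datasets.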

    Learning to Detect Pedestrian Flow in Traffic Intersections from Synthetic Data

    Get PDF
    Detecting pedestrian flow in different directions at a traffic intersection has always been a challenging task. Challenges include varying weather conditions, varying crowd densities, occlusions, a lack of available data, and so on. The emergence of deep learning and computer vision algorithms has shown promise in dealing with these problems. Most recent works focus only on either detecting combined pedestrian flow or counting the total number of pedestrians. In this work, we detect not only combined pedestrian flow but also pedestrian flow in different directions. Our contributions are: 1) we introduce a synthetic pedestrian dataset that we created using a video game, and a real-world dataset that we collected from the street. Our dataset covers small, medium, and high-density pedestrian crowds crossing a crossroad, captured from different camera heights; 2) we propose a Pedestrian Flow Inference Model (PFIM) that is first trained on the synthetic dataset and then tested extensively on our real-world dataset. While testing on the real-world dataset, we employ domain adaptation to reduce the domain gap between synthetic and real-world data. Our proposed PFIM can detect pedestrian density and flow regardless of camera height in three ways: from left to right, from right to left, and in total. Combined, it successfully tackles the challenges mentioned above and achieves state-of-the-art performance.
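    The three flow outputs named above (left-to-right, right-to-left, and total) can be illustrated with a minimal sketch that classifies tracked trajectories by net horizontal displacement. This is not the paper's model, only an assumed post-processing step over per-pedestrian x-coordinate tracks.

```python
def directional_flow(tracks):
    """Count left-to-right, right-to-left, and total crossings from
    per-pedestrian x-coordinate trajectories (screen coordinates).
    Direction is taken from the sign of the net x displacement."""
    left_to_right = sum(1 for t in tracks if t[-1] - t[0] > 0)
    right_to_left = sum(1 for t in tracks if t[-1] - t[0] < 0)
    return {"ltr": left_to_right, "rtl": right_to_left,
            "total": left_to_right + right_to_left}

# Three hypothetical trajectories: two move rightward, one leftward.
tracks = [[10, 50, 120], [300, 200, 90], [40, 80, 160]]
counts = directional_flow(tracks)
```

    A learned model such as PFIM would produce these counts directly from video; the sketch only makes the three output quantities concrete.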

    Development of a Realistic Crowd Simulation Environment for Fine-grained Validation of People Tracking Methods

    Full text link
    Generally, crowd datasets can be collected or generated from real or synthetic sources. Real data is acquired using infrastructure-based sensors such as static cameras. The use of simulation tools can significantly reduce the time required to generate scenario-specific crowd datasets, facilitate data-driven research, and subsequently support the building of functional machine learning models. The main goal of this work was to develop an extension of a crowd simulator (named CrowdSim2) and prove its usability for the validation of people-tracking algorithms. The simulator is developed using the very popular Unity 3D engine, with particular emphasis on realism in the environment, weather conditions, traffic, and the movement and models of individual agents. Finally, three tracking methods were used to validate the generated dataset: IOU-Tracker, Deep-Sort, and Deep-TAMA.
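    Of the three trackers listed, the IOU-Tracker is the simplest: it associates detections across frames purely by bounding-box overlap. A minimal sketch of that association step (the published tracker also handles track initiation and termination, which is omitted here):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.5):
    """Greedy IOU association: each track claims the best unmatched
    detection whose overlap exceeds the threshold."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, threshold
        for di, d in enumerate(detections):
            if di not in used and iou(t, d) > best_iou:
                best, best_iou = di, iou(t, d)
        if best is not None:
            used.add(best)
            matches.append((ti, best))
    return matches
```

    Deep-Sort and Deep-TAMA extend this idea with appearance features and motion models; a synthetic dataset with exact ground-truth boxes makes the fine-grained comparison between such trackers possible.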

    Human Centered Computer Vision Techniques for Intelligent Video Surveillance Systems

    Get PDF
    Nowadays, intelligent video surveillance systems are being developed to support human operators in different monitoring and investigation tasks. Although the research community has achieved relevant results in several computer vision tasks, some real applications still exhibit open issues. In this context, this thesis focuses on two challenging computer vision tasks: person re-identification and crowd counting. Person re-identification aims to retrieve images of a person of interest, selected by the user, in different locations over time, reducing the time required of the user to analyse all the available videos. Crowd counting consists of estimating the number of people in a given image or video. Both tasks present several complex issues. In this thesis, a challenging video surveillance application scenario is considered in which it is not possible to collect and manually annotate images of a target scene (e.g., when a new camera installation is made by a law enforcement agency) to train a supervised model. Two human-centered solutions for the above-mentioned tasks are then proposed, in which the role of the human operator is fundamental. For person re-identification, a human-in-the-loop approach is proposed that exploits operator feedback on retrieved pedestrian images during system operation to improve the system's effectiveness. The proposed solution is based on revisiting relevance feedback algorithms for content-based image retrieval, and on developing a specific feedback protocol, to find a trade-off between human effort and re-identification performance. For crowd counting, the use of a synthetic training set is proposed to develop a scene-specific model, based on a minimal amount of information about the target scene required of the user. Both solutions are empirically investigated on benchmark datasets, using state-of-the-art supervised models based on convolutional neural networks.
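    The relevance feedback idea borrowed from content-based image retrieval can be sketched with a classic Rocchio-style update: the query descriptor moves towards images the operator marks relevant and away from those marked non-relevant. This is an illustrative stand-in, not the thesis's specific feedback protocol, and the weights are conventional defaults.

```python
import numpy as np

def refine_query(query, relevant, non_relevant,
                 alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style update of a query descriptor from operator feedback:
    shift towards the mean of relevant descriptors and away from the
    mean of non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(np.asarray(non_relevant, dtype=float), axis=0)
    return q

# One round of feedback on 2-D toy descriptors.
q = refine_query([1.0, 0.0], relevant=[[0.0, 1.0]], non_relevant=[])
```

    Iterating this loop as the operator labels each page of results is what trades a small amount of human effort for better re-identification ranking.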

    A Semi-Automated Technique for Transcribing Accurate Crowd Motions

    Get PDF
    We present a novel technique for transcribing crowds in video scenes that extracts the positions of moving objects in video frames. The technique can be used as a more precise alternative to image processing methods such as background removal or automated pedestrian detection based on feature extraction and classification. By manually projecting pedestrian actors onto a two-dimensional plane and translating screen coordinates to absolute real-world positions using the cross ratio, we provide highly accurate and complete results at the cost of increased processing time. We avoid most of the errors found in other automated annotation techniques, arising from sources such as noise, occlusion, shadows, view angle, or the density of pedestrians. It is further possible to process scenes that are difficult or impossible to transcribe with automated image processing methods, such as low-contrast or low-light environments. We validate our model by comparing it to the results of both background removal and feature extraction and classification in a variety of scenes.
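    The screen-to-world translation step can be sketched with the equivalent planar formulation: since the ground is a plane, four reference correspondences determine a homography that carries any screen point to its absolute position (the abstract's cross-ratio construction is a projective invariant underlying the same mapping). The reference coordinates below are illustrative, not from the paper.

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Estimate the 3x3 planar homography mapping four screen points to
    four known ground-plane points (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last row of V^T) holds the homography entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_world(H, x, y):
    """Project a screen coordinate to an absolute ground-plane position."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Four screen corners of a known 5 m x 5 m floor square (hypothetical values).
screen = [(100, 400), (500, 400), (560, 150), (60, 150)]
world = [(0, 0), (5, 0), (5, 5), (0, 5)]
H = homography(screen, world)
```

    With the mapping fixed once per scene, each manually clicked pedestrian position converts to metric world coordinates, which is what makes the transcribed trajectories directly comparable across camera viewpoints.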