108 research outputs found

    A Recent Trend in Individual Counting Approach Using Deep Network

    In video surveillance, counting individuals is regarded as a crucial task. Of all the individual counting techniques in existence, the regression technique can offer enhanced performance in overcrowded areas. However, this technique cannot provide the details of individual counting, so it fails to locate individuals. By contrast, the density map approach is very effective at overcoming counting problems in situations such as heavy overlapping and low resolution. Nevertheless, this approach may break down when only the heads of individuals appear in video scenes, and it is also restricted by the types of features used. The most popular technique for obtaining the pertinent information automatically is the Convolutional Neural Network (CNN). However, CNN-based counting schemes cannot sufficiently tackle three difficulties, namely non-uniform density distributions, scale changes and drastic scale variation. In this study, we review current counting techniques based on deep networks across different crowded-scene applications. The goal of this work is to assess the effectiveness of CNNs applied to popular individual counting approaches for attaining higher-precision results.
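The density map approach discussed above can be illustrated with a minimal sketch: each annotated head location contributes a normalised Gaussian, so the map's integral equals the person count. The head positions and kernel width below are illustrative assumptions, not values from any of the surveyed papers.

```python
import numpy as np

def density_map(head_points, shape, sigma=4.0):
    """Build a crowd density map by placing a normalised Gaussian at each
    annotated head location; the map then integrates to the head count."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for (x, y) in head_points:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # each person contributes exactly 1 to the integral
    return dmap

heads = [(20, 30), (50, 50), (70, 10)]  # (x, y) head annotations (toy data)
dm = density_map(heads, (100, 100))
print(round(dm.sum()))  # → 3
```

Normalising each kernel by its in-image sum keeps the count exact even when a Gaussian is truncated at the image border.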

    DecideNet: Counting Varying Density Crowds Through Attention Guided Detection and Density Estimation

    In real-world crowd counting applications, the crowd densities vary greatly in spatial and temporal domains. A detection based counting method will estimate crowds accurately in low density scenes, while its reliability in congested areas is downgraded. A regression based approach, on the other hand, captures the general density information in crowded regions. Without knowing the location of each person, it tends to overestimate the count in low density areas. Thus, exclusively using either one of them is not sufficient to handle all kinds of scenes with varying densities. To address this issue, a novel end-to-end crowd counting framework, named DecideNet (DEteCtIon and Density Estimation Network), is proposed. It can adaptively decide the appropriate counting mode for different locations on the image based on their real density conditions. DecideNet starts with estimating the crowd density by generating detection and regression based density maps separately. To capture the inevitable variation in densities, it incorporates an attention module, meant to adaptively assess the reliability of the two types of estimations. The final crowd counts are obtained with the guidance of the attention module to adopt suitable estimations from the two kinds of density maps. Experimental results show that our method achieves state-of-the-art performance on three challenging crowd counting datasets. Comment: CVPR 201
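DecideNet's fusion step can be sketched as a per-pixel convex combination of the detection- and regression-based density maps. The attention weights below are hand-set stand-ins for the learned attention module, and all values are toy data.

```python
import numpy as np

def fuse_density_maps(det_map, reg_map, attention):
    """Attention-guided fusion in the spirit of DecideNet: per-pixel
    weights in [0, 1] (here assumed given; in the paper they come from
    a learned attention module) blend detection- and regression-based
    density estimates."""
    return attention * det_map + (1.0 - attention) * reg_map

# toy example: trust detection in the sparse left half of the image,
# regression in the congested right half
det = np.full((4, 8), 0.1)   # detection-based density estimate
reg = np.full((4, 8), 0.5)   # regression-based density estimate
attn = np.zeros((4, 8))
attn[:, :4] = 1.0            # attention = 1 → use detection estimate
fused = fuse_density_maps(det, reg, attn)
print(fused[:, :4].mean(), fused[:, 4:].mean())  # detection value left, regression value right
```

Summing `fused` over all pixels would then give the final crowd count, exactly as a plain density map would.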

    DPDnet: A Robust People Detector using Deep Learning with an Overhead Depth Camera

    In this paper we propose a method based on deep learning that detects multiple people from a single overhead depth image with high reliability. Our neural network, called DPDnet, is based on two fully-convolutional encoder-decoder neural blocks built from residual layers. The Main Block takes a depth image as input and generates a pixel-wise confidence map, where each detected person in the image is represented by a Gaussian-like distribution. The Refinement Block combines the depth image and the output from the Main Block to refine the confidence map. Both blocks are simultaneously trained end-to-end using depth images and head position labels. The experimental work shows that DPDnet outperforms state-of-the-art methods, with accuracies greater than 99% on three different publicly available datasets, without retraining or fine-tuning. In addition, the computational complexity of our proposal is independent of the number of people in the scene, and it runs in real time using conventional GPUs.
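A confidence map of the kind DPDnet produces can be turned back into per-person detections with a simple local-maxima search. The threshold and the post-processing below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def detect_peaks(conf_map, thresh=0.5):
    """Recover head detections from a DPDnet-style confidence map by
    picking local maxima above a threshold (a simple stand-in for the
    paper's post-processing, whose exact details are assumed here)."""
    h, w = conf_map.shape
    peaks = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = conf_map[y, x]
            if v >= thresh and v == conf_map[y - 1:y + 2, x - 1:x + 2].max():
                peaks.append((x, y))
    return peaks

# toy confidence map: one Gaussian-like blob per (hypothetical) person
ys, xs = np.mgrid[0:32, 0:32]
conf = np.zeros((32, 32))
for (cx, cy) in [(8, 8), (24, 20)]:
    conf = np.maximum(conf, np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * 2.0 ** 2)))
print(detect_peaks(conf))  # → [(8, 8), (24, 20)]
```

Because each person is one blob, the number of recovered peaks is the people count, independent of crowd density, which matches the paper's claim about computational complexity.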

    Vision based system for detecting and counting mobility aids in surveillance videos

    Automatic surveillance video analysis is popular among computer vision researchers due to its wide range of applications that require automated systems. Automated systems are intended to replace manual analysis of videos, which is tiresome, expensive, and time-consuming. Image and video processing techniques are often used in the design of automatic detection and monitoring systems. Compared with normal indoor videos, outdoor surveillance videos are often difficult to process due to the uncontrolled environment, camera angle, and varying lighting and weather conditions. This research aims to contribute to the computer vision field by proposing an object detection and tracking algorithm that can handle multi-object and multi-class scenarios. The problem is solved by developing an application to count disabled pedestrians in surveillance videos by automatically detecting and tracking mobility aids and pedestrians. The application demonstrates that the proposed ideas achieve the desired outcomes. There are extensive studies on pedestrian detection and gait analysis in the computer vision field, but limited work has been carried out on identifying disabled pedestrians or mobility aids. Detection of mobility aids in videos is challenging since the disabled person often occludes the mobility aid, and its visibility depends on the direction of walking with respect to the camera. For example, a walking stick is visible most of the time in a front-on view, while it is occluded when it happens to be on the walker's rear side. Furthermore, people use various mobility aids, and their make and type change over time as technology advances. The system should detect the majority of mobility aids to report reliable counting data. The literature review revealed that no system exists for detecting disabled pedestrians or mobility aids in surveillance videos.
A lack of annotated image data containing mobility aids is also an obstacle to developing a machine-learning-based solution to detect mobility aids. In the first part of this thesis, we explored moving pedestrians' video data to extract gait signals using manual and automated procedures. Manual extraction involved marking the pedestrians' head and leg locations and analysing those signals in the time domain. Analysis of stride length and velocity features indicates an abnormality if a walker is physically disabled. The automated system was built by combining the YOLO object detector, GMM-based foreground modelling and star skeletonisation in a pipeline to extract the gait signal. The automated system failed to recognise a disabled person from their gait due to poor localisation by YOLO, and incorrect segmentation and silhouette extraction caused by moving backgrounds and shadows. The automated gait analysis approach failed due to various factors including environmental constraints, viewing angle, occlusions, shadows, and imperfections in foreground modelling, object segmentation and silhouette extraction. In the later part of this thesis, we developed a CNN based approach to detect mobility aids and pedestrians. The task of identifying and counting disabled pedestrians in surveillance videos is divided into three sub-tasks: mobility aid and person detection, tracking and data association of detected objects, and counting healthy and disabled pedestrians. A modern object detector called YOLO, an improved data association algorithm (SORT), and a new pairing approach are applied to complete the three sub-tasks. Improving the SORT algorithm and introducing a pairing approach are notable contributions to the computer vision field. The original SORT algorithm is strictly single-class and lacks an object counting feature. SORT is enhanced to be multi-class and able to track accelerating or temporarily occluded objects.
The pairing strategy associates a mobility aid with the nearest pedestrian and monitors them over time to see if the pair is reliable. A reliable pair represents a disabled pedestrian, and counting reliable pairs gives the number of disabled people in the video. The thesis also introduces an image database that was gathered as part of this study. The dataset comprises 5819 images belonging to eight different object classes, including five mobility aids, pedestrians, cars, and bicycles. The dataset was needed to train a CNN that can detect mobility aids in videos. The proposed mobility aid counting system is evaluated on a range of surveillance videos collected outdoors with real-world scenarios. The results show that the proposed solution offers satisfactory performance in detecting mobility aids in outdoor surveillance videos. The counting accuracy of 94% on test videos meets the design goals set by the advocacy group that needs this application. Most test videos had objects from multiple classes in them. The system detected five mobility aids (wheelchair, crutch, walking stick, walking frame and mobility scooter), pedestrians and two distractors (car and bicycle). The system was trained on the distractor classes to ensure it can distinguish objects that are similar to mobility aids from actual mobility aids. In some cases, the convolutional neural network reports a mobility aid with an incorrect type. For example, the shapes of a crutch and a walking stick are very much alike, and therefore the system confuses one with the other. However, this does not affect the final counts, as the aim was to get the overall count of mobility aids (of any type), and determining the exact type of mobility aid is optional.
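The nearest-pedestrian pairing step described above can be sketched as a greedy assignment. The distance threshold, identifiers and data layout here are illustrative assumptions; the thesis's actual pairing additionally tracks pairs over time before declaring them reliable.

```python
import math

def pair_aids(aids, pedestrians, max_dist=50.0):
    """Greedily associate each mobility aid with the nearest unclaimed
    pedestrian (a sketch of the thesis's pairing idea; the max_dist
    threshold is an assumption).  aids: {aid_id: (x, y)};
    pedestrians: [(ped_id, (x, y)), ...]."""
    pairs = []
    free = list(pedestrians)
    for aid_id, (ax, ay) in aids.items():
        if not free:
            break
        best = min(free, key=lambda p: math.hypot(p[1][0] - ax, p[1][1] - ay))
        if math.hypot(best[1][0] - ax, best[1][1] - ay) <= max_dist:
            pairs.append((aid_id, best[0]))
            free.remove(best)  # a pedestrian can carry at most one aid
    return pairs

# toy frame: one wheelchair near ped-1, a distant unrelated ped-2
aids = {"wheelchair-1": (10.0, 12.0)}
peds = [("ped-1", (14.0, 10.0)), ("ped-2", (200.0, 5.0))]
print(pair_aids(aids, peds))  # → [('wheelchair-1', 'ped-1')]
```

Counting the pairs that persist across consecutive frames then yields the disabled-pedestrian count, as described in the abstract.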

    Scene and crowd analysis using synthetic data generation with 3D quality improvements and deep network architectures

    In this thesis, scene analysis, mainly focusing on vision-based techniques, has been explored. Vision-based scene analysis techniques have a wide range of applications, from surveillance and security to agriculture. A vision sensor can provide rich information about the environment such as colour, depth, shape, size and much more. This information can be further processed to gain in-depth knowledge of the scene, such as the type of environment, objects and distances. Hence, this thesis initially covers the background on human detection, in particular pedestrian and crowd detection methods, and introduces various vision-based techniques used in human detection. This is followed by a detailed analysis of the use of synthetic data to improve the performance of state-of-the-art Deep Learning techniques, and a multi-purpose synthetic data generation tool is proposed. The tool is a real-time graphics simulator which generates multiple types of synthetic data applicable to pedestrian detection, crowd density estimation, image segmentation, depth estimation, and 3D pose estimation. In the second part of the thesis, a novel technique has been proposed to improve the quality of the synthetic data. Inter-reflection, also known as global illumination, is a naturally occurring phenomenon and a major problem for 3D scene generation from an image. Thus, the proposed method uses a reverse ray-tracing technique to reduce the effect of inter-reflection and increase the quality of the generated data. In addition, a method to improve the quality of the density map is discussed in the following chapter. The density map is the most commonly used technique to estimate crowds. However, the current procedure used to generate the map is not content-aware, i.e., the density map does not highlight the humans' heads according to their size in the image.
Thus, a novel method to generate a content-aware density map was proposed, and it was demonstrated that the use of such maps can elevate the performance of an existing Deep Learning architecture. In the final part, a Deep Learning architecture has been proposed to estimate crowds in the wild. The architecture tackled challenging aspects such as perspective distortion by implementing several techniques, including pyramid-style inputs, a scale aggregation method and a self-attention mechanism, to estimate a crowd density map, and it achieved state-of-the-art results at the time.
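The content-aware idea can be sketched by letting each head's Gaussian spread scale with its annotated apparent size, so nearby (large) heads get wide kernels and distant (small) heads get tight ones. The scaling rule and the annotations below are assumed stand-ins for the thesis's actual procedure.

```python
import numpy as np

def content_aware_density_map(heads, shape):
    """Content-aware density map: each head carries an apparent-size
    annotation, and the Gaussian spread is scaled accordingly (the
    size-to-sigma rule here is an illustrative assumption)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for (x, y, head_size) in heads:
        sigma = max(1.0, head_size / 4.0)  # assumed scaling rule
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalise so each person counts as 1
    return dmap

# a distant small head (size 8) and a nearby large head (size 32)
dm = content_aware_density_map([(20, 20, 8), (60, 60, 32)], (100, 100))
print(round(dm.sum()))  # → 2
```

The integral still equals the person count; only the spatial extent of each person's contribution now follows perspective, which is the content-awareness the abstract describes.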

    NAPC: A Neural Algorithm for Automated Passenger Counting in Public Transport on a Privacy-Friendly Dataset

    Real-time load information in public transport is of high importance for both passengers and service providers. Neural algorithms have shown high performance on various object counting tasks and play a continually growing methodological role in developing automated passenger counting systems. However, the publication of public-space video footage often conflicts with legal and ethical considerations to protect the passengers' privacy. This work proposes an end-to-end Long Short-Term Memory network with a problem-adapted cost function that learned to count boarding and alighting passengers on a publicly available, comprehensive dataset of approx. 13,000 manually annotated low-resolution 3D LiDAR video recordings (depth information only) from the doorways of a regional train. These depth recordings do not allow the identification of single individuals. For each door opening phase, the trained models predict the correct passenger count (ranging from 0 to 67) in approx. 96% of boarding and alighting events, respectively. Repeated training with different training and validation sets confirms the independence of this result from a specific test set.
    DFG, 414044773, Open Access Publizieren 2021 - 2022 / Technische Universität Berli
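The sequence-to-count idea behind NAPC can be sketched with a single LSTM cell in numpy, run over per-frame feature vectors, with a linear readout of the final hidden state as the predicted count. The weights below are untrained toy values; NAPC's real architecture, features and problem-adapted cost function are more involved.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: stacked pre-activations split into input, forget,
    output and candidate gates."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def count_sequence(frames, W, U, b, w_out):
    """Run the LSTM over a door-opening phase's per-frame features and
    regress a passenger count from the final hidden state (a toy sketch
    with untrained weights, not NAPC's actual model)."""
    n_h = U.shape[1]
    h = np.zeros(n_h)
    c = np.zeros(n_h)
    for x in frames:
        h, c = lstm_cell(x, h, c, W, U, b)
    return float(w_out @ h)

# toy depth-frame features for one door-opening phase
rng = np.random.default_rng(0)
n_x, n_h = 6, 8
W = rng.normal(size=(4 * n_h, n_x)) * 0.1
U = rng.normal(size=(4 * n_h, n_h)) * 0.1
b = np.zeros(4 * n_h)
w_out = rng.normal(size=n_h)
frames = [rng.normal(size=n_x) for _ in range(10)]
print(count_sequence(frames, W, U, b, w_out))
```

In a trained system the readout would be compared against the annotated boarding or alighting count for each door-opening phase, which is where the problem-adapted cost function enters.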