1,338 research outputs found

    DPDnet: A Robust People Detector using Deep Learning with an Overhead Depth Camera

    Full text link
    In this paper we propose a method based on deep learning that detects multiple people from a single overhead depth image with high reliability. Our neural network, called DPDnet, is based on two fully-convolutional encoder-decoder blocks built on residual layers. The Main Block takes a depth image as input and generates a pixel-wise confidence map, where each detected person in the image is represented by a Gaussian-like distribution. The Refinement Block combines the depth image with the output of the Main Block to refine the confidence map. Both blocks are trained simultaneously, end-to-end, using depth images and head position labels. The experimental work shows that DPDnet outperforms state-of-the-art methods, with accuracies greater than 99% on three different publicly available datasets, without retraining or fine-tuning. In addition, the computational complexity of our proposal is independent of the number of people in the scene, and it runs in real time on conventional GPUs.
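The Gaussian-like per-person training target described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the peak width `sigma`, and the choice to combine overlapping peaks with a maximum are all assumptions.

```python
import numpy as np

def confidence_map(shape, heads, sigma=8.0):
    """Build a pixel-wise confidence map in which each head position
    contributes a Gaussian-shaped peak (DPDnet-style training target)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cmap = np.zeros(shape, dtype=np.float32)
    for (cy, cx) in heads:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        cmap = np.maximum(cmap, g)  # overlapping peaks keep the larger value
    return cmap
```

Each labeled head yields a peak of height 1 at its position, decaying smoothly around it, so the map simultaneously encodes how many people are present and where.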

    Fast heuristic method to detect people in frontal depth images

    Get PDF
    This paper presents a new method for detecting people using only depth images captured by a camera in a frontal position. The approach first detects all the objects present in the scene and determines their average depth (distance to the camera). Next, for each object, a 3D Region of Interest (ROI) around it is processed to determine whether the characteristics of the object correspond to the biometric characteristics of a human head. Results are presented on three public datasets captured by three depth sensors with different spatial resolutions and different operating principles (structured light, active stereo vision and Time of Flight). They demonstrate that our method runs in real time on a low-cost CPU platform with high accuracy, with processing times below 1 ms per frame at a 512 × 424 image resolution with a precision of 99.26%, and below 4 ms per frame at a 1280 × 720 image resolution with a precision of 99.77%.
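The head-plausibility test on each object's ROI can be illustrated with a simple pinhole-projection heuristic: at a known depth, a real head can only span a bounded range of pixels. The focal length, the 12–22 cm head-width bounds and the function names below are assumptions for this sketch, not values taken from the paper.

```python
def head_width_pixels(real_width_m, depth_m, focal_px):
    """Project a real-world width onto the image plane (pinhole camera model)."""
    return real_width_m * focal_px / depth_m

def looks_like_head(object_width_px, depth_m, focal_px,
                    min_w=0.12, max_w=0.22):
    """Heuristic check: does the object's pixel width match typical human
    head widths (assumed 12-22 cm) at its measured depth?"""
    lo = head_width_pixels(min_w, depth_m, focal_px)
    hi = head_width_pixels(max_w, depth_m, focal_px)
    return lo <= object_width_px <= hi
```

Because the test is a pair of multiplications and a comparison per candidate object, a check of this kind is consistent with the sub-millisecond per-frame timings reported above.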

    People counting system using existing surveillance video camera

    Get PDF
    The Casa da Música Foundation, responsible for the management of the Casa da Música do Porto building, needs statistical data on the number of the building's visitors. This information is a valuable tool for the elaboration of periodical reports on the success of this cultural institution. For this reason it was necessary to develop a system capable of returning the number of visitors for a requested period of time. This is a complex task due to the building's unique architectural design, characterized by very large doors and halls, and the sudden influx of people passing through them in the moments preceding and following the different activities held in the building. To arrive at a technical solution for this challenge, several image-processing methods for people detection with still cameras were first studied. The next step was the development of a real-time algorithm, using OpenCV libraries and computer vision concepts, to count individuals with the desired accuracy. This algorithm incorporates the scientific and technical knowledge acquired in the study of the previous methods. The themes developed in this thesis comprise the fields of background maintenance, shadow and highlight detection, and blob detection and tracking. A graphical interface was also built to help in the development, testing and tuning of the proposed system, as a complement to the work. Furthermore, the system was tested to validate the proposed techniques under a limited set of circumstances. The results obtained revealed that the algorithm was successfully applied to count the number of people in complex environments with reliable accuracy.
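The background-maintenance stage mentioned above can be sketched as a simple running-average model that flags large deviations as foreground. The class name, adaptation rate `alpha` and threshold are illustrative assumptions; the thesis's actual system additionally handles shadows, highlights and blob tracking.

```python
import numpy as np

class RunningBackground:
    """Minimal running-average background model with foreground masking,
    a stand-in for the background-maintenance stage described above."""
    def __init__(self, alpha=0.05, thresh=25):
        self.alpha = alpha      # adaptation rate of the background estimate
        self.thresh = thresh    # intensity difference flagged as foreground
        self.bg = None

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if self.bg is None:
            self.bg = frame.copy()          # first frame seeds the background
        mask = np.abs(frame - self.bg) > self.thresh
        # blend new observations into the model only where no motion is seen
        self.bg = np.where(mask, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return mask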

    Deep understanding of shopper behaviours and interactions using RGB-D vision

    Get PDF
    In retail environments, understanding how shoppers move about in a store's spaces and interact with products is very valuable. While the retail environment has several characteristics that favour computer vision, such as reasonable lighting, the large number and diversity of products sold and the potential ambiguity of shoppers' movements mean that accurately measuring shopper behaviour is still challenging. Over the past years, machine-learning and feature-based tools for people counting, interaction analytics and re-identification were developed with the aim of analysing shopper behaviour using occlusion-free RGB-D cameras in a top-view configuration. However, with the move into the era of multimedia big data, machine-learning approaches evolved into deep-learning approaches, which are a more powerful and efficient way of dealing with the complexities of human behaviour. In this paper, a novel VRAI deep-learning application is introduced that uses three convolutional neural networks to count the number of people passing or stopping in the camera area, perform top-view re-identification and measure shopper–shelf interactions from a single RGB-D video flow with near real-time performance. The framework is evaluated on three new, publicly available datasets: TVHeads for people counting, HaDa for shopper–shelf interactions and TVPR2 for people re-identification. The experimental results show that the proposed methods significantly outperform all competitive state-of-the-art methods (accuracy of 99.5% on people counting, 92.6% on interaction classification and 74.5% on re-identification), yielding distinct and significant insights for implicit and extensive shopper behaviour analysis in marketing applications.
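The three-network design described above, with one RGB-D stream fanned out to counting, re-identification and interaction models, could be wired together as follows. This is hypothetical glue code: the class and field names are invented for illustration and the three callables stand in for the paper's actual CNNs.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class ShopperAnalytics:
    """Illustrative fan-out of one RGB-D frame to three analysis models."""
    counter: Callable[[Any], int]        # people passing/stopping count
    reid: Callable[[Any], List[str]]     # top-view re-identification
    interaction: Callable[[Any], str]    # shopper-shelf interaction label

    def process(self, rgbd_frame):
        # Each model consumes the same single RGB-D video frame.
        return {
            "count": self.counter(rgbd_frame),
            "identities": self.reid(rgbd_frame),
            "interaction": self.interaction(rgbd_frame),
        }
```

Sharing one camera stream across the three tasks is what keeps the pipeline near real time: the frame is acquired once and only the model heads differ.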

    Towards dense people detection with deep learning and depth images

    Get PDF
    This paper describes a novel DNN-based system, named PD3net, that detects multiple people from a single depth image in real time. The proposed neural network processes a depth image and outputs a likelihood map in image coordinates, where each detection corresponds to a Gaussian-shaped local distribution centered at each person's head. This likelihood map encodes both the number of detected people and their position in the image, from which the 3D position can be computed. The proposed DNN includes spatially separated convolutions to increase performance and runs in real time on low-budget GPUs. We initially train the network on synthetic data, followed by fine-tuning with a small amount of real data. This allows the network to be adapted to different scenarios without the need for large, manually labeled image datasets. As a result, the people detection system presented in this paper has numerous potential applications in different fields, such as capacity control, automatic video surveillance, people or group behavior analysis, healthcare, or monitoring and assistance of elderly people in ambient assisted living environments. In addition, the use of depth information does not allow recognizing the identity of people in the scene, thus enabling their detection while preserving their privacy. The proposed DNN has been experimentally evaluated and compared with other state-of-the-art approaches, including both classical and DNN-based solutions, under a wide range of experimental conditions. The achieved results allow concluding that the proposed architecture and training strategy are effective, and that the network generalizes to scenes different from those used during training. We also demonstrate that our proposal outperforms existing methods and can accurately detect people in scenes with significant occlusions. Funding: Ministerio de Economía y Competitividad; Universidad de Alcalá; Agencia Estatal de Investigación.
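Decoding such a likelihood map back into a people count and head positions can be sketched as thresholded local-maximum extraction. The threshold value and the 3×3 neighborhood are illustrative choices, not the paper's exact post-processing.

```python
import numpy as np

def detect_peaks(likelihood, thresh=0.5):
    """Return (row, col) positions of pixels above `thresh` that are local
    maxima in their 3x3 neighborhood; len(result) is the people count."""
    h, w = likelihood.shape
    padded = np.pad(likelihood, 1, mode="constant", constant_values=-np.inf)
    peaks = []
    for y in range(h):
        for x in range(w):
            v = likelihood[y, x]
            if v < thresh:
                continue
            window = padded[y:y + 3, x:x + 3]  # 3x3 patch centered on (y, x)
            if v >= window.max():
                peaks.append((y, x))
    return peaks
```

Together with the known camera intrinsics and the depth value at each peak, these image coordinates are what allow recovering each person's 3D position.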

    Joint Probabilistic People Detection in Overlapping Depth Images

    Get PDF
    Privacy-preserving, high-quality people detection is a vital computer vision task for various indoor scenarios, e.g. people counting, customer behavior analysis, ambient assisted living or smart homes. In this work a novel approach for people detection in multiple overlapping depth images is proposed. We present a probabilistic framework utilizing a generative scene model to jointly exploit the multi-view image evidence, allowing us to detect people from arbitrary viewpoints. Our approach makes use of mean-field variational inference not only to estimate the maximum a posteriori (MAP) state but also to approximate the posterior probability distribution over people present in the scene. Evaluation shows state-of-the-art results on a novel dataset for indoor people detection and tracking in depth images from the top view with high perspective distortions. Furthermore, it is demonstrated that our approach (compared to the mono-view setup) successfully exploits the multi-view image evidence and robustly converges in only a few iterations.
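The mean-field idea can be illustrated on a toy model with Bernoulli "person present" variables and a pairwise penalty between mutually exclusive (e.g. overlapping) candidate positions. The unary/penalty parameterization below is an assumption made for this sketch and is far simpler than the paper's generative multi-view scene model; it shows only the characteristic fixed-point iteration and its fast convergence.

```python
import math

def mean_field(unary, penalty, iters=20):
    """Toy mean-field inference over Bernoulli occupancy variables:
    unary[i] is log-odds image evidence for candidate i; penalty[i][j]
    discourages jointly active, mutually exclusive candidates.
    Returns the approximate marginal probability q[i] of each candidate."""
    n = len(unary)
    q = [0.5] * n  # uninformative initialization
    for _ in range(iters):
        for i in range(n):
            # expected field from the other candidates' current marginals
            logit = unary[i] - sum(penalty[i][j] * q[j]
                                   for j in range(n) if j != i)
            q[i] = 1.0 / (1.0 + math.exp(-logit))
    return q
```

With two overlapping candidates and stronger evidence for the first, the iteration quickly drives one marginal toward 1 and the other toward 0, mirroring the few-iteration convergence reported above.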

    People counting using an overhead fisheye camera

    Full text link
    As climate change concerns grow, the reduction of energy consumption is seen as one of many potential solutions. In the US, a considerable amount of energy is wasted in commercial buildings due to sub-optimal heating, ventilation and air conditioning that operate with no knowledge of the occupancy level in various rooms and open areas. In this thesis, I develop an approach to passive occupancy estimation that does not require occupants to carry any type of beacon, but instead uses an overhead camera with a fisheye lens (360 by 180 degree field of view). The difficulty with fisheye images is that occupants may appear not only in the upright position, but also upside-down, horizontally and diagonally, so algorithms developed for typical side-mounted, standard-lens cameras tend to fail. As the top-performing people detection algorithms today use deep learning, a logical step would be to develop and train a new neural-network model. However, there exist no large fisheye-image datasets with person annotations to facilitate training a new model. Therefore, I developed two people-counting methods that leverage YOLO (version 3), a state-of-the-art object detection method trained on standard datasets. In one approach, YOLO is applied to 24 rotated and highly overlapping windows, and the results are post-processed to produce a people count. In the other approach, regions of interest are first extracted via background subtraction and only windows that include such regions are supplied to YOLO and post-processed. I carried out an extensive experimental evaluation of both algorithms and showed their superior performance compared to a benchmark method.
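The rotated-window tiling can be sketched as follows: windows are placed on a ring around the fisheye image center, and each crop is derotated by its radial angle so that people appear upright for a standard detector. The window count default, ring radius and derotation convention are assumptions for illustration, not the thesis's exact geometry.

```python
import math

def fisheye_windows(img_size, n_windows=24, win_frac=0.45):
    """Generate (center_x, center_y, angle_deg) poses for overlapping
    windows arranged radially around a square fisheye image center; each
    crop would be rotated by -angle_deg before running the detector."""
    cx = cy = img_size / 2.0
    r = img_size * (0.5 - win_frac / 2.0)   # ring radius keeping crops inside
    poses = []
    for k in range(n_windows):
        ang = 2 * math.pi * k / n_windows
        wx = cx + r * math.cos(ang)          # window center on the ring
        wy = cy + r * math.sin(ang)
        poses.append((wx, wy, math.degrees(ang)))
    return poses
```

With 24 windows the poses are 15 degrees apart, so neighboring crops overlap heavily; the post-processing step mentioned above then merges duplicate detections across windows.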