94 research outputs found

    Continuous Human Activity Tracking over a Large Area with Multiple Kinect Sensors

    Get PDF
    In recent years, researchers have been increasingly interested in using technology to enhance the healthcare and wellness of patients with dementia. Dementia symptoms are associated with a decline in thinking skills and memory severe enough to reduce a person’s ability to pay attention and perform daily activities. The progression of dementia can be assessed by monitoring the daily activities of patients. This thesis encompasses continuous localization and behavioural analysis of a patient’s motion pattern over a wide-area indoor living space, using multiple calibrated Kinect sensors connected over a network. The skeleton data from all the sensors are transferred to the host computer via TCP sockets into the Unity software, where they are integrated into a single world coordinate system using a calibration technique. The cameras are placed with some overlap in their fields of view to allow successful calibration and continuous tracking of the patients. Localization and behavioural data are stored in a CSV file for further analysis.
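The merge into a single world coordinate system described above can be sketched as applying a per-sensor calibration transform to each skeleton joint. This is a minimal, hypothetical illustration (the 4x4 matrix and joint values are invented for the example; a real system would estimate the transforms during calibration):

```python
# Minimal sketch: merging skeleton joints from multiple calibrated sensors
# into one world coordinate frame via 4x4 homogeneous transforms.

def apply_transform(T, point):
    """Apply a 4x4 homogeneous transform T to a 3D point (x, y, z)."""
    x, y, z = point
    p = (x, y, z, 1.0)
    # Only the first three rows are needed for a rigid transform.
    return tuple(sum(T[r][c] * p[c] for c in range(4)) for r in range(3))

# Hypothetical calibration result: sensor B sits 2 m along the world x-axis.
T_world_from_B = [
    [1, 0, 0, 2.0],
    [0, 1, 0, 0.0],
    [0, 0, 1, 0.0],
    [0, 0, 0, 1.0],
]

joint_in_B = (0.5, 1.6, 3.0)  # e.g. a head joint as reported by sensor B
print(apply_transform(T_world_from_B, joint_in_B))  # -> (2.5, 1.6, 3.0)
```

With one such transform per Kinect, joints from every sensor land in the same frame, which is what makes hand-over between overlapping fields of view possible.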

    People Detection and Tracking with Kinect for Mobile Platforms

    Get PDF
    Human detection is a key ability for robot applications that operate in environments where people are present, or in situations where those applications are required to interact with them. This is the case for social robots such as rehabilitation aids in hospitals, office assistants, and museum tour guides. In this thesis we investigate how Microsoft’s gaming sensor, the Kinect, can be used to address the issues of real-time people detection and tracking, since the sensor was built to detect people and track their movements. We developed a system that is capable of detecting and tracking people in near real time, both in fixed environments and on mobile platforms. We tested four different classifiers in different situations. The best classifier showed very good detection and tracking results, whereas, because of some segmentation problems, the performance of the complete system fell below the theoretical one. We also developed a method for removing some of these segmentation problems; it yielded improvements for the complete system, together with some drawbacks that affected the theoretical results. Overall, the complete system works well, at an average frame rate of 2 fps. Most of the computational load is due to the segmentation module, so improving this module would benefit both the real-time performance and the detection results.

    Detection of abnormal passenger behaviors on ships, using RGBD cameras

    Get PDF
    The aim of this Master’s thesis (TFM) is the design, implementation and evaluation of an intelligent video-surveillance system for large ships that allows the detection, tracking and counting of people, as well as the detection of stampedes. The developed system must be portable and work in real time. To this end, a study of the technologies available in embedded systems was carried out, in order to choose those that best suit the objective of the TFM. A people-detection system based on a MobileNet-SSD was developed, complemented by a bank of Kalman filters for tracking. In addition, a stampede detector based on optical-flow entropy analysis was incorporated. The whole system was implemented and evaluated on an embedded device that includes a Vision Processing Unit (VPU). The results obtained validate the proposal.
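The optical-flow entropy idea behind the stampede detector can be sketched compactly: histogram the directions of the flow vectors and compute the Shannon entropy of that histogram. This is a hedged, simplified illustration (the bin count and the interpretation threshold would be tuning choices in a real system, and the abstract does not specify the exact formulation used):

```python
import math
from collections import Counter

def direction_entropy(flow_vectors, bins=8):
    """Shannon entropy (bits) of the direction histogram of optical-flow
    vectors. Low entropy = coherent crowd motion (everyone moving the same
    way); a sudden rise can flag disordered, stampede-like motion."""
    counts = Counter(
        int((math.atan2(dy, dx) + math.pi) / (2 * math.pi) * bins) % bins
        for dx, dy in flow_vectors
    )
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

coherent = [(1.0, 0.0)] * 100                 # everyone moving right
chaotic = [(1.0, 0.0)] * 50 + [(-1.0, 0.0)] * 50  # two opposed groups
print(direction_entropy(coherent))            # -> 0.0 (fully ordered)
print(direction_entropy(chaotic))             # -> 1.0 (two equal bins)
```

In practice the flow vectors would come from a dense optical-flow estimate between consecutive frames, and the detector would watch for the entropy rising above a learned baseline.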

    A robust people detection, tracking, and counting system

    Full text link
    The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst for many tasks knowing the approximate positions of people may be sufficient, the ability to identify unique individuals is needed to count people accurately in the real world. Accomplishing the people-counting task therefore requires a robust system for people detection, tracking and identification. This paper presents our approach to robust real-world people detection, tracking and counting using a PrimeSense RGBD camera. We highlight our past research, upon which this work builds, and present novel methods to solve the problems of sensor self-localisation, false negatives due to people physically interacting with the environment, and track misassociation due to crowdedness. An empirical evaluation of our approach was conducted in a major Sydney public train station (N=420), and results demonstrating our methods under the complexities of this challenging environment are presented.

    Novel robust computer vision algorithms for micro autonomous systems

    Get PDF
    People detection and tracking are an essential component of many autonomous platforms, interactive systems and intelligent vehicles used in search-and-rescue operations and similar humanitarian applications. Currently, researchers are focusing on the use of vision sensors such as cameras because of their advantages over other sensor types: cameras are information-rich, relatively inexpensive and easily available. Additionally, 3D information can be obtained from stereo vision, or by triangulating over several frames in monocular configurations. Another way to obtain 3D data is to use RGB-D sensors (e.g. the Kinect), which provide both image and depth data; this approach has become increasingly attractive over the past few years owing to its affordable price and availability to researchers. The aim of this research was to develop robust multi-target detection and tracking algorithms for Micro Autonomous Systems (MAS) that incorporate the RGB-D sensor. The contributions include novel robust computer vision algorithms. A new framework for human body detection from video, adapted from the Viola-Jones framework, was proposed to detect a single person. The 2D Multi-Target Detection and Tracking (MTDT) algorithm applied a Gaussian Mixture Model (GMM) to reduce noise in the pre-processing stage; blob analysis was used to detect targets, and a Kalman filter was used to track them. The 3D MTDT extends beyond 2D by using depth data from the RGB-D sensor in the pre-processing stage. A Bayesian model was employed to combine multiple cues, including detection of the upper body, face, skin colour, motion and shape, with a Kalman filter providing fast and robust track management. Simultaneous Localisation and Mapping (SLAM) fused with 3D information was also investigated. The new framework introduces front-end and back-end processing: the front end consists of localisation steps, post-refinement and a loop-closing system, while the back end focuses on pose-graph optimisation to eliminate errors. The proposed computer vision algorithms demonstrated improved speed and robustness, and the frameworks produced impressive results. The new algorithms can be used to improve real-time performance in applications including surveillance, vision-based navigation, environmental perception and vision-based control on MAS.
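Several of the abstracts above (this one and the ship-surveillance system) rely on a Kalman filter to smooth and predict detected target positions between frames. A minimal 1D constant-velocity sketch of that idea follows; the noise parameters `q` and `r` and the measurement sequence are illustrative values, not ones taken from either work:

```python
# Minimal 1D constant-velocity Kalman filter, of the kind used to smooth
# and predict blob/detection centroids between frames.
# State is (position, velocity); q = process noise, r = measurement noise.

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    x = [measurements[0], 0.0]       # initial state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]     # initial state covariance
    estimates = []
    for z in measurements:
        # Predict with the constant-velocity motion model.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the measured position (measurement matrix H = [1, 0]).
        s = P[0][0] + r                  # innovation covariance
        k0, k1 = P[0][0] / s, P[1][0] / s  # Kalman gain
        x = [x[0] + k0 * (z - x[0]), x[1] + k1 * (z - x[0])]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append(x[0])
    return estimates

track = kalman_track([0.0, 1.1, 1.9, 3.2, 4.0])  # noisy blob positions
```

In a full tracker this filter runs per target (one per blob, or a bank of filters as in the ship-surveillance work), and the predicted position is used to associate each new detection with an existing track.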

    DPDnet: A Robust People Detector using Deep Learning with an Overhead Depth Camera

    Full text link
    In this paper we propose a method based on deep learning that detects multiple people from a single overhead depth image with high reliability. Our neural network, called DPDnet, consists of two fully-convolutional encoder-decoder blocks built on residual layers. The main block takes a depth image as input and generates a pixel-wise confidence map, in which each detected person is represented by a Gaussian-like distribution. The refinement block combines the depth image with the output of the main block to refine the confidence map. Both blocks are trained simultaneously, end-to-end, using depth images and head-position labels. The experimental work shows that DPDnet outperforms state-of-the-art methods, with accuracies greater than 99% on three different publicly available datasets, without retraining or fine-tuning. In addition, the computational complexity of our proposal is independent of the number of people in the scene, and the method runs in real time on conventional GPUs.
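Turning the pixel-wise confidence map described above into discrete head detections amounts to finding local maxima above a threshold. The sketch below is a pure-Python stand-in for what would normally run on GPU tensors, and the map values and threshold are invented for illustration, not taken from the paper:

```python
# Sketch: extract head detections from a Gaussian-like confidence map by
# keeping cells above a threshold that dominate their 4-connected neighbours.

def detect_peaks(conf, thresh=0.5):
    """Return (row, col) of every cell exceeding `thresh` that is strictly
    greater than all of its 4-connected neighbours."""
    h, w = len(conf), len(conf[0])
    peaks = []
    for r in range(h):
        for c in range(w):
            v = conf[r][c]
            if v < thresh:
                continue
            neighbours = [conf[rr][cc]
                          for rr, cc in ((r - 1, c), (r + 1, c),
                                         (r, c - 1), (r, c + 1))
                          if 0 <= rr < h and 0 <= cc < w]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks

conf_map = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.2, 0.0],   # one Gaussian-like bump centred at (1, 1)
    [0.1, 0.2, 0.1, 0.0],
]
print(detect_peaks(conf_map))   # -> [(1, 1)]
```

Because each person contributes one bump to the map, the number of peaks is the people count, and peak coordinates give the head positions in image coordinates.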

    Automatic visual detection of human behavior: a review from 2000 to 2014

    Get PDF
    Owing to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviour from video has become a very active research topic. In this paper, we present a systematic literature review on this topic covering the period from 2000 to 2014 and a selection of 193 papers retrieved from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research in designing automatic visual human behaviour detection systems. This work is funded by the Portuguese Foundation for Science and Technology (FCT, Fundação para a Ciência e a Tecnologia) under research grant SFRH/BD/84939/2012.

    Development of new intelligent autonomous robotic assistant for hospitals

    Get PDF
    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity for developing new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created, and how it may be deployed in a hospital environment, with the aim of assisting the hospital staff with daily tasks such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains in the development of an actual robotic-assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor-control and kinematics solutions are examined in detail. Next, a new method is proposed for assessing the intrinsic properties of different flooring types, using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment is addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, improving the robot’s ability to update its path plan in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances is addressed by means of a new eye-tracking hardware solution that combines infra-red eye tracking and depth sensing. The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently involved in introducing autonomous assistive robots into hospital environments.

    Towards dense people detection with deep learning and depth images

    Get PDF
    This paper describes a novel DNN-based system, named PD3net, that detects multiple people from a single depth image in real time. The proposed neural network processes a depth image and outputs a likelihood map in image coordinates, where each detection corresponds to a Gaussian-shaped local distribution centred at each person’s head. This likelihood map encodes both the number of detected people and their positions in the image, from which the 3D positions can be computed. The proposed DNN uses spatially separable convolutions to increase performance, and runs in real time on low-budget GPUs. We use synthetic data to train the network initially, followed by fine-tuning with a small amount of real data. This allows the network to be adapted to different scenarios without needing large, manually labeled image datasets. As a result, the people-detection system presented in this paper has numerous potential applications in different fields, such as capacity control, automatic video surveillance, analysis of the behavior of people or groups, healthcare, and the monitoring and assistance of elderly people in ambient assisted-living environments. In addition, the use of depth information does not allow the identity of people in the scene to be recognized, thus enabling their detection while preserving their privacy. The proposed DNN has been experimentally evaluated and compared with other state-of-the-art approaches, including both classical and DNN-based solutions, under a wide range of experimental conditions. The results achieved allow us to conclude that the proposed architecture and training strategy are effective, and that the network generalizes to scenes different from those used during training. We also demonstrate that our proposal outperforms existing methods and can accurately detect people in scenes with significant occlusions. Funded by the Ministerio de Economía y Competitividad, the Universidad de Alcalá, and the Agencia Estatal de Investigación.