    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent work in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users’ acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
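    As a concrete illustration of the sensor fusion mentioned above, the sketch below combines per-sensor fall probabilities by weighted averaging (decision-level fusion). The sensor names, probabilities, and weights are hypothetical placeholders for illustration, not values or methods taken from the review.

```python
# Minimal sketch of decision-level fusion for a fall detector, one way to
# combine heterogeneous sensors such as radar and RGB-D. All values and
# weights below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str    # e.g. "radar" or "rgbd"
    p_fall: float  # that sensor's classifier estimate of fall probability

def fuse_decisions(readings, weights, threshold=0.5):
    """Weighted-average (late) fusion of per-sensor fall probabilities."""
    num = sum(weights[r.source] * r.p_fall for r in readings)
    den = sum(weights[r.source] for r in readings)
    return (num / den) >= threshold

# Example: radar is weighted slightly higher than RGB-D here (arbitrary choice).
readings = [SensorReading("radar", 0.82), SensorReading("rgbd", 0.40)]
print(fuse_decisions(readings, weights={"radar": 0.6, "rgbd": 0.4}))  # True
```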

    Robust 3D Action Recognition through Sampling Local Appearances and Global Distributions

    3D action recognition has broad applications in human-computer interaction and intelligent surveillance. However, recognizing similar actions remains challenging, since previous literature fails to capture motion and shape cues effectively from noisy depth data. In this paper, we propose a novel two-layer Bag-of-Visual-Words (BoVW) model, which suppresses noise disturbances and jointly encodes both motion and shape cues. First, background clutter is removed by a background modeling method designed for depth data. Then, motion and shape cues are jointly used to generate robust and distinctive spatial-temporal interest points (STIPs): motion-based STIPs and shape-based STIPs. In the first layer of our model, a multi-scale 3D local steering kernel (M3DLSK) descriptor is proposed to describe the local appearance of cuboids around motion-based STIPs. In the second layer, a spatial-temporal vector (STV) descriptor is proposed to describe the spatial-temporal distribution of shape-based STIPs. Using the BoVW model, motion and shape cues are combined to form a fused action representation. Our model performs favorably compared with common STIP detection and description methods. Thorough experiments verify that our model is effective in distinguishing similar actions and robust to background clutter, partial occlusions, and pepper noise.
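    The sketch below illustrates the generic BoVW step underlying the fused representation described above: descriptors from each layer are quantized against that layer's codebook, and the per-layer histograms are concatenated. The random descriptors and codebooks are stand-ins, not the paper's M3DLSK or STV features.

```python
# Schematic two-layer BoVW fusion: quantize each layer's local descriptors
# against its codebook, then concatenate the normalized histograms.
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest codeword; return a normalized histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(0)
motion_desc = rng.normal(size=(120, 64))     # stand-in for M3DLSK descriptors
shape_desc = rng.normal(size=(80, 32))       # stand-in for STV descriptors
motion_codebook = rng.normal(size=(100, 64)) # learned codebooks in practice
shape_codebook = rng.normal(size=(50, 32))

# Fused action representation: concatenation of the two layer histograms.
fused = np.concatenate([
    bovw_histogram(motion_desc, motion_codebook),
    bovw_histogram(shape_desc, shape_codebook),
])
print(fused.shape)  # (150,)
```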

    Fall detection and activity recognition using human skeleton features

    Human activity recognition has attracted the attention of researchers around the world. It is an interesting problem that can be addressed in different ways, and many approaches have been presented in recent years. These applications provide solutions for recognizing different kinds of activities, such as whether a person is walking, running, jumping, jogging, or falling, among others. Amongst all these activities, fall detection has special importance because it is a common, dangerous event for people of all ages, with a particularly negative impact on the elderly population. Usually, these applications use sensors to detect sudden changes in a person’s movement. Such sensors can be embedded in smartphones, necklaces, or smart wristbands to make them “wearable” devices. The main inconvenience is that these devices have to be placed on the subjects’ bodies, which may be uncomfortable and is not always feasible: this type of sensor must be monitored constantly and cannot be used in open spaces with unknown people. For this reason, fall detection from video camera images presents some advantages over wearable sensor-based approaches. This paper presents a vision-based approach to fall detection and activity recognition. The main contribution of the proposed method is to detect falls using only images from a standard video camera, without the need for environmental sensors. It carries out the detection using human skeleton estimation for feature extraction. The use of human skeleton detection opens the possibility of detecting not only falls but also different kinds of activities for several subjects in the same scene, so the approach can be used in real environments where a large number of people may be present at the same time. The method is evaluated on the public UP-FALL dataset and surpasses the performance of other fall detection and activity recognition systems that use that dataset.
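    As a toy illustration of skeleton-based fall detection, the sketch below flags a fall when a tracked hip keypoint drops rapidly and the final pose is wider than it is tall. The heuristic, joint names, and thresholds are assumptions for illustration, not the features or classifier used in the paper.

```python
# Toy skeleton-based fall check: given per-frame 2D joints from any pose
# estimator, flag a fall when the hip drops rapidly and the body flattens.
def is_fall(frames, drop_thresh=0.3, aspect_thresh=1.2):
    """frames: list of dicts mapping joint name -> (x, y) in normalized image
    coordinates (y grows downward). Returns True if a fall is detected."""
    if len(frames) < 2:
        return False
    first, last = frames[0], frames[-1]
    # Rapid downward displacement of the hip across the window.
    hip_drop = last["hip"][1] - first["hip"][1]
    # Body orientation: skeleton width relative to its height in the last frame.
    xs = [p[0] for p in last.values()]
    ys = [p[1] for p in last.values()]
    width = max(xs) - min(xs)
    height = (max(ys) - min(ys)) or 1e-6
    return hip_drop > drop_thresh and (width / height) > aspect_thresh

# Example: the hip moves toward the floor while the pose becomes horizontal.
upright = {"head": (0.50, 0.20), "hip": (0.50, 0.50), "ankle": (0.50, 0.80)}
fallen = {"head": (0.20, 0.85), "hip": (0.50, 0.88), "ankle": (0.80, 0.90)}
print(is_fall([upright, fallen]))  # True
```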

    A multi-scale filament extraction method: getfilaments

    Far-infrared imaging surveys of Galactic star-forming regions with Herschel have shown that a substantial part of the cold interstellar medium appears as a fascinating web of omnipresent filamentary structures. This highly anisotropic ingredient of the interstellar material further complicates the difficult problem of the systematic detection and measurement of dense cores in strongly variable but (relatively) isotropic backgrounds. Observational evidence that stars form in dense filaments creates severe problems for automated source extraction methods, which must reliably distinguish sources not only from fluctuating backgrounds and noise, but also from the filamentary structures themselves. A previous paper presented the multi-scale, multi-wavelength source extraction method getsources, based on a fine spatial scale decomposition and filtering of irrelevant scales from images. In this paper, a multi-scale, multi-wavelength filament extraction method, getfilaments, is presented that solves this problem, substantially improving the robustness of source extraction with getsources in filamentary backgrounds. The main difference is that the filaments extracted by getfilaments are now subtracted by getsources from detection images during source extraction, greatly reducing the chances of contaminating catalogs with spurious sources. The intimate physical relationship between forming stars and filaments seen in Herschel observations demands that accurate filament extraction methods remove the contribution of sources, and that accurate source extraction methods be able to remove underlying filamentary structures. Source extraction with getsources now also provides researchers with clean images of filaments, free of sources, noise, and isotropic backgrounds. Comment: 15 pages, 19 figures, to be published in Astronomy & Astrophysics; language polished for better readability.
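    The sketch below illustrates the core idea of subtracting the smooth filamentary/background component from a detection image so that compact sources stand out. It uses a single Gaussian smoothing scale on synthetic data as a stand-in for the full multi-scale decomposition that getfilaments actually performs.

```python
# Toy version of filament subtraction before source detection: estimate the
# large-scale (filament + background) component and subtract it, leaving a
# detection image dominated by compact, source-like structure.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.05, size=(128, 128))                   # noise
img += np.exp(-((np.arange(128)[None, :] - 64) ** 2) / 200.0)  # vertical "filament"
img[40, 40] += 5.0                                             # compact "source"

# Large-scale component approximates filament + background; the residual
# keeps only small-scale structure, where the point source dominates.
large_scale = gaussian_filter(img, sigma=8)
detection_image = img - large_scale

peak = np.unravel_index(detection_image.argmax(), detection_image.shape)
print(peak)  # (40, 40): the source stands out after filament subtraction
```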