    A Fallen Person Detector with a Privacy-Preserving Edge-AI Camera

    As the population ages, Ambient-Assisted Living (AAL) environments are increasingly used to support older individuals' safety and autonomy. In this study, we propose a low-cost, privacy-preserving sensor system integrated with mobile robots to enhance fall detection in AAL environments. We utilized the Luxonis OAK-D Edge-AI camera mounted on a mobile robot to detect fallen individuals. The system was trained using the YOLOv6 network on the E-FPDS dataset and, via knowledge distillation, transferred onto the more compact YOLOv5 network deployed on the camera. We evaluated the system's performance on a custom dataset captured with the robot-mounted camera, achieving a precision of 96.52%, a recall of 95.10%, and a recognition rate of 15 frames per second. The proposed system enhances the safety and autonomy of older individuals by enabling rapid detection of and response to falls.

    This work has been supported in part by the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/), funded by the EU H2020 Marie Skłodowska-Curie grant agreement No. 861091, and in part by the SFI Future Innovator Award SFI/21/FIP/DO/9955, project Smart Hangar.
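    The abstract does not spell out the distillation objective, so the following is only a minimal sketch of the classic logit-distillation loss, a common choice when compressing a large teacher network into a smaller student. The temperature T and mixing weight alpha are assumed hyperparameters; real detector distillation would also match box regressions, which is omitted here.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
        # Soft targets: the student mimics the teacher's softened class distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradients are comparable to the hard-label term
        # Hard targets: standard supervised loss on ground-truth labels.
        hard = F.cross_entropy(student_logits, targets)
        return alpha * soft + (1.0 - alpha) * hard

    # Hypothetical usage on one batch of per-class scores:
    teacher_logits = torch.randn(8, 2)                      # frozen YOLOv6 teacher output
    student_logits = torch.randn(8, 2, requires_grad=True)  # compact YOLOv5 student output
    targets = torch.randint(0, 2, (8,))
    distillation_loss(student_logits, teacher_logits, targets).backward()

    After training, the student network would be exported to a blob the OAK-D's on-board accelerator can execute; the exact export pipeline is not described in the abstract.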

    Fast and Robust Detection of Fallen People from a Mobile Robot

    This paper deals with the problem of detecting people lying fallen on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first labels each patch, while the second captures the spatial relations between patches. This approach proved to be both robust and fast. Thanks to the use of small patches, fallen people are correctly detected even in real cluttered scenes with objects lying side by side. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during navigation. The algorithm is also robust to illumination changes, since it relies on depth rather than RGB data. All methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution; it consists of several static and dynamic sequences with 15 different people and 2 different environments.
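    The two-stage classification can be illustrated with scikit-learn. The per-patch features (e.g. mean height, surface normal orientation) and relational features (e.g. centroid distances between neighbouring patches) below are assumptions standing in for the paper's actual descriptors, and the training data here is synthetic.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stage 1: label each over-segmented 3D patch from per-patch features.
    X_patch = rng.normal(size=(200, 3))      # one row of features per patch
    y_patch = rng.integers(0, 2, size=200)   # 1 = "person-like" patch
    patch_svm = SVC(kernel="rbf", probability=True).fit(X_patch, y_patch)

    # Stage 2: classify a group of neighbouring patches from features
    # describing their spatial relations.
    X_rel = rng.normal(size=(100, 4))
    y_rel = rng.integers(0, 2, size=100)     # 1 = fallen person present
    relation_svm = SVC(kernel="rbf").fit(X_rel, y_rel)

    def detect_fallen(patch_feats, relation_feats):
        # Per-patch scores gate the relational decision: if no patch looks
        # person-like at all, the second stage is skipped.
        scores = patch_svm.predict_proba(patch_feats)[:, 1]
        if scores.max() < 0.5:
            return False
        return bool(relation_svm.predict(relation_feats.reshape(1, -1))[0])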

    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. To perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data-fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which prove highly discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected with the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
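    A minimal sketch of the sequential fusion step, using the filterpy library: each sensor's planar position measurement is folded into a shared constant-velocity track, with assumed measurement covariances reflecting that the LRF leg detector is more precise than the camera-based face position. The paper's exact state vector and noise models are not given in the abstract.

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    DT = 0.1  # assumed sensor period in seconds

    def fx(x, dt):
        # Constant-velocity motion model; state = [px, py, vx, vy].
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        return F @ x

    def hx(x):
        # Both sensors observe the person's planar position.
        return x[:2]

    points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=DT, fx=fx, hx=hx, points=points)
    ukf.x = np.zeros(4)
    ukf.Q = np.eye(4) * 0.01           # assumed process noise

    R_LEGS = np.diag([0.05, 0.05])     # LRF leg detector: low measurement noise
    R_FACE = np.diag([0.20, 0.20])     # camera face position: noisier

    def step(legs_xy=None, face_xy=None):
        # Sequential update: one prediction, then one correction per
        # available measurement, each with its own covariance.
        ukf.predict()
        if legs_xy is not None:
            ukf.update(np.asarray(legs_xy, float), R=R_LEGS)
        if face_xy is not None:
            ukf.update(np.asarray(face_xy, float), R=R_FACE)
        return ukf.x[:2]               # fused position estimate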