10 research outputs found

    On the Query Strategies for Efficient Online Active Distillation

    Full text link
    Deep Learning (DL) requires large amounts of time and data, resulting in high computational demands. Recently, researchers have employed Active Learning (AL) and online distillation to enhance training efficiency and real-time model adaptation. This paper evaluates a set of query strategies to achieve the best training results. It focuses on Human Pose Estimation (HPE) applications, assessing the impact of the frames selected during training using two approaches: a classical offline method and an online evaluation through a continual learning approach employing knowledge distillation, on a popular state-of-the-art HPE dataset. The paper demonstrates the possibility of training lightweight models at the edge, adapting them effectively to new contexts in real time.
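As a concrete illustration of one family of query strategies evaluated in work like the above, the sketch below selects the frames a pose model is least confident about. The function name, and the use of per-frame mean keypoint confidence as the uncertainty signal, are illustrative assumptions, not the paper's actual strategy.

```python
import numpy as np

def least_confidence_query(frame_scores, k):
    """Select the k frames the model is least confident about.

    frame_scores: array of per-frame mean keypoint confidences in [0, 1].
    Returns the indices of the k lowest-confidence frames, i.e. the
    candidates to label (or distill on) next.
    """
    order = np.argsort(frame_scores)      # ascending confidence
    return order[:k].tolist()

# Example: 6 frames, pick the 2 hardest ones.
scores = np.array([0.91, 0.42, 0.88, 0.30, 0.77, 0.95])
print(least_confidence_query(scores, 2))  # frames 3 and 1
```

Other strategies from the AL literature (e.g., margin- or diversity-based selection) would replace only the scoring step.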

    A Dynamic and Collaborative Deep Inference Framework for Human Motion Analysis in Telemedicine

    No full text
    Human pose estimation software has reached high levels of accuracy in extrapolating 3D spatial information of human keypoints from images and videos. Nevertheless, deploying such intelligent video analytics at a distance to infer kinematic data for clinical applications requires the system to satisfy, besides spatial accuracy, more stringent extra-functional constraints. These include real-time performance and robustness to environment variability (i.e., computational workload, network bandwidth). In this paper we address these challenges by proposing a framework that implements accurate human motion analysis at a distance through collaborative and adaptive Edge-Cloud deep inference. We show how the framework adapts to edge workload variations and communication issues (e.g., delay and bandwidth variability) to preserve the global system accuracy. The paper presents the results obtained with two large datasets in which the framework accuracy and robustness are compared with a marker-based infrared motion capture system.
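A minimal sketch of the kind of adaptive placement decision such an Edge-Cloud framework makes, assuming a simple throughput model. All names, the throughput formula, and the default target rate are hypothetical simplifications, not the paper's actual policy.

```python
def choose_placement(edge_load, bandwidth_mbps, frame_mbits,
                     edge_fps, cloud_fps, target_fps=25.0):
    """Pick where to run inference so the end-to-end rate stays above target.

    edge_load: current utilisation of the edge device in [0, 1].
    The cloud path pays a network cost: each frame must be shipped first,
    so the link itself caps the achievable cloud frame rate.
    """
    effective_edge_fps = edge_fps * (1.0 - edge_load)
    transfer_fps = bandwidth_mbps / frame_mbits          # frames/s the link sustains
    effective_cloud_fps = min(transfer_fps, cloud_fps)
    if effective_edge_fps >= target_fps:
        return "edge"                                    # privacy-friendly default
    return "cloud" if effective_cloud_fps > effective_edge_fps else "edge"

print(choose_placement(0.8, 100, 2, 30, 60))  # overloaded edge, fast link: "cloud"
```

A real framework would re-evaluate this decision continuously as load and bandwidth vary.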

    On the Pose Estimation Software for Measuring Movement Features in the Finger-to-Nose Test

    No full text
    Assessing upper limb (UL) movements post-stroke is crucial to monitor and understand sensorimotor recovery. Recently, several research works have focused on the relationship between reach-to-target kinematics and clinical outcomes. Since, conventionally, the assessment of sensorimotor impairments is primarily based on clinical scales and observation, and hence likely to be subjective, one of the challenges is to quantify such kinematics through automated platforms like inertial measurement units, optical, or electromagnetic motion capture systems. Even more challenging is to quantify UL kinematics through non-invasive systems, to avoid any influence or bias in the measurements. In this context, tools based on video cameras and deep learning software have been shown to achieve high levels of accuracy in estimating the human pose. Nevertheless, an analysis of their accuracy in measuring kinematic features for the Finger-to-Nose Test (FNT) is missing. We first present an extended quantitative evaluation of such inference software (i.e., OpenPose) for measuring a clinically meaningful set of UL movement features. Then, we propose an algorithm and the corresponding software implementation that automates the segmentation of the FNT movements. This allows us to automatically extrapolate the whole set of measures from the videos with no manual intervention. We measured the software accuracy by using an infrared motion capture system on a total of 26 healthy and 26 stroke subjects.
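One plausible core of an FNT segmentation step is detecting the frames where the fingertip touches the nose, splitting the trial into repetitions. The local-minimum heuristic and the threshold below are an illustrative sketch, not the paper's published algorithm.

```python
def segment_fnt(distances, touch_threshold):
    """Split a finger-to-nose trial into repetitions.

    distances: fingertip-to-nose distance per frame (any unit).
    A repetition boundary is a local minimum below touch_threshold,
    i.e. a frame where the finger touches (or nearly touches) the nose.
    Returns the indices of those touch frames.
    """
    touches = []
    for i in range(1, len(distances) - 1):
        if (distances[i] < touch_threshold
                and distances[i] <= distances[i - 1]
                and distances[i] <= distances[i + 1]):
            touches.append(i)
    return touches

# Two reach-and-touch cycles; the finger reaches the nose at frames 3 and 9.
d = [30, 20, 8, 2, 9, 25, 28, 18, 6, 1, 7, 26]
print(segment_fnt(d, 5))  # [3, 9]
```

Per-repetition features (duration, peak velocity, smoothness) can then be computed between consecutive touch frames.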

    Efficient {ROS}-Compliant {CPU}-{iGPU} Communication on Embedded Platforms

    No full text
    Many modern programmable embedded devices contain CPUs and a GPU that share the same system memory on a single die. Such a unified memory architecture (UMA) allows programmers to implement different communication models between the CPU and the integrated GPU (iGPU). While the simpler model guarantees implicit synchronization at the cost of performance, the more advanced model eliminates explicit data copying between CPU and iGPU through the zero-copy paradigm, with the benefit of significantly improved performance and energy savings. On the other hand, the robot operating system (ROS) has become a de facto reference standard for developing robotic applications. It allows for application re-use and the easy integration of software blocks in complex cyber-physical systems. Although ROS compliance is strongly required for software portability and reuse, it can lead to performance loss and elude the benefits of zero-copy communication. In this article we present efficient techniques to implement CPU-iGPU communication while guaranteeing compliance with the ROS standard. We show how the key features of each communication model are maintained and the corresponding overhead introduced by ROS compliance.
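The zero-copy idea can be illustrated outside the CPU-iGPU setting: the sketch below shares one NumPy buffer between a producer and a consumer via Python's `multiprocessing.shared_memory`, so the consumer reads the data in place instead of receiving a copy. This is only an analogy for the UMA zero-copy model, not the ROS- or driver-level mechanism the article targets.

```python
import numpy as np
from multiprocessing import shared_memory

# Producer: write one frame into a named shared buffer (the single write).
frame = np.arange(8, dtype=np.float32)
shm = shared_memory.SharedMemory(create=True, size=frame.nbytes)
producer_view = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)
producer_view[:] = frame

# Consumer: attach to the same buffer by name -- no data copy is made.
shm_consumer = shared_memory.SharedMemory(name=shm.name)
consumer_view = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm_consumer.buf)
total = float(consumer_view.sum())   # reads the producer's bytes in place
print(total)

# Views must be dropped before the buffers can be closed and unlinked.
del producer_view, consumer_view
shm_consumer.close()
shm.close()
shm.unlink()
```

As in the UMA case, avoiding the copy shifts the burden to explicit synchronization: producer and consumer must agree on when the buffer is valid.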

    Risk Assessment and Prediction in Human-Robot Interaction Through Assertion Mining and Pose Estimation

    No full text
    Implementing accurate run-time and resource-efficient risk predictions for human-robot interaction is an open challenge, as it requires executing long and resource-intensive tasks. This paper addresses this challenge by presenting a methodology to avoid unwanted outcomes in the context of human-robot interaction, with the goal of minimising the cost of prediction. It is based on a two-phase approach. First, the occurrence of risky situations is monitored by a "light" assertion miner. The miner continuously analyses the execution traces that describe the behaviours of humans and robots, searching for risk conditions expressed through logic formulas. When any of these is detected, the miner automatically extracts the causes that led to the risky situation. At that point, it activates the more accurate and "heavier" predictor to monitor the risk of collision. This leads to sensible improvements in the prediction and resource saving of the overall monitoring system. Experiments have been conducted on an industrial case study implementing a smart manufacturing line with a Kuka LBR IIWA R820 robot.
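The two-phase idea can be sketched as a cheap boolean gate in front of an expensive predictor. The assertion, the crude time-to-contact formula, and the trace fields below are invented for illustration; they are not the paper's mined formulas.

```python
def monitor(trace, risk_assertion, heavy_predictor):
    """Two-phase monitoring: a cheap assertion gate in front of a costly predictor.

    trace: iterable of state dicts describing human/robot behaviour.
    risk_assertion: cheap boolean formula evaluated on every state.
    heavy_predictor: expensive collision-risk estimator, invoked only
    when the assertion fires.
    """
    alerts = []
    for t, state in enumerate(trace):
        if risk_assertion(state):                        # "light" phase
            alerts.append((t, heavy_predictor(state)))   # "heavy" phase
    return alerts

# Hypothetical formula: human closer than 0.5 m while the robot is moving.
risky = lambda s: s["dist"] < 0.5 and s["robot_speed"] > 0
predict = lambda s: s["dist"] / max(s["robot_speed"], 1e-9)  # crude time-to-contact

trace = [{"dist": 2.0, "robot_speed": 0.3},
         {"dist": 0.4, "robot_speed": 0.2},
         {"dist": 0.6, "robot_speed": 0.0}]
print(monitor(trace, risky, predict))  # heavy predictor runs once, at t=1
```

The saving comes from the gate: the expensive estimator runs on one state out of three here, and far fewer in a long benign trace.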

    Process-driven Collision Prediction in Human-Robot Work Environments

    No full text
    In mixed human-robot work cells the emphasis is traditionally on collision avoidance to prevent injuries and production down times. In this paper we discuss how long in advance a collision can be predicted given the behavior of a robotic arm and the current occupancy of both the robot and the human. Assuming that the behavior of the robot is a combination of a set of predefined operations, we propose an approach to learn this behavior and use it to estimate the time before a collision. The pose of the human is estimated by a multi-camera inference application based on neural networks at the edge, to preserve privacy and ensure scalability. The occupancy of the manipulator and of the human is modeled through the composition of segments, which overcomes the traditional "virtual cage" and can be adapted to different human beings and robots. The system has been implemented in a real factory scenario to demonstrate its readiness regarding both industrial constraints and computational complexity.
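A segment-based occupancy test of the kind described can be sketched as a capsule check: each body part or link is a sphere swept along a segment, and contact is predicted when the point-to-segment distance falls below the sum of the radii. Keypoint names and radii below are illustrative; the full model composes many such segments for the whole body and arm.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (all 3D numpy arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def collision_risk(keypoint, link_a, link_b, human_r, link_r):
    """Capsule-style check: a human keypoint with radius human_r collides
    with a robot link (segment link_a-link_b swept by radius link_r)
    when the point-to-segment distance drops below the sum of radii."""
    return point_segment_distance(keypoint, link_a, link_b) < human_r + link_r

wrist = np.array([0.5, 0.0, 1.0])
elbow_j, tool_j = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])
print(collision_risk(wrist, elbow_j, tool_j, 0.10, 0.08))  # True: wrist on the link
```

Predicting *time* to collision then amounts to running this check on forecast positions along the robot's learned operation.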

    Integrating Wearable and Camera Based Monitoring in the Digital Twin for Safety Assessment in the Industry 4.0 Era

    No full text
    The occurrence of human errors in work processes reduces the quality of results, increases the costs due to compensatory actions, and may have heavy repercussions on the workers' safety. The definition of rules and procedures that workers have to respect has been shown to be insufficient to guarantee their safety, as negligence and opportunistic behaviours can unfortunately lead to catastrophic consequences. In the Industry 4.0 era, with the advent of the digital twin in smart factories, advanced systems can be exploited for automatic risk prediction and avoidance. By leveraging the new opportunities provided by the digital twin and, in particular, the introduction of wearable sensors and computer vision, we propose an automatic system for monitoring human behaviours in a smart factory in real time. The final goal is to feed cloud-based safety assessment tools that evaluate human errors and raise consequent alerts when required.

    Enabling gait analysis in the telemedicine practice through portable and accurate 3D human pose estimation

    No full text
    Human pose estimation (HPE) through deep learning-based software applications is a trending topic for markerless motion analysis. Thanks to the accuracy of the state-of-the-art technology, HPE could enable gait analysis in the telemedicine practice. On the other hand, delivering such a service at a distance requires the system to satisfy multiple different constraints, such as accuracy, portability, real-time performance, and privacy compliance, at the same time. Existing solutions either guarantee accuracy and real-time performance (e.g., the widespread OpenPose software on well-equipped computing platforms) or portability and data privacy (e.g., light convolutional neural networks on mobile phones). We propose a portable and low-cost platform that implements real-time and accurate 3D HPE through embedded software on a low-power off-the-shelf computing device that guarantees privacy by default and by design. We present an extended evaluation of both the accuracy and the performance of the proposed solution conducted with a marker-based motion capture system (i.e., Vicon) as ground truth. The results show that the platform achieves real-time performance and high accuracy, with a deviation below the error tolerance when compared to the marker-based motion capture system (e.g., an error of less than 5° on the estimated knee flexion difference over the entire gait cycle and correlation 0.91 < ρ < 0.99). We provide a proof-of-concept study, showing that such portable technology, considering the limited discrepancies with respect to the marker-based motion capture system and its working tolerance, could be used for gait analysis at a distance without leading to different clinical interpretations.
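As an example of the kind of joint-angle feature compared against the motion capture ground truth, knee flexion can be computed from three estimated 3D keypoints. The convention below (0° for a fully extended leg) and the sample coordinates are illustrative, not taken from the paper.

```python
import numpy as np

def knee_flexion_deg(hip, knee, ankle):
    """Knee flexion angle from three 3D keypoints: 180 degrees minus the
    angle between the thigh (knee->hip) and shank (knee->ankle) vectors,
    so a fully extended leg gives 0 and bending increases the value."""
    thigh = hip - knee
    shank = ankle - knee
    cos = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return 180.0 - np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip = np.array([0.0, 1.0, 0.0])
knee = np.array([0.0, 0.5, 0.0])
ankle = np.array([0.0, 0.0, 0.1])   # shank tilted slightly forward
print(round(knee_flexion_deg(hip, knee, ankle), 1))  # ~11.3 degrees
```

Comparing this per-frame angle between the HPE output and the marker-based system over a gait cycle yields the error and correlation figures reported above.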

    Preserving Data Privacy and Accuracy of Human Pose Estimation Software Based on CNNs for Remote Gait Analysis

    No full text
    In recent years there have been significant improvements in the accuracy of real-time 3D skeletal data estimation software. These applications, based on convolutional neural networks (CNNs), can play a key role in a variety of clinical scenarios, from gait analysis to medical diagnosis. One of the main challenges is to apply such intelligent video analytics at a distance, which requires the system to satisfy, besides accuracy, also data privacy. To satisfy privacy by default and by design, the software has to run on "edge" computing devices, by which the sensitive information (i.e., the video stream) is elaborated close to the camera, while only the processed results can be stored or sent over the communication network. In this paper we address this challenge by evaluating the accuracy of state-of-the-art software for human pose estimation when run "at the edge". We show how the most accurate platforms for pose estimation, based on complex and deep neural networks, can become inaccurate due to subsampling of the input video frames when run on resource-constrained edge devices. In contrast, we show that, starting from less accurate and "lighter" CNNs and enhancing the pose estimation software with filters and interpolation primitives, the platform achieves better real-time performance and higher accuracy, with a deviation below the error tolerance of a marker-based motion capture system.
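The filter-and-interpolate idea can be sketched on one keypoint coordinate: frames dropped by subsampling are filled by linear interpolation and the track is then smoothed with a small moving average. This is a generic reconstruction assuming NaNs mark dropped frames; the paper's actual primitives are not specified here.

```python
import numpy as np

def fill_and_smooth(xs, window=3):
    """Linearly interpolate keypoint coordinates dropped by frame
    subsampling (NaN entries), then smooth with a moving average."""
    xs = np.asarray(xs, dtype=float)
    idx = np.arange(len(xs))
    known = ~np.isnan(xs)
    filled = np.interp(idx, idx[known], xs[known])   # fill the gaps
    kernel = np.ones(window) / window
    padded = np.pad(filled, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid") # smooth jitter

# One coordinate of a keypoint; every other frame was skipped on the device.
track = [0.0, np.nan, 2.0, np.nan, 4.0, np.nan, 6.0]
print(fill_and_smooth(track))
```

On a lighter CNN, this post-processing recovers a dense, low-jitter trajectory from the sparse per-frame estimates the device can afford.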

    Real-time Human Pose Estimation at the Edge for Gait Analysis at a Distance

    No full text
    Health telematics is a major improvement to patients' lives and has been shown to be a key practice to deliver healthcare services, overcoming geographical, temporal, and even organizational barriers. One of the main challenges is to perform gait analysis at a distance through camera-based platforms, which requires the system to satisfy, besides accuracy and real-time performance, also portability and privacy compliance at the same time. We address this challenge by proposing a portable and low-cost platform that implements real-time and accurate 3D human pose estimation through embedded software on a low-power off-the-shelf computing device that guarantees privacy by default and by design. We evaluated both the accuracy and the performance of the proposed solution through an infrared marker-based motion capture system as ground truth, to understand if and how such a portable technology can be used for gait analysis at a distance without leading to different clinical interpretations.