A Dynamic and Collaborative Deep Inference Framework for Human Motion Analysis in Telemedicine

Abstract

Human pose estimation software has reached high levels of accuracy in extrapolating 3D spatial information of human keypoints from images and videos. Nevertheless, deploying such intelligent video analytics at a distance to infer kinematic data for clinical applications requires the system to satisfy, besides spatial accuracy, more stringent extra-functional constraints. These include real-time performance and robustness to environment variability (e.g., computational workload, network bandwidth). In this paper, we address these challenges by proposing a framework that implements accurate human motion analysis at a distance through collaborative and adaptive Edge-Cloud deep inference. We show how the framework adapts to edge workload variations and communication issues (e.g., delay and bandwidth variability) to preserve the global system accuracy. The paper presents the results obtained with two large datasets, in which the framework's accuracy and robustness are compared with a marker-based infra-red motion capture system.
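To illustrate the kind of adaptive Edge-Cloud decision the abstract describes, the following is a minimal sketch of a per-frame offloading policy that weighs edge workload against network conditions. It is not the authors' implementation; all names, thresholds, and the assumed two-tier model split (lightweight edge model vs. heavier cloud model) are hypothetical.

```python
# Illustrative sketch of an adaptive Edge-Cloud offloading policy for
# per-frame pose inference. Thresholds and model-latency figures are
# assumptions, not values from the paper.
from dataclasses import dataclass


@dataclass
class LinkStats:
    bandwidth_mbps: float    # estimated uplink bandwidth to the cloud
    rtt_ms: float            # estimated round-trip delay


@dataclass
class EdgeStats:
    cpu_load: float          # 0.0-1.0 utilization of the edge device
    local_latency_ms: float  # recent latency of the lightweight edge model


def choose_execution_site(frame_kb: float, edge: EdgeStats, link: LinkStats,
                          deadline_ms: float = 66.0) -> str:
    """Pick 'edge' or 'cloud' so the per-frame deadline (~15 fps) holds.

    The cloud path trades transfer time for a heavier, more accurate model;
    the edge path avoids the network but slows down under high local load.
    """
    # Estimated time to ship one frame to the cloud and get keypoints back.
    # frame_kb * 8 gives kilobits; bandwidth in Mb/s equals kilobits per ms.
    transfer_ms = (frame_kb * 8.0) / max(link.bandwidth_mbps, 1e-3)
    cloud_latency_ms = transfer_ms + link.rtt_ms + 20.0  # assume ~20 ms cloud inference

    # Edge latency inflates as the device gets busier.
    edge_latency_ms = edge.local_latency_ms * (1.0 + edge.cpu_load)

    if cloud_latency_ms <= deadline_ms and cloud_latency_ms <= edge_latency_ms:
        return "cloud"   # network is good enough: prefer the more accurate model
    return "edge"        # otherwise run (or degrade gracefully) on the edge


if __name__ == "__main__":
    edge = EdgeStats(cpu_load=0.7, local_latency_ms=35.0)
    link = LinkStats(bandwidth_mbps=4.0, rtt_ms=40.0)
    print(choose_execution_site(frame_kb=90.0, edge=edge, link=link))  # -> "edge"
```

In this toy scenario, a constrained uplink makes the cloud round trip miss the frame deadline, so the policy falls back to the local model, mirroring the adaptation to bandwidth and workload variability that the framework targets.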
