Quantitative Comparison of Human and Software Reliability in the Categorisation of Sit-to-stand Motion Pattern

Abstract

The Sit-to-Stand (STS) test is used in clinical practice as an indicator of lower-limb functional decline, especially in older adults. Owing to its high variability, there is no standard approach for categorising the STS movement and recognising its motion pattern. This paper presents a comparative analysis between visual assessments and automated software for the categorisation of STS, relying on recordings from a force plate. Five participants (30 ± 6 years) took part in two sessions of visual inspection of 200 STS movements performed under self-paced and controlled-speed conditions. Assessors were asked to identify three specific STS events from the Ground Reaction Force (GRF), in parallel with the software analysis: the start of the trunk movement (Initiation), the beginning of the stable upright stance (Standing) and the onset of the sitting movement (Sitting). The absolute agreement between the raters' repeated assessments, and between the raters' and the software's assessments in the first trial, were taken as indexes of human and software performance, respectively. No statistically significant differences between methods were found for the identification of the Initiation and Sitting events at self-paced speed, or for the Sitting event alone at controlled speed. The estimated significant maximum discrepancies between visual and automated assessments were 0.200 [0.039; 0.361] s under unconstrained conditions and 0.340 [0.014; 0.666] s for standardised movements. The software assessments showed overall good agreement with the visual evaluations of the GRF while relying on objective measures.
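
To illustrate the kind of automated event detection described above, the sketch below locates the three events in a vertical GRF trace using simple threshold and dwell-time rules. It is a minimal Python sketch under stated assumptions: the function name `detect_sts_events` and all thresholds and window lengths are illustrative choices, not the algorithm or parameter values used in the study.

```python
import numpy as np

def detect_sts_events(grf_v, fs, body_weight_n,
                      baseline_s=0.5, dev_n=10.0, stable_s=0.5, tol_n=15.0):
    """Threshold-based sketch for locating STS events in a vertical GRF trace.

    grf_v          vertical ground reaction force, 1-D array [N]
    fs             sampling frequency [Hz]
    body_weight_n  subject's body weight [N], e.g. from quiet standing
    All thresholds and window lengths are illustrative assumptions.
    Returns sample indices (initiation, standing, sitting); None if not found.
    """
    grf_v = np.asarray(grf_v, dtype=float)
    n_base = int(baseline_s * fs)
    n_stable = int(stable_s * fs)
    baseline = grf_v[:n_base].mean()  # seated load before movement onset

    # Initiation: first deviation from the seated baseline beyond dev_n.
    dev = np.abs(grf_v - baseline) > dev_n
    if not dev.any():
        return None, None, None
    initiation = int(np.argmax(dev))

    # Mask of samples within tol_n of body weight (candidate upright stance).
    near_bw = np.abs(grf_v - body_weight_n) < tol_n

    def first_stable(start, mask):
        # First index >= start where `mask` holds for n_stable consecutive samples.
        run = 0
        for i in range(start, len(mask)):
            run = run + 1 if mask[i] else 0
            if run >= n_stable:
                return i - n_stable + 1
        return None

    # Standing: start of the first sustained near-body-weight plateau.
    standing = first_stable(initiation, near_bw)
    if standing is None:
        return initiation, None, None

    # Sitting: first sustained departure from body weight after Standing,
    # taken here as the onset of the sit-down transition.
    sitting = first_stable(standing + n_stable, ~near_bw)
    return initiation, standing, sitting
```

In practice the GRF would typically be low-pass filtered before thresholding, and the deviation and tolerance values tuned per subject; the software evaluated in the study may apply different criteria.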
