Fall recovery subactivity recognition with RGB-D cameras

Kalana Ishara Withanage, Ivan Lee, Russell Brinkworth, Shylie Mackintosh and Dominic Thewlis

Abstract

Accidental falls have been identified as a cause of mortality among elders who live alone around the globe. Following a fall, additional injury can be sustained if proper fall recovery techniques are not followed. These secondary complications can be reduced if the person has access to safe recovery procedures or is assisted, either by another person or by a robot. We propose a framework for in situ robotic assistance in post-fall recovery scenarios. To assist autonomously, robots need to recognize an individual's posture and subactivities (e.g., falling, rolling, moving to hands and knees, crawling, pushing up through the legs, sitting, or standing). Human body skeleton tracking through RGB-D pose estimation fails to identify body parts during key phases of fall recovery due to the high occlusion rates in fallen and recovering postures. To address this issue, we investigated how low-level image features can be leveraged to recognize an individual's subactivities. The depth cuboid similarity feature (DCSF) approach was improved with M-partitioned histograms of depth cuboid prototypes, integration of activity progression direction, and removal of outlier spatiotemporal interest points. Our modified DCSF algorithm was evaluated on a unique multiview RGB-D dataset, achieving 87.43 ± 1.74% accuracy over all 3003 (C(15,10)) combinations of training-test groups of 15 subjects in 10 trials. This accuracy was significantly higher than that of the nearest competing method, and training was faster. This work could lead to more accurate in situ robotic assistance for fall recovery, potentially saving the lives of fall victims.
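The modified pipeline described above can be sketched in code. The following is a minimal illustration only, not the authors' implementation: descriptor extraction is stubbed with synthetic data, and the codebook size K, the partition count M, the k-nearest-neighbour outlier rule, and the SVM classifier are all assumptions made for the sake of a runnable example.

```python
# Minimal sketch of the modified DCSF pipeline described in the abstract.
# Assumptions (not from the paper): cuboid descriptors are synthetic,
# K prototypes are learned with k-means, outliers are removed with a
# k-nearest-neighbour distance rule, and an SVM does the classification.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.svm import SVC

K = 64   # number of depth cuboid prototypes (codebook size, assumed)
M = 3    # number of temporal partitions per video clip (assumed)

def remove_outlier_stips(stips, descriptors, k=5, thresh=2.0):
    """Drop STIPs whose mean distance to their k nearest (x, y, frame)
    neighbours is more than `thresh` standard deviations above average
    (an assumed stand-in for the paper's outlier-removal step)."""
    d = cdist(stips, stips)                      # pairwise STIP distances
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn < knn.mean() + thresh * knn.std()
    return stips[keep], descriptors[keep]

def m_partitioned_histogram(stips, descriptors, codebook, n_frames):
    """Concatenate one bag-of-prototypes histogram per temporal
    partition; ordering the partitions preserves the direction of
    activity progression."""
    words = codebook.predict(descriptors)        # prototype index per cuboid
    feature = []
    for m in range(M):
        lo, hi = m * n_frames / M, (m + 1) * n_frames / M
        in_part = (stips[:, 2] >= lo) & (stips[:, 2] < hi)  # column 2 = frame
        hist = np.bincount(words[in_part], minlength=K).astype(float)
        feature.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feature)               # length M * K

# Illustrative usage with synthetic data standing in for real depth videos.
rng = np.random.default_rng(0)
all_desc = rng.random((500, 128))                # 128-D descriptors (assumed)
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_desc)

videos, labels = [], []
for label in range(6):                           # six subactivity classes
    for _ in range(10):
        stips = np.column_stack([rng.random((80, 2)) * 100,
                                 rng.random(80) * 90])      # x, y, frame
        desc = rng.random((80, 128))
        stips, desc = remove_outlier_stips(stips, desc)
        videos.append(m_partitioned_histogram(stips, desc, codebook, n_frames=90))
        labels.append(label)

clf = SVC(kernel="rbf").fit(videos, labels)
```

The M-partitioned, ordered concatenation is what distinguishes this representation from a plain bag-of-words histogram: two subactivities with the same overall cuboid statistics but opposite temporal progression (e.g., falling versus pushing up) map to different feature vectors.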
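The 3003 figure in the evaluation follows directly from the subject-wise cross-validation: choosing which 10 of the 15 subjects form the training group (leaving the remaining 5 for testing) yields C(15,10) distinct splits, as the short check below confirms. The split direction, 10 subjects for training and 5 for testing, is our assumption.

```python
# Why 3003: choosing 10 of the 15 subjects for training (the remaining
# 5 for testing) gives C(15, 10) = 3003 distinct training-test splits.
from itertools import combinations
from math import comb

subjects = range(1, 16)                      # 15 subjects
splits = list(combinations(subjects, 10))    # all training groups of 10
assert len(splits) == comb(15, 10) == 3003
```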
