
    Detecting Physical Collaborations in a Group Task Using Body-Worn Microphones and Accelerometers

    This paper presents a method of using wearable accelerometers and microphones to detect instances of ad-hoc physical collaborations between members of a group. Four people are instructed to construct a large video wall and must cooperate to complete the task. The task is loosely structured, with minimal outside assistance, to better reflect the ad-hoc nature of many real-world construction scenarios. Audio data, recorded from chest-worn microphones, is used to reveal information on collocation, i.e. whether or not participants are near one another. Movement data, recorded using 3-axis accelerometers worn on each person's head and wrists, is used to provide information on correlated movements, such as when participants help one another to lift a heavy object. Collocation and correlated-movement information is then combined to determine who is working together at any given time. The work shows how data from commonly available sensors can be combined across multiple people, using a simple, low-power algorithm, to detect a range of physical collaborations.
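
    The abstract describes fusing pairwise collocation and correlated-movement evidence. The sketch below shows one plausible reading of that fusion step, assuming synchronised, equal-length audio-envelope and wrist-acceleration-magnitude streams per person; the window length, the thresholds, and the use of windowed Pearson correlation are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: pairwise collaboration flags from audio collocation and
# correlated movement. Inputs are assumed to be synchronised, equal-length
# 1-D numpy arrays per person; thresholds and window size are illustrative.
import numpy as np
from itertools import combinations

WINDOW = 250          # samples per analysis window (e.g. 5 s at 50 Hz)
AUDIO_THRESH = 0.6    # audio-envelope correlation suggesting collocation
MOTION_THRESH = 0.5   # acceleration-magnitude correlation suggesting joint movement

def windowed_corr(a, b, window=WINDOW):
    """Pearson correlation of two signals over non-overlapping windows."""
    n = min(len(a), len(b)) // window
    corrs = np.empty(n)
    for i in range(n):
        sa = a[i * window:(i + 1) * window]
        sb = b[i * window:(i + 1) * window]
        corrs[i] = np.corrcoef(sa, sb)[0, 1]
    return corrs

def detect_collaborations(audio_env, accel_mag):
    """audio_env, accel_mag: dicts person_id -> 1-D numpy array.
    Returns dict (person_a, person_b) -> boolean array, one flag per window."""
    pairs = {}
    for a, b in combinations(sorted(audio_env), 2):
        collocated = windowed_corr(audio_env[a], audio_env[b]) > AUDIO_THRESH
        co_moving = windowed_corr(accel_mag[a], accel_mag[b]) > MOTION_THRESH
        n = min(len(collocated), len(co_moving))
        # A pair is flagged as collaborating only when both cues agree.
        pairs[(a, b)] = collocated[:n] & co_moving[:n]
    return pairs
```

    Thresholding two per-window correlations and taking their conjunction keeps the per-pair cost low, which is in the spirit of the simple, low-power algorithm the abstract mentions.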

    Analysis of the Usefulness of Mobile Eyetracker for the Recognition of Physical Activities

    We investigate the usefulness of information from a wearable eyetracker for detecting physical activities during assembly and construction tasks. Large physical activities, like carrying heavy items and walking, are analysed alongside more precise, hand-tool activities like using a screwdriver. Statistical analysis of eye-based features, such as fixation length and fixation frequency, shows significant correlations for precise activities. Using this finding, we selected 10 calibration-free eye features to train a classifier for recognising up to 6 different activities. Frame-by-frame and event-based results are presented using data from an 8-person dataset containing over 600 activity events. We also evaluate the recognition performance when gaze features are combined with data from wearable accelerometers and microphones. Our initial results show a duration-weighted event precision and recall of up to 0.69 and 0.84, respectively, for independently trained recognition of precise activities using gaze. This indicates that gaze is suitable for spotting subtle precise activities and can be a useful source for more sophisticated classifier fusion.
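
    The abstract mentions training a classifier on calibration-free gaze features such as fixation length and fixation frequency. The sketch below is a minimal illustration of that idea, assuming fixations have already been segmented into per-window duration lists; the two features shown and the random-forest model are placeholders for the paper's ten features and its unspecified classifier.

```python
# Hedged sketch: window-level gaze features (mean fixation length, fixation
# rate) fed to a generic classifier. Feature set and model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gaze_features(fixation_durations, window_seconds):
    """Per-window features: mean fixation duration and fixations per second."""
    durations = np.asarray(fixation_durations, dtype=float)
    if durations.size == 0:
        return np.array([0.0, 0.0])
    return np.array([durations.mean(), durations.size / window_seconds])

def train_activity_classifier(windows, labels, window_seconds=5.0):
    """windows: list of per-window fixation-duration lists; labels: activity ids."""
    X = np.vstack([gaze_features(w, window_seconds) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```

    Because the features use only relative fixation statistics rather than gaze coordinates, they remain usable without per-user spatial calibration, which is the property the abstract highlights; additional modalities such as accelerometer or microphone features could be appended to the same feature vector for fusion experiments.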