
    Combining inertial and visual sensing for human action recognition in tennis

    In this paper, we present a framework for both the automatic extraction of the temporal location of tennis strokes within a match and the subsequent classification of each stroke as a serve, forehand, or backhand. We employ low-cost visual sensing and low-cost inertial sensing to achieve these aims, whereby a single modality can be used, or a fusion of both classification strategies can be adopted if both modalities are available within a given capture scenario. This flexibility allows the framework to be applied to a variety of user scenarios and hardware infrastructures. Our proposed approach is quantitatively evaluated using data captured from elite tennis players. Results point to the highly accurate performance of the proposed approach irrespective of the input modality configuration.
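
    A minimal sketch of the kind of decision-level fusion described above: each modality's classifier emits per-class probabilities for a detected stroke, and the framework falls back to a single modality when only one is available. The weighting scheme and all probability values are illustrative assumptions, not the paper's method.

```python
import numpy as np

CLASSES = ["serve", "forehand", "backhand"]

def fuse_predictions(p_inertial=None, p_visual=None, w_inertial=0.5):
    """Combine per-class probabilities from whichever modalities are present."""
    if p_inertial is not None and p_visual is not None:
        p = w_inertial * np.asarray(p_inertial) + (1 - w_inertial) * np.asarray(p_visual)
    elif p_inertial is not None:
        p = np.asarray(p_inertial)
    elif p_visual is not None:
        p = np.asarray(p_visual)
    else:
        raise ValueError("at least one modality is required")
    return CLASSES[int(np.argmax(p))], p

# Example: both modalities favour 'forehand'; fusion sharpens the decision.
label, probs = fuse_predictions([0.1, 0.7, 0.2], [0.2, 0.5, 0.3])
print(label, probs)
```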

    Multi-sensor classification of tennis strokes

    In this work, we investigate tennis stroke recognition using a single inertial measurement unit attached to a player’s forearm during a competitive match. This paper evaluates the best approach for stroke detection using the accelerometers, gyroscopes, or magnetometers embedded in the unit. This work identifies the optimal training data set for stroke classification and shows that classifiers can perform well when tested on players who were not used to train them. This work provides a significant step forward for our overall goal, which is to develop next-generation sports coaching tools using both inertial and visual sensors in an instrumented indoor sporting environment.
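
    A hedged sketch of the player-independent evaluation the abstract describes: leave-one-player-out cross-validation, where the classifier is always tested on a player excluded from training. The features, classifier choice, and synthetic stand-in data below are assumptions for illustration, so the printed accuracies are near chance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # e.g. windowed mean/variance/energy per IMU axis
y = rng.integers(0, 3, size=200)        # 0 = serve, 1 = forehand, 2 = backhand
players = rng.integers(0, 6, size=200)  # which player produced each stroke

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=players):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print("per-player accuracy:", np.round(scores, 2))
```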

    TRUSS: Tracking Risk with Ubiquitous Smart Sensing

    We present TRUSS, or Tracking Risk with Ubiquitous Smart Sensing, a novel system that infers and renders safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearables stream real-time levels of dangerous gases, dust, noise, light quality, altitude, and motion to base stations that synchronize the mobile devices, monitor the environment, and capture video. At the same time, low-power video collection and processing nodes track the workers as they move through the view of the cameras, identifying the tracks using information from the sensors. These processes together connect the context-mining wearable sensors to the video; information derived from the sensor data is used to highlight salient elements in the video stream. The augmented stream in turn provides users with better understanding of real-time risks, and supports informed decision-making. We tested our system in an initial deployment on an active construction site. Funding: Intel Corporation; Massachusetts Institute of Technology, Media Laboratory; Eni S.p.A.

    Detecting Physical Collaborations in a Group Task Using Body-Worn Microphones and Accelerometers

    This paper presents a method of using wearable accelerometers and microphones to detect instances of ad-hoc physical collaborations between members of a group. Four people are instructed to construct a large video wall and must cooperate to complete the task. The task is loosely structured with minimal outside assistance to better reflect the ad-hoc nature of many real-world construction scenarios. Audio data, recorded from chest-worn microphones, is used to reveal information on collocation, i.e. whether or not participants are near one another. Movement data, recorded using 3-axis accelerometers worn on each person's head and wrists, is used to provide information on correlated movements, such as when participants help one another to lift a heavy object. Collocation and correlated-movement information is then combined to determine who is working together at any given time. The work shows how data from commonly available sensors can be combined across multiple people using a simple, low-power algorithm to detect a range of physical collaborations.
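
    A sketch of the correlated-movement cue under stated assumptions: when two people lift an object together, the magnitudes of their wrist accelerations should correlate strongly over a short window, and this cue is gated by the audio-derived collocation flag. Function names and the threshold are illustrative, not the paper's.

```python
import numpy as np

def movement_correlation(acc_a, acc_b):
    """Pearson correlation of acceleration magnitudes for two wearers."""
    mag_a = np.linalg.norm(acc_a, axis=1)   # (n_samples, 3) -> (n_samples,)
    mag_b = np.linalg.norm(acc_b, axis=1)
    return np.corrcoef(mag_a, mag_b)[0, 1]

def collaborating(acc_a, acc_b, collocated, corr_threshold=0.6):
    """Combine audio-derived collocation with movement correlation."""
    return bool(collocated and movement_correlation(acc_a, acc_b) > corr_threshold)

# Example: a shared lift produces similar (noisy) acceleration profiles.
t = np.linspace(0, 2, 100)
lift = np.stack([np.zeros_like(t), np.zeros_like(t), np.sin(2 * np.pi * t)], axis=1)
noise = np.random.default_rng(1).normal(0, 0.1, lift.shape)
print(collaborating(lift, lift + noise, collocated=True))   # True
```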

    Analysis of the Usefulness of Mobile Eyetracker for the Recognition of Physical Activities

    We investigate the usefulness of information from a wearable eyetracker to detect physical activities during assembly and construction tasks. Large physical activities, like carrying heavy items and walking, are analysed alongside more precise, hand-tool activities like using a screwdriver. Statistical analysis of eye-based features like fixation length and frequency of fixations shows significant correlations for precise activities. Using this finding, we selected 10 calibration-free eye features to train a classifier for recognising up to 6 different activities. Frame-by-frame and event-based results are presented using data from an 8-person dataset containing over 600 activity events. We also evaluate the recognition performance when gaze features are combined with data from wearable accelerometers and microphones. Our initial results show a duration-weighted event precision and recall of up to 0.69 and 0.84, respectively, for independently trained recognition on precise activities using gaze. This indicates that gaze is suitable for spotting subtle precise activities and can be a useful source for more sophisticated classifier fusion.
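
    An illustrative sketch of the kind of calibration-free gaze features the abstract mentions (fixation length and fixation frequency), computed per sliding window; the exact feature set below is an assumption, not the paper's list of 10.

```python
import numpy as np

def gaze_features(fixation_durations, window_seconds):
    """Summary statistics over the fixations falling inside one window."""
    d = np.asarray(fixation_durations, dtype=float)
    if d.size == 0:
        return np.zeros(4)
    return np.array([
        d.mean(),                 # mean fixation length (s)
        d.std(),                  # variability of fixation length
        d.size / window_seconds,  # fixation frequency (fixations per second)
        d.max(),                  # longest dwell in the window
    ])

# Precise hand-tool work tends toward longer, steadier fixations than walking.
print(gaze_features([0.35, 0.42, 0.38, 0.40], window_seconds=5.0))
print(gaze_features([0.08, 0.12, 0.10, 0.09, 0.11, 0.07], window_seconds=5.0))
```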

    Functionality-power-packaging considerations in context aware wearable systems

    Wearable computing places tighter constraints on architecture design than traditional mobile computing. The architecture is described in terms of miniaturization, power-awareness, global low-power design, and suitability for an application. In this article, we present a new methodology based on three system properties: functionality, power, and electronic packaging. Metrics for each are proposed and evaluated to study the different trade-offs, which we analyze across several context-recognition scenarios. The proof-of-concept case study covers (a) interaction with household appliances using a wrist-worn device (acceleration and light sensors), (b) walking-behaviour analysis with acceleration sensors, (c) a computational task, and (d) gesture recognition in a wood workshop using a combination of accelerometer and microphone sensors. After analyzing the case study, we highlight the size aspect through electronic packaging for a given functionality and present the miniaturization trends for an 'autonomous sensor button'.

    Low Energy Physical Activity Recognition System on Smartphones

    An innovative approach to physical activity recognition based on the use of discrete variables obtained from accelerometer sensors is presented. The system first performs a discretization process for each variable, which allows efficient recognition of activities performed by users while using as little energy as possible. To this end, an innovative discretization and classification technique is presented based on the χ² distribution. Furthermore, the entire recognition process is executed on the smartphone, which determines not only the activity performed, but also the frequency at which it is carried out. These techniques and the new classification system presented reduce the energy consumption caused by the activity monitoring system. The energy saved increases smartphone usage time to more than 27 h without recharging while maintaining accuracy. Funding: Ministerio de Economía y Competitividad (TIN2013-46801-C4-1-R); Junta de Andalucía (TIC-805).
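
    A minimal sketch, under stated assumptions, of why discretization makes on-phone recognition cheap: once continuous accelerometer features are mapped to a handful of intervals, classification reduces to table lookups. The χ²-driven interval selection of the actual system is replaced here by fixed interval boundaries, and all numbers are invented.

```python
import numpy as np

def discretize(x, edges):
    """Map a continuous feature value to a discrete interval index."""
    return int(np.searchsorted(edges, x))

# Hypothetical per-class frequency table learned offline:
# rows = activity, columns = interval index of one accelerometer feature.
freq = np.array([
    [40,  8,  2],   # still
    [10, 35,  5],   # walking
    [ 2, 12, 36],   # running
])
edges = [0.5, 2.0]  # interval boundaries (e.g. m/s^2 deviation from gravity)

def classify(feature_value):
    interval = discretize(feature_value, edges)
    return ["still", "walking", "running"][int(np.argmax(freq[:, interval]))]

print(classify(0.1), classify(1.2), classify(3.4))  # still walking running
```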

    Titanic smart objects


    Discrete techniques applied to low-energy mobile human activity recognition. A new approach

    Human activity recognition systems are currently implemented by hundreds of applications and, in recent years, several technology manufacturers have introduced new wearable devices for this purpose. Battery consumption constitutes a critical point in these systems since most are provided with a rechargeable battery. In this paper, by using discrete techniques based on the Ameva algorithm, an innovative approach for human activity recognition systems on mobile devices is presented. Furthermore, unlike other systems in current use, this proposal enables recognition of high-granularity activities by using accelerometer sensors. Hence, the accuracy of activity recognition systems can be increased without sacrificing efficiency. A comparison is carried out between the proposed approach and an approach based on well-known neural networks. Funding: Junta de Andalucía (Simon TIC-805).
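
    A hedged sketch of the Ameva criterion the paper builds on (González-Abril et al.): for a candidate discretization, build the class-versus-interval contingency table, compute the χ² statistic, and normalize by k(l − 1), where k is the number of intervals and l the number of classes; higher values mean the intervals separate the classes better. The full greedy search for cut points is omitted, and this sketch assumes every interval contains at least one sample.

```python
import numpy as np

def ameva(values, labels, cut_points):
    """Ameva statistic chi2 / (k * (l - 1)) for one candidate discretization."""
    intervals = np.searchsorted(cut_points, values)  # interval index per sample
    classes = np.unique(labels)
    k, l = len(cut_points) + 1, len(classes)
    # Contingency table: n[i, j] = samples of class i falling in interval j.
    n = np.zeros((l, k))
    for ci, c in enumerate(classes):
        for j in range(k):
            n[ci, j] = np.sum((labels == c) & (intervals == j))
    total = n.sum()
    expected = n.sum(axis=1, keepdims=True) * n.sum(axis=0, keepdims=True)
    chi2 = total * ((n**2 / expected).sum() - 1)
    return chi2 / (k * (l - 1))

values = np.array([0.1, 0.2, 0.3, 1.1, 1.2, 1.3, 2.5, 2.6, 2.7])
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
# Two cut points separate the three classes better than one.
print(ameva(values, labels, [0.7]), ameva(values, labels, [0.7, 2.0]))  # 2.25 3.0
```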

    Using Hidden Markov Models to Segment and Classify Wrist Motions Related to Eating Activities

    Advances in body sensing and mobile health technology have created new opportunities for empowering people to take a more active role in managing their health. Measurements of dietary intake are commonly used for the study and treatment of obesity. However, the most widely used tools rely upon self-report and require considerable manual effort, leading to underreporting of consumption, non-compliance, and discontinued use over the long term. We are investigating the use of wrist-worn accelerometers and gyroscopes to automatically recognize eating gestures. In order to improve recognition accuracy, we studied the sequential dependency of actions during eating. In Chapter 2, we first undertook the task of finding a set of wrist motion gestures that was small yet descriptive enough to model the actions performed by an eater during consumption of a meal. We found a set of four actions: rest, utensiling, bite, and drink; any alternative gesture is referred to as the 'other' gesture. The stability of the gesture definitions was evaluated using an inter-rater reliability test. In Chapter 3, 25 meals were hand-labeled and used to study the existence of sequential dependence of the gestures. To study this, three types of classifiers were built: 1) a K-nearest neighbor (KNN) classifier, which uses no sequential context, 2) a hidden Markov model (HMM), which captures the sequential context of sub-gesture motions, and 3) HMMs that model inter-gesture sequential dependencies. We built first-order to sixth-order HMMs to evaluate the usefulness of increasing amounts of sequential dependence to aid recognition. The first two were our baseline algorithms. We found that adding knowledge of the sequential dependence of gestures achieved an accuracy of 96.5%, an improvement of 20.7% and 12.2% over the KNN and the sub-gesture HMM, respectively. Lastly, in Chapter 4, we automatically segmented a continuous wrist motion signal and assessed classification performance for each of the three classifiers. Again, knowledge of sequential dependence enhances the recognition of gestures in unsegmented data, achieving 90% accuracy and improving by 30.1% and 18.9% over the KNN and the sub-gesture HMM, respectively.
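
    A minimal, self-contained sketch of the core idea: a first-order HMM over the gesture alphabet {rest, utensiling, bite, drink, other}, where Viterbi decoding uses inter-gesture sequential dependence to correct implausible frame-level decisions. All transition and emission probabilities below are invented for illustration; the thesis learns these from labeled meals.

```python
import numpy as np

STATES = ["rest", "utensiling", "bite", "drink", "other"]

def viterbi(log_em, log_trans, log_prior):
    """Most likely state sequence given per-frame emission log-probabilities."""
    n, k = log_em.shape
    dp = log_prior + log_em[0]
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = dp[:, None] + log_trans        # cand[i, j]: best score ending in j via i
        back[t] = np.argmax(cand, axis=0)
        dp = cand[back[t], np.arange(k)] + log_em[t]
    path = [int(np.argmax(dp))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

# Gestures tend to persist from one frame to the next.
trans = np.full((5, 5), 0.1)
np.fill_diagonal(trans, 0.6)

# Per-frame classifier scores; the third frame wrongly favours 'drink'.
em = np.array([
    [0.10, 0.70, 0.10, 0.05, 0.05],
    [0.10, 0.60, 0.10, 0.15, 0.05],
    [0.10, 0.35, 0.10, 0.40, 0.05],
    [0.03, 0.06, 0.85, 0.03, 0.03],
])
print(viterbi(np.log(em), np.log(trans), np.log(np.full(5, 0.2))))
# -> ['utensiling', 'utensiling', 'utensiling', 'bite']
```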