
    Extracting tennis statistics from wireless sensing environments

    Creating statistics from sporting events is now widespread, with most efforts to automate this process using various sensor devices. The problem with many of these statistical applications is that they require proprietary software to process the sensed data, and there is rarely an option to express a wide range of query types. Instead, applications tend to contain built-in queries with predefined outputs. In the research presented in this paper, data from a wireless network is converted to a structured and highly interoperable format to facilitate user queries, by expressing high-level queries in a standard database language and automatically generating the results required by coaches.
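The benefit of converting sensed match data into a structured format is that coaches are no longer limited to built-in reports. A minimal sketch of the idea, using SQLite as the "standard database language" and an entirely hypothetical event schema (the paper's actual schema is not given here):

```python
import sqlite3

# Hypothetical schema: once sensor readings are converted to rows,
# any coach-level question becomes an ordinary SQL query rather
# than a predefined output of a proprietary application.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE rally_event (
        ts     REAL,   -- seconds from match start
        player TEXT,   -- 'A' or 'B'
        event  TEXT    -- 'serve', 'return', 'point_won', ...
    )
""")
conn.executemany(
    "INSERT INTO rally_event VALUES (?, ?, ?)",
    [(12.4, "A", "serve"), (13.1, "B", "return"),
     (14.0, "A", "point_won"), (31.7, "B", "serve"),
     (33.2, "B", "point_won")],
)

# One example of an ad-hoc query: points won per player.
rows = conn.execute("""
    SELECT player, COUNT(*) AS points
    FROM rally_event
    WHERE event = 'point_won'
    GROUP BY player
    ORDER BY player
""").fetchall()
print(rows)  # [('A', 1), ('B', 1)]
```

Any other query type (serve counts, rally lengths, per-set breakdowns) follows the same pattern, which is precisely the flexibility the abstract argues proprietary tools lack.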

    Enrichment of raw sensor data to enable high-level queries

    Sensor networks are increasingly used across various application domains. Their usage has the advantage of automated, often continuous, monitoring of activities and events. Ubiquitous sensor networks detect the location of people and objects and their movement. In our research, we employ a ubiquitous sensor network to track the movement of players in a tennis match. By doing so, our goal is to create a detailed analysis of how the match progressed, recording points scored, games and sets, thereby greatly reducing the effort of coaches and players who must study matches afterwards. The sensor network is highly efficient as it eliminates the need for manual recording of the match. However, it generates raw data that is unusable by domain experts, as it contains no frame of reference or context and cannot be analyzed or queried. In this work, we present the UbiQuSE system of data transformers, which bridges the gap between raw sensor data and the high-level requirements of domain specialists such as the tennis coach.
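The core problem the abstract describes is that raw sensor output carries no frame of reference. A minimal enrichment sketch (the function, field names and court geometry are illustrative assumptions, not UbiQuSE's actual pipeline) shows the kind of context such a transformer attaches to each raw sample:

```python
# Raw sensor output is just timestamped coordinates. Mapping them
# onto court semantics (here, which half of a standard 23.77 m
# tennis court a position falls in) gives later queries a frame
# of reference. This is a hypothetical single transformation step.
COURT_LENGTH_M = 23.77

def enrich(reading):
    """Attach court context to a raw (ts, x, y) sample."""
    ts, x, y = reading
    side = "near" if y < COURT_LENGTH_M / 2 else "far"
    return {"ts": ts, "x": x, "y": y, "side": side}

raw = [(0.0, 1.2, 3.0), (0.5, 1.3, 20.1)]
enriched = [enrich(r) for r in raw]
print(enriched[0]["side"], enriched[1]["side"])  # near far
```

Chaining several such transformers (position, then shot, then point, then game) is what turns an unqueryable coordinate stream into data a coach can interrogate.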

    Multi-sensor human action recognition with particular application to tennis event-based indexing

    The ability to automatically classify human actions and activities using visual sensors or by analysing body-worn sensor data has been an active research area for many years. Only recently, with advancements in both fields and the ubiquitous nature of low-cost sensors in our everyday lives, has automatic human action recognition become a reality. While traditional sports coaching systems rely on manual indexing of events from a single modality, such as visual or inertial sensors, this thesis investigates the possibility of capturing and automatically indexing events from multimodal sensor streams. In this work, we detail a novel approach to infer human actions by fusing multimodal sensors to improve recognition accuracy. State-of-the-art visual action recognition approaches are also investigated. Firstly, we apply these action recognition detectors to basic human actions in a non-sporting context. We then perform action recognition to infer tennis events in a tennis court instrumented with cameras and inertial sensing infrastructure. The system proposed in this thesis can use either visual or inertial sensors to automatically recognise the main tennis events during play. A complete event retrieval system is also presented to allow coaches to build advanced queries, which existing sports coaching solutions cannot facilitate, without an inordinate amount of manual indexing. The event retrieval interface is evaluated against a leading commercial sports coaching tool in terms of both usability and efficiency.
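One common way to fuse multimodal sensors for recognition is late fusion: each modality produces class probabilities, and a weighted combination decides the event. The sketch below illustrates that general technique only; the weights, event names and scores are invented for illustration and are not taken from the thesis.

```python
# Hypothetical late-fusion sketch: a camera-based detector and a
# body-worn inertial detector each score the same candidate event,
# and a weighted average combines them before taking the argmax.
EVENTS = ["serve", "forehand", "backhand"]

def fuse(p_visual, p_inertial, w_visual=0.6):
    """Weighted average of two per-class probability vectors."""
    return [w_visual * v + (1 - w_visual) * i
            for v, i in zip(p_visual, p_inertial)]

p_visual   = [0.50, 0.30, 0.20]   # visual modality is uncertain
p_inertial = [0.10, 0.70, 0.20]   # inertial modality is confident
fused = fuse(p_visual, p_inertial)
best = EVENTS[fused.index(max(fused))]
print(best)  # forehand
```

Note how the fused decision can differ from the visual modality alone, which is the accuracy benefit the abstract attributes to multimodal fusion.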

    A Two-Level Approach to Characterizing Human Activities from Wearable Sensor Data

    The rapid emergence of new technologies in recent decades has opened up a world of opportunities for a better understanding of human mobility and behavior. It is now possible to recognize human movements, physical activity and the environments in which they take place, and this can be done with high precision thanks to miniature sensors integrated into our everyday devices. In this paper, we explore different methodologies for recognizing and characterizing physical activities performed by people wearing new smart devices. Whether it is smartglasses, smartwatches or smartphones, we show that each of these specialized wearables has a role to play in interpreting and monitoring moments in a user's life. In particular, we propose an approach that splits the concept of physical activity into two sub-categories that we call micro- and macro-activities. Micro- and macro-activities are assumed to have a functional relationship with each other and should therefore help to better understand activities on a larger scale. Then, for each of these levels, we show different methods of collecting, interpreting and evaluating data from different sensor sources. Based on a sensing system we have developed using smart devices, we build two data sets before analyzing how to recognize such activities. Finally, we show different interactions and combinations between these scales and demonstrate that they have the potential to lead to new classes of applications, involving authentication or user profiling.

    Automatic activity classification and movement assessment during a sports training session using wearable inertial sensors

    Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in an outdoor training environment. We firstly present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a Random Forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the thigh and shank of a subject, from which the flexion-extension knee angle is calculated. Finally, a curve-shift registration technique is applied both to generate normative data and to determine whether a subject's movement technique differed from the normative data, in order to identify potential injury-related factors. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments.
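The DWT step in pipelines like this decomposes an inertial signal into sub-bands whose energies serve as classifier features. A minimal single-level Haar DWT sketch (the simulated signal, the single decomposition level and the energy features are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

# Single-level Haar DWT: split a signal into low-pass approximation
# coefficients (local averages) and high-pass detail coefficients
# (local differences). Sub-band energies are typical features fed
# to a classifier such as a Random Forest.
def haar_dwt(signal):
    """Return (approximation, detail) coefficients of one DWT level."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)  # low-pass half-band
    d = (s[0::2] - s[1::2]) / np.sqrt(2)  # high-pass half-band
    return a, d

# Simulated one-second accelerometer window: a 3 Hz oscillation
# (e.g. limb swing) plus a little sensor noise.
t = np.linspace(0, 1, 64, endpoint=False)
rng = np.random.default_rng(0)
accel = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(64)

a, d = haar_dwt(accel)
features = [float(np.sum(a**2)), float(np.sum(d**2))]
print(len(a), len(d))  # 32 32
```

For a slow periodic movement, most of the energy lands in the approximation band, and it is feature vectors of this kind, computed per window and per sub-band, that the Random Forest would be trained on.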

    Towards automatic activity classification and movement assessment during a sports training session

    Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in a real training environment. We firstly present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a Random Forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the shank, thigh and pelvis of a subject, from which the flexion-extension knee and hip angles are calculated. These angles, along with sacrum impact accelerations, are automatically extracted for each stride during jogging. Finally, normative data is generated and used to determine whether a subject's movement technique differed from the normative data, in order to identify potential injury-related factors. For the joint angle data, this is achieved using a curve-shift registration technique. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments, for both injury management and performance enhancement.

    Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges

    Today's mobile phones are far from the mere communication devices they were ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. This article provides an overview of the current state of the art in mobile sensing and context prediction, paving the way for full-fledged anticipatory mobile computing. We present a survey of phenomena that mobile phones can infer and predict, and offer a description of machine learning techniques used for such predictions. We then discuss proactive decision making and decision delivery via the user-device feedback loop. Finally, we discuss the challenges and opportunities of anticipatory mobile computing.

    Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

    The vast proliferation of sensor devices and the Internet of Things enables the applications of sensor-based activity recognition. However, there exist substantial challenges that could influence the performance of the recognition system in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, plenty of deep methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of the state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy to structure the deep methods by challenges. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress. At the end of this work, we discuss the open issues and provide some insights for future directions.