
    Activity Recognition using wearable computing.

    A secure, user-convenient approach to authenticating users on their mobile devices is required, as current approaches (e.g., PIN or password) suffer from security and usability issues. Transparent Authentication Systems (TAS) have been introduced to improve the level of security and to offer continuous and unobtrusive (i.e., user-friendly) authentication using various behavioural biometric techniques. This paper presents the usefulness of smartwatch motion sensors (i.e., accelerometer and gyroscope) for performing Activity Recognition for use within a TAS. Whilst previous research in TAS has focused upon its application in computers and mobile devices, little attention has been given to wearable devices, which tend to be sensor-rich, highly personal technologies. This paper presents a thorough analysis of the current state of the art in transparent and continuous authentication using acceleration and gyroscope sensors, and a technology evaluation to determine the basis for such an approach. The best results are average Euclidean distance scores of 5.5 and 11.9 for intra-user acceleration and gyroscope signals respectively, and 24.27 and 101.18 for inter-user acceleration and gyroscope activities respectively. The findings demonstrate that the technology is sufficiently capable, and the signals captured sufficiently discriminative, to be useful in performing Activity Recognition.
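    The quoted intra/inter distance scores suggest a simple template-comparison view of the signals. Purely as a hedged illustration (the paper's segmentation and preprocessing are not described in this abstract), the sketch below shows how average Euclidean distance scores between fixed-length sensor windows could be computed; all data here are random placeholders.

    ```python
    import numpy as np

    def average_euclidean_distance(windows_a, windows_b):
        # Mean pairwise Euclidean distance between two sets of
        # fixed-length signal windows (one window per row).
        dists = [np.linalg.norm(a - b) for a in windows_a for b in windows_b]
        return float(np.mean(dists))

    # Random placeholders standing in for segmented accelerometer windows.
    rng = np.random.default_rng(0)
    user1 = rng.normal(0.0, 1.0, size=(10, 128))
    user2 = rng.normal(0.5, 1.0, size=(10, 128))

    # Intra score: a user's windows against their own remaining windows
    # (lower = more self-consistent); inter score: against another
    # user's windows (higher = more discriminative between users).
    intra = average_euclidean_distance(user1[:5], user1[5:])
    inter = average_euclidean_distance(user1, user2)
    print(f"intra: {intra:.2f}, inter: {inter:.2f}")
    ```

    A large gap between the intra and inter scores, as reported above, is what indicates the signals are discriminative enough to be useful.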

    HASC2011corpus: Towards the Common Ground of Human Activity Recognition

    UbiComp '11: Proceedings of the 13th International Conference on Ubiquitous Computing, September 17-21, 2011, Beijing, China.
    Human activity recognition through wearable sensors will enable next-generation human-oriented ubiquitous computing. However, most research on human activity recognition so far is based on a small number of subjects and non-public data. To overcome this situation, we have gathered 4,897 accelerometer recordings from 116 subjects and composed them into the HASC2011corpus. In the field of pattern recognition, it is very important to evaluate and improve recognition methods using the same dataset as a common ground. We are making the HASC2011corpus public so that the research community can use it as a common ground for Human Activity Recognition. We also show several facts and results obtained from the corpus.
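    The "common ground" argument is essentially about reproducible comparison: competing recognition methods should be scored on identical splits of a shared corpus. Below is a minimal, hedged sketch of that protocol; the features, labels and classifiers are placeholders and do not reflect the actual HASC2011corpus file format.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Placeholder window features and activity labels standing in for
    # features extracted from a shared accelerometer corpus.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 12))
    y = rng.integers(0, 6, size=500)

    # A fixed random_state pins the split, so different methods are
    # evaluated on exactly the same train/test data.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
    for clf in (KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)):
        acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
        print(type(clf).__name__, f"accuracy: {acc:.3f}")
    ```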

    An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data

    The current gold standard for human activity recognition (HAR) is based on the use of cameras. However, the poor scalability of camera systems renders them impractical for wider adoption of HAR in mobile computing contexts. Consequently, researchers instead rely on wearable sensors, in particular inertial sensors. A particularly prevalent wearable is the smartwatch, which, due to its integrated inertial and optical sensing capabilities, holds great potential for realising better HAR in a non-obtrusive way. This paper seeks to simplify the wearable approach to HAR by determining whether the wrist-mounted optical sensor typically found in a smartwatch or similar device can, on its own, serve as a useful source of data for activity recognition. The approach has the potential to eliminate the need for the inertial sensing element, which would in turn reduce the cost and complexity of smartwatches and fitness trackers. This could potentially commoditise the hardware requirements for HAR while retaining the functionality of both heart rate monitoring and activity capture, all from a single optical sensor. Our approach relies on the adoption of machine vision for activity recognition based on suitably scaled plots of the optical signals. We take this approach so as to produce classifications that are easily explainable and interpretable by non-technical users. More specifically, images of photoplethysmography signal time series are used to retrain the penultimate layer of a convolutional neural network which has initially been trained on the ImageNet database. We then use the 2048-dimensional features from the penultimate layer as input to a support vector machine. Results from the experiment yielded an average classification accuracy of 92.3%. This result outperforms that of an optical and inertial sensor combined (78%) and illustrates the capability of HAR systems using...
    Comment: 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science
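    The pipeline described above (signal plots fed to a pretrained CNN, penultimate-layer features fed to an SVM) can be sketched as follows. This is an illustrative approximation only: ResNet50 stands in for the unnamed ImageNet-trained network (its pooled penultimate layer is also 2048-dimensional), and the images are random arrays rather than real photoplethysmography plots.

    ```python
    import numpy as np
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
    from sklearn.svm import SVC

    # ImageNet-pretrained backbone with the classification head removed;
    # global average pooling yields a 2048-dimensional feature vector.
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    def plots_to_features(images):
        # images: (n, 224, 224, 3) arrays of rendered PPG signal plots.
        return backbone.predict(preprocess_input(images.astype("float32")))

    # Random placeholders standing in for rendered PPG plots and labels.
    train_imgs = np.random.rand(8, 224, 224, 3) * 255
    train_labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])

    features = plots_to_features(train_imgs)        # shape (8, 2048)
    svm = SVC(kernel="linear").fit(features, train_labels)
    ```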

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Can smartwatches replace smartphones for posture tracking?

    This paper introduces a smartwatch-based human posture tracking platform to identify the postures of sitting, standing and lying down. This work develops such a system as a proof-of-concept study to investigate a smartwatch's ability to be used in future remote health monitoring systems and applications. It validates the smartwatch's ability to track the posture of users accurately in a laboratory setting while reducing the sampling rate to potentially improve battery life, the first steps in verifying that such a system would work in future clinical settings. The algorithm developed classifies the transitions between the three posture states of sitting, standing and lying down by identifying these transition movements, as well as other movements that might be mistaken for them. The system is trained and developed on a Samsung Galaxy Gear smartwatch, and the algorithm was validated through leave-one-subject-out cross-validation with 20 subjects. The system can identify the appropriate transitions at only 10 Hz with an F-score of 0.930, indicating its ability to effectively replace smartphones, if needed.
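    For context, leave-one-subject-out cross-validation holds out all of one subject's data in each fold, so the reported F-score reflects generalisation to unseen users rather than memorisation of individual wearers. A minimal sketch of that protocol, with random placeholder features rather than the paper's transition features:

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score

    # Placeholder windowed features and posture-transition labels.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(400, 10))
    y = rng.integers(0, 3, size=400)           # sit / stand / lie transitions
    subjects = np.repeat(np.arange(20), 20)    # 20 subjects, 20 windows each

    # Each fold trains on 19 subjects and tests on the held-out one.
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
    print(f"mean F-score across subjects: {np.mean(scores):.3f}")
    ```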