
    Classification of sporting activities using smartphone accelerometers

    In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices because of their prevalence in today’s society; successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. The extracted features are used to train different categories of classifiers. No one classifier family has a reported advantage in activity classification problems to date, so we examine classifiers from each of the most widely used families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model, and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including the mother wavelet, window length, and decomposition level. During the course of this work we created a challenging sports activity analysis dataset comprising soccer and field-hockey activities. An average maximum F-measure of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
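The abstract does not specify the feature computation, but the core idea of DWT-based features can be sketched as a single-level Haar decomposition applied recursively, taking the energy of each detail sub-band as a feature. This is a minimal illustrative sketch, not the authors' implementation; the window values and level count are hypothetical.

```python
def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def dwt_energy_features(window, levels=3):
    """Decompose recursively; return the energy of each detail sub-band."""
    features = []
    approx = list(window)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(sum(c * c for c in detail))
    return features

# A 16-sample accelerometer window (one axis, arbitrary units)
window = [0.1, 0.4, 0.9, 0.3, -0.2, -0.6, 0.0, 0.5,
          1.1, 0.7, 0.2, -0.3, -0.8, -0.1, 0.4, 0.6]
feats = dwt_energy_features(window, levels=3)
print([round(f, 3) for f in feats])
```

Because the Haar transform is orthonormal, the total energy of the approximation and detail coefficients at each level equals the energy of the input, which is what makes sub-band energies meaningful as discriminative features.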

    A lifelogging approach to automated market research

    Market research companies spend large amounts of money on time-intensive processes to gather information about people’s activities, such as the places they frequent and the activities in which they partake. Due to high costs and logistical difficulties, an automated approach to this practice is needed. In this work we present an automated market research system based on computer vision and machine learning algorithms applied to visual lifelogging data, developed in collaboration with Sponge It, a market research company. Due to image quality constraints associated with the SenseCam, for our prototype system we developed a visual lifelogging device using an Android smartphone. This device can capture images at higher resolutions and with additional metadata, such as location information. The aim of this project is to analyse large collections of visual lifelogs and to support both ethnographic research and audience measurement for market research. Ethnographic research is supported by high-level classification of images to capture the semantics of the user’s activities (e.g. socialising in a bar, shopping, eating). Location, time, and other contexts are also analysed, and an interactive interface supports browsing and exploration of the data based on this analysis. The system can measure audience exposure to specific advertising campaigns, using object recognition algorithms to automatically detect the presence of known logos in lifelogging images. This combination of concept classification for ethnographic research and object recognition for audience exposure represents a very powerful tool from a market research perspective.

    A Study and Estimation a Lost Person Behavior in Crowded Areas Using Accelerometer Data from Smartphones

    As smartphones become more popular, applications are being developed with new and innovative ways to solve problems in the day-to-day lives of users. One area of smartphone technology that has developed in recent years is human activity recognition (HAR). This technology uses various sensors built into the smartphone to sense a person's activity in real time. Applications that incorporate HAR can be used to track a person's movements and are very useful in areas such as health care. We use this type of motion sensing technology, specifically data collected from the accelerometer sensor. The purpose of this study is to estimate whether a person may have become lost in a crowded area. The application is capable of estimating the movements of people in a crowded area, and whether or not a person is lost based on his/her movements as detected by the smartphone. This will be of great benefit to anyone interested in crowd management strategies. In this paper, we review related literature and research that has given us the basis for our own work. We also detail research on lost person behavior: we looked at the typical movements a person is likely to make when he/she is lost and used these movements as indicators of lost person behavior. We then evaluate and describe the creation of the application, all of its components, and the testing process.
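The abstract does not state which movement patterns the authors used as indicators, but one plausible toy sketch of a "lost person" heuristic is counting abrupt direction reversals in a walker's heading samples. The function names, turn threshold, and example data below are all hypothetical, purely to illustrate the kind of rule such a system might apply.

```python
def count_reversals(headings, min_turn=120.0):
    """Count turns of at least `min_turn` degrees between consecutive heading samples."""
    reversals = 0
    for prev, cur in zip(headings, headings[1:]):
        diff = abs(cur - prev) % 360.0
        turn = min(diff, 360.0 - diff)  # smallest angle between the two headings
        if turn >= min_turn:
            reversals += 1
    return reversals

def looks_lost(headings, min_reversals=3):
    """Flag a walker as possibly lost if they double back repeatedly."""
    return count_reversals(headings) >= min_reversals

# Heading samples (degrees): one walker doubling back, one walking purposefully
wandering = [0, 10, 185, 190, 15, 20, 200, 205]
purposeful = [0, 5, 10, 8, 12, 15, 18, 20]
print(looks_lost(wandering), looks_lost(purposeful))  # → True False
```

A real system would derive headings from accelerometer and compass data and would combine several indicators, but the thresholding logic would look broadly like this.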

    Deep HMResNet Model for Human Activity-Aware Robotic Systems

    Endowing robotic systems with cognitive capabilities for recognizing the daily activities of humans is an important challenge that requires sophisticated and novel approaches. Most of the proposed approaches explore pattern recognition techniques that are generally based on hand-crafted or learned features. In this paper, a novel Hierarchical Multichannel Deep Residual Network (HMResNet) model is proposed for robotic systems to recognize daily human activities in ambient environments. The introduced model is comprised of multilevel fusion layers. The proposed Multichannel 1D Deep Residual Network model is, at the feature level, combined with a Bottleneck MLP neural network to automatically extract robust features regardless of the hardware configuration and, at the decision level, fully connected with an MLP neural network to recognize daily human activities. Empirical experiments on real-world datasets and an online demonstration are used to validate the proposed model. The results demonstrate that the proposed model outperforms the baseline models in daily human activity recognition.
    Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606)
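The abstract does not give the HMResNet layer details, but the defining ingredient of any residual network is the skip connection y = x + F(x). The following is a minimal numeric sketch of one residual block, with all weights hypothetical; it is not the authors' architecture.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    """Fully connected layer: each output row is a dot product plus a bias."""
    return [sum(w * x for w, x in zip(row, v)) + b for row, b in zip(weights, bias)]

def residual_block(x, w1, b1, w2, b2):
    """y = x + F(x), where F is two dense layers with a ReLU in between.
    The skip connection lets the identity (and gradients) pass through."""
    f = dense(relu(dense(x, w1, b1)), w2, b2)
    return [xi + fi for xi, fi in zip(x, f)]

x = [0.5, -0.2, 0.1]
w1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity layer
b1 = [0.0, 0.0, 0.0]
w2 = [[0.0] * 3 for _ in range(3)]  # zero weights: F(x) == 0
b2 = [0.0, 0.0, 0.0]
print(residual_block(x, w1, b1, w2, b2))  # equals x, since F(x) == 0 here
```

This identity-fallback behaviour is why residual blocks stack well into the deep hierarchies the paper describes: a block that learns nothing useful simply passes its input through.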

    Human activity recognition making use of long short-term memory techniques

    The optimisation and validation of a classifier's performance when applied to real-world problems is not always effectively shown. In much of the literature describing the application of artificial neural network architectures to Human Activity Recognition (HAR) problems, postural transitions are grouped together and treated as a single class. This paper proposes, investigates, and validates the development of an optimised artificial neural network based on Long Short-Term Memory (LSTM) techniques, with repeated cross-validation used to validate the performance of the classifier. The results of the optimised LSTM classifier are comparable to, or better than, those of previous research using the same dataset, achieving 95% accuracy under repeated 10-fold cross-validation with grouped postural transitions. The work in this paper also achieves 94% accuracy under repeated 10-fold cross-validation whilst treating each common postural transition as a separate class (and thus providing more context to each activity).
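The validation scheme named in this abstract, repeated 10-fold cross-validation, can be sketched independently of any classifier: shuffle the sample indices with a fresh seed on each repeat, split them into k folds, and use each fold once as the test set. The dataset size and repeat count below are hypothetical.

```python
import random

def k_fold_indices(n, k, seed):
    """Shuffle indices with the given seed and split them into k nearly equal folds."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_k_fold(n, k=10, repeats=3):
    """Yield (train, test) index lists for every fold of every repeat."""
    for rep in range(repeats):
        folds = k_fold_indices(n, k, seed=rep)
        for i, test in enumerate(folds):
            train = [j for fold in folds[:i] + folds[i + 1:] for fold_j in [fold] for j in fold_j]
            yield train, test

splits = list(repeated_k_fold(n=50, k=10, repeats=3))
print(len(splits))  # 30 splits: 10 folds x 3 repeats
```

Repeating the shuffle-and-split cycle reduces the variance that a single unlucky partition can introduce, which is why accuracy figures quoted under repeated cross-validation are more trustworthy than single-split results.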