1,306 research outputs found

    Fuzzy classifier ensembles for hierarchical WiFi-based semantic indoor localization

    The number of applications for smartphones and tablets has grown exponentially in recent years. Many of these applications rely on so-called Location Based Services, which are expected to provide reliable real-time localization anytime and anywhere, whether outdoors or indoors. While world-wide outdoor localization has been successfully achieved through the well-known Global Navigation Satellite System technology, no comparable large-scale deployment is yet available indoors. In previous work, we introduced a novel technology for indoor localization based on a WiFi fingerprint approach. In this paper, we describe how to enhance that approach by combining hierarchical localization with fuzzy classifier ensembles. The method has been tested and validated at the University of Edinburgh, yielding promising results. Funding: Ministerio de Economía y Competitividad; Xunta de Galicia.
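A minimal sketch of what hierarchical WiFi-fingerprint localization with a classifier ensemble could look like, assuming scikit-learn and synthetic RSSI data. Soft probability voting is used here only as a rough stand-in for the paper's fuzzy combination rule; the floor/room hierarchy, access-point count, and all numbers are illustrative assumptions, not the published method.

```python
# Hedged sketch: hierarchical WiFi-fingerprint localization with a classifier
# ensemble. Soft voting stands in for the fuzzy combination; the RSSI data,
# floor/room labels, and access-point count are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_aps = 20                                        # visible access points
X = rng.uniform(-95, -30, size=(600, n_aps))      # synthetic RSSI fingerprints (dBm)
floor = rng.integers(0, 3, size=600)              # coarse label: floor
room = floor * 10 + rng.integers(0, 5, size=600)  # fine label: room within floor

def make_ensemble():
    """Three heterogeneous base classifiers combined by soft (probability) voting."""
    return VotingClassifier(
        estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                    ("nb", GaussianNB())],
        voting="soft")

# Level 1: predict the floor from the whole training set.
floor_clf = make_ensemble().fit(X, floor)
# Level 2: one room classifier per floor, trained only on that floor's samples.
room_clfs = {f: make_ensemble().fit(X[floor == f], room[floor == f])
             for f in np.unique(floor)}

def localize(fingerprint):
    """Return (floor, room) for a single RSSI fingerprint vector."""
    f = floor_clf.predict(fingerprint.reshape(1, -1))[0]
    return f, room_clfs[f].predict(fingerprint.reshape(1, -1))[0]

print(localize(X[0]))
```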

    Physical Activity Recognition and Identification System

    Background: It is well established that physical activity is beneficial to health. Less is known about how the characteristics of physical activity affect health independently of the total amount, because those characteristics cannot yet be measured objectively in large population groups. Accelerometry allows objective monitoring of physical activity but currently cannot identify the type of activity accurately. Methods: This thesis details the creation of an activity classifier that can identify activity type from accelerometer data. The current research in activity classification was reviewed and methodological challenges were identified; the main challenge was the inability of classifiers to generalise to unseen data, and creating methods to mitigate this lack of generalisation represents the bulk of this thesis. From the review, a classification pipeline was synthesised, representing the sequence of steps that all activity classifiers use: (1) determination of device location and setting (Chapter 4); (2) pre-processing (Chapter 5); (3) segmenting into windows (Chapter 6); (4) extracting features (Chapters 7 and 8); (5) creating the classifier (Chapter 9); (6) post-processing (Chapter 5). For each of these steps, methods were created and tested that allowed a high level of generalisability without sacrificing overall performance. Results: The work in this thesis yields an activity classifier with a good ability to generalise to unseen data. The classifier achieved F1-scores of 0.916 and 0.826 on data similar to its training data, statistically equivalent to the performance of current state-of-the-art models (0.898, 0.765). On data dissimilar to its training data, the classifier achieved significantly higher performance than current state-of-the-art methods (0.759 and 0.897 versus 0.352 and 0.415), showing that the classifier created in this work generalises to unseen data significantly better than current methods. Conclusion: This thesis details the creation of an activity classifier with an improved ability to generalise to unseen data, allowing identification of activity type from acceleration data. This should enable more detailed investigation of the specific health effects of activity type in large population studies that use accelerometers.
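The six-step pipeline described in this abstract is generic enough to sketch. Below is a minimal, illustrative Python version of steps 3 to 5 (windowing, feature extraction, classifier training); the 50 Hz sampling rate, 5-second windows, feature set, random-forest model, and random stand-in data are assumptions, not the thesis's actual settings or results.

```python
# Minimal sketch of the generic accelerometer activity-classification pipeline:
# segment into windows -> extract features -> train a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 50          # assumed sampling rate (Hz)
WIN = 5 * FS     # assumed 5-second windows

def windows(signal, labels, win=WIN):
    """Segment an (n_samples, 3) acceleration signal into fixed-length windows."""
    for start in range(0, len(signal) - win + 1, win):
        yield signal[start:start + win], labels[start]

def features(window):
    """Simple time-domain features per axis: mean, std, min, max."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

# Synthetic stand-in data: 10 minutes of tri-axial acceleration with 2 activities.
rng = np.random.default_rng(1)
acc = rng.normal(size=(10 * 60 * FS, 3))
lab = np.repeat([0, 1], len(acc) // 2)

X = np.array([features(w) for w, _ in windows(acc, lab)])
y = np.array([l for _, l in windows(acc, lab)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```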

    Multi-sensor data fusion and modelling in mobile devices for enhanced user experience


    Unstructured Handwashing Recognition using Smartwatch to Reduce Contact Transmission of Pathogens

    Current guidelines from the World Health Organization indicate that the SARS-CoV-2 coronavirus, which causes the novel coronavirus disease (COVID-19), is transmitted through respiratory droplets or by contact. Contact transmission occurs when contaminated hands touch the mucous membranes of the mouth, nose, or eyes, so hand hygiene is extremely important to prevent the spread of SARS-CoV-2 as well as of other pathogens. The vast proliferation of wearable devices such as smartwatches, containing acceleration, rotation, and magnetic field sensors, together with modern artificial intelligence techniques such as machine learning and, more recently, deep learning, allows the development of accurate applications for recognizing and classifying human activities such as walking, climbing stairs, running, clapping, sitting, and sleeping. In this work, we evaluate the feasibility of a machine learning based system which, starting from inertial signals collected from wearable devices such as current smartwatches, recognizes when a subject is washing or rubbing their hands. Preliminary results, obtained over two different datasets, show a classification accuracy of about 95% for deep learning techniques and about 94% for standard machine learning techniques.
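For the deep-learning variant mentioned above, a small 1-D convolutional network over fixed-length inertial windows is one plausible shape of such a classifier. This is only an illustrative sketch assuming PyTorch, six inertial channels (3-axis accelerometer plus 3-axis gyroscope), 3-second windows at 50 Hz, and random stand-in data; it is not the architecture or the datasets used in the paper.

```python
# Hedged sketch of a small 1-D CNN for binary hand-washing detection from
# smartwatch inertial windows. All sizes and the data are assumptions.
import torch
import torch.nn as nn

WIN, CH = 150, 6   # assumed: 3-second windows at 50 Hz, 3 accel + 3 gyro channels

model = nn.Sequential(
    nn.Conv1d(CH, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                    # 2 classes: hand washing vs. other activity
)

# Synthetic batch standing in for labelled smartwatch windows.
x = torch.randn(64, CH, WIN)
y = torch.randint(0, 2, (64,))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                       # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```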

    Posture Recognition Using the Interdistances Between Wearable Devices

    Recognition of users' postures and activities is particularly important, as it allows applications to customize their operation according to the current situation. The vast majority of available solutions are based on wearable devices equipped with accelerometers and gyroscopes. In this article, a different approach is explored: the posture of the user is inferred from the interdistances between the set of devices worn by the user. Interdistances are first measured using ultra-wideband transceivers operating in two-way ranging mode and then provided as input to a classifier that estimates the current posture. An experimental evaluation shows that the proposed method is effective (up to ∼98.2% accuracy), especially when using a personalized model. The method could be used to enhance the accuracy of activity recognition systems based on inertial sensors.
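A Python sketch of the general idea of classifying posture directly from device interdistances. The four device placements, the two example postures, the noise level, and the SVM classifier are assumptions made for illustration; the paper's UWB two-way-ranging measurements are simulated here as noisy Euclidean distances between synthetic 3-D device positions.

```python
# Illustrative sketch: pairwise ranges between worn devices used directly as the
# feature vector of a posture classifier. All coordinates and labels are synthetic.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Rough 3-D positions of 4 worn devices (two wrists, chest, ankle) per posture (m).
POSTURES = {
    "standing": np.array([[0.2, 0.0, 1.0], [-0.2, 0.0, 1.0], [0.0, 0.0, 1.4], [0.0, 0.0, 0.1]]),
    "sitting":  np.array([[0.3, 0.3, 0.6], [-0.3, 0.3, 0.6], [0.0, 0.0, 0.9], [0.0, 0.3, 0.1]]),
}

def interdistances(points, noise=0.05):
    """All pairwise device distances (the two-way-ranging measurements), with noise."""
    dists = [np.linalg.norm(a - b) for a, b in combinations(points, 2)]
    return np.array(dists) + rng.normal(0, noise, len(dists))

X, y = [], []
for label, pts in POSTURES.items():
    for _ in range(100):                      # 100 noisy samples per posture
        X.append(interdistances(pts))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([interdistances(POSTURES["sitting"])]))
```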

    Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication

    We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smartphone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smartphone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touchscreen. The classifier achieves a median equal error rate of 0% for intra-session authentication, 2%-3% for inter-session authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as part of a multi-modal biometric authentication system. Comment: to appear in IEEE Transactions on Information Forensics & Security; data available from http://www.mariofrank.net/touchalytics
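The equal error rates quoted above are the operating points where the false accept rate and the false reject rate coincide. Below is a small, self-contained sketch of how an EER can be computed from per-swipe anomaly scores; the score distributions are synthetic, and the paper's 30 touch features and classification framework are not reproduced here.

```python
# Sketch: equal error rate from genuine and impostor scores (lower score = more
# genuine). The score distributions are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
genuine = rng.normal(0.3, 0.15, 500)    # scores for the enrolled user's swipes
impostor = rng.normal(0.7, 0.15, 500)   # scores for other users' swipes

def eer(genuine, impostor):
    """Find the threshold where false accept rate equals false reject rate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine > t).mean() for t in thresholds])    # genuine rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2, thresholds[i]

rate, thr = eer(genuine, impostor)
print(f"EER = {rate:.3f} at threshold {thr:.2f}")
```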

    Identification of Persons and Several Demographic Features based on Motion Analysis of Various Daily Activities using Wearable Sensors

    In recent years, there has been increasing interest in using the capabilities of wearable sensors, including accelerometers, gyroscopes, and magnetometers, to recognize individuals while they undertake a set of normal daily activities. The past few years have seen considerable research exploring person recognition using wearable sensing devices, owing to its significance in different applications, including security and human-computer interaction. This thesis explores the identification of subjects, and of several related biometric demographic attributes, based on motion data of normal daily activities gathered using wearable sensor devices. First, it studies the recognition of 18 subjects based on motion data of 20 daily living activities using six wearable sensors affixed to different body locations. Next, it investigates the task of classifying various biometric demographic features (age, gender, height, and weight) based on motion data of various activities gathered using two types of accelerometers and one gyroscope. Initially, different significant parameters that impact the subject recognition success rate are investigated. These include the performance of the three sensor sources (accelerometer, gyroscope, and magnetometer) and the impact of their combinations. Furthermore, the impact of the number of sensors mounted at different body positions, and the best body position at which to mount sensors, are also studied. The analysis also explores which activities are more suitable for subject recognition and, lastly, the recognition success rates and the mutual confusion among individuals. In addition, the impact of several fundamental factors on the classification performance of the different demographic features is studied using motion data collected from three sensors; these factors include feature sets extracted from both the time and frequency domains, feature selection, and individual versus multiple sensor sources. The key findings are: (1) features extracted from all three sensor sources provide the highest subject recognition accuracy; (2) recognition accuracy is affected by the body position and the number of sensors: the ankle, chest, and thigh positions outperform other positions, and gains in subject classification accuracy diminish as the number of sensors increases; (3) sedentary activities such as watching TV, texting on the phone, writing with a pen, and using a PC produce higher classification results and distinguish persons efficiently due to the absence of motion noise in the signal; (4) identifiability is not uniformly distributed across subjects; (5) for the biometric features considered, both the full and the selected feature sets derived from all three sources (two accelerometers and a gyroscope) provide higher classification accuracy for all biometric features than features derived from individual sensor sources or from pairs of sensors; (6) under all configurations and for all biometric features classified, the time-domain features examined always outperformed the frequency-domain features, and combining the two sets led to no increase in classification accuracy over the time-domain features alone.
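Finding (6) contrasts time-domain and frequency-domain features. As a rough illustration, the sketch below extracts one plausible set of each from a single tri-axial sensor window; the specific statistics, the 50 Hz sampling rate, and the window length are assumptions rather than the thesis's actual feature sets.

```python
# Hedged sketch: time-domain vs. frequency-domain features from one sensor window.
import numpy as np

def time_features(window):
    """Per-axis time-domain statistics for one (n, 3) sensor window."""
    return np.concatenate([window.mean(0), window.std(0),
                           np.abs(window).mean(0),            # mean absolute value
                           window.max(0) - window.min(0)])    # range

def freq_features(window, fs=50):
    """Per-axis frequency-domain features: dominant frequency and spectral energy."""
    spectrum = np.abs(np.fft.rfft(window, axis=0))
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    dominant = freqs[spectrum[1:].argmax(0) + 1]   # skip the DC bin
    energy = (spectrum ** 2).sum(0) / len(window)
    return np.concatenate([dominant, energy])

window = np.random.default_rng(4).normal(size=(250, 3))   # 5 s at 50 Hz, synthetic
print(time_features(window).shape, freq_features(window).shape)   # (12,) (6,)
```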