34 research outputs found

    Recognizing human activities based on wearable inertial measurements: methods and applications

    No full text
    Abstract Inertial sensors are devices that measure movement, and therefore, when they are attached to the body, they can be used to measure human movements. In this thesis, data from these sensors are studied in order to recognize human activities user-independently. This is possible if the following two hypotheses hold: firstly, because human movements differ between activities, the inertial sensor data measured from them also differ enough between activities that the data can be used to recognize activities; secondly, while movements and inertial data differ between activities, they are similar enough when different persons perform the same activity that they can be recognized as the same activity. In this thesis, pattern recognition-based solutions are applied to inertial data to find these dissimilarities and similarities and, therefore, to build models that recognize activities user-independently. Activity recognition in this thesis is studied in two contexts: daily activity recognition using mobile phones, and activity recognition in an industrial context. Both of these contexts have special requirements, which are considered in the presented solutions. Mobile phones are convenient devices for measuring daily activity: they include a wide range of sensors useful for detecting activities, and people carry them with them most of the time. On the other hand, using mobile phones for activity recognition involves several challenges; for instance, a person can carry a phone in any orientation, and there are hundreds of smartphone models, each with specific hardware and software. Moreover, as battery life is always an issue with smartphones, techniques to lighten the classification process are proposed. The industrial context is different from the daily activity context: when daily activities are recognized, occasional misclassifications may disturb the user, but they do not cause any other type of harm. This is not the case when activities are recognized in an industrial context and the purpose is to recognize whether an assembly line worker has performed tasks correctly; in this case, false classifications may be much more harmful. Solutions to these challenges are presented in this thesis. The solutions introduced in this thesis are applied to activity recognition data sets. However, as the basic idea of the activity recognition problem is the same as in many other pattern recognition procedures, most of the solutions can be applied to any pattern recognition problem, especially those involving time series data.
    Tiivistelmä Data obtained from motion sensors, such as accelerometers, can be used to measure human movements by attaching the sensors to some part of the human body. The aim of this thesis is to train user-independent models based on this data that can be used to recognize human activities, such as walking and running. The functioning of these models rests on the following two assumptions: (1) because people's movements differ between activities, the sensor data measured from them also differ; (2) the movements of several persons performing the same activity are so similar that, based on the sensor data measured from the movements, they can be inferred to describe the same activity. In this thesis, user-independent recognition of human activities is based on pattern recognition methods, and recognition is applied in two different contexts: recognizing daily activities with a smartphone, and recognizing activities in an industrial environment. Both application areas have their own special requirements and challenges. Recognition based on the motion sensors of a smartphone is challenging, for example, because the orientation and position of the phone can vary: it may be in a bag or in a pocket, and it may be in any orientation. The limited battery life of the phone also creates its own challenges, so the recognition should be done as lightly and with as little power consumption as possible. In an industrial environment the challenges are different. When the aim is, for example, to recognize whether the work phases on an assembly line are performed in the correct order, even a single erroneous recognition can cause great harm. In an industrial environment, the goal is therefore to recognize activities as accurately as possible, regardless of how much power and computation the recognition requires. This thesis describes how these special requirements and challenges can be taken into account when designing models for recognizing human activities. The new methods presented in this thesis have been applied to recognizing human activities, but the same methods can be used in many other pattern recognition problems, especially those in which the analyzed data is in time series form.
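
    To make the pattern recognition pipeline described above concrete, the sketch below shows a minimal user-independent recognition workflow on windowed inertial data: simple time-domain features are extracted from accelerometer windows, a classifier is trained on data from some subjects, and it is evaluated on a held-out subject. The window length, feature set, classifier choice, and synthetic data are illustrative assumptions, not the exact setup of the thesis.

```python
# Minimal sketch of a user-independent activity recognition pipeline.
# Assumptions: 50 Hz tri-axial accelerometer, 2 s windows, simple
# time-domain features, and a random forest classifier; synthetic data
# stands in for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

RNG = np.random.default_rng(0)
FS, WIN = 50, 100          # sampling rate (Hz) and window length (2 s)

def extract_features(window):
    """Per-axis mean, standard deviation, and min-max range."""
    return np.concatenate([window.mean(0), window.std(0), np.ptp(window, axis=0)])

def synthetic_subject(subject_id, n_windows=60):
    """Stand-in for one subject's labelled accelerometer windows."""
    X, y = [], []
    for i in range(n_windows):
        label = i % 3                              # three activities
        # Different activities get different signal scales; the small
        # subject-specific offset mimics person-to-person differences.
        window = RNG.normal(0.05 * subject_id, 1 + label, size=(WIN, 3))
        X.append(extract_features(window))
        y.append(label)
    return np.array(X), np.array(y)

# Leave-one-subject-out style split: train on subjects 0-3, test on subject 4.
train_X, train_y = [], []
for sid in range(4):
    X, y = synthetic_subject(sid)
    train_X.append(X)
    train_y.append(y)
test_X, test_y = synthetic_subject(4)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.vstack(train_X), np.concatenate(train_y))
print("held-out subject accuracy:", accuracy_score(test_y, clf.predict(test_X)))
```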

    Context-aware incremental learning-based method for personalized human activity recognition

    No full text
    Abstract This study introduces an ensemble-based personalized human activity recognition method relying on incremental learning, a method for continuous learning that can not only learn from streaming data but also adapt to different contexts and to changes in context. This adaptation is based on a novel weighting approach that gives larger weights to the base models of the ensemble that are most suitable for the current context. In this article, contexts are different body positions of the inertial sensors. The experiments are performed in two scenarios: (S1) adapting the model to a known context, and (S2) adapting the model to a previously unknown context. In both scenarios, the models also had to adapt to the data of a previously unknown person, as the initial user-independent dataset did not include any data from the studied user. In the experiments, the proposed ensemble-based approach is compared to a non-weighted personalization method relying on an ensemble-based classifier and to a static user-independent model. Both ensemble models are tested with three different base classifiers (linear discriminant analysis, quadratic discriminant analysis, and classification and regression tree). The results show that the proposed ensemble method performs much better than the non-weighted ensemble personalization model in both scenarios, regardless of which base classifier is used. Moreover, the proposed method outperforms user-independent models. In Scenario 1, the error rate of balanced accuracy was 13.3% with the user-independent model, 13.8% with the non-weighted personalization method, and 6.4% with the proposed method. The difference is even bigger in Scenario 2, where the error rates are 36.6% with the user-independent model, 36.9% with the non-weighted personalization method, and 14.1% with the proposed method. In addition, F1 scores show that the proposed method performs much better than the rival methods in both scenarios. Moreover, as a side result, it was noted that the presented method can also be used to recognize the body position of the sensor.
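
    The core idea of weighting ensemble members by how well they match the current context can be sketched as follows. The weighting rule used here (each base model's accuracy on a small buffer of recently labelled observations from the current context, normalized into weights) is an illustrative assumption, not the exact formula of the article.

```python
# Sketch: weighting ensemble base models by their fit to the current context.
# Assumption: a base model's weight is its accuracy on a small buffer of
# recently labelled observations from the current context (body position).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def make_context_data(shift, n=200):
    """Toy 2-class data; 'shift' mimics a different sensor body position."""
    X = rng.normal(shift, 1.0, size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Base models trained on different contexts (body positions).
contexts = {"wrist": 0.0, "hip": 1.5, "pocket": 3.0}
base_models = {}
for name, shift in contexts.items():
    X, y = make_context_data(shift)
    base_models[name] = LinearDiscriminantAnalysis().fit(X, y)

# A new user appears in the "hip" context: weight the base models by their
# accuracy on a small labelled buffer gathered from that context.
X_buf, y_buf = make_context_data(1.5, n=30)
weights = np.array([m.score(X_buf, y_buf) for m in base_models.values()])
weights /= weights.sum()

def weighted_predict(X):
    """Weighted soft vote over the base models."""
    probs = np.array([m.predict_proba(X) for m in base_models.values()])
    return (weights[:, None, None] * probs).sum(axis=0).argmax(axis=1)

X_test, y_test = make_context_data(1.5, n=100)
print("weighted ensemble accuracy:", (weighted_predict(X_test) == y_test).mean())
```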

    Comparison of regression and classification models for user-independent and personal stress detection

    No full text
    Abstract In this article, regression and classification models are compared for stress detection. Both personal and user-independent models are evaluated. The article is based on the publicly open AffectiveROAD dataset, which contains data gathered using the Empatica E4 sensor and which, unlike most other stress detection datasets, contains continuous target variables. The classification model used is a random forest, and the regression model is a bagged tree-based ensemble. Based on the experiments, regression models outperform classification models when classifying observations as stressed or not stressed. The best user-independent results are obtained using a combination of blood volume pulse and skin temperature features; with these, the average balanced accuracy was 74.1% with the classification model and 82.3% with the regression model. In addition, regression models can be used to estimate the level of stress. Moreover, the results based on models trained using personal data are not encouraging, showing that biosignals vary considerably not only between study subjects but also between sessions gathered from the same person. On the other hand, it is shown that with subject-wise feature selection for the user-independent model, it is possible to improve recognition models more than by using personal training data to build personal models. In fact, it is shown that with subject-wise feature selection, the average detection rate can be improved by as much as 4 percentage points, and it is especially useful for reducing the variance in recognition rates between study subjects.
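
    The comparison between a classifier and a regression model thresholded into stressed/not-stressed classes can be sketched roughly as below. The synthetic features, the continuous stress target in [0, 1], and the 0.5 decision threshold are assumptions for illustration, not the AffectiveROAD protocol.

```python
# Sketch: comparing classification and regression models for stress detection.
# Assumptions: a continuous stress target in [0, 1], a 0.5 threshold for the
# stressed / not-stressed decision, and synthetic features standing in for
# blood volume pulse and skin temperature features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingRegressor
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 6))                      # stand-in biosignal features
stress = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n))))
y_class = (stress > 0.5).astype(int)             # binarized target

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
    X, stress, y_class, test_size=0.3, random_state=0)

# Classification model trained directly on the binary labels.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc_clf = balanced_accuracy_score(y_te, clf.predict(X_te))

# Regression model (bagged decision trees, the default base estimator) trained
# on the continuous stress level and then thresholded into two classes.
reg = BaggingRegressor(n_estimators=200, random_state=0).fit(X_tr, s_tr)
acc_reg = balanced_accuracy_score(y_te, (reg.predict(X_te) > 0.5).astype(int))

print(f"balanced accuracy  classifier: {acc_clf:.3f}  regression: {acc_reg:.3f}")
```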

    Incremental learning to personalize human activity recognition models: the importance of human AI collaboration

    No full text
    Abstract This study presents incremental learning-based methods to personalize human activity recognition models. Initially, a user-independent model is used in the recognition process. When a new user starts to use the human activity recognition application, personal streaming data can be gathered. Of course, this data does not have labels. However, there are three different ways to obtain labels for it: non-supervised, semi-supervised, and supervised. The non-supervised approach relies purely on predicted labels, the supervised approach uses only human intelligence to label the data, and the proposed semi-supervised method is a combination of these two: it uses artificial intelligence (AI) to label the data in most cases, but in uncertain cases it relies on human intelligence. After labels are obtained, the personalization process continues by using the streaming data and these labels to update the incremental learning-based model, which in this case is Learn++. Learn++ is an ensemble method that can use any classifier as a base classifier, and this study compares three base classifiers: linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and classification and regression tree (CART). Moreover, three datasets are used in the experiments to show how well the presented method generalizes to different datasets. The results show that personalized models are much more accurate than user-independent models. On average, the recognition rates are 87.0% using the user-independent model, 89.1% using the non-supervised personalization approach, 94.0% using the semi-supervised personalization approach, and 96.5% using the supervised personalization approach. This means that by relying on predicted labels with high confidence and asking the user to label only the uncertain observations (6.6% of the observations when using LDA, 7.7% when using QDA, and 18.3% when using CART), error rates almost as low as with the supervised approach, in which labeling is fully based on human intelligence, can be achieved.
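
    The semi-supervised labeling idea (trust the model when it is confident, ask the human otherwise) can be sketched as below. Since Learn++ itself is not available in scikit-learn, the incremental update is approximated here by adding a new base classifier trained on each labelled batch to a growing soft-vote ensemble; the 0.9 confidence threshold, the synthetic data, and this simplified update are illustrative assumptions.

```python
# Sketch: semi-supervised personalization of a HAR model. High-confidence
# predictions are used as labels; low-confidence windows are sent to the user.
# The Learn++-style update is approximated by adding one new base classifier
# per labelled batch to a soft-vote ensemble.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
CONF_THRESHOLD = 0.9                      # assumed confidence threshold

def make_batch(offset, n=200):
    """Toy 3-class feature windows; 'offset' mimics a new user's data."""
    y = rng.integers(0, 3, n)
    X = rng.normal(y[:, None] + offset, 1.0, size=(n, 5))
    return X, y

ensemble = []                             # list of base classifiers

def ensemble_proba(X):
    return np.mean([m.predict_proba(X) for m in ensemble], axis=0)

# 1) User-independent starting point.
X0, y0 = make_batch(offset=0.0, n=600)
ensemble.append(LinearDiscriminantAnalysis().fit(X0, y0))

# 2) Personal streaming batches from a new user (offset mimics the shift).
asked = 0
for _ in range(5):
    Xb, yb_true = make_batch(offset=0.3)
    proba = ensemble_proba(Xb)
    labels = proba.argmax(axis=1)         # AI-provided labels
    ask = proba.max(axis=1) < CONF_THRESHOLD
    labels[ask] = yb_true[ask]            # stand-in for user-provided labels
    asked += int(ask.sum())
    ensemble.append(LinearDiscriminantAnalysis().fit(Xb, labels))

Xt, yt = make_batch(offset=0.3, n=300)
acc = (ensemble_proba(Xt).argmax(axis=1) == yt).mean()
print(f"personalized accuracy: {acc:.3f}, labels asked from user: {asked}")
```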

    Revisiting “Recognizing human activities user-independently on smartphones based on accelerometer data” – what has happened since 2012?

    No full text
    Abstract Our article “Recognizing human activities user-independently on smartphones based on accelerometer data” was published in the International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI) in 2012. In 2018, it was selected as the most outstanding article published in the 10 years of IJIMAI's life. To celebrate the 10th anniversary of IJIMAI, in this article we introduce what has happened in the field of human activity recognition and wearable sensor-based recognition since 2012 and, in particular, concentrate on our own work since 2012.

    Experiences with publicly open human activity data sets: studying the generalizability of the recognition models

    No full text
    Abstract In this article, it is studied how well inertial sensor-based human activity recognition models work when the training and testing data sets are collected in different environments. The comparison is done using publicly open human activity data sets. This article has four objectives. Firstly, a survey of publicly available data sets is presented. Secondly, a previously unshared human activity data set used in our earlier work is opened for public use. Thirdly, the generalizability of recognition models trained using publicly open data sets is examined by testing them with data from another publicly open data set, to learn how the models work when they are used in a different environment, with different study subjects and hardware. Finally, the challenges encountered when using publicly open data sets are discussed. The results show that the data gathering protocol can have a statistically significant effect on the recognition rates. In addition, it was noted that publicly open human activity data sets are often not as easy to use as they should be.
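
    The cross-dataset experiment can be sketched roughly as below: a model is trained on one data set and tested on another collected with different hardware and subjects, which typically lowers the recognition rate; unifying the data narrows the gap. The two synthetic "data sets" and the per-data-set standardization step are illustrative assumptions, not the article's protocol.

```python
# Sketch: testing how a model trained on one publicly open data set generalizes
# to another data set gathered with different hardware and study subjects.
# Assumption: the second data set differs by a scale/offset (e.g. different
# measurement units or sensor placement); the data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

def dataset(scale, offset, n=600):
    """Toy 3-class data set; scale/offset mimic different recording setups."""
    y = rng.integers(0, 3, n)
    X = scale * rng.normal(y[:, None], 1.0, size=(n, 6)) + offset
    return X, y

X_a, y_a = dataset(scale=1.0, offset=0.0)     # data set A (training)
X_b, y_b = dataset(scale=9.81, offset=0.5)    # data set B (different units)

model = LinearDiscriminantAnalysis().fit(X_a, y_a)
print("within data set A:", model.score(*dataset(scale=1.0, offset=0.0)))
print("cross  data set B:", model.score(X_b, y_b))

# Unifying the data (here: per-data-set standardization) narrows the gap.
def standardize(X):
    return (X - X.mean(0)) / X.std(0)

model = LinearDiscriminantAnalysis().fit(standardize(X_a), y_a)
print("cross  data set B, unified:", model.score(standardize(X_b), y_b))
```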

    MyoGym: introducing an open gym data set for activity recognition collected using Myo armband

    No full text
    Abstract Activity recognition research has remained popular, although the first steps were taken almost two decades ago. While the first ideas were more like proof-of-concept studies, the area has become fruitful soil for novel machine learning methods, adaptive modeling, signal fusion, and several different types of application areas. Nevertheless, one of the aspects slowing down methodology development is the burden of collecting and labeling sufficiently versatile data sets. In this article, the MyoGym data set is introduced for use in activity recognition classifier development, in the development of models for unseen activities, in signal fusion, and in many other areas not yet known. The data set includes 6D motion signals and 8-channel electromyogram data from 10 persons performing 30 different gym exercises, each consisting of a set of ten repetitions. The benchmark results provided in this article are kept straightforward on purpose, so that repeating them should be easy for any newcomer to the area.
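
    A minimal sketch of how a benchmark on such a data set might be set up: slide a fixed-length window over the synchronized 6D motion and 8-channel EMG signals, extract simple features from all channels, and train a classifier. The sampling rate, window length, and synthetic data are assumptions, not the actual MyoGym format.

```python
# Sketch: windowing and a simple benchmark classifier for a gym-exercise data
# set with 6D motion (accelerometer + gyroscope) and 8-channel EMG signals.
# Assumptions: 50 Hz signals, a 2 s sliding window with 50% overlap, and
# synthetic data standing in for the real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
WIN, STEP = 100, 50                  # assumed window and step (samples at 50 Hz)
N_CHANNELS = 6 + 8                   # 6D motion + 8 EMG channels

# Synthetic continuous recording: 3 exercises, 2000 samples each.
signal = np.vstack([rng.normal(k, 1.0, size=(2000, N_CHANNELS)) for k in range(3)])
labels = np.repeat([0, 1, 2], 2000)

def window_features(sig, lab):
    """Mean and std per channel per window; majority label per window."""
    X, y = [], []
    for start in range(0, len(sig) - WIN + 1, STEP):
        w = sig[start:start + WIN]
        X.append(np.concatenate([w.mean(0), w.std(0)]))
        y.append(np.bincount(lab[start:start + WIN]).argmax())
    return np.array(X), np.array(y)

X, y = window_features(signal, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("window-level accuracy:", clf.score(X_te, y_te))
```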

    OpenHAR: a Matlab toolbox for easy access to publicly open human activity data sets

    No full text
    Abstract This study introduces OpenHAR, a free Matlab toolbox for combining and unifying publicly open data sets. It provides easy access to the accelerometer signals of ten publicly open human activity data sets. The data sets are easy to access, as OpenHAR provides all of them in the same format. In addition, units, measurement ranges, labels, and body position IDs are unified. Moreover, data sets with different sampling rates are unified using downsampling. What is more, the data sets have been visually inspected to find visible errors, such as a sensor in the wrong orientation, and OpenHAR improves the re-usability of the data sets by fixing these errors. Altogether, OpenHAR contains over 65 million labeled data samples, equivalent to over 280 hours of data from 3D accelerometers. This includes data from 211 study subjects performing 17 daily human activities and wearing sensors in 14 different body positions.
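
    Although OpenHAR itself is a Matlab toolbox, the kind of unification it performs (common units, a shared label set, downsampling to a common rate) can be sketched in a few lines of Python; the target rate, unit convention, and label map below are illustrative assumptions, not OpenHAR's actual interface.

```python
# Sketch of data set unification in the spirit of OpenHAR: convert units,
# map data-set-specific labels to a shared label set, and downsample to a
# common sampling rate. All constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
TARGET_FS = 50                         # assumed common sampling rate (Hz)
G = 9.81                               # m/s^2 per g

def unify(acc, fs, unit, labels, label_map):
    """Return accelerometer data in g at TARGET_FS with unified labels."""
    if unit == "m/s^2":                # unify units to g
        acc = acc / G
    step = int(round(fs / TARGET_FS))  # naive downsampling by decimation
    acc, labels = acc[::step], labels[::step]
    unified_labels = np.array([label_map[l] for l in labels])
    return acc, unified_labels

# Hypothetical data set A: 100 Hz, m/s^2, its own label names.
acc_a = rng.normal(0, G, size=(1000, 3))
labels_a = np.repeat(["walking", "running"], 500)
map_a = {"walking": "walk", "running": "run"}

acc_u, labels_u = unify(acc_a, fs=100, unit="m/s^2", labels=labels_a, label_map=map_a)
print(acc_u.shape, np.unique(labels_u))   # (500, 3) ['run' 'walk']
```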

    From user-independent to personal human activity recognition models using smartphone sensors

    No full text
    Abstract In this study, a novel method to obtain user-dependent human activity recognition models unobtrusively, by using the sensors of a smartphone, is presented. The recognition consists of two models: a sensor fusion-based user-independent model for data labeling and a single sensor-based user-dependent model for the final recognition. The functioning of the presented method is tested with a human activity data set, including data from an accelerometer and a magnetometer, and with two classifiers. A comparison of the detection accuracies of the proposed method with a traditional user-independent model shows that the presented method has potential: in nine cases out of ten it is better than the traditional method, but more experiments using different sensor combinations are needed to show the full potential of the method.
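
    The two-stage idea (a sensor fusion-based user-independent model labels the new user's data, and a lighter single-sensor personal model is then trained on those labels) can be sketched as below; the synthetic features and the classifier choice are assumptions for illustration, not the exact setup of the article.

```python
# Sketch: obtaining a personal single-sensor model unobtrusively by letting a
# sensor-fusion user-independent model label the new user's data first.
# Assumptions: synthetic accelerometer + magnetometer features (columns 0-2
# and 3-5) and LDA classifiers.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
ACC = slice(0, 3)                       # accelerometer feature columns

def user_data(offset, n=400):
    """Toy 3-activity data; 'offset' mimics person-specific differences."""
    y = rng.integers(0, 3, n)
    X = rng.normal(y[:, None] + offset, 1.0, size=(n, 6))
    return X, y

# 1) User-independent fusion model trained on other users (all 6 features).
X_ui, y_ui = user_data(offset=0.0, n=2000)
fusion_model = LinearDiscriminantAnalysis().fit(X_ui, y_ui)

# 2) The new user's unlabeled data is labeled by the fusion model...
X_new, _ = user_data(offset=0.4)
pseudo_labels = fusion_model.predict(X_new)

# 3) ...and a personal accelerometer-only model is trained on those labels.
personal_acc_model = LinearDiscriminantAnalysis().fit(X_new[:, ACC], pseudo_labels)

# Compare on held-out data from the same user, using only the accelerometer.
X_test, y_test = user_data(offset=0.4)
ui_acc_model = LinearDiscriminantAnalysis().fit(X_ui[:, ACC], y_ui)
print("user-independent, accelerometer only:", ui_acc_model.score(X_test[:, ACC], y_test))
print("personal,         accelerometer only:", personal_acc_model.score(X_test[:, ACC], y_test))
```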