
    Smartphone-based human activity recognition

    Cotutela (joint supervision) between Universitat Politècnica de Catalunya and Università degli Studi di Genova. Human Activity Recognition (HAR) is a multidisciplinary research field that aims to gather data about people's behavior and their interaction with the environment in order to deliver valuable context-aware information. It has contributed to the development of human-centered areas of study such as Ambient Intelligence and Ambient Assisted Living, which concentrate on improving people's quality of life. The first stage of HAR requires making observations with ambient or wearable sensor technologies. In the wearable case, however, the search for pervasive, unobtrusive, low-power, and low-cost devices for this challenging task has not been fully addressed. In this thesis, we explore the use of smartphones as an alternative approach to identifying physical activities. These self-contained, widely available devices come with embedded sensors, powerful computing capabilities, and wireless communication technologies that make them highly suitable for this application. This work presents a series of contributions to the development of HAR systems on smartphones. First, we propose a fully operational system that recognizes six physical activities in real time while also accounting for the postural transitions that may occur between them. To achieve this, we cover research topics ranging from signal processing and feature selection of inertial data to machine learning approaches for classification. We employ two sensors (the accelerometer and the gyroscope) to collect inertial data. Their raw signals are the input of the system and are conditioned through filtering to reduce noise and allow the extraction of informative activity features. We also emphasize the study of Support Vector Machines (SVMs), one of the state-of-the-art machine learning techniques for classification, and reformulate several of the standard multiclass linear and non-linear methods to find the best trade-off between recognition performance, computational cost, and energy requirements, which are essential aspects in battery-operated devices such as smartphones. In particular, we propose two multiclass SVMs for activity classification: a linear algorithm that allows a trade-off between dimensionality reduction and system accuracy, and a non-linear hardware-friendly algorithm that uses only fixed-point arithmetic in the prediction phase, reducing model complexity while maintaining system performance. The efficiency of the proposed system is verified through extensive experimentation on a HAR dataset that we have generated and made publicly available. It is composed of inertial data collected from a group of 30 participants who performed a set of common daily activities while carrying a waist-mounted smartphone as a wearable device. The results show that it is possible to perform HAR in real time on smartphones with a precision near 97%. The proposed methodology can therefore be employed in higher-level applications that require HAR, such as ambulatory monitoring of disabled and elderly people over periods of more than five days without recharging the battery. Moreover, the proposed algorithms can be adapted to other commercial wearable devices recently introduced in the market (e.g., smartwatches, phablets, and glasses). This will open up new opportunities for developing practical and innovative HAR applications.
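    A rough sketch of the signal-conditioning and feature-extraction stage described above is given below. The sampling rate, window length, filter cut-off and feature set are illustrative assumptions rather than the thesis configuration.

        # Hypothetical sketch: low-pass filtering of raw inertial windows and
        # simple time-domain features; parameter values are assumptions.
        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 50.0       # assumed sampling rate in Hz
        WINDOW = 128    # assumed window length in samples

        def low_pass(signal, cutoff_hz=20.0, fs=FS, order=3):
            # Butterworth low-pass filter to attenuate high-frequency sensor noise.
            b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
            return filtfilt(b, a, signal, axis=0)

        def extract_features(window):
            # Per-axis time-domain features from one window of shape (n_samples, n_axes).
            return np.concatenate([
                window.mean(axis=0),          # mean per axis
                window.std(axis=0),           # standard deviation per axis
                np.abs(window).max(axis=0),   # peak magnitude per axis
            ])

        # Example with one window of 6-axis data (3 accelerometer + 3 gyroscope axes).
        raw_window = np.random.randn(WINDOW, 6)
        features = extract_features(low_pass(raw_window))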
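    The abstract also highlights a hardware-friendly SVM that uses only fixed-point arithmetic in the prediction phase. The toy sketch below shows the general idea, one-vs-all linear prediction carried out on quantized integers; the scaling factor and model shapes are assumptions, not the algorithm proposed in the thesis.

        # Hypothetical fixed-point one-vs-all linear SVM prediction.
        import numpy as np

        SCALE = 2 ** 10  # assumed fixed-point scaling factor

        def to_fixed(x):
            # Quantize floating-point values to scaled 64-bit integers.
            return np.round(x * SCALE).astype(np.int64)

        def predict_fixed_point(x_q, W_q, b_q):
            # Integer-only decision: scores share a common scale, so argmax is unaffected.
            scores = W_q @ x_q + b_q * SCALE
            return int(np.argmax(scores))

        # Example with a placeholder 6-class model over 18 features.
        rng = np.random.default_rng(0)
        W_q = to_fixed(rng.normal(size=(6, 18)))
        b_q = to_fixed(np.zeros(6))
        x_q = to_fixed(rng.normal(size=18))
        activity = predict_fixed_point(x_q, W_q, b_q)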

    Computerised accelerometric machine learning techniques and statistical developments for human balance analysis

    Balance maintenance is crucial to participating in the activities of daily life. Balance is often considered the ability to maintain the centre of mass (COM) position within the base of support. To maintain balance, reliance is placed primarily on the balance-related sensory systems, i.e., the visual, proprioceptive and vestibular systems. Several factors can affect a person’s balance, such as neurological diseases, ageing, medication and obesity. To gain insight into balance, studies rely on statistical and machine learning techniques: statistical techniques are used for inference, while machine learning techniques have proved effective for interpretation. The focus of this study was on issues encountered in human balance analysis, such as the quantification of balance by relevant features, the relationship between the COM and ground-projected body sway, the performance of the various sensory systems in balance analysis, and their relationship with the directions of body sway (i.e., mediolateral (ML) and anterior-posterior (AP)). A portable wireless accelerometry device was developed, and balance analysis methods based on the inverted pendulum model were devised and evaluated for accuracy and reliability against a setup designed to allow manual balance measurements. Balance data were collected from 23 healthy adult subjects with mean (standard deviation) age, height and weight of 24.5 (4.0) years, 173.6 (6.8) cm and 72.7 (9.9) kg respectively. The accelerometry device was attached to the subjects at the approximate position of the iliac crest while they performed 30-second trials of the four conditions of a standard balance test, the modified Clinical Test of Sensory Interaction and Balance (mCTSIB): standing on a hard (ground) surface with the eyes open, standing on a hard surface with the eyes closed, standing on a compliant surface (sponge, 10 cm thick) with the eyes open, and standing on a compliant surface with the eyes closed. Statistical and machine learning techniques including the t-test, Wilcoxon signed-rank test, Mann-Whitney U test, analysis of variance (ANOVA), Kruskal-Wallis test, Friedman test, correlation analysis, linear regression, Bland-Altman analysis, principal component analysis (PCA), K-means clustering and the Kohonen neural network (KNN) were employed to interpret the measurements. The findings showed close agreement between the developed balance analysis methods and the corresponding measurements from the manual setup. The COM was observed to be responsible for differing amounts of sway across subjects and could affect both the sway angle and the ground-projected sway. The AP direction was more sensitive to sway than the ML direction. The subjects were observed to depend more on their proprioceptive system to control balance. The proprioceptive system had a greater impact than the visual system in controlling the subjects' AP velocity, but no impact on the ML velocity. The visual system was responsible for controlling the ML velocity and for reducing the acceleration in both directions. It was concluded that postural sway information should be compared between subjects with closely related COM positions, that comparison should be carried out with respect to the base of support, and that sway should be normalised by dividing by the COM position to reduce the obscuring effect of the COM. Enhancement of the proprioceptive system should be targeted to reduce the AP velocity, while enhancement of the visual system should be used to reduce the ML sway and the acceleration in the ML and AP directions. The velocity in the AP direction should be used to examine the performance of the proprioceptive system, while the ML velocity and acceleration should be used for the visual system. The vestibular system characterised sway more in the AP direction, and hence the AP direction should be used to examine its performance in balance.
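    As a loose illustration of the inverted-pendulum approach mentioned above, the sketch below estimates ML/AP tilt from a waist-worn accelerometer, projects it to ground sway and normalises by an assumed COM height; the axis layout, units and normalisation are assumptions, not the thesis method.

        # Hypothetical inverted-pendulum sway estimate from accelerometer data.
        import numpy as np

        def tilt_angles(acc):
            # acc: (n_samples, 3) array of (ML, AP, vertical) accelerations in m/s^2.
            ml, ap, vert = acc[:, 0], acc[:, 1], acc[:, 2]
            return np.arctan2(ml, vert), np.arctan2(ap, vert)

        def normalised_rms_sway(acc, com_height_m):
            # Project pendulum angles to ground displacement, then take RMS and
            # divide by COM height to reduce the obscuring effect of the COM.
            theta_ml, theta_ap = tilt_angles(acc)
            sway = com_height_m * np.tan(np.column_stack([theta_ml, theta_ap]))
            return np.sqrt((sway ** 2).mean(axis=0)) / com_height_m

        # Example: 30 s of synthetic quiet standing sampled at 100 Hz.
        n = 3000
        acc = np.column_stack([np.random.normal(0, 0.05, n),
                               np.random.normal(0, 0.05, n),
                               np.full(n, 9.81)])
        rms_ml, rms_ap = normalised_rms_sway(acc, com_height_m=0.95)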

    Research on motion recognition based on multi-dimensional sensing data and deep learning algorithms

    Motion recognition provides movement information for people with physical dysfunction, for the elderly and for motion-sensing game production, so accurate recognition of human motion is important. We employed three classical machine learning algorithms and three deep learning models for motion recognition: Random Forest (RF), K-Nearest Neighbors (KNN) and Decision Tree (DT), and Dynamic Neural Network (DNN), Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). Data were collected with Inertial Measurement Units (IMUs) worn on seven parts of the body, and the wearing positions were compared. Overall, the performance difference among the three classical machine learning algorithms was insignificant: the RF model performed best with a recognition rate of 96.67%, followed by the KNN model with an optimal recognition rate of 95.31% and the DT model with 94.85%. The performance difference among the deep learning models was significant, with the DNN model performing best at a recognition rate of 97.71%. Our study validated the feasibility of using multi-dimensional data for motion recognition and showed that the optimal wearing position for distinguishing daily activities from multi-dimensional sensing data was the waist. In terms of algorithms, deep learning models based on multi-dimensional sensors performed better, while among traditional machine learning algorithms tree-structured models still performed best. The results indicate that IMUs combined with deep learning algorithms can effectively recognize actions and provide a promising basis for a wider range of applications in motion recognition.
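    A brief sketch of how the three classical classifiers named above could be compared on windowed IMU features with scikit-learn is shown below; the feature layout, label count and hyperparameters are placeholder assumptions, not the authors' pipeline.

        # Hypothetical comparison of RF, KNN and DT on placeholder IMU features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 36))        # 500 windows x 36 assumed features
        y = rng.integers(0, 6, size=500)      # six assumed activity labels

        models = {
            "RF": RandomForestClassifier(n_estimators=100, random_state=0),
            "KNN": KNeighborsClassifier(n_neighbors=5),
            "DT": DecisionTreeClassifier(random_state=0),
        }
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            print(f"{name}: {acc:.3f}")       # mean recognition rate under 5-fold CV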

    A Practical Gait Feedback Method Based on Wearable Inertial Sensors for a Drop Foot Assistance Device

    To maximise the efficiency of gait interventions, gait phase and joint kinematics are important for closing the loop of adaptive robotic control. However, few studies have applied an inertial sensor system that includes both gait phase detection and joint kinematic measurement, and many algorithms for joint measurement require careful alignment of the inertial measurement unit (IMU) with the body segment. In this paper, we propose a practical gait feedback method that provides sufficient feedback without requiring precise alignment of the IMUs. The method incorporates a two-layer model to realise simultaneous gait stance/swing phase detection and ankle joint angle measurement. Gait phases are recognised by a high-level probabilistic method using the angular rate from a sensor attached to the shank, while the ankle angle is calculated using a data fusion algorithm based on a complementary filter and sensor-to-segment calibration. The online performance of the algorithm was experimentally validated with 10 able-bodied participants walking on a treadmill at three different speeds, and the outputs were compared to those measured by an optical motion analysis system. The results showed that the IMU-based algorithm achieved good gait phase recognition accuracy (above 95%) with a short response delay (below 20 ms) and accurate angle measurements with root mean square errors below 3.5° relative to the optical reference. This demonstrates that our method can be used to provide gait feedback for the correction of drop foot.
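    The ankle-angle fusion is described as a complementary filter; the sketch below shows a generic single-angle complementary filter, with the filter gain, sampling rate and signals chosen purely for illustration rather than taken from the paper.

        # Hypothetical complementary filter: gyro integration corrected by the
        # accelerometer-derived inclination to limit drift.
        import numpy as np

        def complementary_filter(gyro_rate, acc_angle, fs, alpha=0.98):
            # gyro_rate in rad/s, acc_angle in rad, fs in Hz, alpha = trust in the gyro.
            dt = 1.0 / fs
            angle = np.empty_like(acc_angle)
            angle[0] = acc_angle[0]
            for k in range(1, len(acc_angle)):
                predicted = angle[k - 1] + gyro_rate[k] * dt
                angle[k] = alpha * predicted + (1.0 - alpha) * acc_angle[k]
            return angle

        # Example with a synthetic 1 Hz oscillation sampled at 100 Hz.
        fs = 100
        t = np.arange(0, 5, 1 / fs)
        true_angle = 0.3 * np.sin(2 * np.pi * t)
        gyro = np.gradient(true_angle, 1 / fs) + np.random.normal(0, 0.05, t.size)
        acc_angle = true_angle + np.random.normal(0, 0.02, t.size)
        est = complementary_filter(gyro, acc_angle, fs)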

    Gait Analysis and Rehabilitation using Inertial Sensors


    Human activity classification with miniature inertial sensors

    Ankara: Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences, Bilkent University, 2009. Thesis (Master's), Bilkent University, 2009. Includes bibliographical references (leaves 79-92). Author: Tunçel, Orkun (M.S.). This thesis provides a comparative study of activity recognition using miniature inertial sensors (gyroscopes and accelerometers) and magnetometers worn on the human body. The classification methods used and compared in this study are a rule-based algorithm (RBA) or decision tree, the least-squares method (LSM), the k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW-1 and DTW-2), and support vector machines (SVM). In the first part of the study, eight different leg motions are classified using only two single-axis gyroscopes. In the second part, human activities are classified using five sensor units worn on different parts of the body, each comprising a tri-axial gyroscope, a tri-axial accelerometer and a tri-axial magnetometer. Different feature sets are extracted from the raw sensor data and used in the classification process. A number of feature extraction and reduction techniques (principal component analysis) as well as different cross-validation techniques have been implemented and compared. A performance comparison of the classification methods is provided in terms of their correct differentiation rates, confusion matrices, pre-processing and training times, and classification times. Among the classification techniques considered and implemented, SVM in general gives the highest correct differentiation rate, followed by k-NN. The classification time for RBA is the shortest, followed by SVM or LSM, k-NN or DTW-1, and DTW-2. SVM requires the longest training time, whereas DTW-2 takes the longest classification time. Although there is no significant difference between the correct differentiation rates obtained by the different cross-validation techniques, repeated random sub-sampling uses the shortest classification time, whereas leave-one-out requires the longest.
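    Dynamic time warping (DTW) is one of the methods compared above; a small illustrative sketch of DTW as a distance between two 1-D sensor sequences, used with a nearest-template rule, is given below. It is a generic textbook formulation, not the DTW-1/DTW-2 variants of the thesis.

        # Hypothetical DTW distance and nearest-template classification.
        import numpy as np

        def dtw_distance(a, b):
            # Classic O(len(a) * len(b)) DTW with absolute-difference local cost.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def classify_nearest_template(query, templates):
            # templates: dict mapping activity label -> reference sequence.
            return min(templates, key=lambda label: dtw_distance(query, templates[label]))

        # Example with toy gyroscope-like sequences.
        templates = {"walk": np.sin(np.linspace(0, 4 * np.pi, 100)),
                     "stand": np.zeros(100)}
        query = np.sin(np.linspace(0, 4 * np.pi, 90)) + np.random.normal(0, 0.1, 90)
        label = classify_nearest_template(query, templates)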