142 research outputs found

    Smartphone-based human activity recognition

    Get PDF
    Cotutela (joint doctorate): Universitat Politècnica de Catalunya and Università degli Studi di Genova. Human Activity Recognition (HAR) is a multidisciplinary research field that aims to gather data about people's behavior and their interaction with the environment in order to deliver valuable context-aware information. It has contributed to the development of human-centered areas of study such as Ambient Intelligence and Ambient Assisted Living, which concentrate on improving people's quality of life. The first stage of HAR is to make observations using ambient or wearable sensor technologies. In the latter case, however, the search for pervasive, unobtrusive, low-power, and low-cost devices for this challenging task has not yet been fully addressed. In this thesis, we explore the use of smartphones as an alternative approach to identifying physical activities. These self-contained devices, which are widely available in the market, are equipped with embedded sensors, powerful computing capabilities, and wireless communication technologies that make them highly suitable for this application. This work presents a series of contributions to the development of HAR systems with smartphones. First, we propose a fully operational system that recognizes six physical activities in real time while also taking into account the effects of postural transitions that may occur between them. To achieve this, we cover research topics ranging from signal processing and feature selection of inertial data to Machine Learning approaches for classification. We employ two sensors (the accelerometer and the gyroscope) to collect inertial data. Their raw signals are the input of the system and are conditioned through filtering to reduce noise and allow the extraction of informative activity features.
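As an illustration of this conditioning step, the sketch below applies a median filter followed by a low-pass Butterworth filter to a synthetic accelerometer trace. This is not the thesis code; the sampling rate, cutoff frequency, and filter order are assumed values chosen only for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def condition_signal(raw, fs=50.0, cutoff=5.0, kernel=3):
    """Denoise a raw inertial signal: a median filter removes spikes,
    then a 3rd-order low-pass Butterworth filter attenuates
    high-frequency noise above `cutoff` Hz."""
    despiked = medfilt(raw, kernel_size=kernel)
    b, a = butter(3, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, despiked)

# Synthetic accelerometer trace: a 1 Hz motion component plus wide-band sensor noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 4.0, 1.0 / 50.0)
clean = np.sin(2 * np.pi * 1.0 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
filtered = condition_signal(noisy)
```

Because the body-motion component lies below the cutoff, the filtered trace ends up closer to the clean signal than the raw noisy one, which is what makes the later feature extraction reliable.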
We also emphasize the study of Support Vector Machines (SVMs), one of the state-of-the-art Machine Learning techniques for classification, and reformulate several of the standard multiclass linear and non-linear methods to find the best trade-off between recognition performance, computational cost, and energy requirements, which are essential aspects in battery-operated devices such as smartphones. In particular, we propose two multiclass SVMs for activity classification: a linear algorithm that allows control over the balance between dimensionality reduction and system accuracy; and a hardware-friendly non-linear algorithm that uses only fixed-point arithmetic in the prediction phase and reduces model complexity while maintaining system performance. The efficiency of the proposed system is verified through extensive experimentation on a HAR dataset which we have generated and made publicly available. It is composed of inertial data collected from a group of 30 participants who performed a set of common daily activities while carrying a smartphone as a wearable device. The results achieved in this research show that it is possible to perform HAR in real time with smartphones with a precision near 97%. In this way, we can employ the proposed methodology in several higher-level applications that require HAR, such as ambulatory monitoring of the disabled and the elderly for periods of more than five days without the need for a battery recharge. Moreover, the proposed algorithms can be adapted to other commercial wearable devices recently introduced in the market (e.g., smartwatches, phablets, and glasses).
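The fixed-point idea can be illustrated with a toy linear multiclass predictor: weights, biases, and inputs are quantized to integers so the prediction phase uses only integer arithmetic, and the quantized decisions are compared against the floating-point ones. This is a simplified sketch with random toy weights, not the hardware-friendly non-linear SVM proposed in the thesis.

```python
import numpy as np

def to_fixed(x, frac_bits=8):
    """Quantize floats to integers with `frac_bits` fractional bits (Q-format)."""
    return np.round(x * (1 << frac_bits)).astype(np.int32)

def predict_float(W, b, X):
    return np.argmax(X @ W.T + b, axis=1)

def predict_fixed(Wq, bq, Xq, frac_bits=8):
    # Integer-only scores; the bias is shifted so its scale matches
    # the products' scale of 2 * frac_bits fractional bits.
    scores = Xq.astype(np.int64) @ Wq.T.astype(np.int64) \
             + (bq.astype(np.int64) << frac_bits)
    return np.argmax(scores, axis=1)

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 10))   # 6 activity classes, 10 features (toy weights)
b = rng.normal(size=6)
X = rng.normal(size=(100, 10))

yf = predict_float(W, b, X)
yq = predict_fixed(to_fixed(W), to_fixed(b), to_fixed(X))
agreement = float(np.mean(yf == yq))
```

With 8 fractional bits the quantization error is far smaller than the typical margin between class scores, so the integer-only predictor agrees with the floating-point one on almost every sample while needing no floating-point unit at prediction time.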
This will open up new opportunities for developing practical and innovative HAR applications.

    Recognition of Daily Gestures with Wearable Inertial Rings and Bracelets

    Get PDF
    Recognition of activities of daily living plays an important role in monitoring elderly people and helping caregivers detect changes in daily behaviors. Thanks to the miniaturization and low cost of microelectromechanical systems (MEMS), and in particular of Inertial Measurement Units, body-worn activity recognition has gained popularity in recent years. In this context, the proposed work aims to recognize nine different gestures involved in daily activities using hand- and wrist-worn wearable sensors. The analysis was also carried out over different combinations of wearable sensors, in order to find the best combination in terms of unobtrusiveness and recognition accuracy. To achieve these goals, extensive experimentation was performed in a realistic environment. Twenty users were asked to perform the selected gestures, and the data were then analyzed offline to extract significant features. To corroborate the analysis, the classification problem was treated with two commonly used supervised machine learning techniques, namely Decision Tree and Support Vector Machine, evaluating both personal models and Leave-One-Subject-Out cross-validation. The results show that the proposed system is able to recognize the proposed gestures with an accuracy of 89.01% in Leave-One-Subject-Out cross-validation and are therefore promising for further investigation in real-life scenarios.
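A Leave-One-Subject-Out evaluation like the one described above can be sketched with scikit-learn's LeaveOneGroupOut, here on synthetic gesture features; the subject count, feature distributions, and classifier settings are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_subjects, n_per, n_feat, n_gestures = 5, 40, 6, 3

# Synthetic gesture features: each gesture class has its own mean in feature space.
X, y, groups = [], [], []
for s in range(n_subjects):
    for g in range(n_gestures):
        X.append(rng.normal(loc=g * 2.0, scale=0.5, size=(n_per, n_feat)))
        y.append(np.full(n_per, g))
        groups.append(np.full(n_per, s))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

# Leave-One-Subject-Out: train on all subjects but one, test on the held-out subject.
accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
mean_acc = float(np.mean(accs))
```

Reporting the mean accuracy over held-out subjects, as done in the paper, measures how well the model generalizes to users whose data were never seen during training, which is stricter than a random train/test split.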

    Human Action Recognition with RGB-D Sensors

    Get PDF
    Human action recognition, also known as HAR, is at the foundation of many applications related to behavioral analysis, surveillance, and safety, and has therefore been a very active research area in recent years. The release of inexpensive RGB-D sensors has fostered research in this field because depth data simplify the processing of visual data that could otherwise be difficult with classic RGB devices. Furthermore, the availability of depth data makes it possible to implement solutions that are unobtrusive and privacy-preserving with respect to classic video-based analysis. In this scenario, the aim of this chapter is to review the most salient techniques for HAR based on depth signal processing, providing some details on a specific method based on a temporal pyramid of key poses, evaluated on the well-known MSR Action3D dataset. Cippitelli, Enea; Gambi, Ennio; Spinsante, Susanna

    Real-time head movement tracking through earables in moving vehicles

    Get PDF
    The Internet of Things is enabling innovations in the automotive industry by expanding the capabilities of vehicles through cloud connectivity. One important application domain is traffic safety, which can benefit from monitoring the driver’s condition to determine whether they are capable of safely handling the vehicle. By detecting drowsiness, inattentiveness, and distraction of the driver, it is possible to react before accidents happen. This thesis explores how accelerometer and gyroscope data collected with earables can be used to classify the orientation of the driver’s head in a moving vehicle. It is found that machine learning algorithms such as Random Forest and K-Nearest Neighbors can reach fairly accurate classifications even without applying any noise reduction to the signal data. Data cleaning and transformation approaches are studied to see how the models could be improved further. This study paves the way for the development of driver monitoring systems capable of reacting to anomalous driving behavior before traffic accidents happen.
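A minimal sketch of this kind of head-orientation classifier is shown below: per-window accelerometer statistics are fed to a Random Forest with no noise reduction applied, mirroring the finding that raw signals can already be classified fairly well. The orientations, gravity model, and noise level are invented for the example and are not taken from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def window_features(acc, win=25):
    """Split an (n, 3) accelerometer stream into fixed-size windows and
    extract per-axis mean and standard deviation (6 features per window)."""
    n = (len(acc) // win) * win
    w = acc[:n].reshape(-1, win, 3)
    return np.hstack([w.mean(axis=1), w.std(axis=1)])

# Simulate three head orientations as different gravity directions on the IMU axes.
orientations = {0: [0, 0, 1], 1: [0, 1, 0], 2: [1, 0, 0]}  # toy: straight, left, right
X, y = [], []
for label, g in orientations.items():
    stream = np.array(g) * 9.81 + 0.8 * rng.standard_normal((2500, 3))  # noisy, unfiltered
    feats = window_features(stream)
    X.append(feats)
    y.append(np.full(len(feats), label))
X, y = np.vstack(X), np.concatenate(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
acc = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr).score(Xte, yte)
```

Averaging over a window suppresses much of the per-sample noise, which is one plausible reason classifiers can perform reasonably even without an explicit denoising stage.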

    Practical and Rich User Digitization

    Full text link
    A long-standing vision in computer science has been to evolve computing devices into proactive assistants that enhance our productivity, health and wellness, and many other facets of our lives. User digitization is crucial to achieving this vision, as it allows computers to intimately understand their users, capturing activity, pose, routine, and behavior. Today's consumer devices, like smartphones and smartwatches, provide a glimpse of this potential, offering coarse digital representations of users with metrics such as step count, heart rate, and a handful of human activities like running and biking. Even these very low-dimensional representations already bring value to millions of people's lives, but there is significant potential for improvement. At the other end, professional, high-fidelity, comprehensive user digitization systems exist: for example, motion capture suits and multi-camera rigs that digitize our full body and appearance, and scanning machines such as MRI that capture our detailed anatomy. However, these carry significant user practicality burdens, such as financial, privacy, ergonomic, aesthetic, and instrumentation considerations, that preclude consumer use. In general, the higher the fidelity of capture, the lower the user's practicality. Most conventional approaches strike a balance between user practicality and digitization fidelity. My research aims to break this trend, developing sensing systems that increase user digitization fidelity to create new and powerful computing experiences while retaining or even improving user practicality and accessibility, allowing such technologies to have a societal impact. Armed with such knowledge, our future devices could offer longitudinal health tracking, more productive work environments, full-body avatars in extended reality, and embodied telepresence experiences, to name just a few domains. Comment: PhD thesis

    Multidimensional embedded MEMS motion detectors for wearable mechanocardiography and 4D medical imaging

    Get PDF
    Background: Cardiovascular diseases are the number one cause of death. Of these deaths, almost 80% are due to coronary artery disease (CAD) and cerebrovascular disease. Multidimensional microelectromechanical systems (MEMS) sensors allow measuring the mechanical movement of the heart muscle, offering an entirely new and innovative solution for evaluating cardiac rhythm and function. Recent advances in miniaturized motion sensors present an exciting opportunity to study novel device-driven and functional motion detection systems in the areas of both cardiac monitoring and biomedical imaging, for example, in computed tomography (CT) and positron emission tomography (PET). Methods: This Ph.D. work describes a new cardiac motion detection paradigm and measurement technology based on multimodal measuring tools (tracking the heart's kinetic activity using micro-sized MEMS sensors) and novel computational approaches (deploying signal processing and machine learning techniques) for detecting cardiac pathological disorders. In particular, this study focuses on the capability of joint gyrocardiography (GCG) and seismocardiography (SCG) techniques, which constitute the mechanocardiography (MCG) concept representing the mechanical characteristics of cardiac precordial surface vibrations. Results: Experimental analyses showed that integrating multisource sensory data resulted in precise estimation of heart rate with an accuracy of 99% (healthy, n=29), detection of heart arrhythmia (n=435) with an accuracy of 95-97%, and indication of ischemic disease with approximately 75% accuracy (n=22), as well as significantly improved quality of four-dimensional (4D) cardiac PET images by eliminating motion-related inaccuracies with a MEMS dual-gating approach. Tissue Doppler imaging (TDI) analysis of GCG (healthy, n=9) showed promising results for measuring cardiac timing intervals and myocardial deformation changes.
Conclusion: The findings of this study demonstrate the clinical potential of MEMS motion sensors in cardiology, which may facilitate timely diagnosis of cardiac abnormalities. Multidimensional MCG can effectively contribute to detecting atrial fibrillation (AFib), myocardial infarction (MI), and CAD. Additionally, MEMS motion sensing improves the reliability and quality of cardiac PET imaging.
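As a rough illustration of motion-based heart-rate estimation, the sketch below counts pulse peaks in a synthetic SCG-like trace using an amplitude threshold plus a refractory period. This is a didactic toy with an invented signal model, not the signal processing pipeline of the thesis.

```python
import numpy as np

def estimate_heart_rate(sig, fs, min_rr=0.4):
    """Count local maxima above an adaptive threshold, enforcing a
    refractory period of `min_rr` seconds between detected beats,
    then convert the beat count to beats per minute."""
    thresh = sig.mean() + 2.0 * sig.std()
    refractory = int(min_rr * fs)
    peaks, last = [], -refractory
    for i in range(1, len(sig) - 1):
        if (sig[i] > thresh and sig[i] >= sig[i - 1]
                and sig[i] > sig[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    minutes = len(sig) / fs / 60.0
    return len(peaks) / minutes

# Synthetic SCG-like trace: one sharp pulse per beat at 72 bpm, plus sensor noise.
fs, bpm, seconds = 200, 72, 30
rng = np.random.default_rng(3)
t = np.arange(0.0, seconds, 1.0 / fs)
beat_period = 60.0 / bpm
sig = np.exp(-(((t % beat_period) / 0.02) ** 2)) + 0.05 * rng.standard_normal(t.size)
hr = estimate_heart_rate(sig, fs)
```

The refractory period plays the role of a physiological constraint: two true beats cannot occur closer together than the minimum plausible RR interval, so noisy secondary maxima inside one pulse are not double-counted.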

    Activity recognition from smartphone sensing data

    Get PDF
    Integrated Master's thesis (Tese de mestrado integrado). Informatics and Computing Engineering (Engenharia Informática e Computação). Faculdade de Engenharia, Universidade do Porto. 201

    Wearable and Nearable Biosensors and Systems for Healthcare

    Get PDF
    Biosensors and systems in the form of wearables and “nearables” (i.e., everyday sensorized objects with transmitting capabilities, such as smartphones) are rapidly evolving for use in healthcare. Unlike conventional approaches, these technologies can enable seamless or on-demand physiological monitoring, anytime and anywhere. Such monitoring can help transform healthcare from the current reactive, one-size-fits-all, hospital-centered approach into a future proactive, personalized, decentralized structure. Wearable and nearable biosensors and systems have been made possible through integrated innovations in sensor design, electronics, data transmission, power management, and signal processing. Although much progress has been made in this field, many open challenges remain for the scientific community, especially for applications requiring high accuracy. This book contains the 12 papers that constituted a recent Special Issue of Sensors sharing the same title. The aim of the initiative was to provide a collection of state-of-the-art investigations on wearables and nearables, in order to stimulate technological advances and the use of the technology to benefit healthcare. The topics covered by the book offer both depth and breadth pertaining to wearable and nearable technology, including new biosensors and data transmission techniques, studies on accelerometers, signal processing, cardiovascular monitoring, clinical applications, and validation of commercial devices.

    EFFICIENT AND SECURE ALGORITHMS FOR MOBILE CROWDSENSING THROUGH PERSONAL SMART DEVICES.

    Get PDF
    The success of modern pervasive sensing strategies, such as Social Sensing, strongly depends on the diffusion of smart mobile devices. Smartwatches, smartphones, and tablets are devices capable of capturing and analyzing data about the user’s context, and can be exploited to infer high-level knowledge about the user and/or the surrounding environment. In this sense, one of the most relevant applications of the Social Sensing paradigm concerns distributed Human Activity Recognition (HAR) in scenarios ranging from health care to urban mobility management, ambient intelligence, and assisted living. Even though some simple HAR techniques can be directly implemented on mobile devices, in some cases, such as when complex activities need to be analyzed in a timely manner, users’ smart devices should be able to operate as part of a more complex architecture, paving the way to the definition of new distributed computing paradigms. The general idea behind these approaches is to move early analysis towards the edge of the network, while relying on other intermediate (fog) or remote (cloud) devices for computations of increasing complexity. This logic represents the core of the fog computing paradigm, and this thesis investigates its adoption in distributed sensing frameworks. Specifically, the conducted analysis focused on the design of a novel distributed HAR framework in which the heavy computation is moved from the sensing layer to intermediate devices and then to the cloud. Smart personal devices are used as processing units in order to guarantee real-time recognition, whereas the cloud is responsible for maintaining an overall, consistent view of the whole activity set. Compared to traditional cloud-based solutions, this choice makes it possible to overcome the processing and storage limitations of wearable devices while also reducing the overall bandwidth consumption.
Then, the fog-based architecture allowed the design and definition of a novel HAR technique that combines three machine learning algorithms, namely k-means clustering, Support Vector Machines (SVMs), and Hidden Markov Models (HMMs), to recognize complex activities modeled as sequences of simple micro-activities. The capability to distribute the computation over the different entities in the network, allowing the use of complex HAR algorithms, is definitely one of the most significant advantages provided by the fog architecture. However, because of both its intrinsic nature and its high degree of modularity, the fog-based system is particularly prone to cyber security attacks that can be performed against every element of the infrastructure. This aspect plays a major role with respect to social sensing, since users’ private data must be protected from malicious use. Security issues are generally addressed by introducing cryptographic mechanisms that improve the system’s defenses against cyber attackers while, at the same time, increasing the computational overhead for devices with limited resources. With the goal of finding a trade-off between security and computational cost, the design and definition of a secure lightweight protocol for social-based applications are discussed and then integrated into the distributed framework. The protocol covers all tasks commonly required by a general fog-based crowdsensing application, making it applicable not only to the distributed HAR scenario, discussed as a case study, but also to other application contexts. The experimental analysis assesses the performance of the solutions described so far. After highlighting the benefits the distributed HAR framework might bring to smart environments, an evaluation is conducted in terms of both recognition accuracy and the complexity of data exchanged between network devices.
Then, the effectiveness of the secure protocol is demonstrated by showing the low impact it has on the total computational overhead. Moreover, a comparison with other state-of-the-art protocols is made to prove its effectiveness in terms of the provided security mechanisms.
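Two stages of the described technique can be sketched as follows: k-means turns feature windows into discrete micro-activity symbols, and a small hand-parameterized HMM Viterbi decoder maps the symbol sequence to hidden complex activities. All data and HMM parameters here are hypothetical, and the SVM stage and the fog distribution logic are omitted for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# 1) k-means turns feature windows into discrete "micro-activity" symbols.
windows = np.vstack([rng.normal(0.0, 0.3, (50, 4)),   # e.g. a "still" micro-activity
                     rng.normal(3.0, 0.3, (50, 4))])  # e.g. a "moving" micro-activity
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(windows)
symbols = km.predict(rng.normal(3.0, 0.3, (10, 4)))   # new stream, symbolized

# 2) A tiny Viterbi decoder over a hand-set 2-state HMM maps the symbol
#    sequence to hidden complex activities (toy parameters).
logA = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))     # activity transitions
logB = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))     # P(symbol | activity)
logpi = np.log(np.array([0.5, 0.5]))                  # initial activity probabilities

def viterbi(obs, logpi, logA, logB):
    """Most likely hidden-state path for a discrete observation sequence."""
    delta = logpi + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        trans = delta[:, None] + logA          # trans[i, j]: best score ending j via i
        back.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) + logB[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):                  # backtrack the stored argmaxes
        path.append(int(bp[path[-1]]))
    return path[::-1]

activities = viterbi(symbols, logpi, logA, logB)
```

The HMM layer smooths the symbol stream: a single misclassified window is unlikely to flip the decoded activity because the self-transition probabilities penalize rapid switching, which is the reason for stacking a sequence model on top of the per-window clustering.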