
    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring plays a vital role. Improving lifestyle, encouraging healthy behaviours, and reducing chronic disease are urgently needed, yet tracking and monitoring critical conditions of elderly people and patients remains a great challenge, and healthcare services for these groups are crucial to ensuring their safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of elderly people and patients. The main aim of this review is to outline the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
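
    The recognition chain outlined above (sensing, segmentation, feature extraction, classification) can be pictured with a minimal sketch. The window length, hand-crafted features, and classifier below are illustrative assumptions, not recommendations made by the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_windows(signal, window=128, step=64):
    """Segment a (n_samples, 3) accelerometer stream into overlapping windows."""
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

def extract_features(windows):
    """Simple time-domain features per axis: mean, std, min, max."""
    return np.array([np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)])
                     for w in windows])

# Placeholder data standing in for a labelled wearable-sensor recording.
acc = np.random.randn(10_000, 3)
windows = sliding_windows(acc)
X = extract_features(windows)
y = np.random.choice(["walking", "sitting", "standing"], size=len(X))

clf = RandomForestClassifier(n_estimators=100).fit(X, y)   # classification stage
```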

    A Light Weight Smartphone Based Human Activity Recognition System with High Accuracy

    With the pervasive use of smartphones, which contain numerous sensors, data for modeling human activity is readily available. Human activity recognition is an important area of research because it can be used in context-aware applications. It has significant influence in many other research areas and applications including healthcare, assisted living, personal fitness, and entertainment. Machine learning techniques are widely used in wearable and smartphone based human activity recognition. Despite being an active area of research for more than a decade, most existing approaches require extensive computation to extract features, train models, and recognize activities. This study presents a computationally efficient smartphone based human activity recognizer, based on dynamical systems and chaos theory. A reconstructed phase space is formed from the accelerometer sensor data using time-delay embedding. A single accelerometer axis is used to reduce memory and computational complexity. A Gaussian mixture model is learned on the reconstructed phase space. A maximum likelihood classifier uses the Gaussian mixture model to classify ten different human activities and a baseline. One public and one collected dataset were used to validate the proposed approach. Data was collected from ten subjects; the public dataset contains data from 30 subjects. Out-of-sample experimental results show that the proposed approach is able to recognize human activities from a smartphone's one-axis raw accelerometer data. The proposed approach achieved 100% accuracy for individual models across all activities and datasets. The proposed research requires 3 to 7 times less data than existing approaches to classify activities, and 3 to 4 times less time to build the reconstructed phase space compared to computing time and frequency domain features. A comparative evaluation against state-of-the-art works is also presented.
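
    A minimal sketch of this kind of pipeline, assuming a single accelerometer axis per activity, is shown below. The embedding dimension, lag, and number of mixture components are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(x, dim=3, lag=5):
    """Reconstructed phase space: rows are [x(t), x(t+lag), ..., x(t+(dim-1)*lag)]."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

def train_models(signals_by_activity, n_components=4):
    """Fit one Gaussian mixture model per activity on its reconstructed phase space."""
    return {activity: GaussianMixture(n_components=n_components).fit(delay_embed(x))
            for activity, x in signals_by_activity.items()}

def classify(x, models):
    """Maximum-likelihood decision: pick the activity whose GMM scores highest."""
    points = delay_embed(x)
    return max(models, key=lambda activity: models[activity].score(points))

# Usage with placeholder one-axis accelerometer traces per activity.
train = {"walking": np.random.randn(2000), "sitting": np.random.randn(2000)}
models = train_models(train)
print(classify(np.random.randn(500), models))
```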

    A Methodology for Trustworthy IoT in Healthcare-Related Environments

    The transition to the so-called retirement years comes with the freedom to pursue old passions and hobbies that were not possible during a busy working life. Unfortunately, that freedom does not come alone: the younger years are gone, and the body starts to feel the time that has passed. The need to adapt the way elderly people live grows as they become more prone to health problems. Often, the solution to the attention the elderly require is a nursing home, or a similar facility, which takes away their cherished independence. IoT has great potential to help elderly citizens stay healthier at home, since it can connect and create non-intrusive systems capable of interpreting data and acting accordingly. With that capability comes the responsibility to ensure that the collected data is reliable and trustworthy, as human wellbeing may rely on it. Addressing this uncertainty is the motivation for the presented work. The proposed methodology to reduce this uncertainty and increase confidence relies on data fusion and redundancy across a set of sensors. Since the scope of wellbeing environments is wide, this thesis focuses its proof of concept on the detection of falls inside home environments, through an Android app using an accelerometer and a microphone. The experimental results demonstrate that the implemented system performs reliably in more than 80% of cases and can provide trustworthy results. The app is currently also being tested within the European Union projects Smart4Health and Smart Bear.
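
    The redundancy idea can be pictured with a small sketch in which a fall is only reported when an accelerometer impact and a loud sound co-occur within a short window. The thresholds and helper below are hypothetical and for illustration only; they are not the thesis implementation.

```python
ACC_THRESHOLD_G = 2.5      # assumed impact magnitude threshold, in g
SOUND_THRESHOLD_DB = 70.0  # assumed loudness threshold, in dB
FUSION_WINDOW_S = 1.0      # both events must occur within this many seconds

def detect_fall(acc_peak_g, acc_time_s, sound_peak_db, sound_time_s):
    """Fuse an accelerometer spike and a microphone peak into a single decision."""
    impact = acc_peak_g > ACC_THRESHOLD_G
    loud = sound_peak_db > SOUND_THRESHOLD_DB
    close_in_time = abs(acc_time_s - sound_time_s) <= FUSION_WINDOW_S
    return impact and loud and close_in_time

# A 3.1 g spike and an 82 dB sound 0.4 s apart are reported as a fall.
print(detect_fall(3.1, 12.0, 82.0, 12.4))  # True
```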

    An optimized context-aware mobile computing model to filter inappropriate incoming calls in smartphone

    Requests for communication via mobile devices can be disruptive to the receiver in certain social situations. For example, unsuitable incoming calls may put the receiver in a dangerous situation, as in the case of receiving calls while driving. Designers of mobile computing interfaces therefore need strategies for minimizing such disruptive calls. One promising approach is an intelligent, accurate system based on context awareness, in which cues about the callee's context allow informed decisions about when to answer a call. The processing capabilities of mobile devices equipped with portable sensors provide the basis for new context-awareness services and applications. However, context-aware mobile computing systems must cope with multiple sources of context, which affects their accuracy, and with the energy-hungry GPS sensor, which affects the phone's battery consumption. Reducing the energy cost of the GPS sensor and increasing the accuracy of current context-aware call filtering systems are thus the two main motivations of this study. The study proposes a new localization mechanism named Improved Battery Life in Context Awareness System (IBCS) to deal with the energy-hungry GPS sensor and extend the smartphone's battery life by more than four hours. Finally, the study investigates context-awareness models on smartphones and develops an alternative intelligent model structure to improve the accuracy rate. A new optimized context-awareness mobile computing model named Optimized Context Filtering (OCF) is developed to filter unsuitable incoming calls based on the call receiver's context information. In this regard, a new extended Naive Bayesian classifier was proposed, combining an incremental learning strategy with appropriate weighting of the new training data. This classifier is used as the inference engine of the proposed model to increase its accuracy rate. The results indicated a 7% improvement in the accuracy of the proposed extended Naive Bayesian classifier, while the OCF model improved the accuracy rate by 14%. These results indicate that the proposed model is a promising approach to providing an intelligent, context-based call filtering system for smartphones.
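
    The general idea of incremental Naive Bayes learning with extra weight on newly arriving training data can be sketched as follows, using scikit-learn's GaussianNB as a stand-in. The feature layout, labels, and weighting value are assumptions for illustration; the paper's extended classifier and its exact weighting scheme are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

classes = np.array([0, 1])   # e.g. 0 = let the call through, 1 = filter the call
clf = GaussianNB()

# Initial batch of context features (location, motion state, time of day, ...).
X_old = np.random.randn(200, 4)
y_old = np.random.randint(0, 2, size=200)
clf.partial_fit(X_old, y_old, classes=classes)

# New observations are folded in incrementally with a larger sample weight,
# so recent context influences the model more than the older training data.
X_new = np.random.randn(20, 4)
y_new = np.random.randint(0, 2, size=20)
clf.partial_fit(X_new, y_new, sample_weight=np.full(len(y_new), 3.0))
```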

    Wearable inertial sensors and range of motion metrics in physical therapy remote support

    The practice of physiotherapy diagnoses patient ailments, which are often treated by daily repetition of prescribed physiotherapeutic exercise. The effectiveness of the exercise regime depends on regular daily repetition and on correct execution of the prescribed exercises. Patients often have trouble learning unfamiliar exercises and performing them with good technique. This design science research study examines the design of a back squat classifier to appraise a patient's exercise regime away from the physiotherapy practice. The scope of the appraisal is limited to one exercise, the back squat. Kinematic data captured with commercial inertial sensors is presented to a small group of physiotherapists to illustrate the potential of the technology to measure range of motion (ROM) for back squat appraisal. Opinions are considered from two fields of physiotherapy, general musculoskeletal and post-operative rehabilitation. While the exercise classifier is considered unsuitable for post-operative rehabilitation, the opinions expressed for use in general musculoskeletal physiotherapy are positive. Kinematic data captured with gyroscope sensors in the sagittal plane is analysed with Matlab to develop a method for back squat exercise recognition and appraisal. The artefact, a back squat classifier with appraisal features, is constructed from Matlab scripts and shown to be effective with kinematic data from a novice athlete.
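
    The original analysis was carried out in Matlab; the sketch below re-expresses the core idea in Python under simple assumptions: sagittal-plane angular velocity from a thigh-mounted gyroscope is integrated to a flexion angle, the maximum angle serves as a crude ROM estimate, and a repetition is counted when the angle exceeds an assumed depth threshold and returns to near upright. The thresholds and mounting assumptions are illustrative, not the study's.

```python
import numpy as np

def squat_reps_and_rom(gyro_sagittal_rad_s, fs=100.0, depth_threshold_deg=70.0):
    """Return (rep_count, max_rom_deg) from a sagittal-plane gyroscope trace."""
    angle_deg = np.degrees(np.cumsum(gyro_sagittal_rad_s) / fs)  # naive integration
    in_rep, reps, max_rom = False, 0, 0.0
    for a in np.abs(angle_deg):
        max_rom = max(max_rom, a)
        if not in_rep and a > depth_threshold_deg:   # descent deep enough
            in_rep = True
        elif in_rep and a < 10.0:                    # returned near upright
            in_rep = False
            reps += 1
    return reps, max_rom
```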

    Personal State and Emotion Monitoring by Wearable Computing and Machine Learning

    One of the major scientific undertakings of the past few years has been exploring the interaction between humans and machines in mobile environments. Wearable computers, embedded in clothing or seamlessly integrated into everyday devices, are well placed to become the main gateway to personal health management. Current state-of-the-art devices are capable of monitoring basic physical or physiological parameters. Traditional healthcare procedures depend on the physical presence of the patient and a medical specialist, which not only drives up overall costs but also reduces patients' quality of life, particularly for elderly patients. In the traditional procedure, patients typically have to visit the clinic, register at reception, wait for their turn, go to the lab for physiological measurements, wait to be called by the medical expert, and finally receive the expert's feedback. In this work, we examined how to use existing technology to develop an e-health monitoring system, especially for heart patients, that supports the interaction between patient and physician even when the patient is not in the clinic. The supporting wearable health monitoring system (WHMS) should recognize physical activities and emotional states and transmit this information to the physician along with relevant physiological data; in this way, patients do not need to visit the clinic every time to receive the physician's feedback. After discussions with medical experts, we identified the physical activities, emotional states, and physiological data relevant to patient examinations. A prototype of the proposed health monitoring system was implemented, taking these physical activities, emotional states, and physiological data into account.

    MediAlly: A Provenance-Aware Remote Health Monitoring Middleware


    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical frailty, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are driving a paradigm shift in public healthcare provisioning. Ambient Assisted Living (AAL) using Smart Home (SH) technologies has therefore been rigorously investigated to help address these problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and investigates recognition of human ADLs at multiple action granularities, coarse- and fine-grained. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage above a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. To handle the imprecise or vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties that arise in HAR from factors such as technological failure, object malfunction and human error. Uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic-ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL and proposes a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios; however, the average time taken to segment each sensor event was 3971 ms and 62,183 ms for the single- and mixed-activity scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements on a pre-collected dataset from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24,648 ms and 105,318 ms, posing further challenges for the scalability of fusion rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study extending single-user activity recognition to multi-user activity recognition by combining discriminative sensors, RFID tags and fingerprint sensors, to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system. Future research towards adopting fog/edge computing paradigms alongside cloud computing is discussed, aiming at higher availability, reduced network traffic, energy and cost, and a more decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. The framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment and ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supporting utility tools, such as a simulator and a synthetic ADL data generator, were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser based client interfaces for retrieving information such as live sensor events and HAR results.
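
    As a small illustration of the fuzzy-set idea used at the fine-grained action level (not the thesis implementation), the sketch below computes a reading's degree of membership in an "object in use" set for two modalities and fuses them with a fuzzy AND (minimum). The membership breakpoints and the kettle example are assumptions for illustration.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b] and falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Evidence from two modalities: weight on a kettle base (kg) and current draw (A).
pressure_in_use = trapezoid(x=0.8, a=0.2, b=0.5, c=1.5, d=2.0)
current_in_use = trapezoid(x=1.4, a=0.5, b=1.0, c=8.0, d=10.0)

# Fuzzy rule: "kettle being used" holds to the degree both memberships hold (fuzzy AND).
kettle_in_use = min(pressure_in_use, current_in_use)
print(kettle_in_use)  # 1.0 -> strong fused evidence that the intended action occurs
```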

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in that room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall condition of the assisted individuals as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, owing to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and of how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report describes the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential arising from the silver economy is overviewed.

    AI-based framework for automatically extracting high-low features from NDS data to understand driver behavior

    Our ability to detect and characterize unsafe driving behaviors in naturalistic driving environments and associate them with road crashes will be a significant step toward developing effective crash countermeasures. Researchers have not yet fully achieved this goal, owing to limitations that include, but are not limited to, the high cost of data collection and the manual processes required to extract information from NDS data. In light of these limitations, the primary objective of this study is to develop an artificial intelligence (AI) framework for automatically extracting high- and low-level features from NDS data to explain driver behavior using a low-cost data collection method. Three objectives were formulated to address the identified research gaps. First, the study develops a low-cost data acquisition system for gathering naturalistic driving data. Second, it develops a framework that automatically extracts high- to low-level features, such as vehicle density, turning movements, and lane changes, from the data collected by that acquisition system. Third, it extracts information from the NDS data to better understand car-following and other driving behaviors in order to develop traffic safety countermeasures. For the first objective, a multifunctional smartphone application was developed for collecting NDS data. The app comprises three major modules: a front-end user interface module, a sensor module, and a backend module. The front-end, which is also the application's user interface, provides a streamlined view that exposes the application's key features via a tab bar controller, allowing the application's critical components to be compartmentalized into separate views. The backend module provides computational resources that can be used to accelerate front-end query responses; it is powered by Google Firebase. The sensor module includes CoreMotion, CoreLocation, and AVKit. CoreMotion collects motion and environmental data from the onboard hardware of iOS devices, including accelerometers, gyroscopes, pedometers, magnetometers, and barometers. CoreLocation determines the altitude, orientation, and geographical location of a device, as well as its position relative to an adjacent iBeacon device. Finally, AVKit provides a high-level interface for video content playback. To achieve the second objective, the problem was formulated as both a classification and a time-series segmentation problem. This is because most existing driver maneuver detection methods formulate the problem as a pure classification problem, assuming a discretized input signal with known start and end locations for each event or segment; in practice, vehicle telemetry data used for detecting driver maneuvers are continuous, so a fully automated driver maneuver detection system must address both time-series segmentation and classification. The five stages of the proposed methodology are: 1) data preprocessing, 2) event segmentation, 3) machine learning classification, 4) heuristic classification, and 5) frame-by-frame video annotation. The results indicate that the gyroscope reading is an excellent parameter for extracting driving events, as its accuracy was consistent across all four models developed. The Energy Maximization Algorithm's accuracy ranges between 56.80 percent and 85.20 percent across left lane change, right lane change, and lane-keeping events. All four models developed had accuracies comparable to studies that used similar models: the 1D-CNN model had the highest accuracy (98.99 percent), followed by the LSTM model (97.75 percent), the RF model (97.71 percent), and the SVM model (97.65 percent). Continuous signal data was annotated to serve as ground truth. In addition, the proposed method outperformed the fixed-time-window approach. The overall pipeline's accuracy was analyzed by penalizing the F1 scores of the ML models with the EMA's duration score; the pipeline's accuracy ranged between 56.8 percent and 85.0 percent overall. The ultimate goal of this study was to extract variables from naturalistic driving videos that facilitate an understanding of driver behavior in a naturalistic driving environment. To achieve this, three sub-goals were established. First, a framework was developed for extracting features pertinent to understanding the behavior of drivers in natural environments. Using the extracted features, the car-following behaviors of various demographic groups were then analyzed. Third, a machine learning algorithm was used to model the acceleration of both the ego-vehicle and the leading vehicle. The findings indicate that younger drivers are more likely to be aggressive, that drivers tend to accelerate when the distance to the vehicle in front is substantial, and that, compared to younger drivers, elderly motorists maintain a significantly larger following distance. These results have numerous safety implications. First, analysis of the driving behavior of different demographic groups will enable safety engineers to develop the most effective crash countermeasures by improving their understanding of the driving styles of different demographic groups and the causes of collisions. Second, the models developed to predict the acceleration of both the ego-vehicle and the leading vehicle provide enough information to explain the behavior of the ego-driver.
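
    The two-step idea (segment the continuous telemetry, then classify each segment) can be sketched as below. This illustrative energy-based segmenter is not the study's Energy Maximization Algorithm, and `clf` is assumed to be a classifier trained elsewhere (for example, one of the RF/SVM/LSTM/1D-CNN models mentioned above).

```python
import numpy as np

def energy_segments(yaw_rate, fs=50, win_s=1.0, threshold=0.05):
    """Return (start, end) sample indices where windowed mean energy is high."""
    win = int(win_s * fs)
    energy = np.convolve(yaw_rate ** 2, np.ones(win) / win, mode="same")
    active = energy > threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                      # candidate maneuver begins
        elif not flag and start is not None:
            segments.append((start, i))    # candidate maneuver ends
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Usage sketch: yaw_rate is a continuous gyroscope stream, clf a trained model.
# for s, e in energy_segments(yaw_rate):
#     label = clf.predict(features_of(yaw_rate[s:e]))   # hypothetical feature step
```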