    Personalized fall detection monitoring system based on learning from the user movements

    A personalized fall detection system is shown to provide more benefits than current fall detection systems. The personalized model can also be applied to any problem where one class of data is hard to gather. The results show that adapting to the user's needs improves the overall accuracy of the system. Future work includes detecting where the smartphone is placed on the user, so that the system can be worn anywhere on the body and still detect falls. Even though the accuracy is not 100%, the proof of concept of personalization can be used to achieve greater accuracy. The concept of personalization used in this paper can also be extended to other research in the medical field, or wherever data for a particular class is hard to come by. The feature extraction and feature selection modules warrant further investigation; for the feature selection module in particular, more research is needed into selecting features based on one-class data.
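
    The abstract's core idea, learning from only the user's own movements because fall samples are hard to gather, maps naturally onto one-class learning. The sketch below is a minimal Python illustration under assumed settings (window length, mean/std/peak features, synthetic data standing in for real accelerometer recordings), not the paper's actual pipeline: a one-class SVM is fitted to a user's everyday movements, and windows it rejects are flagged as fall candidates.

```python
# A minimal sketch, assuming windowed magnitude features and synthetic data;
# the paper's exact features and model are not specified in this abstract.
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(acc, win=50):
    """Mean/std/peak of acceleration magnitude over non-overlapping windows."""
    mag = np.linalg.norm(acc, axis=1)
    n = len(mag) // win
    wins = mag[: n * win].reshape(n, win)
    return np.column_stack([wins.mean(axis=1), wins.std(axis=1), wins.max(axis=1)])

rng = np.random.default_rng(0)
# Stand-in for the user's everyday movements: the single, easy-to-gather class.
adl_acc = rng.normal(0.0, 0.3, size=(5000, 3)) + np.array([0.0, 0.0, 1.0])
model = OneClassSVM(nu=0.01, gamma="scale").fit(window_features(adl_acc))

# A simulated high-impact burst, roughly how a fall appears to the accelerometer.
fall_acc = rng.normal(0.0, 2.5, size=(100, 3)) + np.array([0.0, 0.0, 1.0])
flags = model.predict(window_features(fall_acc))  # -1 marks an outlier window
print("fall candidate" if (flags == -1).any() else "normal movement")
```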

    Smartwatch-Based IoT Fall Detection Application

    This paper proposes using only the streaming accelerometer data from a commodity smartwatch (IoT) device to detect falls. The smartwatch is paired with a smartphone that performs the computation necessary for predicting falls in real time, avoiding the latency of communicating with a cloud server while also preserving data privacy. The majority of current fall detection applications require specially designed hardware and software, which makes them expensive and inaccessible to the general public. Moreover, a fall detection application that uses a wrist-worn smartwatch for data collection has the added benefit that it can be perceived as a piece of jewelry and is thus non-intrusive. We experimented with both Support Vector Machine and Naive Bayes machine learning algorithms for the creation of the fall model. We demonstrated that by adjusting the sampling frequency of the streaming data, computing acceleration features over a sliding window, and using a Naive Bayes model, we can detect falls in a real-world setting with a true positive rate of 93.33%. Our results demonstrate that a commodity smartwatch sensor can yield fall detection results competitive with those of custom-made, expensive sensors.
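
    The detection pipeline named in the abstract, sliding-window acceleration features feeding a Naive Bayes model, can be sketched briefly. The following Python fragment is an illustrative reconstruction on synthetic magnitude streams; the window length, overlap, features, and data are assumptions, not the paper's configuration.

```python
# A sketch of the described pipeline under assumed parameters (50-sample
# windows, 50% overlap, mean/std/peak features); the paper's settings differ.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def sliding_features(mag, win=50, step=25):
    """Acceleration-magnitude features over an overlapping sliding window."""
    rows = [[mag[s:s + win].mean(), mag[s:s + win].std(), mag[s:s + win].max()]
            for s in range(0, len(mag) - win + 1, step)]
    return np.array(rows)

rng = np.random.default_rng(1)
adl = np.abs(rng.normal(1.0, 0.2, 4000))   # stand-in ADL magnitude stream (g)
fall = np.abs(rng.normal(2.5, 0.8, 400))   # stand-in fall-impact bursts (g)

Xa, Xf = sliding_features(adl), sliding_features(fall)
X, y = np.vstack([Xa, Xf]), np.r_[np.zeros(len(Xa)), np.ones(len(Xf))]

clf = GaussianNB().fit(X, y)                # the model the paper found effective
test = np.abs(rng.normal(2.4, 0.7, 200))    # an unseen impact-like burst
print(clf.predict(sliding_features(test)))  # 1 = fall predicted for a window
```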

    Design of Wireless Sensor Networks and Sensor Fusion Techniques

    Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving into these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed; their aim is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone.

    In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: multimodal surveillance and activity recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking, and we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime.

    Activity recognition is a fundamental block in natural interfaces, and a challenging objective is to design an activity recognition system able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefit of such an architecture in terms of increased recognition performance and robustness to faults and noise, and we show how network lifetime can be extended through a performance-power trade-off.

    Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, since the development of activity recognition techniques can greatly benefit from the availability of shared datasets, we report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
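
    The hierarchical recognition architecture summarised above, node-level classifiers whose outputs are fused by a meta-level combiner that tolerates a changing number of reports, can be illustrated with a toy sketch. The confidence-weighted voting below is an assumption chosen for brevity; the dissertation's actual meta-classifier is more elaborate.

```python
# A toy sketch of the fusion idea: a meta-level combiner that fuses a varying
# number of unreliable node-level classifier outputs by confidence-weighted
# voting. The weighting scheme is an assumption for illustration only.
from collections import defaultdict

def fuse(node_outputs):
    """node_outputs: list of (predicted_label, confidence) from live nodes.
    Nodes may drop out; fusion works with however many reports arrive."""
    scores = defaultdict(float)
    for label, conf in node_outputs:
        scores[label] += conf
    return max(scores, key=scores.get)

# Three nodes report; one disagrees with low confidence.
print(fuse([("wave", 0.9), ("wave", 0.7), ("point", 0.4)]))  # -> "wave"
# A node dies; fusion still produces a decision from the remaining two.
print(fuse([("point", 0.8), ("point", 0.6)]))                # -> "point"
```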

    Towards a smart fall detection system using wearable sensors

    A fall-detection system is employed in order to monitor an older person or infirm patient and alert their carer when a fall occurs. Some studies use wearable-sensor technologies to detect falls, as those technologies are getting smaller and cheaper. To date, wearable-sensor-based fall-detection approaches are categorised into threshold- and machine-learning-based approaches, which suffer from a high number of false alarms and a high computational cost, respectively. The goal of this thesis is to address those issues by developing a novel low-computational-cost machine-learning-based approach for fall detection using accelerometer sensors.

    Toward this goal, existing fall-detection approaches (both threshold- and machine-learning-based) are explored and evaluated using publicly accessible datasets: Cogent, SisFall, and FARSEEING. Four machine-learning algorithms are implemented in this study: Classification and Regression Tree (CART), k-Nearest Neighbour (k-NN), Logistic Regression (LR), and Support Vector Machine (SVM). The experimental results show that using the correct size and type for the sliding window to segment the data stream can give the machine-learning-based approach a better detection rate than the threshold-based approach, though the difference between the two is not significant in some cases.

    To further improve the performance of the machine-learning-based approaches, fall stages (pre-impact, impact, and post-impact) are used as the basis for feature extraction. A novel event-triggered machine-learning approach for fall detection (EvenT-ML) is proposed, which correctly aligns fall stages with a data segment and extracts features based on those stages. Correctly aligning the stages to a data segment is difficult because multiple high peaks, where a high peak usually indicates the impact stage, often occur during the pre-impact stage. EvenT-ML significantly improves the detection rate and reduces the computational cost of existing machine-learning-based approaches, achieving an F-score of up to 97.6% and reducing the computational cost of feature extraction by a factor of up to 80, and it significantly outperforms the threshold-based approach in all cases.

    Finally, to reduce the computational cost of EvenT-ML even further, the number of features is reduced through a feature-selection process. A novel genetic-algorithm-based feature-selection technique (GA-Fade) is proposed, which uses multiple criteria to select features: the detection rate, the computational cost, and the number of sensors used. GA-Fade reduces the number of features by 60% on average while achieving an F-score of up to 97.7%. The selected features also give a significantly lower total computational cost than features selected by two single-criterion-based feature-selection techniques: SelectKBest and Recursive Feature Elimination.

    In summary, the techniques presented in this thesis significantly increase the detection rate of the machine-learning-based approach, so that a more reliable fall-detection system can be achieved. Furthermore, they significantly reduce its computational cost, making the proposed approach more applicable than existing machine-learning-based approaches to a small wearable device with limited resources (e.g., computing power and battery capacity).
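
    The stage-based feature extraction at the heart of EvenT-ML can be sketched roughly as follows. The peak-picking heuristic (treating the last above-threshold peak as the impact) and the window sizes are illustrative assumptions only; correctly aligning stages despite multiple high peaks is precisely the hard problem the thesis solves, and its actual alignment algorithm is not reproduced here.

```python
# A rough sketch of event-triggered, stage-based feature extraction in the
# spirit of EvenT-ML: pick an impact candidate and compute features separately
# for the pre-impact, impact, and post-impact stages. The heuristic and the
# window sizes below are assumptions, not the thesis's alignment algorithm.
import numpy as np

def stage_features(mag, thresh=2.0, pre=100, impact=25, post=100):
    """mag: acceleration-magnitude stream. Returns per-stage [mean, max]."""
    peaks = np.flatnonzero(mag > thresh)
    if len(peaks) == 0:
        return None                    # no impact candidate in this stream
    p = peaks[-1]                      # heuristic: last high peak = impact
    stages = [mag[max(p - pre, 0):p],                # pre-impact
              mag[p:p + impact],                     # impact
              mag[p + impact:p + impact + post]]     # post-impact
    return np.array([[s.mean(), s.max()] if len(s) else [0.0, 0.0]
                     for s in stages]).ravel()

rng = np.random.default_rng(2)
stream = np.abs(rng.normal(1.0, 0.2, 500))
stream[180] = 2.4    # a high peak during pre-impact movement
stream[300] = 3.1    # the actual impact
print(stage_features(stream))  # six features: two per stage
```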

    A Vision-based approach to fall detection for elderly patients receiving home-based care

    Falls are a major category of unintentional accidents, and their adverse effects vary with the nature of the fall and the impact with the ground or an object. Falls rarely occur in the daily activities of healthy individuals, but for elderly people they are consequential, often resulting in lasting health problems or death, so elderly patients require additional attention in the case of fall events. Mitigating the effect of a fall on an elderly patient demands a fast response mechanism: response time to medical emergencies plays a key role in patient survival and recovery, and proper, immediate notification that conveys relevant information about the nature of the emergency helps medical personnel reduce it. This research work proposes a multi-person fall detection system that implements a vision-based approach leveraging a region-based convolutional neural network. A fixed camera serves as the input device to capture images of people. The system analyses each image to identify the posture and orientation of the people present, and classifies the occurrence as a fall or non-fall using the developed model. If it identifies a fall, an alert is sent to a concerned party. The system achieves a mean average precision of 0.8 in fall detection and detects a fall in an image in 3.8 seconds, improving the response time of medical personnel and helping to curb the negative effects of a fall on a patient.
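
    The inference loop described, capture a frame, locate people, classify the occurrence, raise an alert, can be outlined as below. This is a hedged stand-in, not the thesis's system: it uses an off-the-shelf torchvision detector and a crude wider-than-tall bounding-box heuristic in place of the thesis's own trained region-based CNN fall/non-fall model.

```python
# A hedged sketch of a vision-based fall-alert loop. An off-the-shelf detector
# and a box-aspect-ratio posture proxy stand in for the thesis's learned model.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_fall(frame, score_thresh=0.8):
    """frame: float tensor (3, H, W) in [0, 1]. Returns True if any detected
    person's box is wider than it is tall -- a stand-in posture heuristic."""
    with torch.no_grad():
        out = model([frame])[0]
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() == 1 and score.item() > score_thresh:  # COCO 1 = person
            x1, y1, x2, y2 = box.tolist()
            if (x2 - x1) > (y2 - y1):   # lying posture proxy
                return True
    return False

if detect_fall(torch.rand(3, 480, 640)):    # stand-in for a camera frame
    print("ALERT: possible fall detected")  # e.g., notify the concerned party
```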

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.

    Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context, and the recent COVID-19 pandemic has stressed this situation even further, highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments.

    Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Cameras and microphones are far less obtrusive than other wearable sensors, which may hinder one's activities, and a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall conditions of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible; moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). As the other side of the coin, however, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, due to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation; the DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects, and highlights the open challenges. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated, and the potential of the silver economy is overviewed.

    Novel technologies for the detection and mitigation of drowsy driving

    In the human control of motor vehicles, situations are regularly encountered wherein the vehicle operator becomes drowsy and fatigued due to long work days, long driving hours, or low amounts of sleep. Although various methods have been proposed to detect drowsiness in the operator, they are either obtrusive, expensive, or otherwise impractical. Detecting drowsy driving through the collection of Steering Wheel Movement (SWM) signals has become an important measure, as it lends itself to accurate, effective, and cost-effective drowsiness detection. In this dissertation, novel technologies for drowsiness detection using Inertial Measurement Units (IMUs) are investigated and described. IMUs are an umbrella group of kinetic sensors (including accelerometers and gyroscopes) that transduce physical motions into data. Driving performances were recorded using IMUs as the primary sensors, and the resulting data were used by artificial intelligence algorithms, specifically Support Vector Machines (SVMs), to determine whether or not the individual was still fit to operate a motor vehicle. Results demonstrated the method's high accuracy in classifying drowsiness. It was also shown that a smartphone-based approach to IMU drowsiness monitoring can initiate feedback mechanisms upon a positive detection, notifying drivers of their drowsy state and dissuading further driving that could lead to crashes and/or fatalities. The novel methods not only demonstrated the ability to qualitatively determine a driver's drowsy state, but were also low-cost, easy to implement, and unobtrusive to drivers. The efficacy, ease of use, and ease of access of these methods could eliminate many barriers to the implementation of the technologies. Ultimately, it is hoped that these findings will help enhance traveler safety and prevent deaths and injuries to users.
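
    The SWM-plus-SVM approach summarised above can be sketched on synthetic gyroscope data. The features (steering reversal rate, variability, maximum jerk) and the synthetic alert/drowsy patterns are assumptions for illustration; the dissertation's actual feature set and recordings are not reproduced here.

```python
# An illustrative sketch (assumed features, synthetic data) of SWM-based
# drowsiness classification with an SVM. Drowsy steering typically shows long
# low-activity stretches punctuated by abrupt corrective jerks.
import numpy as np
from sklearn.svm import SVC

def swm_features(gyro_z, win=200):
    """Per-window steering features: reversal count, std, and max jerk."""
    n = len(gyro_z) // win
    feats = []
    for w in gyro_z[: n * win].reshape(n, win):
        reversals = np.sum(np.diff(np.sign(w)) != 0)
        feats.append([reversals, w.std(), np.abs(np.diff(w)).max()])
    return np.array(feats)

rng = np.random.default_rng(3)
alert = rng.normal(0, 0.5, 20000)                   # frequent micro-corrections
drowsy = np.repeat(rng.normal(0, 0.05, 100), 200)   # flat spells...
drowsy[rng.integers(0, len(drowsy), 40)] += 3.0     # ...with abrupt corrections

Xa, Xd = swm_features(alert), swm_features(drowsy)
X, y = np.vstack([Xa, Xd]), np.r_[np.zeros(len(Xa)), np.ones(len(Xd))]
clf = SVC(kernel="rbf").fit(X, y)

test = np.repeat(rng.normal(0, 0.05, 10), 200)      # unseen flat steering
print(clf.predict(swm_features(test)))              # 1 = drowsy expected
```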

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are leading to a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR.

    The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented set of sensor data to investigate and recognise human ADLs at multiple granularities: coarse- and fine-grained action levels. At the coarse-grained action level, semantic relationships between the sensor, object and ADLs are deduced, whereas at the fine-grained action level, object usage above a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Because of imprecise or vague interpretations of multimodal sensors and data fusion challenges, fuzzy set theory and fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties that arise in HAR from factors such as technological failure, object malfunction and human error. Existing uncertainty theories and approaches are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to an AAL system, and proposes a microservices architecture with sensor-based off-the-shelf and bespoke sensing methods.

    The initial semantic-enabled data segmentation study segmented sensor events with 100% and 97.8% accuracy under single- and mixed-activity scenarios, respectively; however, the average classification time taken to segment each sensor event was high, at 3971 ms and 62183 ms for the two scenarios. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements on a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by incorporating the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study extending single-user activity recognition to multi-user activity recognition by combining discriminative sensors (RFID tags and fingerprint sensors) to identify and associate user actions with the aid of time-series analysis. The last study responds to the computation and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system; future research towards adopting fog/edge computing paradigms from cloud computing is discussed, for higher availability, reduced network traffic, energy and cost, and a decentralised system.

    As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. The framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment and ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supportive utility tools, such as a simulator and a synthetic ADL data generator, were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
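
    The multi-user association idea in the fourth study, using discriminative sensors such as RFID tags to tie anonymous ambient events to individual inhabitants through their timing, can be reduced to a small sketch. The event fields and the 10-second association window below are assumptions for illustration, not the thesis's algorithm.

```python
# A simplified sketch of the multi-user association idea: an ambient sensor
# event is attributed to the inhabitant whose discriminative-sensor reading
# (e.g., an RFID tag) was seen most recently before it, within a time window.
from bisect import bisect_right

# (timestamp_s, user_id) readings from identity-revealing sensors, sorted.
rfid_events = [(100.0, "alice"), (130.0, "bob"), (185.0, "alice")]

def attribute(event_time, window=10.0):
    """Attribute an anonymous sensor event to the last user identified
    within `window` seconds before it, else mark it unresolved."""
    times = [t for t, _ in rfid_events]
    i = bisect_right(times, event_time) - 1
    if i >= 0 and event_time - times[i] <= window:
        return rfid_events[i][1]
    return "unresolved"

print(attribute(104.2))  # kettle sensor fired 4.2 s after alice's tag -> alice
print(attribute(160.0))  # no identity seen within 10 s -> unresolved
```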