
    A smart home environment to support safety and risk monitoring for the elderly living independently

    The elderly prefer to live independently despite their vulnerability to age-related challenges. Constant monitoring is required in cases where the elderly are living alone. The home can be a dangerous environment for the elderly living independently due to adverse events that can occur at any time. The potential risks for the elderly living independently can be categorised as injury in the home, home environmental risks, and inactivity due to unconsciousness. The main research objective was to develop a Smart Home Environment (SHE) that can support risk and safety monitoring for the elderly living independently. An unobtrusive and low-cost SHE solution was implemented using a Raspberry Pi 3 Model B, a Microsoft Kinect sensor and an Aeotec 4-in-1 Multisensor. The Aeotec Multisensor was used to measure temperature, motion, lighting and humidity in the home. Data from the multisensor were collected using OpenHAB as the smart home operating system. The information was processed on the Raspberry Pi 3, and push notifications were sent when risk situations were detected. An experimental evaluation was conducted to determine the accuracy with which the prototype SHE detected abnormal events. Each evaluation script was run five times. The results show that the prototype has an average accuracy, sensitivity and specificity of 94%, 96.92% and 88.93%, respectively. The sensitivity shows that the chance of the prototype missing a risk situation is 3.08%, and the specificity shows that the chance of incorrectly classifying a non-risk situation is 11.07%. The prototype does not require any interaction on the part of the elderly. Relatives and caregivers can remotely monitor the elderly person living independently via the mobile application or a web portal. The total cost of the equipment used was below R3000.
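    The reported accuracy, sensitivity and specificity figures follow directly from confusion-matrix counts. The sketch below (the counts in the example call are hypothetical, not the study's actual data) shows how the derived miss rate and false-alarm rate relate to them:

    ```python
    # Illustrative sketch: how accuracy/sensitivity/specificity relate
    # to confusion-matrix counts. The counts passed in the example call
    # are hypothetical, not the study's actual evaluation data.
    def evaluation_metrics(tp, tn, fp, fn):
        """Return (accuracy, sensitivity, specificity) as fractions."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)        # share of risk situations detected
        specificity = tn / (tn + fp)        # share of non-risk situations passed
        return accuracy, sensitivity, specificity

    acc, sens, spec = evaluation_metrics(tp=63, tn=24, fp=3, fn=2)
    miss_rate = 1 - sens         # chance of missing a risk situation
    false_alarm_rate = 1 - spec  # chance of flagging a non-risk situation
    ```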

    Fall prevention intervention technologies: A conceptual framework and survey of the state of the art

    In recent years, an ever-increasing range of technology-based applications has been developed with the goal of assisting in the delivery of more effective and efficient fall prevention interventions. Whilst a number of studies have surveyed technologies for particular sub-domains of fall prevention, no existing research surveys the full spectrum of falls prevention interventions and characterises the range of technologies that have augmented this landscape. This study presents a conceptual framework and survey of the state of the art of technology-based fall prevention systems, derived from a systematic template analysis of studies presented in the contemporary research literature. The framework proposes four broad categories of fall prevention intervention system: Pre-fall prevention; Post-fall prevention; Fall injury prevention; Cross-fall prevention. Other categories include Application type, Technology deployment platform, Information sources, Deployment environment, User interface type, and Collaborative function. After presenting the conceptual framework, a detailed survey of the state of the art is presented as a function of the proposed framework. A number of research challenges emerge from surveying the research literature, including the need for: new systems that focus on overcoming extrinsic falls risk factors; systems that support the environmental risk assessment process; and systems that enable patients and practitioners to develop more collaborative relationships and engage in shared decision making during falls risk assessment and prevention activities. In response to these challenges, recommendations and future research directions are proposed to overcome each respective challenge. The Royal Society, grant ref. RG13082.

    Optimal locations and computational frameworks of FSR and IMU sensors for measuring gait abnormalities

    Neuromuscular diseases cause abnormal joint movements and drastically alter gait patterns in patients. The analysis of abnormal gait patterns can provide clinicians with an in-depth insight into implementing appropriate rehabilitation therapies. Wearable sensors are used to measure the gait patterns of neuromuscular patients due to their non-invasive and cost-efficient characteristics. FSR and IMU sensors are the most popular and efficient options. When assessing abnormal gait patterns, it is important to determine the optimal locations of FSRs and IMUs on the human body, along with their computational framework. The gait abnormalities of different types and the gait analysis systems based on IMUs and FSRs have therefore been investigated. After studying a variety of research articles, the optimal locations of the FSR and IMU sensors were determined by analysing the main pressure points under the feet and prime anatomical locations on the human body. A total of seven locations (the big toe, the heel, the first, third, and fifth metatarsals, and two points close to the medial arch) can be used to measure gait cycles for normal and flat feet. It has been found that IMU sensors can be placed in four standard anatomical locations (the feet, shank, thigh, and pelvis). A section on computational analysis is included to illustrate how data from the FSR and IMU sensors are processed. Sensor data is typically sampled at 100 Hz, and wireless systems use a range of microcontrollers to capture and transmit the signals. The findings reported in this article are expected to help develop efficient and cost-effective gait analysis systems by using an optimal number of FSRs and IMUs.
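    As a minimal sketch of the computational side described above, heel strikes can be detected as rising threshold crossings of the heel-mounted FSR signal, and a gait cycle spans consecutive strikes of the same foot. The 100 Hz sampling rate follows the survey; the function names and threshold are our own illustration, not a specific system from the article:

    ```python
    # Hedged sketch: heel-strike detection from a heel FSR sampled at
    # 100 Hz (the rate the survey reports as typical). Threshold and
    # function names are illustrative assumptions.
    FS_HZ = 100

    def heel_strike_times(fsr, threshold, fs=FS_HZ):
        """Return times (s) where the FSR signal rises above `threshold`."""
        strikes = []
        for i in range(1, len(fsr)):
            if fsr[i - 1] < threshold <= fsr[i]:  # rising crossing
                strikes.append(i / fs)
        return strikes

    def stride_times(strikes):
        """One gait cycle spans consecutive heel strikes of the same foot."""
        return [b - a for a, b in zip(strikes, strikes[1:])]
    ```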

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential to enable remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Shear-promoted drug encapsulation into red blood cells: a CFD model and μ-PIV analysis

    The present work focuses on the main parameters that influence shear-promoted encapsulation of drugs into erythrocytes. A CFD model was built to investigate the fluid dynamics of a suspension of particles flowing in a commercial microchannel. Micro Particle Image Velocimetry (μ-PIV) made it possible to account for the real properties of the red blood cell (RBC), providing a deeper understanding of the process. By coupling these results with an analytical diffusion model, suitable working conditions were defined for different values of haematocrit.
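    The abstract does not spell out the analytical diffusion model. As a purely illustrative stand-in (our assumption, not the paper's actual model), a first-order approach to equilibrium captures the qualitative behaviour of drug loading over residence time, where `tau` is a hypothetical characteristic diffusion time:

    ```python
    import math

    # Generic first-order diffusion sketch (an assumption, not the
    # paper's analytical model): intracellular drug concentration
    # approaching equilibrium with the surrounding suspension.
    def loading_fraction(t, tau):
        """Fraction of equilibrium concentration reached after time t (s),
        for a characteristic diffusion time tau (s)."""
        return 1.0 - math.exp(-t / tau)

    def time_to_fraction(f, tau):
        """Residence time needed to reach loading fraction f (0 < f < 1)."""
        return -tau * math.log(1.0 - f)
    ```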

    System for automated movement analysis during gait using an RGB-D camera

    Nowadays it is still common in clinical practice to assess the gait (or way of walking) of a given subject through the visual observation and use of a rating scale, which is a subjective approach. However, sensors including RGB-D cameras, such as the Microsoft Kinect, can be used to obtain quantitative information that allows performing gait analysis in a more objective way. The quantitative gait analysis results can be very useful for example to support the clinical assessment of patients with diseases that can affect their gait, such as Parkinson’s disease. The main motivation of this thesis was thus to provide support to gait assessment, by allowing to carry out quantitative gait analysis in an automated way. This objective was achieved by using 3-D data, provided by a single RGB-D camera, to automatically select the data corresponding to walking and then detect the gait cycles performed by the subject while walking. For each detected gait cycle, we obtain several gait parameters, which are used together with anthropometric measures to automatically identify the subject being assessed. The automated gait data selection relies on machine learning techniques to recognize three different activities (walking, standing, and marching), as well as two different positions of the subject in relation to the camera (facing the camera and facing away from it). For gait cycle detection, we developed an algorithm that estimates the instants corresponding to given gait events. The subject identification based on gait is enabled by a solution that was also developed by relying on machine learning. The developed solutions were integrated into a system for automated gait analysis, which we found to be a viable alternative to gold standard systems for obtaining several spatiotemporal and some kinematic gait parameters. 
Furthermore, the system is suitable for use in clinical environments, as well as ambulatory scenarios, since it relies on a single markerless RGB-D camera that is less expensive, more portable, less intrusive and easier to set up, when compared with the gold standard systems (multiple cameras and several markers attached to the subject’s body).
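    The abstract does not detail the gait-event algorithm. One common heuristic in markerless RGB-D gait analysis (our assumption, not necessarily the method developed in this thesis) locates heel strikes at local maxima of the horizontal distance between the two tracked ankle joints:

    ```python
    # Hedged sketch of a common RGB-D gait-event heuristic (not
    # necessarily this thesis's algorithm): heel strikes roughly
    # coincide with local maxima of the inter-ankle distance along
    # the walking axis of the tracked skeleton.
    def heel_strike_frames(left_ankle_z, right_ankle_z):
        """Given per-frame ankle positions along the walking axis,
        return frame indices at local maxima of inter-ankle distance."""
        dist = [abs(l - r) for l, r in zip(left_ankle_z, right_ankle_z)]
        return [i for i in range(1, len(dist) - 1)
                if dist[i - 1] < dist[i] >= dist[i + 1]]
    ```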

    Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey

    Ubiquitous in-home health monitoring systems have become popular in recent years due to the rise of digital health technologies and the growing demand for remote health monitoring. These systems enable individuals to increase their independence by allowing them to monitor their health from home and giving them more control over their well-being. In this study, we perform a comprehensive survey of this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing technologies, communication technologies, intelligent and computing systems, and application areas. Specifically, we provide an overview of in-home health monitoring systems and identify their main components. We then present each component and discuss its role within in-home health monitoring systems. In addition, we provide an overview of the practical use of ubiquitous technologies in the home for health monitoring. Finally, we identify the main challenges and limitations based on the existing literature and provide eight recommendations for potential future research directions toward the development of in-home health monitoring systems. We conclude that despite extensive research on the various components needed for effective in-home health monitoring systems, their development still requires further investigation.