
    Fall prediction using behavioural modelling from sensor data in smart homes.

    The number of methods for identifying potential fall risk is growing as the rate of elderly fallers continues to rise in the UK. Assessments for identifying risk of falling are usually performed in hospitals and other laboratory environments; however, these are costly and inconvenient for both the subject and the health services. Replacing these intrusive testing methods with a passive in-home monitoring solution would provide a less time-consuming and cheaper alternative. As sensors become more readily available, machine learning models can be applied to the large amount of data they produce. This can support activity recognition, fall detection, fall prediction, and fall risk determination. In this review, the growing complexity of sensor data, the required analysis, and the machine learning techniques used to determine risk of falling are explored. The current research on using passive monitoring in the home is discussed, while the viability of active monitoring using vision-based and wearable sensors is considered. Methods of fall detection, prediction, and risk determination are then compared.
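    As a rough illustration of the kind of pipeline such reviews compare, the sketch below windows a tri-axial accelerometer stream into fixed-length segments, computes simple statistical features, and fits a supervised classifier. The window length, feature set, synthetic data, and RandomForest model are illustrative assumptions, not a method taken from the review.

        # Minimal sketch: windowed accelerometer features feeding a fall-vs-normal classifier.
        # All parameters and the synthetic data are assumptions for illustration only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def window_features(acc, window=128, step=64):
            """Slice an (n_samples, 3) accelerometer stream into windows and
            compute simple per-window features (mean, std, max, min of magnitude)."""
            mag = np.linalg.norm(acc, axis=1)
            feats = []
            for start in range(0, len(mag) - window + 1, step):
                w = mag[start:start + window]
                feats.append([w.mean(), w.std(), w.max(), w.min()])
            return np.asarray(feats)

        rng = np.random.default_rng(0)
        # Hypothetical recordings; labels 1 = fall episode, 0 = normal activity.
        X = np.vstack([window_features(rng.normal(size=(1024, 3))) for _ in range(20)])
        y = rng.integers(0, 2, size=len(X))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))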

    Locomotion Traces Data Mining for Supporting Frail People with Cognitive Impairment

    The rapid increase in the senior population is posing serious challenges to national healthcare systems. Hence, innovative tools are needed to detect health issues, including cognitive decline, at an early stage. Several clinical studies show that it is possible to identify cognitive impairment based on the locomotion patterns of older people. Thus, this thesis first provides a systematic literature review of locomotion data mining systems for supporting the diagnosis of Neuro-Degenerative Diseases (NDD), covering locomotion anomaly indicators, movement patterns for discovering low-level locomotion indicators, sensor data acquisition and processing methods, and NDD detection algorithms, considering their pros and cons. Then, we investigated the use of sensor data and Deep Learning (DL) to recognize abnormal movement patterns in instrumented smart-homes. To cope with the noise introduced by indoor constraints and activity execution, we introduced novel visual feature extraction methods for locomotion data. Our solutions rely on locomotion trace segmentation, image-based extraction of salient features from locomotion segments, and vision-based DL. Furthermore, we proposed a data augmentation strategy to increase the volume of collected data and generalize the solution to different smart-homes with different layouts. We carried out extensive experiments with a large real-world dataset acquired in a smart-home test-bed from older people, including people with cognitive diseases. Experimental comparisons show that our system outperforms state-of-the-art methods.
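    The thesis describes turning locomotion trace segments into images for a vision-based DL model, together with data augmentation. The sketch below shows one plausible way to rasterise an indoor (x, y) trace segment into a small occupancy image and apply a trivial flip-based augmentation; the grid size, normalisation, and synthetic trace are assumptions, not the thesis implementation.

        # Minimal sketch: rasterise a locomotion segment into a grayscale image
        # that a vision-based deep model could consume. Illustrative assumptions only.
        import numpy as np

        def trace_to_image(xy, grid=32):
            """Map an (n_points, 2) locomotion segment onto a grid x grid occupancy image."""
            xy = np.asarray(xy, dtype=float)
            mins, maxs = xy.min(axis=0), xy.max(axis=0)
            span = np.where(maxs - mins > 0, maxs - mins, 1.0)   # avoid division by zero
            cells = ((xy - mins) / span * (grid - 1)).astype(int)
            img = np.zeros((grid, grid), dtype=np.float32)
            for cx, cy in cells:
                img[cy, cx] += 1.0                               # accumulate visits per cell
            return img / img.max()

        def augment(img):
            """Very simple augmentation: horizontal and vertical flips of the trace image."""
            return [img, np.fliplr(img), np.flipud(img)]

        # Hypothetical indoor trajectory segment (random walk as placeholder data).
        segment = np.cumsum(np.random.default_rng(1).normal(scale=0.1, size=(200, 2)), axis=0)
        images = augment(trace_to_image(segment))
        print(len(images), images[0].shape)   # 3 augmented 32x32 images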

    Distributed Computing and Monitoring Technologies for Older Patients

    This book summarizes various approaches for the automatic detection of health threats to older patients living alone at home. The text begins by briefly describing those who would most benefit from healthcare supervision. The book then summarizes possible scenarios for monitoring an older patient at home, deriving the common functional requirements for monitoring technology. Next, the work identifies the state of the art of technological monitoring approaches that are practically applicable to geriatric patients. A survey is presented on a range of interdisciplinary fields, such as smart homes, telemonitoring, ambient intelligence, ambient assisted living, gerontechnology, and aging-in-place technology. The book discusses relevant experimental studies, highlighting the application of sensor fusion, signal processing and machine learning techniques. Finally, the text discusses future challenges, offering a number of suggestions for further research directions.

    Gait rehabilitation monitor

    This work presents a simple, affordable, non-intrusive wearable mobile framework that allows doctors and physiotherapists to remotely monitor patients during gait rehabilitation. The system includes a set of two Bluetooth-enabled Shimmer3 9DoF Inertial Measurement Units (IMUs) from Shimmer and an Android smartphone that collects the data, performs primary processing, and persists it in a local database. Low-computational-load algorithms based on Euler angles and on accelerometer, gyroscope and magnetometer signals were developed and used for the classification and identification of several gait disturbances. These algorithms include alignment of the IMU sensor data to a common temporal reference, as well as heel-strike and stride detection to segment the signals collected remotely by the system's app into gait strides; from these strides, relevant features are extracted to train and test a classifier that predicts gait abnormalities during gait sessions. A set of drivers from the Shimmer manufacturer is used to connect the app to the IMUs over Bluetooth. The developed app allows users to collect data and train a classification model for identifying abnormal and normal gait types. The system provides a REST API in a backend server, along with Java and Python libraries and a PostgreSQL database. The machine learning approach is supervised, using the Extremely Randomized Trees method. Frequency-, time- and time-frequency-domain features were extracted from the collected and processed signals to train the classifier. To test the framework, a set of abnormal and normal gait recordings was used to train a model and evaluate the classifier.
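    The abstract names Extremely Randomized Trees as the supervised method and time/frequency-domain features as inputs. The sketch below shows one plausible version of that step using scikit-learn's ExtraTreesClassifier; the feature choice, sampling rate, and synthetic stride data are illustrative assumptions, not the app's actual code.

        # Minimal sketch: time- and frequency-domain stride features feeding an
        # Extremely Randomized Trees classifier (scikit-learn's ExtraTreesClassifier).
        # Features, sampling rate and synthetic data are assumptions for illustration.
        import numpy as np
        from sklearn.ensemble import ExtraTreesClassifier

        def stride_features(acc_segment, fs=100.0):
            """Simple features for one stride of (n_samples, 3) accelerometer data."""
            mag = np.linalg.norm(acc_segment, axis=1)
            spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
            freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
            dominant = freqs[np.argmax(spectrum)]          # dominant stride frequency
            return [mag.mean(), mag.std(), mag.max() - mag.min(), dominant]

        rng = np.random.default_rng(42)
        # Hypothetical labelled strides: 0 = normal gait, 1 = abnormal gait.
        X = np.array([stride_features(rng.normal(size=(120, 3))) for _ in range(60)])
        y = rng.integers(0, 2, size=60)

        clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
        print(clf.predict(X[:5]))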

    Technology and good dementia care. A study of technology and ethics in everyday care practice

    Doctoral thesis (PhD), Universitetet i Oslo, 2009.

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the wearable sensors that may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Key body pose detection and movement assessment of fitness performances

    Motion segmentation plays an important role in human motion analysis. Understanding the intrinsic features of human activities represents a challenge for modern science. Current solutions usually involve computationally demanding processing and achieve the best results using expensive, intrusive motion capture devices. In this thesis, research has been carried out to develop a series of methods for affordable and effective human motion assessment in the context of stand-up physical exercises. The objective of the research was to address the need for an autonomous system that could be deployed in nursing homes or elderly people's houses, as well as in the rehabilitation of high-profile sport performers. Firstly, it has to be designed so that instructions on physical exercises, especially in the case of elderly people, can be delivered in an understandable way. Secondly, it has to deal with the problem that some individuals may find it difficult to keep up with the programme due to physical impediments. They may also be discouraged because the activities are not stimulating or the instructions are hard to follow. In this thesis, a series of methods for automatic assessment production, as a combination of worded feedback and motion visualisation, is presented. The methods comprise two major steps. First, a series of key body poses are identified using a model built by a multi-class classifier from a set of frame-wise features extracted from the motion data. Second, motion alignment (or synchronisation) with a reference performance (the tutor) is established in order to produce a second assessment model. Numerical assessment first, and textual feedback after, are delivered to the user along with a 3D skeletal animation to enrich the assessment experience. This animation is produced after the demonstration of the expert is transformed to the current level of performance of the user, in order to help encourage them to engage with the programme. The key body pose identification stage follows a two-step approach: first, the principal components of the input motion data are calculated in order to reduce the dimensionality of the input. Then, candidates for key body poses are inferred from a set of training samples using multi-class, supervised machine learning techniques. Finally, cluster analysis is used to refine the result. Key body pose identification is guaranteed to be invariant to the repetitiveness and symmetry of the performance. Results show the effectiveness of the proposed approach by comparing it against Dynamic Time Warping and Hierarchical Aligned Cluster Analysis. The synchronisation sub-system takes advantage of the cyclic nature of the stretches that form part of the stand-up exercises under study in order to remove out-of-sequence identified key body poses (i.e., false positives). Two approaches are considered for performing cycle analysis: a sequential, trivial algorithm and a proposed Genetic Algorithm, with and without prior knowledge of cyclic sequence patterns. These two approaches are compared, and the Genetic Algorithm with prior knowledge shows a lower rate of false positives, but also a higher false negative rate. The GAs are also evaluated with randomly generated periodic string sequences. The automatic assessment follows a similar approach to that of key body pose identification. A multi-class, multi-target machine learning classifier is trained with features extracted from the previous motion alignment. The inferred numerical assessment levels (one per identified key body pose and involved body joint) are translated into human-understandable language via a highly customisable, context-free grammar. Finally, visual feedback is produced in the form of a synchronised skeletal animation of both the user's performance and the tutor's. If the user's performance is well below a standard, an affine offset transformation of the skeletal motion data series to an in-between performance is performed, in order to prevent discouragement of the user while still providing a reference for improvement. At the end of this thesis, a study of the limitations of the methods in real circumstances is explored. Issues like gimbal lock in the angular motion data, lack of accuracy of the motion capture system, and the escalation of the training set are discussed. Finally, some conclusions are drawn and future work is discussed.
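    As a sketch of the two-step key body pose identification described above (PCA for dimensionality reduction, a multi-class classifier for candidate frames, and cluster analysis to refine the result), the following minimal example uses scikit-learn; the feature dimensionality, classifier choice (an RBF SVM), clustering method (k-means), and synthetic data are all assumptions, not the thesis's actual configuration.

        # Minimal sketch of the described pipeline: PCA -> multi-class classifier ->
        # clustering of candidate frames. All parameters and data are illustrative.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        frames = rng.normal(size=(500, 60))        # hypothetical per-frame joint features
        labels = rng.integers(0, 4, size=500)      # 0 = transition, 1..3 = key body poses

        reduced = PCA(n_components=10).fit_transform(frames)    # step 1: reduce dimensionality
        clf = SVC(kernel="rbf").fit(reduced, labels)            # step 2: classify frames
        candidates = np.where(clf.predict(reduced) > 0)[0]      # frames flagged as key poses

        # Step 3: cluster candidate frame indices so each burst of nearby frames
        # yields a single representative key-pose frame.
        if len(candidates) >= 3:
            km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(candidates.reshape(-1, 1))
            key_frames = sorted(int(c) for c in km.cluster_centers_.ravel())
            print("representative key-pose frames:", key_frames)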