13 research outputs found

    Comparison and Characterization of Android-Based Fall Detection Systems

    Falls are a leading source of injuries and hospitalization for seniors. The adoption of automatic fall detection mechanisms can noticeably reduce the response time of medical staff or caregivers when a fall takes place. Smartphones are increasingly being proposed as wearable, cost-effective and non-intrusive systems for fall detection. The exploitation of smartphones’ potential (and in particular, the Android operating system) can benefit from the wide adoption, growing computational capabilities and diversity of communication interfaces and embedded sensors of these personal devices. After reviewing the state of the art on this matter, this study develops an experimental testbed to assess the performance of different fall detection algorithms that ground their decisions on the analysis of the inertial data registered by the smartphone's accelerometer. Results obtained in a real testbed with diverse individuals indicate that the accuracy of accelerometry-based techniques in identifying falls depends strongly on the fall pattern. The tests also show the difficulty of setting detection acceleration thresholds that achieve a good trade-off between false negatives (falls that remain unnoticed) and false positives (conventional movements that are erroneously classified as falls). In any case, the study of the evolution of the battery drain reveals that the extra power consumption introduced by the Android monitoring applications cannot be neglected when evaluating the autonomy and even the viability of fall detection systems. Ministerio de Economía y Competitividad TEC2009-13763-C02-0
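As a rough illustration of the kind of accelerometry-based thresholding the study evaluates, the following sketch flags a fall when a free-fall dip in the acceleration magnitude is followed closely by an impact spike. The threshold values and function names are hypothetical examples, not the paper's calibrated settings:

```python
import math

# Illustrative threshold-based fall detector: a free-fall dip followed by an
# impact spike in the acceleration magnitude. All constants are hypothetical.
FREE_FALL_G = 0.4   # magnitude below this (in g) suggests free fall
IMPACT_G = 2.5      # magnitude above this (in g) suggests impact
MAX_GAP = 10        # max samples allowed between the dip and the spike

def magnitude(ax, ay, az):
    """Acceleration magnitude in g from the three accelerometer axes."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(magnitudes):
    """Return True if a free-fall dip is followed closely by an impact spike."""
    for i, m in enumerate(magnitudes):
        if m < FREE_FALL_G:
            for j in range(i + 1, min(i + 1 + MAX_GAP, len(magnitudes))):
                if magnitudes[j] > IMPACT_G:
                    return True
    return False
```

As the abstract notes, any fixed choice of `FREE_FALL_G` and `IMPACT_G` trades false negatives against false positives, which is precisely the difficulty the study reports.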

    Personalized fall detection monitoring system based on learning from the user movements

    The personalized fall detection system is shown to provide more benefits than current fall detection systems. The personalized model can also be applied to any domain where one class of data is hard to gather. The results show that adapting to the user's needs improves the overall accuracy of the system. Future work includes detecting the smartphone's position on the user, so that the system can be placed anywhere on the body and still detect falls. Even though the accuracy is not 100%, the proof of concept of personalization can be used to achieve greater accuracy. The concept of personalization used in this paper can also be extended to other research in the medical field, or wherever data is hard to come by for a particular class. More research into the feature extraction and feature selection modules should be conducted; for the feature selection module in particular, more research is needed into selecting features based on one-class data.
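A minimal sketch of the one-class idea behind such personalization: train only on the user's own ADL samples and flag anything far from them as a potential fall. The nearest-neighbour detector, the 2-D features and the distance threshold below are illustrative assumptions, not the paper's actual model:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class OneClassNN:
    """Nearest-neighbour novelty detector trained on normal (ADL) data only."""

    def __init__(self, threshold):
        self.threshold = threshold  # hypothetical distance cut-off
        self.normal = []

    def fit(self, samples):
        # Only one class (the user's own ADL) is needed for training.
        self.normal = list(samples)

    def is_fall(self, sample):
        # A sample far from every known ADL sample is treated as a fall.
        nearest = min(euclidean(sample, s) for s in self.normal)
        return nearest > self.threshold

# Hypothetical 2-D features, e.g. (mean |a|, std |a|) over a window.
adl = [(1.0, 0.1), (1.1, 0.2), (0.9, 0.1)]
det = OneClassNN(threshold=0.5)
det.fit(adl)
```

Because the model is fit per user, the same threshold adapts naturally to different movement styles, which is the benefit the abstract claims for personalization.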

    Review of current study methods for VRU safety: Appendix 4 – Systematic literature review: Naturalistic driving studies

    With the aim of assessing the extent and nature of naturalistic studies involving vulnerable road users, a systematic literature review was carried out. The purpose of this review was to identify studies based on naturalistic data from VRUs (pedestrians, cyclists, moped riders and motorcyclists) to provide an overview of how data was collected and how data has been used. In the literature review, special attention is given to the use of naturalistic studies as a tool for road safety evaluations, to gain knowledge on methodological issues for the design of a naturalistic study involving VRUs within the InDeV project. The review covered the following types of studies:
    • Studies collecting naturalistic data from vulnerable road users (pedestrians, cyclists, moped riders, motorcyclists).
    • Studies collecting accidents or safety-critical situations via smartphones from vulnerable road users and motorized vehicles.
    • Studies collecting falls that have not occurred on roads via smartphones.
    Four databases were used in the search for publications: ScienceDirect, Transport Research International Documentation (TRID), IEEE Xplore and PubMed. In addition to these four databases, six databases were screened to check if they contained references to publications not already included in the review. These databases were: Web of Science, Scopus, Google Scholar, SpringerLink, Taylor & Francis and Engineering Village. The findings revealed that naturalistic studies of vulnerable road users have mainly been carried out by collecting data from cyclists and pedestrians, and to a smaller degree from motorcyclists. To collect data, most studies used the built-in sensors of smartphones, although equipped bicycles or motorcycles were used in some studies. Other types of portable equipment were used to a lesser degree, particularly for cycling studies.
The naturalistic studies were carried out for various purposes: mode classification, travel surveys, measuring the distance and number of trips travelled, and conducting traffic counts. Naturalistic data was also used for safety assessment based on accidents, safety-critical events or other safety-related aspects such as speed behaviour, head turning and obstacle detection. Only a few studies detected incidents automatically, based on indicators collected via special equipment such as accelerometers, gyroscopes, GPS receivers, switches, etc., to assess safety by identifying accidents or safety-critical events. Instead, most rely on self-reporting or manual review of video footage. Despite this, the review indicates that there is a large potential for detecting accidents from naturalistic data. A large number of studies focused on the detection of falls among elderly people. Using smartphone sensors, the movements of the participants were monitored continuously. Most studies used acceleration as an indicator of falls. In some cases, the acceleration was supplemented by rotation measurements to indicate that a fall had occurred. Most studies using kinematic triggers for the detection of falls, accidents and safety-critical events primarily demonstrated prototypes of detection algorithms. Few studies have been tested on real accidents or falls. Instead, simulated falls were used both in studies of vulnerable road users and in studies of falls among elderly people.
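The acceleration-plus-rotation triggering described above can be sketched as a simple conjunction of thresholds. An acceleration spike alone may be a jump or a dropped phone, so some studies also require a large orientation change; the constants and names here are hypothetical, and real studies calibrate them experimentally:

```python
# Hypothetical kinematic trigger combining two indicators, as in the studies
# that supplement acceleration with rotation measurements.
IMPACT_G = 2.5        # acceleration-magnitude peak threshold, in g
ROTATION_DEG = 60.0   # orientation-change threshold, in degrees

def kinematic_trigger(acc_peak_g, orientation_change_deg):
    """Flag a candidate fall only when both indicators fire."""
    return acc_peak_g > IMPACT_G and orientation_change_deg > ROTATION_DEG
```

Requiring both indicators reduces false positives at the cost of possibly missing falls with little rotation, mirroring the trade-off the review discusses.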

    Analysis of Android Device-Based Solutions for Fall Detection

    Falls are a major cause of health and psychological problems, as well as hospitalization costs, among older adults. Thus, the investigation of automatic Fall Detection Systems (FDSs) has received special attention from the research community during the last decade. In this area, the widespread popularity, decreasing price, computing capabilities, built-in sensors and multiplicity of wireless interfaces of Android-based devices (especially smartphones) have fostered the adoption of this technology to deploy wearable and inexpensive architectures for fall detection. This paper presents a critical and thorough analysis of those existing fall detection systems that are based on Android devices. The review systematically classifies and compares the proposals of the literature, taking into account different criteria such as the system architecture, the employed sensors, the detection algorithm or the response in case of a fall alarm. The study emphasizes the analysis of the evaluation methods that are employed to assess the effectiveness of the detection process. The review reveals the complete lack of a reference framework to validate and compare the proposals. In addition, the study also shows that most research works do not evaluate the actual applicability of Android devices (with limited battery and computing resources) to fall detection solutions. Ministerio de Economía y Competitividad TEC2013-42711-

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care, due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Multi-sensor data fusion in mobile devices for the identification of Activities of Daily Living

    Following the recent advances in technology and the growing use of mobile devices such as smartphones, several solutions may be developed to improve the quality of life of users in the context of Ambient Assisted Living (AAL). Mobile devices have different available sensors, e.g., accelerometer, gyroscope, magnetometer, microphone and Global Positioning System (GPS) receiver, which allow the acquisition of physical and physiological parameters for the recognition of different Activities of Daily Living (ADL) and the environments in which they are performed. The definition of ADL includes a well-known set of tasks, which include basic self-care tasks, based on the types of skills that people usually learn in early childhood, including feeding, bathing, dressing, grooming, walking, running, jumping, climbing stairs, sleeping, watching TV, working, listening to music, cooking, eating and others. In the context of AAL, some individuals (henceforth called user or users) need particular assistance, either because the user has some sort of impairment, or because the user is old, or simply because users need/want to monitor their lifestyle. The research and development of systems that provide particular assistance to people is increasing in many areas of application. In particular, in the future, the recognition of ADL will be an important element for the development of a personal digital life coach, providing assistance to different types of users. To support the recognition of ADL, the surrounding environments should also be recognized, to increase the reliability of these systems. The main focus of this Thesis is the research on methods for the fusion and classification of the data acquired by the sensors available in off-the-shelf mobile devices, in order to recognize ADL in almost real time, taking into account the large diversity of the capabilities and characteristics of the mobile devices available in the market.
In order to achieve this objective, this Thesis started with a review of the existing methods and technologies to define the architecture and modules of the method for the identification of ADL. With this review, and based on the knowledge acquired about the sensors available in off-the-shelf mobile devices, a set of tasks that may be reliably identified was defined as a basis for the remaining research and development to be carried out in this Thesis. This review also identified the main stages for the development of a new method for the identification of ADL using the sensors available in off-the-shelf mobile devices; these stages are data acquisition, data processing, data cleaning, data imputation, feature extraction, data fusion and artificial intelligence. One of the challenges is related to the different types of data acquired from the different sensors, but other challenges were found, including the presence of environmental noise, the positioning of the mobile device during the daily activities, the limited capabilities of the mobile devices, and others. Based on the acquired data, the processing was performed, implementing data cleaning and feature extraction methods, in order to define a new framework for the recognition of ADL. The data imputation methods were not applied, because at this stage of the research their implementation does not influence the results of the identification of ADL and environments, as the features are extracted from a set of data acquired during a defined time interval and there are no missing values during this stage.
The joint selection of the set of usable sensors and the identifiable set of tasks then allows the development of a framework that, considering multi-sensor data fusion technologies and context awareness in coordination with other information available from the user context, such as his/her agenda and the time of day, makes it possible to establish a profile of the tasks that the user performs on a regular activity day. The classification method and the algorithm for the fusion of the features for the recognition of ADL and its environments need to be deployed on a machine with some computational power, while the mobile device that will use the created framework can perform the identification of the ADL with much less computational power. Based on the results reported in the literature, the method chosen for the recognition of the ADL is composed of three variants of Artificial Neural Networks (ANN): simple Multilayer Perceptron (MLP) networks, Feedforward Neural Networks (FNN) with Backpropagation, and Deep Neural Networks (DNN). Data acquisition can be performed with standard methods. After the acquisition, the data must be processed at the data processing stage, which includes data cleaning and feature extraction methods. The data cleaning method used for motion and magnetic sensors is a low-pass filter, in order to reduce the noise acquired; for the acoustic data, the Fast Fourier Transform (FFT) was applied to extract the different frequencies.
When the data is clean, several features are then extracted based on the types of sensors used, including: the mean, standard deviation, variance, maximum value, minimum value and median of the raw data acquired from the motion and magnetic sensors; the mean, standard deviation, variance and median of the maximum peaks calculated from the raw data acquired from the motion and magnetic sensors; the five greatest distances between the maximum peaks calculated from the raw data acquired from the motion and magnetic sensors; the mean, standard deviation, variance, median and 26 Mel-Frequency Cepstral Coefficients (MFCC) of the frequencies obtained with the FFT from the raw microphone data; and the distance travelled, calculated from the data acquired from the GPS receiver. After the extraction of the features, these are grouped into different datasets for the application of the ANN methods, in order to discover the method and dataset that report the best results. The classification stage was incrementally developed, starting with the identification of the most common ADL (i.e., walking, running, going upstairs, going downstairs and standing) with motion and magnetic sensors. Next, the environments were identified with acoustic data, i.e., bedroom, bar, classroom, gym, kitchen, living room, hall, street and library. After the environments are recognized, and based on the different sets of sensors commonly available in mobile devices, the data acquired from the motion and magnetic sensors were combined with the recognized environment in order to differentiate some activities without motion, i.e., sleeping and watching TV. The number of recognized activities at this stage was increased with the use of the distance travelled, extracted from the GPS receiver data, also allowing the recognition of the driving activity.
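A few of the statistical features listed above can be sketched as follows. This is a simplified illustration under stated assumptions: MFCC extraction and the GPS distance are omitted, and the function names and peak definition are the author's of this sketch, not the thesis's exact implementation:

```python
import statistics

def motion_features(window):
    """Basic statistics of a window of sensor magnitudes, as listed in the
    abstract: mean, standard deviation, variance, max, min and median."""
    return {
        "mean": statistics.mean(window),
        "std": statistics.pstdev(window),
        "var": statistics.pvariance(window),
        "max": max(window),
        "min": min(window),
        "median": statistics.median(window),
    }

def local_maxima(window):
    """Maximum peaks, here taken as samples larger than both neighbours;
    their statistics and inter-peak distances feed further features."""
    return [window[i] for i in range(1, len(window) - 1)
            if window[i] > window[i - 1] and window[i] > window[i + 1]]
```

In the thesis's pipeline, dictionaries like these would be flattened into feature vectors and grouped into datasets for the MLP, FNN and DNN classifiers.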
After the implementation of the three classification methods with different numbers of iterations, datasets and remaining configurations on a machine with high processing capabilities, the reported results proved that the best method for the recognition of the most common ADL and activities without motion is the DNN method, while the best method for the recognition of environments is the FNN method with Backpropagation. Depending on the number of sensors used, this implementation reports a mean accuracy between 85.89% and 89.51% for the recognition of the most common ADL, equal to 86.50% for the recognition of environments, and equal to 100% for the recognition of activities without motion, reporting an overall accuracy between 85.89% and 92.00%. The last stage of this research work was the implementation of the structured framework on mobile devices, which verified that the FNN method requires high processing power for the recognition of environments, and that the results reported with the mobile application are lower than those reported with the high-capability machine. Thus, the DNN method was also implemented for the recognition of the environments on mobile devices. Finally, the results reported with the mobile devices show an accuracy between 86.39% and 89.15% for the recognition of the most common ADL, equal to 45.68% for the recognition of environments, and equal to 100% for the recognition of activities without motion, reporting an overall accuracy between 58.02% and 89.15%. Compared with the literature, the results returned by the implemented framework show only a residual improvement. However, the results reported in this research work comprise the identification of more ADL than the ones described in other studies.
The improvement in the recognition of ADL based on the mean of the accuracies is equal to 2.93%, but the maximum number of ADL and environments previously recognized was 13, while the number of ADL and environments recognized with the framework resulting from this research is 16. In conclusion, the framework developed has a mean improvement of 2.93% in the accuracy of the recognition for a larger number of ADL and environments than previously reported. In the future, the achievements reported by this PhD research may be considered as a start point of the development of a personal digital life coach, but the number of ADL and environments recognized by the framework should be increased and the experiments should be performed with different types of devices (i.e., smartphones and smartwatches), and the data imputation and other machine learning methods should be explored in order to attempt to increase the reliability of the framework for the recognition of ADL and its environments.Após os recentes avanços tecnológicos e o crescente uso dos dispositivos móveis, como por exemplo os smartphones, várias soluções podem ser desenvolvidas para melhorar a qualidade de vida dos utilizadores no contexto de Ambientes de Vida Assistida (AVA) ou Ambient Assisted Living (AAL). Os dispositivos móveis integram vários sensores, tais como acelerómetro, giroscópio, magnetómetro, microfone e recetor de Sistema de Posicionamento Global (GPS), que permitem a aquisição de vários parâmetros físicos e fisiológicos para o reconhecimento de diferentes Atividades da Vida Diária (AVD) e os seus ambientes. A definição de AVD inclui um conjunto bem conhecido de tarefas que são tarefas básicas de autocuidado, baseadas nos tipos de habilidades que as pessoas geralmente aprendem na infância. Essas tarefas incluem alimentar-se, tomar banho, vestir-se, fazer os cuidados pessoais, caminhar, correr, pular, subir escadas, dormir, ver televisão, trabalhar, ouvir música, cozinhar, comer, entre outras. 
No contexto de AVA, alguns indivíduos (comumente chamados de utilizadores) precisam de assistência particular, seja porque o utilizador tem algum tipo de deficiência, seja porque é idoso, ou simplesmente porque o utilizador precisa/quer monitorizar e treinar o seu estilo de vida. A investigação e desenvolvimento de sistemas que fornecem algum tipo de assistência particular está em crescente em muitas áreas de aplicação. Em particular, no futuro, o reconhecimento das AVD é uma parte importante para o desenvolvimento de um assistente pessoal digital, fornecendo uma assistência pessoal de baixo custo aos diferentes tipos de pessoas. pessoas. Para ajudar no reconhecimento das AVD, os ambientes em que estas se desenrolam devem ser reconhecidos para aumentar a fiabilidade destes sistemas. O foco principal desta Tese é o desenvolvimento de métodos para a fusão e classificação dos dados adquiridos a partir dos sensores disponíveis nos dispositivos móveis, para o reconhecimento quase em tempo real das AVD, tendo em consideração a grande diversidade das características dos dispositivos móveis disponíveis no mercado. Para atingir este objetivo, esta Tese iniciou-se com a revisão dos métodos e tecnologias existentes para definir a arquitetura e os módulos do novo método de identificação das AVD. Com esta revisão da literatura e com base no conhecimento adquirido sobre os sensores disponíveis nos dispositivos móveis disponíveis no mercado, um conjunto de tarefas que podem ser identificadas foi definido para as pesquisas e desenvolvimentos desta Tese. Esta revisão também identifica os principais conceitos para o desenvolvimento do novo método de identificação das AVD, utilizando os sensores, são eles: aquisição de dados, processamento de dados, correção de dados, imputação de dados, extração de características, fusão de dados e extração de resultados recorrendo a métodos de inteligência artificial. 
Um dos desafios está relacionado aos diferentes tipos de dados adquiridos pelos diferentes sensores, mas outros desafios foram encontrados, sendo os mais relevantes o ruído ambiental, o posicionamento do dispositivo durante a realização das atividades diárias, as capacidades limitadas dos dispositivos móveis. As diferentes características das pessoas podem igualmente influenciar a criação dos métodos, escolhendo pessoas com diferentes estilos de vida e características físicas para a aquisição e identificação dos dados adquiridos a partir de sensores. Com base nos dados adquiridos, realizou-se o processamento dos dados, implementando-se métodos de correção dos dados e a extração de características, para iniciar a criação do novo método para o reconhecimento das AVD. Os métodos de imputação de dados foram excluídos da implementação, pois não iriam influenciar os resultados da identificação das AVD e dos ambientes, na medida em que são utilizadas as características extraídas de um conjunto de dados adquiridos durante um intervalo de tempo definido. A seleção dos sensores utilizáveis, bem como das AVD identificáveis, permitirá o desenvolvimento de um método que, considerando o uso de tecnologias para a fusão de dados adquiridos com múltiplos sensores em coordenação com outras informações relativas ao contexto do utilizador, tais como a agenda do utilizador, permitindo estabelecer um perfil de tarefas que o utilizador realiza diariamente. Com base nos resultados obtidos na literatura, o método escolhido para o reconhecimento das AVD são as diferentes variantes das Redes Neuronais Artificiais (RNA), incluindo Multilayer Perceptron (MLP), Feedforward Neural Networks (FNN) with Backpropagation and Deep Neural Networks (DNN). No final, após a criação dos métodos para cada fase do método para o reconhecimento das AVD e ambientes, a implementação sequencial dos diferentes métodos foi realizada num dispositivo móvel para testes adicionais. 
Após a definição da estrutura do método para o reconhecimento de AVD e ambientes usando dispositivos móveis, verificou-se que a aquisição de dados pode ser realizada com os métodos comuns. Após a aquisição de dados, os mesmos devem ser processados no módulo de processamento de dados, que inclui os métodos de correção de dados e de extração de características. O método de correção de dados utilizado para sensores de movimento e magnéticos é o filtro passa-baixo de modo a reduzir o ruído, mas para os dados acústicos, a Transformada Rápida de Fourier (FFT) foi aplicada para extrair as diferentes frequências. Após a correção dos dados, as diferentes características foram extraídas com base nos tipos de sensores usados, sendo a média, desvio padrão, variância, valor máximo, valor mínimo e mediana de dados adquiridos pelos sensores magnéticos e de movimento, a média, desvio padrão, variância e mediana dos picos máximos calculados com base nos dados adquiridos pelos sensores magnéticos e de movimento, as cinco maiores distâncias entre os picos máximos calculados com os dados adquiridos dos sensores de movimento e magnéticos, a média, desvio padrão, variância e 26 Mel-Frequency Cepstral Coefficients (MFCC) das frequências obtidas com FFT com base nos dados obtidos a partir do microfone, e a distância calculada com os dados adquiridos pelo recetor de GPS. Após a extração das características, as mesmas são agrupadas em diferentes conjuntos de dados para a aplicação dos métodos de RNA de modo a descobrir o método e o conjunto de características que reporta melhores resultados. O módulo de classificação de dados foi incrementalmente desenvolvido, começando com a identificação das AVD comuns com sensores magnéticos e de movimento, i.e., andar, correr, subir escadas, descer escadas e parado. Em seguida, os ambientes são identificados com dados de sensores acústicos, i.e., quarto, bar, sala de aula, ginásio, cozinha, sala de estar, hall, rua e biblioteca. 
Com base nos ambientes reconhecidos e os restantes sensores disponíveis nos dispositivos móveis, os dados adquiridos dos sensores magnéticos e de movimento foram combinados com o ambiente reconhecido para diferenciar algumas atividades sem movimento (i.e., dormir e ver televisão), onde o número de atividades reconhecidas nesta fase aumenta com a fusão da distância percorrida, extraída a partir dos dados do recetor GPS, permitindo também reconhecer a atividade de conduzir. Após a implementação dos três métodos de classificação com diferentes números de iterações, conjuntos de dados e configurações numa máquina com alta capacidade de processamento, os resultados relatados provaram que o melhor método para o reconhecimento das atividades comuns de AVD e atividades sem movimento é o método DNN, mas o melhor método para o reconhecimento de ambientes é o método FNN with Backpropagation. Dependendo do número de sensores utilizados, esta implementação reporta uma exatidão média entre 85,89% e 89,51% para o reconhecimento das AVD comuns, igual a 86,50% para o reconhecimento de ambientes, e igual a 100% para o reconhecimento de atividades sem movimento, reportando uma exatidão global entre 85,89% e 92,00%. A última etapa desta Tese foi a implementação do método nos dispositivos móveis, verificando que o método FNN requer um alto poder de processamento para o reconhecimento de ambientes e os resultados reportados com estes dispositivos são inferiores aos resultados reportados com a máquina com alta capacidade de processamento utilizada no desenvolvimento do método. Assim, o método DNN foi igualmente implementado para o reconhecimento dos ambientes com os dispositivos móveis. 
Finally, the results obtained with mobile devices report an accuracy between 86.39% and 89.15% for the recognition of common ADLs, 45.68% for the recognition of environments, and 100% for the recognition of activities without motion, yielding an overall accuracy between 58.02% and 89.15%. Compared with the results reported in the literature, the developed method shows a modest improvement, but this Thesis identifies more ADLs than the other studies available in the literature. The improvement in ADL recognition based on the average accuracy is 2.93%, but the maximum number of ADLs and environments recognized by the studies available in the literature is 13, whereas the number of ADLs and environments recognized by the implemented method is 16. Thus, the developed method achieves a 2.93% improvement in recognition accuracy over a larger number of ADLs and environments. As future work, the results reported in this Thesis can be considered a starting point for the development of a personal digital assistant, but the number of ADLs and environments recognized by the method should be increased, the experiments should be repeated with different types of mobile devices (i.e., smartphones and smartwatches), and imputation methods and other data classification methods should be explored in order to increase the reliability of the method for the recognition of ADLs and environments
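The statistical features listed above can be sketched in Python. This is a minimal, illustrative implementation assuming a one-dimensional magnitude signal from a motion or magnetic sensor; the function names are hypothetical, and the full pipeline (FFT, MFCC, GPS distance) is not reproduced here.

```python
from statistics import mean, stdev, pvariance, median

def extract_features(signal):
    # Basic statistics over one sensor's magnitude signal,
    # as described for the magnetic and motion sensors.
    return {
        "mean": mean(signal),
        "std": stdev(signal),
        "variance": pvariance(signal),
        "max": max(signal),
        "min": min(signal),
        "median": median(signal),
    }

def peak_features(signal):
    # Statistics of the local maxima, plus the five largest gaps
    # (in samples) between consecutive peaks.
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] > signal[i + 1]]
    values = [signal[i] for i in peaks]
    gaps = sorted((b - a for a, b in zip(peaks, peaks[1:])), reverse=True)[:5]
    return {
        "peak_mean": mean(values),
        "peak_std": stdev(values),
        "peak_median": median(values),
        "top5_gaps": gaps,
    }
```

Feature vectors built this way, one per sensing window, would then be grouped into datasets for the ANN classifiers.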

    Classification and Decision-Theoretic Framework for Detecting and Reporting Unseen Falls

    Get PDF
    Detecting falls is critical for an activity recognition system to ensure the well-being of an individual. However, falls occur rarely and infrequently, so sufficient data for them may not be available when training classifiers. Building a fall detection system in the absence of fall data is very challenging and can severely undermine the generalization capabilities of an activity recognition system. In this thesis, we present ideas from both classification and decision-theory perspectives to handle scenarios where training data for falls is not available. In traditional decision-theoretic approaches, the utilities (or, conversely, costs) of reporting or not reporting a fall or a non-fall are either treated equally or deduced from the datasets, both of which are flawed; in practice, these costs are difficult to compute or only available from domain experts. Therefore, in a typical fall detection system, we have neither a good model for falls nor an accurate estimate of utilities. In this thesis, we make contributions to handle both of these situations. In recent years, Hidden Markov Models (HMMs) have been used to model the temporal dynamics of human activities. HMMs are generally built for normal activities, and a threshold based on the log-likelihood of the training data is used to identify unseen falls. We show that this formulation for identifying unseen fall activities is ill-posed. We present a new approach for the identification of falls using wearable devices in the absence of their training data but with plentiful data for normal Activities of Daily Living (ADL). We propose three 'X-Factor' Hidden Markov Model (XHMM) approaches, which are similar to traditional HMMs but have "inflated" output covariances (observation models).
To estimate the inflated covariances, we propose a novel cross-validation method that removes 'outliers' or deviant sequences from the ADL data; these serve as proxies for the unseen falls and allow learning the XHMMs using only normal activities. We tested the proposed XHMM approaches on three activity recognition datasets and show high detection rates for unseen falls. We also show that supervised classification methods perform poorly when very limited fall data is available during the training phase. We present a novel decision-theoretic approach to fall detection (dtFall) that tackles the core problem where both the model for falls and information about the associated costs/utilities are unavailable. We theoretically show that the expected regret will always be positive when using dtFall instead of a maximum-likelihood classifier. We present a new method to parameterize unseen falls so that training situations with no fall data can be handled. We also identify problems with theoretical thresholding for identifying falls under decision-theoretic modelling when training data for falls is absent, and present an empirical thresholding technique to handle imperfect models for falls and non-falls. We also develop a new cost model based on the severity of falls to provide an operational range of utilities. We present results on three activity recognition datasets and show how they may generalize to the difficult problem of fall detection in the real world. Under the condition that falls occur sporadically and rarely in the test set, the results show that (a) knowing the difference in cost between a reported fall and a false alarm is useful, (b) this becomes more significant as the cost of a false alarm grows, and (c) the difference in cost between a reported and a non-reported fall is not that useful.
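The cost-sensitive decision described above can be sketched as a simple expected-cost rule. This is an illustrative reduction, not the dtFall algorithm itself: it assumes a posterior fall probability is already available from some classifier and that correct decisions carry zero cost.

```python
def report_fall(p_fall, cost_missed_fall, cost_false_alarm):
    """Report a fall when the expected cost of staying silent
    exceeds the expected cost of raising an alarm.

    p_fall           -- posterior probability of a fall (hypothetical input)
    cost_missed_fall -- cost of not reporting an actual fall
    cost_false_alarm -- cost of reporting when no fall occurred
    """
    expected_cost_silent = p_fall * cost_missed_fall
    expected_cost_alarm = (1.0 - p_fall) * cost_false_alarm
    return expected_cost_alarm < expected_cost_silent
```

Rearranging the inequality, the rule reports a fall whenever p_fall exceeds cost_false_alarm / (cost_false_alarm + cost_missed_fall), which illustrates why the cost gap between a reported fall and a false alarm matters: a missed fall that is far more costly than a false alarm pushes the reporting threshold well below 0.5.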

    Human Activity Recognition and Fall Detection Using Unobtrusive Technologies

    Full text link
    As the population ages, health issues like injurious falls demand more attention. Wearable devices can be used to detect falls; however, despite their commercial success, most wearable devices are obtrusive, and patients generally do not like them or may forget to wear them. In this thesis, a monitoring system consisting of two 24×32 thermal array sensors and a millimetre-wave (mmWave) radar sensor was developed to unobtrusively detect locations and recognise human activities such as sitting, standing, walking, lying, and falling. Data were collected by observing healthy young volunteers simulating ten different scenarios. The optimal installation position of the sensors was initially unknown, so the sensors were mounted on a side wall, in a corner, and on the ceiling of the experimental room to allow performance comparison between these placements. Every thermal frame was converted into an image, and either a set of features was manually extracted or convolutional neural networks (CNNs) were used to extract features automatically. Applying a CNN model to the infrared stereo dataset to recognise five activities (falling plus lying on the floor, lying in bed, sitting on a chair, sitting in bed, standing plus walking), the overall average accuracy and F1-score were 97.6% and 0.935, respectively. The scores for distinguishing falling plus lying on the floor from the remaining activities were 97.9% and 0.945, respectively. When using radar technology, the generated point clouds were converted into an occupancy grid, and either a CNN model was used to extract features automatically or a set of features was manually extracted. Applying several classifiers to the manually extracted features to distinguish falling plus lying on the floor from the remaining activities, the Random Forest (RF) classifier achieved the best results in the overhead position (an accuracy of 92.2%, a recall of 0.881, a precision of 0.805, and an F1-score of 0.841).
Additionally, the CNN model achieved the best results (an accuracy of 92.3%, a recall of 0.891, a precision of 0.801, and an F1-score of 0.844) in the overhead position, slightly outperforming the RF method. Data fusion was performed at the feature level, combining the infrared and radar technologies; however, the benefit was not significant. The proposed system was efficient in cost, processing time, and space. With further development, the system can be used as a real-time fall detection system in aged-care facilities or in the homes of older people
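The accuracy, precision, recall, and F1-scores reported above follow the standard binary definitions. As a quick reference, they can be computed as follows (an illustrative helper, not code from the thesis; the positive label "fall" is assumed):

```python
def binary_metrics(y_true, y_pred, positive="fall"):
    # Count the four confusion-matrix cells for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}
```

Note that with rare falls, accuracy alone is misleading (predicting "no fall" everywhere scores high), which is why recall and F1 are reported alongside it.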

    Implantation d’un système de vidéosurveillance intelligente pour détecter les chutes en milieu de vie

    Full text link
    Introduction. Aging is associated with an increased risk of falls, which threatens Aging in Place.
The numerous and serious consequences of falls on an older adult's health and independence are reduced with a quick intervention. Yet the informal caregivers who often intervene in case of a fall are not numerous enough and are often worn out by the burden of caring for the older adult (Ducharme, 2006; Wolff et al., 2017; World Health Organization, 2015). The development of alternatives to detect falls and raise an alert becomes essential to facilitate safe Aging in Place and to maintain quality of life (van Hoof, Kort, Rutten, & Duijnstee, 2011). Many fall detection systems have been developed; however, they have limits (e.g. the recording of personal data) that the intelligent videomonitoring system (IVS) attempts to address. The IVS is composed of one camera linked to a computer and to the Internet. Based on computerized analysis of the images, the IVS automatically detects falls and sends an alert to a chosen recipient (e.g. the informal caregiver) on their smartphone, computer, or tablet. The IVS preserves privacy through its closed-circuit functioning: without a fall, the images are destroyed; in case of a fall, an image of the fall can be sent to the recipient, and this image can be blurred at the request of the older adult. The 30 seconds before the fall can be recorded to document its causes, if the older adult authorizes it. Previous studies show that the IVS has the potential to answer users' needs (Lapierre et al., 2016, 2015; Londei et al., 2009; Rougier, St-Arnaud, Rousseau, & Meunier, 2011). However, it is important to validate its technology and explore users' perceptions in ecological conditions (at home with older adults at risk of falls) (Atoyebi, Stewart, & Sampson, 2015). Purpose.
Based on the Model of Competence explaining person-environment interactions (Rousseau, 2017), the study aims to explore the feasibility of implementing the IVS to detect falls at home in order to improve the older adult's quality of life and decrease the caregiver's burden. Methodology. The thesis follows a development research design (Contandriopoulos, Champagne, Potvin, Denis, & Boyle, 2005) in four steps. Step 1 consisted of two scoping reviews (Daudt, Van Mossel, & Scott, 2013), on fall detection technology and on wandering management technology respectively. Several databases were searched (e.g. CINAHL, Medline, Embase). Each step of study selection, data extraction, and analysis was performed independently by two co-authors; results were compared and disagreements were resolved by consensus or by a third party. Extracted data were analysed descriptively (Fortin & Gagnon, 2015). Step 2 was a multiple case study (Yin, 2014) with six older adults at risk of falls living alone, on the home implementation of a previous version of the IVS, the programmable videomonitoring system. The programmable videomonitoring system was installed for seven nights in participants' homes to observe their movements when they got up at night to go to the bathroom. Semi-structured interviews were conducted before and after the experiment. Data were analysed qualitatively (Miles, Huberman, & Saldana, 2014). Step 3 was a proof of concept in two phases: 1) a simulation study in an apartment-laboratory (Contandriopoulos, Champagne, Potvin, Denis, & Boyle, 2005) and 2) a pre-test at home with young adults. Phase 1 involved simulating daily-living scenarios and fall scenarios to estimate the sensitivity, specificity, error rate, and accuracy of the IVS. The pre-test consisted of implementing the IVS at home for 28 days to anticipate the technological difficulties related to extended implementation.
For both phases, a logbook was completed to document the functioning of the IVS, and the data were analysed descriptively. Step 4 was a multiple case study (Yin, 2014) with three older adult/caregiver dyads. The included older adults had a high risk of falls and lived alone. The IVS was implemented for a two-month period with the informal caregiver as the alert recipient. A semi-structured interview was conducted before, at mid-term, and after the experiment. Data were analysed qualitatively (Miles, Huberman, & Saldana, 2014). Results. The results encompass the adaptation of the IVS to explore the feasibility of its implementation at home to detect serious falls. Step 1 highlighted gaps in the literature, some of which were filled by the thesis project (e.g. the lack of studies exploring the implementation of ambient systems in a variety of homes). This step also enabled us to identify ways to improve the IVS and its implementation process. Step 2 highlighted factors facilitating or hindering the implementation of camera systems at home. Step 3 enabled us to validate the technology in an environment similar to an older adult's home and to solve technical difficulties related to prolonged implementation. Finally, step 4 enabled us to explore the feasibility of implementing the IVS in older adults' homes for a two-month period. Discussion. This development research enabled us to adapt the IVS for implementation through four research steps (scoping reviews, a proof of concept, multiple case studies), and then to show the feasibility of its implementation. The results led to the identification of factors influencing the IVS at home and enabled us to make recommendations in this regard. This thesis is original in three aspects: 1) the involvement of a multidisciplinary team, 2) a user-centred design, 3) the implementation of the technology at home. Despite the remaining challenges regarding implementation (e.g. the performance discrepancy between the home and the apartment-laboratory), this study encourages the further development of the IVS. Conclusion. This thesis aimed to address the problem of falls at home through the implementation of the IVS to automatically alert the informal caregiver. The results of this development research highlight that the IVS may be a promising way to detect serious falls, alert the caregiver, and document the causes of falls. Future research should involve quantitative designs, more specifically with more varied profiles of informal caregivers and a longer implementation period, to demonstrate the outcomes of the IVS. The IVS could then become accessible to older adults to support Aging in Place and relieve the caregiver's burden
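The closed-circuit behaviour described above (frames are continuously discarded, and only the pre-fall window is released when a fall is detected) can be sketched with a fixed-length buffer. This is an illustrative sketch assuming in-memory frames and hypothetical class and method names; the IVS's actual image analysis and alert transmission are not modelled.

```python
from collections import deque

class ClosedCircuitBuffer:
    """Keep only the most recent frames (e.g. the last 30 seconds);
    nothing is persisted unless a fall is detected."""

    def __init__(self, fps=10, seconds=30):
        # A bounded deque silently drops the oldest frame when full,
        # so no footage older than the window ever survives.
        self.frames = deque(maxlen=fps * seconds)

    def add_frame(self, frame):
        self.frames.append(frame)

    def on_fall_detected(self):
        # Only now is the buffered pre-fall clip released (e.g. to the
        # caregiver's device); the buffer is then cleared.
        clip = list(self.frames)
        self.frames.clear()
        return clip
```

The bounded buffer is what makes the privacy guarantee structural rather than procedural: absent a fall event, old frames are overwritten by design rather than deleted by a separate cleanup step.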