209 research outputs found

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
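    The sensor-fusion idea mentioned in the abstract above can be illustrated with a minimal score-level sketch; the weights, score ranges, and threshold below are assumptions for illustration, not values from any system reviewed in the paper:

```python
# Hypothetical score-level fusion of two fall-detection channels.
# Weights and threshold are illustrative assumptions only.
def fuse_fall_scores(radar_score, rgbd_score,
                     w_radar=0.5, w_rgbd=0.5, threshold=0.6):
    """Combine per-frame fall-confidence scores (each in [0, 1]) from
    a radar classifier and an RGB-D classifier by a weighted average,
    and flag a fall when the fused score reaches the threshold."""
    fused = w_radar * radar_score + w_rgbd * rgbd_score
    return fused, fused >= threshold
```

    More elaborate fusion schemes (feature-level or decision-level with learned weights) follow the same pattern of mapping heterogeneous per-sensor evidence onto one decision.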

    Sistema para análise automatizada de movimento durante a marcha usando uma câmara RGB-D (System for automated movement analysis during gait using an RGB-D camera)

    Nowadays it is still common in clinical practice to assess the gait (or way of walking) of a given subject through visual observation and the use of a rating scale, which is a subjective approach. However, sensors including RGB-D cameras, such as the Microsoft Kinect, can be used to obtain quantitative information that allows gait analysis to be performed in a more objective way. The quantitative gait analysis results can be very useful, for example, to support the clinical assessment of patients with diseases that can affect their gait, such as Parkinson's disease. The main motivation of this thesis was thus to support gait assessment by enabling quantitative gait analysis to be carried out in an automated way. This objective was achieved by using 3-D data, provided by a single RGB-D camera, to automatically select the data corresponding to walking and then detect the gait cycles performed by the subject while walking. For each detected gait cycle, we obtain several gait parameters, which are used together with anthropometric measures to automatically identify the subject being assessed. The automated gait data selection relies on machine learning techniques to recognize three different activities (walking, standing, and marching), as well as two different positions of the subject in relation to the camera (facing the camera and facing away from it). For gait cycle detection, we developed an algorithm that estimates the instants corresponding to given gait events. The subject identification based on gait is enabled by a solution that was also developed by relying on machine learning. The developed solutions were integrated into a system for automated gait analysis, which we found to be a viable alternative to gold standard systems for obtaining several spatiotemporal and some kinematic gait parameters.
Furthermore, the system is suitable for use in clinical environments, as well as ambulatory scenarios, since it relies on a single markerless RGB-D camera that is less expensive, more portable, less intrusive, and easier to set up than the gold standard systems (multiple cameras and several markers attached to the subject's body).
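    The gait-cycle detection step described above estimates the instants of given gait events from 3-D joint data; as a minimal sketch (the joint choice and event rule are illustrative assumptions, not the algorithm developed in the thesis), heel strikes can be approximated as the frames where an ankle is locally furthest ahead of the pelvis:

```python
def detect_heel_strikes(ankle_z, pelvis_z):
    """Given per-frame forward (z) coordinates of one ankle and the
    pelvis, return the frame indices where the ankle is at a local
    maximum of displacement ahead of the pelvis -- a common proxy
    for the heel-strike gait event."""
    rel = [a - p for a, p in zip(ankle_z, pelvis_z)]
    events = []
    for i in range(1, len(rel) - 1):
        if rel[i] > rel[i - 1] and rel[i] >= rel[i + 1]:
            events.append(i)
    return events
```

    Consecutive events for the same foot then delimit gait cycles, from which spatiotemporal parameters such as stride time can be derived.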

    Usage of KINECT to detect walking problems of elder people

    The dissertation addresses the problem of analyzing and detecting lack of mobility in the elderly population, enabling rapid intervention by specialists who can help improve their movement quality. To achieve a solution to this problem, we used the Kinect motion- and gesture-detection device, developed by Microsoft, to record all the movements performed by a set of randomly selected people, in order to obtain data for the subsequent analysis and classification of each person's motor skills. Additionally, it was necessary to create an application that extracts the relevant information from the videos generated by the Kinect and processes it so that the person's movements can be classified. The application is thus divided into three main steps: obtaining the XYZ coordinates of a relevant set of bones of the person's skeleton in all recorded frames, together with the total duration of the movement itself; processing and standardizing the extracted data so it can later be used by the classifier; and creating a classifier based on neural networks, which uses the standardized data to classify the person's movement according to its quality (presence or absence of a mobility deficit). The dissertation describes every step of the development process, from the design of the proposed solution to the code of the developed classes.
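    The standardization step in the pipeline above can be sketched minimally; the dissertation does not specify its scaling method, so simple per-series min-max scaling is assumed here for illustration:

```python
def min_max_normalize(values):
    """Scale a sequence of coordinate values into [0, 1], one simple
    way to standardize extracted skeleton data before feeding it to
    a neural-network classifier. Returns zeros for a constant series
    to avoid division by zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

    Applied per coordinate axis and per joint, this puts recordings of different subjects and distances on a comparable scale.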

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.

    Kinematic assessment for stroke patients in a stroke game and a daily activity recognition and assessment system

    Stroke is the leading cause of serious, long-term disabilities, of which deficits in motor abilities of the arms or legs are the most common. Those who suffer a stroke can recover through effective rehabilitation that is carefully personalized. To achieve the best personalization, it is essential for clinicians to monitor patients' health status and recovery progress accurately and consistently. Traditionally, rehabilitation involves patients performing exercises in clinics, where clinicians oversee the procedure and evaluate patients' recovery progress. Following the in-clinic visits, additional home practices are tailored and assigned to patients. The in-clinic visits are important for evaluating recovery progress, and the information collected can help clinicians customize home practices for stroke patients. However, as the number of in-clinic sessions is limited by insurance policies, the recovery information collected in-clinic is often insufficient. Meanwhile, home practice programs report low adherence rates based on historic data; given that clinicians rely on patients to self-report adherence, the actual adherence rate could be even lower. Besides being limited, the feedback clinicians receive is also measured subjectively. In practice, classic clinical scales are mostly used for assessing the quality of movements and the recovery status of patients, but these scales are evaluated subjectively, with only moderate inter-rater and intra-rater reliabilities. Taken together, clinicians lack a method to get sufficient and accurate feedback from patients, which limits the extent to which they can personalize treatment plans. This work aims to solve this problem. To help clinicians obtain abundant health information regarding patients' recovery in an objective approach, I've developed a novel kinematic assessment toolchain that consists of two parts.
The first part is a tool to evaluate stroke patients' motions collected in a rehabilitation game setting. This kinematic assessment tool utilizes body-tracking in a rehabilitation game. Specifically, a set of upper body assessment measures were proposed and calculated for assessing the movements using skeletal joint data, and statistical analysis was applied to evaluate the quality of upper body motions using the assessment outcomes. Second, to classify and quantify home activities for stroke patients objectively and accurately, I've developed DARAS, a daily activity recognition and assessment system that evaluates daily motions in a home setting. DARAS consists of three main components: a daily action logger, an action recognition component, and an assessment component. The logger is implemented with a Foresite system to record daily activities using depth and skeletal joint data. Daily activity data in a realistic environment were collected from sixteen post-stroke participants; the collection period for each participant lasted three months. An ensemble network for activity recognition and temporal localization was developed to detect and segment the clinically relevant actions from the recorded data. The ensemble network fuses the prediction outputs from a customized 3D Convolutional-De-Convolutional network, a customized Region Convolutional 3D network, and a proposed Region Hierarchical Co-occurrence network that learns rich spatial-temporal features from either depth data or joint data. The per-frame precision and the per-action precision were 0.819 and 0.838, respectively, on the validation set. For the recognized actions, kinematic assessments were performed using the skeletal joint data, as well as longitudinal assessments. The results showed that, compared with non-stroke participants, stroke participants had slower hand movements, were less active, and tended to perform fewer hand manipulation actions.
The assessment outcomes from the proposed toolchain help clinicians to provide more personalized rehabilitation plans that benefit patients.
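    The kinematic assessments mentioned above, which found slower hand movements in stroke participants, rely on quantities computed from skeletal joint trajectories; a minimal sketch of one such measure (mean joint speed, with an assumed frame rate) might look like:

```python
import math

def mean_speed(positions, fps=30.0):
    """Mean speed (units per second) of a tracked joint, given a list
    of per-frame (x, y, z) positions sampled at `fps` frames/s.
    Sums the Euclidean distance between consecutive frames and
    divides by the elapsed time."""
    if len(positions) < 2:
        return 0.0
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        total += math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)
    duration = (len(positions) - 1) / fps
    return total / duration
```

    Comparing such measures across sessions gives the longitudinal view of recovery that the toolchain is built to provide.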

    Automatic ankle angle detection by integrated RGB and depth camera system

    Depth cameras are developing rapidly. One of their main virtues is that, based on their data and by applying machine learning algorithms and techniques, it is possible to perform body tracking and make an accurate three-dimensional representation of body movement. Specifically, this paper uses the Kinect v2 device, which incorporates a random forest algorithm for detecting 25 joints in the human body. However, although Kinect v2 is a powerful tool, there are circumstances in which the device's design does not allow the extraction of such data, or the accuracy of the data is low, as is usually the case with foot position. We propose a method of acquiring these data in circumstances where the Kinect v2 device does not recognize the body because only the lower limbs are visible, improving the precision of the ankle angle by employing projection lines. Using a region-based convolutional neural network (Mask R-CNN) for body recognition, raw data extraction for automatic ankle angle measurement has been achieved. All angles were evaluated against inertial measurement units (IMUs) as the gold standard. For the six tests carried out at fixed distances between 0.5 and 4 m from the Kinect, we obtained (mean ± SD) a Pearson's coefficient r = 0.89 ± 0.04, a Spearman's coefficient ρ = 0.83 ± 0.09, a root mean square error RMSE = 10.7 ± 2.6 deg, and a mean absolute error MAE = 7.5 ± 1.8 deg. For the walking test, or variable-distance test, we obtained a Pearson's coefficient r = 0.74, a Spearman's coefficient ρ = 0.72, an RMSE = 6.4 deg, and an MAE = 4.7 deg. This work has been supported by the Spanish Ministry of Science, Innovation and Universities and the European Regional Development Fund (ERDF) through projects RTC-2017-6321-1 AEI/FEDER, UE and PID2019-107270RB-C21 AEI/FEDER, UE, and FEDER funds.
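    The ankle angle measured in the paper above can, in its simplest form, be computed from three tracked points via the standard dot-product formula; the point names below are assumptions for illustration, and this sketch omits the paper's projection-line refinement:

```python
import math

def ankle_angle_deg(knee, ankle, toe):
    """Angle (degrees) at the ankle between the shank vector
    (ankle -> knee) and the foot vector (ankle -> toe), each point
    given as a 2-D or 3-D coordinate tuple, using
    cos(theta) = (v1 . v2) / (|v1| |v2|)."""
    v1 = [k - a for k, a in zip(knee, ankle)]
    v2 = [t - a for t, a in zip(toe, ankle)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))
```

    Comparing such per-frame angles against an IMU reference is what yields the correlation and error figures (r, ρ, RMSE, MAE) reported in the abstract.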

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arise for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that will enhance operators' ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
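    Gaze selection of instruments, as performed by the surgeon in the robotic scrub nurse study above, is commonly implemented with a dwell-time trigger; the sketch below is a generic illustration under that assumption, not the thesis's actual interface code:

```python
def dwell_select(gaze_targets, dwell_frames=30):
    """Return the first target the gaze rests on for `dwell_frames`
    consecutive frames, or None if no selection occurs.
    `gaze_targets` is a per-frame list naming the instrument or
    screen region currently under the gaze point (None = no target)."""
    run_target, run_len = None, 0
    for t in gaze_targets:
        if t is not None and t == run_target:
            run_len += 1
        else:
            run_target, run_len = t, (1 if t is not None else 0)
        if run_target is not None and run_len >= dwell_frames:
            return run_target
    return None
```

    The dwell threshold trades selection speed against accidental activations (the "Midas touch" problem), which is the central design tension in gaze-based interfaces.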