
    A spatiotemporal deep learning approach for automatic pathological gait classification

    Human motion analysis provides useful information for the diagnosis and recovery assessment of people suffering from pathologies, such as those affecting the way of walking, i.e., gait. With recent developments in deep learning, state-of-the-art performance can now be achieved using a single 2D-RGB-camera-based gait analysis system, offering an objective assessment of gait-related pathologies. Such systems provide a valuable complement/alternative to the current standard practice of subjective assessment. Most 2D-RGB-camera-based gait analysis approaches rely on compact gait representations, such as the gait energy image, which summarize the characteristics of a walking sequence into one single image. However, such compact representations do not fully capture the temporal information and dependencies between successive gait movements. This limitation is addressed by proposing a spatiotemporal deep learning approach that uses a selection of key frames to represent a gait cycle. Convolutional and recurrent deep neural networks were combined, processing each gait cycle as a collection of silhouette key frames, allowing the system to learn temporal patterns among the spatial features extracted at individual time instants. Trained with gait sequences from the GAIT-IT dataset, the proposed system is able to improve gait pathology classification accuracy, outperforming state-of-the-art solutions and achieving improved generalization on cross-dataset tests.
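
    To make the described architecture concrete, the sketch below shows one possible CNN + LSTM arrangement in PyTorch that processes a gait cycle as a fixed number of silhouette key frames. The layer sizes, number of key frames and number of classes are illustrative assumptions, not the configuration used by the authors.

```python
# Illustrative sketch only (not the authors' code): a per-frame CNN feeds an
# LSTM that models temporal dependencies across silhouette key frames.
import torch
import torch.nn as nn

N_FRAMES, N_CLASSES = 8, 5  # assumed key frames per gait cycle and gait types

class SpatioTemporalGaitNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-frame spatial feature extractor (weights shared across frames).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 32 * 4 * 4 = 512 features
        )
        # Recurrent layer learns temporal patterns among the per-frame features.
        self.rnn = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, N_CLASSES)

    def forward(self, x):                    # x: (batch, frames, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))    # (batch*frames, 512)
        feats = feats.view(b, t, -1)         # (batch, frames, 512)
        _, (h, _) = self.rnn(feats)          # final hidden state summarizes the cycle
        return self.fc(h[-1])                # (batch, N_CLASSES) class logits

# Dummy forward pass with two gait cycles of 64x64 silhouette key frames.
logits = SpatioTemporalGaitNet()(torch.randn(2, N_FRAMES, 1, 64, 64))
```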

    A deep learning solution for real-time human motion decoding in smart walkers

    Master's dissertation in Biomedical Engineering (specialization in Medical Electronics). The treatment of gait impairments has increasingly relied on rehabilitation therapies which benefit from the use of smart walkers. These walkers still lack advanced and seamless Human-Robot Interaction that intuitively understands the intentions of human motion, supporting the user's recovery and autonomy while reducing the physician's effort. This dissertation proposes the development of a deep learning solution to tackle the problem of human motion decoding in smart walkers, using only lower-body vision information from a camera stream mounted on the WALKit Smart Walker, a smart walker prototype for rehabilitation purposes. Different deep learning frameworks were designed for early human motion recognition and detection. A custom acquisition method, including an automatic driving algorithm for the smart walker and a labelling procedure, was also designed to enable training and evaluation of the proposed frameworks. Facing a 4-class (stop, walk, turn right/left) classification problem, a deep learning convolutional model with an attention mechanism achieved the best results: an offline F1-score of 99.61%, an online calibrated instantaneous precision higher than 97%, and a human-centred focus slightly higher than 30%. Promising results were attained for early human motion detection, with enhancements in the focus of the proposed architectures. However, further improvements are still needed to achieve a more reliable solution for integration in a smart walker's control strategy based on the user's motion intentions.
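
    As an illustration of the kind of model described above, the sketch below combines a small convolutional backbone with a simple spatial-attention pooling layer for the 4-class problem (stop, walk, turn right/left). It is not the dissertation's actual architecture; the layer sizes and input resolution are assumptions.

```python
# Illustrative sketch only: CNN backbone + spatial attention over lower-body
# camera frames, returning both class logits and the attention map (which is
# what a "focus" metric could be computed from).
import torch
import torch.nn as nn

class AttentionMotionNet(nn.Module):
    def __init__(self, n_classes=4):         # stop, walk, turn right, turn left
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 1x1 convolution produces one attention score per feature-map location.
        self.attn = nn.Conv2d(32, 1, 1)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                                    # x: (batch, 3, H, W)
        f = self.backbone(x)                                 # (batch, 32, h, w)
        a = torch.softmax(self.attn(f).flatten(2), dim=-1)   # (batch, 1, h*w)
        pooled = (f.flatten(2) * a).sum(-1)                  # attention-weighted pooling
        return self.fc(pooled), a                            # logits and attention map

logits, attn_map = AttentionMotionNet()(torch.randn(1, 3, 120, 160))
```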

    Remote gait type classification system using markerless 2D video

    Several pathologies can alter the way people walk, i.e., their gait. Gait analysis can be used to detect such alterations and, therefore, help diagnose certain pathologies or assess people's health and recovery. Simple vision-based systems have considerable potential in this area, as they allow the capture of gait in unconstrained environments, such as at home or in a clinic, while the required computations can be done remotely. State-of-the-art vision-based systems for gait analysis use deep learning strategies, thus requiring a large amount of data for training. However, to the best of our knowledge, the largest publicly available pathological gait dataset contains only 10 subjects, simulating 5 types of gait. This paper presents a new dataset, GAIT-IT, captured from 21 subjects simulating 5 types of gait, at 2 severity levels. The dataset is recorded in a professional studio, making the sequences free of background camouflage, variations in illumination and other visual artifacts. The dataset is used to train a novel automatic gait analysis system. Compared to the state-of-the-art, the proposed system achieves a drastic reduction in the number of trainable parameters, memory requirements and execution times, while the classification accuracy is on par with the state-of-the-art. Recognizing the importance of remote healthcare, the proposed automatic gait analysis system is integrated with a prototype web application. This prototype is presently hosted on a private network, and after further tests and development it will allow people to upload a video of themselves walking and execute a web service that classifies their gait. The web application has a user-friendly interface usable by healthcare professionals or by laypersons. The application also makes an association between the identified type of gait and potential gait pathologies that exhibit the identified characteristics.
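
    The web service described above could be exposed along the lines of the sketch below. The paper does not specify its web framework or API, so Flask, the /classify endpoint and the classify_gait_video placeholder are all assumptions made only for illustration.

```python
# Hypothetical sketch of a gait-classification web service: a video upload
# endpoint that runs a placeholder classifier and returns the predicted gait type.
from flask import Flask, jsonify, request

app = Flask(__name__)

GAIT_TYPES = ["normal", "diplegic", "hemiplegic", "neuropathic", "parkinsonian"]

def classify_gait_video(path):
    """Placeholder: silhouette extraction and the trained classifier would go here."""
    return GAIT_TYPES[0]

@app.route("/classify", methods=["POST"])
def classify():
    video = request.files.get("video")           # uploaded walking video
    if video is None:
        return jsonify(error="no video uploaded"), 400
    tmp_path = "/tmp/upload.mp4"
    video.save(tmp_path)
    return jsonify(gait_type=classify_gait_video(tmp_path))

if __name__ == "__main__":
    app.run()
```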

    Markerless Human Motion Analysis

    Measuring and understanding human motion is crucial in several domains, ranging from neuroscience, to rehabilitation and sports biomechanics. Quantitative information about human motion is fundamental to study how our Central Nervous System controls and organizes movements, and to functionally evaluate motor performance and deficits. In the last decades, the research in this field has made considerable progress. State-of-the-art technologies that provide useful and accurate quantitative measures rely on marker-based systems. Unfortunately, markers are intrusive and their number and location must be determined a priori. Also, marker-based systems require expensive laboratory settings with several infrared cameras. This could modify the naturalness of a subject's movements and induce discomfort. Last but not least, they are computationally expensive in time and space. Recent advances in markerless pose estimation based on computer vision and deep neural networks are opening the possibility of adopting efficient video-based methods for extracting movement information from RGB video data. In this context, this thesis presents original contributions to the following objectives: (i) the implementation of a video-based markerless pipeline to quantitatively characterize human motion; (ii) the assessment of its accuracy when compared with a gold standard marker-based system; (iii) the application of the pipeline to different domains in order to verify its versatility, with a special focus on the characterization of the motion of preterm infants and on gait analysis. With the proposed approach we highlight that, starting only from RGB videos and leveraging computer vision and machine learning techniques, it is possible to extract reliable information characterizing human motion, comparable to that obtained with gold standard marker-based systems
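
    As an illustration of such a video-based markerless pipeline, the sketch below extracts per-frame 2D joint coordinates from an ordinary RGB video with a pretrained pose-estimation network. MediaPipe Pose and the file name walking.mp4 are assumptions used only for the example, not necessarily the tools adopted in the thesis.

```python
# Illustrative sketch: markerless extraction of 2D joint trajectories from RGB video.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("walking.mp4")          # hypothetical input video

trajectories = []                               # per-frame lists of (x, y) keypoints
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:                   # landmarks in normalized image coordinates
        trajectories.append([(lm.x, lm.y) for lm in result.pose_landmarks.landmark])

cap.release()
pose.close()
# `trajectories` now holds 33 keypoints per detected frame, from which joint
# angles, gait events and other kinematic features can be computed.
```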

    Using transfer learning for classification of gait pathologies

    Different diseases can affect an individual's gait in different ways and, therefore, gait analysis can provide important insights into an individual's health and well-being. Currently, most systems that perform gait analysis using 2D video are limited to simple binary classification of gait as being either normal or impaired. While some systems do perform gait classification across different pathologies, the reported results still have a considerable margin for improvement. This paper presents a novel system that performs classification of gait across different pathologies, with considerably improved results. The system extracts the walking individual's silhouettes from a 2D video sequence and combines them into a representation known as the gait energy image (GEI), which provides robustness against silhouette segmentation errors. In this work, instead of using a set of handcrafted gait features, feature extraction is done using the VGG-19 convolutional neural network. The network is fine-tuned to automatically extract the features that best represent gait pathologies, using transfer learning. The use of transfer learning improves the classification accuracy while avoiding the need for a very large training set, as the network is pre-trained for generic image description, which also contributes to better generalization when tested across different datasets. The proposed system performs the final classification using linear discriminant analysis (LDA). Obtained results show that the proposed system outperforms the state-of-the-art, achieving a classification accuracy of 95% on a dataset containing gait sequences affected by diplegia, hemiplegia, neuropathy and Parkinson's disease, along with normal gait sequences.
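
    A minimal version of this feature-extraction-plus-LDA pipeline is sketched below: an ImageNet-pretrained VGG-19 is used as a fixed descriptor for gait energy images, and an LDA classifier is fitted on the resulting features. The fine-tuning step reported in the paper is omitted, the GEI tensors shown are placeholder data, and torchvision >= 0.13 is assumed for the weights argument.

```python
# Illustrative sketch, not the paper's exact pipeline: VGG-19 features + LDA.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

vgg = models.vgg19(weights="IMAGENET1K_V1").eval()
vgg.classifier = vgg.classifier[:-1]            # drop the 1000-class layer -> 4096-D descriptors

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gei_features(gei_batch):
    """gei_batch: float tensor (N, 1, H, W) with values in [0, 1]."""
    x = preprocess(gei_batch.repeat(1, 3, 1, 1))   # replicate the gray GEI to 3 channels
    with torch.no_grad():
        return vgg(x).numpy()

# Placeholder training data: random GEIs with 4 examples of each of 5 gait classes.
train_geis = torch.rand(20, 1, 128, 88)
train_labels = np.repeat(np.arange(5), 4)

lda = LinearDiscriminantAnalysis()
lda.fit(gei_features(train_geis), train_labels)
pred = lda.predict(gei_features(torch.rand(2, 1, 128, 88)))
```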

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance, compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors discussed in this paper.
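
    As a toy illustration of the decision-level fusion idea mentioned above, the snippet below averages the fall probabilities produced by two hypothetical per-sensor classifiers (radar and RGB-D). The review discusses fusion at a conceptual level and does not prescribe this or any other specific scheme.

```python
# Illustrative sketch only: weighted late fusion of two hypothetical fall classifiers.
def fuse_fall_scores(p_radar: float, p_rgbd: float,
                     w_radar: float = 0.5, threshold: float = 0.5) -> bool:
    """Return True (raise a fall alarm) if the weighted mean of the radar and
    RGB-D fall probabilities exceeds the decision threshold."""
    fused = w_radar * p_radar + (1.0 - w_radar) * p_rgbd
    return fused >= threshold

print(fuse_fall_scores(p_radar=0.8, p_rgbd=0.4))  # True: fused score is 0.6
```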

    Estimation and validation of temporal gait features using a markerless 2D video system

    Background and Objective: Estimation of temporal gait features, such as stance time, swing time and gait cycle time, can be used for clinical evaluations of various patient groups having gait pathologies, such as Parkinson's disease, neuropathy, hemiplegia and diplegia. Most clinical laboratories employ an optoelectronic motion capture system to acquire such features. However, the operation of these systems requires specially trained operators, a controlled environment and attaching reflective markers to the patient's body. To allow the estimation of the same features in a daily life setting, this paper presents a novel vision-based system whose operation does not require the presence of skilled technicians or markers and uses a single 2D camera. Method: The proposed system takes as input a 2D video, computes the silhouettes of the walking person, and then estimates key biomedical gait indicators, such as the initial foot contact with the ground and the toe off instants, from which several other temporal gait features can be derived. Results: The proposed system is tested on two datasets: (i) a public gait dataset made available by CASIA, which contains 20 users, with 4 sequences per user; and (ii) a dataset acquired simultaneously by a marker-based optoelectronic motion capture system and a simple 2D video camera, containing 10 users, with 5 sequences per user. For the CASIA gait dataset A, the relevant temporal biomedical gait indicators were manually annotated, and the proposed automated video analysis system achieved an accuracy of 99% in their identification. It was able to obtain accurate estimations even on segmented silhouettes where state-of-the-art markerless 2D video-based systems fail. For the second dataset, the temporal features obtained by the proposed system achieved an average intra-class correlation coefficient of 0.86 when compared to the "gold standard" optoelectronic motion capture system. Conclusions: The proposed markerless 2D video-based system can be used to evaluate patients' gait without requiring complex laboratory settings and without the need for physical attachment of sensors/markers to the patients. The good accuracy of the results obtained suggests that the proposed system can be used as an alternative to the optoelectronic motion capture system in non-laboratory environments, which can enable more regular clinical evaluations.
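
    Once the initial foot contact and toe off frames have been identified, the temporal gait features follow from simple frame-index arithmetic, as in the sketch below. The frame indices and the 25 fps frame rate are illustrative values, not data from the paper.

```python
# Illustrative sketch: deriving temporal gait features from detected gait events.
import numpy as np

FPS = 25.0                                # assumed video frame rate
heel_strikes = np.array([12, 40, 68])     # initial-contact frames of the same foot
toe_offs = np.array([28, 56])             # toe-off frames between those contacts

gait_cycle_time = np.diff(heel_strikes) / FPS        # contact to next contact
stance_time = (toe_offs - heel_strikes[:-1]) / FPS   # contact to toe off
swing_time = (heel_strikes[1:] - toe_offs) / FPS     # toe off to next contact

print(f"cycle  {gait_cycle_time.mean():.2f} s")      # 1.12 s
print(f"stance {stance_time.mean():.2f} s")          # 0.64 s
print(f"swing  {swing_time.mean():.2f} s")           # 0.48 s
```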