
    Assessing gait impairments based on auto-encoded patterns of Mahalanobis distances from consecutive steps

    Proceedings of: 14th conference of the Association for the Advancement of Assistive Technology in Europe (AAATE 2017), Sheffield (UK), 12-15 September 2017. Insole pressure sensors capture the force distribution patterns during the stance phase while walking. By comparing patterns obtained from healthy individuals to those of patients suffering from different medical conditions, based on a given similarity measure, automatic impairment indexes can be computed to help in applications such as rehabilitation. This paper uses the data sensed from insole pressure sensors for a group of healthy controls to train an auto-encoder on patterns of stochastic distances over series of consecutive steps while walking at normal speed. Two experimental groups are compared to the healthy control group: a group of patients suffering from knee pain and a group of post-stroke survivors. The Mahalanobis distance is computed for every single step of each participant against the entire dataset sensed from healthy controls. The computed distances for consecutive steps are fed into the previously trained auto-encoder, and the average reconstruction error is used to assess how close the walking segment is to the model generated from healthy controls. The results show that automatic distortion indexes can be used to assess each participant against normal patterns computed from healthy controls. The stochastic distances observed for the group of stroke survivors are larger than those for the people with knee pain. The research leading to these results has received funding from the “HERMES-SMART DRIVER” project TIN2013-46801-C4-2-R (MINECO), funded by the Spanish Agencia Estatal de Investigación (AEI), and the “ANALYTICS USING SENSOR DATA FOR FLATCITY” project TIN2016-77158-C4-1-R (MINECO/ERDF, EU), funded by the Spanish Agencia Estatal de Investigación (AEI) and the European Regional Development Fund (ERDF).
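The pipeline described above (per-step Mahalanobis distances against the healthy dataset, windows of consecutive steps fed to an auto-encoder trained on healthy controls, average reconstruction error as an impairment index) can be sketched end to end. Everything below is illustrative: the step data are synthetic, the dimensions are invented, and a linear PCA reconstruction stands in for the trained auto-encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each row is the pressure pattern of one step
# (names and dimensions are illustrative, not the paper's actual data).
healthy_steps = rng.normal(0.0, 1.0, size=(500, 8))   # healthy controls
patient_steps = rng.normal(0.8, 1.5, size=(60, 8))    # an impaired walker

# 1) Mahalanobis distance of every single step to the healthy dataset.
mu = healthy_steps.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy_steps, rowvar=False))

def mahalanobis(steps):
    d = steps - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# 2) Group the distances of consecutive steps into fixed-length windows.
def windows(dist, size=5):
    n = len(dist) // size
    return dist[: n * size].reshape(n, size)

train = windows(mahalanobis(healthy_steps))

# 3) Linear "auto-encoder" via PCA as a stand-in for the trained network:
# project windows onto the top components and reconstruct.
centered = train - train.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:2]                                        # 2-D bottleneck

def impairment_index(w):
    """Average reconstruction error of distance windows."""
    c = w - train.mean(axis=0)
    recon = c @ basis.T @ basis
    return np.mean((c - recon) ** 2)

healthy_index = impairment_index(train)
patient_index = impairment_index(windows(mahalanobis(patient_steps)))
print(patient_index > healthy_index)
```

A walker whose step distances deviate from the healthy distribution yields windows the healthy-trained model reconstructs poorly, so the patient's index comes out larger.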

    Novelty, distillation, and federation in machine learning for medical imaging

    The practical application of deep learning methods in the medical domain faces many challenges. Pathologies are diverse, and very few examples may be available for rare cases. Where data are collected, they may lie in multiple institutions and cannot be pooled for practical and ethical reasons. Deep learning is powerful for image segmentation problems, but ultimately its output must be interpretable at the patient level. Although clearly not an exhaustive list, these are the three problems tackled in this thesis. To address the rarity of pathology, I investigate novelty detection algorithms to find outliers from normal anatomy. The problem is structured as first finding a low-dimensional embedding and then detecting outliers in that embedding space. I evaluate several unsupervised embedding and outlier detection methods for speed and accuracy. The data consist of Magnetic Resonance Imaging (MRI) for interstitial lung disease, for which healthy and pathological patches are available; only the healthy patches are used in model training. I then explore the clinical interpretability of a model output. I take related work by the Canon team — a model providing voxel-level detection of acute ischemic stroke signs — and deliver the Alberta Stroke Programme Early CT Score (ASPECTS), a measure of stroke severity. The data are acute head computed tomography volumes of suspected stroke patients. I convert from the voxel level to the brain-region level and then to the patient level through a series of rules. Due to the real-world clinical complexity of the problem, there are at each level — voxel, region and patient — multiple sources of “truth”; I evaluate my results appropriately against these truths. Finally, federated learning is used to train a model on data that are divided between multiple institutions.
    I introduce a novel evolution of this algorithm — dubbed “soft federated learning” — that avoids the central coordinating authority and takes into account domain shift (covariate shift) and dataset size. I first demonstrate the key properties of these two algorithms on a series of MNIST (handwritten digits) toy problems. Then I apply the methods to the BraTS medical dataset, which contains MRI brain glioma scans from multiple institutions, to compare these algorithms in a realistic setting.
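The baseline that "soft federated learning" evolves from can be sketched as dataset-size-weighted model averaging across institutions. The sketch below uses a toy linear model and invented names, and it deliberately keeps a central aggregation step: the thesis's decentralised, domain-shift-aware variant is not specified here, so this shows only the standard idea it builds on.

```python
import numpy as np

# Each "model" is just a parameter vector; in the thesis the same idea
# applies to deep-network weights. All names here are illustrative.

def local_update(params, data, lr=0.1, epochs=20):
    """One institution's local training: linear regression by gradient descent."""
    X, y = data
    w = params.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_params, client_sizes):
    """Aggregate client models, weighting each by its dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

clients = [make_client(n) for n in (200, 50, 120)]   # unequal dataset sizes
global_w = np.zeros(2)
for _ in range(10):                                  # communication rounds
    local_models = [local_update(global_w, d) for d in clients]
    global_w = federated_average(local_models, [len(d[1]) for d in clients])

print(global_w.round(2))
```

No raw data leave the clients; only parameter vectors are exchanged, which is what makes the approach viable when pooling is ethically or practically impossible.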

    Wearable and BAN Sensors for Physical Rehabilitation and eHealth Architectures

    The demographic shift of the population towards an increasing number of elderly citizens, together with the sedentary lifestyle we are adopting, is reflected in the increasingly debilitated physical health of the population. The resulting physical impairments require rehabilitation therapies, which may be assisted by the use of wearable sensors or body area network (BAN) sensors. The use of novel technology for medical therapies can also contribute to reducing costs in healthcare systems and decreasing patient overflow in medical centers. Sensors are the primary enablers of any wearable medical device, with a central role in eHealth architectures. The accuracy of the acquired data depends on the sensors; hence, when considering wearable and BAN sensing integration, they must be proven to be accurate and reliable solutions. This book is a collection of works focusing on the current state of the art of BANs and wearable sensing devices for the physical rehabilitation of impaired or debilitated citizens. The manuscripts that compose this book report on advances in research related to different sensing technologies (optical or electronic) and BAN sensors, their design and implementation, advanced signal processing techniques, and the application of these technologies in areas such as physical rehabilitation, robotics, medical diagnostics, and therapy.

    Object Recognition

    Vision-based object recognition tasks are very familiar in our everyday activities, such as driving a car in the correct lane, and we perform these tasks effortlessly in real time. In recent decades, with the advancement of computer technology, researchers and application developers have been trying to mimic the human capability of visual recognition. Such a capability will allow machines to free humans from boring or dangerous jobs.

    Investigation of Multi-dimensional Tensor Multi-task Learning for Modeling Alzheimer's Disease Progression

    Machine learning (ML) techniques for predicting Alzheimer's disease (AD) progression can significantly assist clinicians and researchers in constructing effective AD prevention and treatment strategies. The main constraints on the performance of current ML approaches are prediction accuracy and stability problems in small medical dataset scenarios, monotonic data formats (loss of multi-dimensional knowledge of the data and of correlation knowledge between biomarkers), and limited biomarker interpretability. This thesis investigates how multi-dimensional information and knowledge from biomarker data can be integrated with multi-task learning approaches to predict AD progression. Firstly, a novel similarity-based quantification approach is proposed with two components: multi-dimensional knowledge vector construction and amalgamated magnitude-direction quantification of brain structural variation. This approach considers both the magnitude and directional correlations of structural variation between brain biomarkers and encodes the quantified data as a third-order tensor, addressing the problem of monotonic data form. Secondly, multi-task learning regression algorithms were designed and constructed with the ability to integrate multi-dimensional tensor data and to mine MRI data for spatio-temporal structural variation information and knowledge, improving the accuracy, stability and interpretability of AD progression prediction in small medical dataset scenarios. The algorithm consists of three components: supervised symmetric tensor decomposition for extracting biomarker latent factors, tensor multi-task learning regression, and algorithmic regularisation terms.
    The proposed algorithm extracts a set of first-order latent factors from the raw data, each represented by its first-biomarker, second-biomarker and patient-sample dimensions, to elucidate potential factors affecting the variability of the data in an interpretable manner. These latent factors can be utilised as predictor variables for training the prediction model, which regards the prediction of each patient as a task, with each task sharing the set of biomarker latent factors obtained from the tensor decomposition. Knowledge sharing between tasks improves the generalisation ability of the model and addresses the problem of sparse medical data. The experimental results demonstrate that the proposed approach achieves superior accuracy and stability in predicting various cognitive scores of AD progression compared to single-task learning, benchmarks and state-of-the-art multi-task regression methods. The proposed approach identifies brain structural variations in patients, and the important brain biomarker correlations revealed by the experiments can be utilised as potential indicators for early AD identification.
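As a rough illustration of the shared-latent-factor idea, the sketch below builds a symmetric third-order tensor (biomarker x biomarker x patient), extracts patient-level latent factors with a plain SVD of the patient-mode unfolding (a simple unsupervised stand-in for the thesis's supervised symmetric tensor decomposition), and fits several cognitive-score tasks on the shared factors. All data, sizes and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in data: pairwise structural-variation values between
# brain biomarkers, one symmetric slice per patient (sizes are hypothetical).
n_biomarkers, n_patients, n_factors = 10, 80, 4
tensor = rng.normal(size=(n_biomarkers, n_biomarkers, n_patients))
tensor = (tensor + tensor.transpose(1, 0, 2)) / 2     # symmetric slices

# Latent-factor extraction: SVD of the patient-mode unfolding, used here
# as a stand-in for the supervised symmetric tensor decomposition.
unfolded = tensor.reshape(-1, n_patients).T           # patients x features
u, s, _ = np.linalg.svd(unfolded, full_matrices=False)
latent = u[:, :n_factors] * s[:n_factors]             # shared latent factors

# Multi-task regression: each cognitive score is one task, and all tasks
# share the same biomarker latent factors as predictors (3 synthetic tasks).
scores = latent @ rng.normal(size=(n_factors, 3)) \
         + 0.05 * rng.normal(size=(n_patients, 3))

def ridge(X, Y, lam=1e-2):
    """Closed-form ridge regression fitted jointly for all tasks."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

coef = ridge(latent, scores)
pred = latent @ coef
r2 = 1 - ((scores - pred) ** 2).sum(0) / ((scores - scores.mean(0)) ** 2).sum(0)
print(r2.round(3))
```

Because every task draws on the same small set of latent factors rather than on the raw high-dimensional tensor, the parameter count stays low, which is the mechanism that helps in small-dataset scenarios.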

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools which measure how effective the recognition phase is. In this paper we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.
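The kind of statistical evaluation the Brunswick (lens) model enables can be illustrated with a toy "achievement" score: the correlation between the sender's hidden state and the receiver's judgment of it from observed cues. Everything below is synthetic, including the cue weights and a least-squares read-out standing in for the robot's recognition phase.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Hypothetical setup: a hidden state (say, the human's level of engagement)
# is externalised through noisy non-verbal cues (say, gaze features), which
# the robot maps back to a judgment of the state. All signals are synthetic.
state = rng.normal(size=n)                       # the distal variable
cues = np.outer(state, [0.9, 0.6, 0.3]) + 0.5 * rng.normal(size=(n, 3))

# The robot's recognition phase: a linear read-out of the cues, fitted here
# by least squares as a stand-in for a trained recogniser.
w, *_ = np.linalg.lstsq(cues, state, rcond=None)
judgment = cues @ w

# Lens-model "achievement": correlation between the true state and the
# judgment, a statistical measure of how effective the recognition phase is.
achievement = np.corrcoef(state, judgment)[0, 1]
print(round(achievement, 2))
```

An achievement near 1 means the cues are informative and the recognition phase exploits them well; degrading either the cue quality or the read-out lowers the score, which is what makes the measure useful for comparing interaction setups.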