
    Machine Learning in Medical Image Analysis

    Machine learning is playing a pivotal role in medical image analysis. Many algorithms based on machine learning have been applied in medical imaging to solve classification, detection, and segmentation problems. In particular, with the wide application of deep learning approaches, the performance of medical image analysis has been significantly improved. In this thesis, we investigate machine learning methods for two key challenges in medical image analysis: segmentation of medical images, and learning with weak supervision in the context of medical imaging. The first main contribution of the thesis is a series of novel approaches for image segmentation. First, we propose a framework based on multi-scale image patches and random forests to segment small vessel disease (SVD) lesions on computed tomography (CT) images. This framework was validated in terms of spatial similarity, estimated lesion volumes, and visual score ratings, and was compared with human experts. The results showed that the proposed framework performs as well as human experts. Second, we propose a generic convolutional neural network (CNN) architecture called the DRINet for medical image segmentation. The DRINet approach is robust across three different segmentation tasks: multi-class cerebrospinal fluid (CSF) segmentation on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class tumour segmentation on brain magnetic resonance (MR) images. Finally, we propose a CNN-based framework to segment acute ischemic lesions on diffusion-weighted (DW) MR images, where the lesions are highly variable in position, shape, and size. Promising results were achieved on a large clinical dataset. The second main contribution of the thesis is two novel strategies for learning with weak supervision. First, we propose a novel strategy called context restoration to make use of images without annotations. 
The context restoration strategy is a proxy learning process based on a CNN, which extracts semantic features from images without using annotations. It was validated on classification, localization, and segmentation problems and was superior to existing strategies. Second, we propose a patch-based framework using multi-instance learning to distinguish normal and abnormal SVD on CT images, where only coarse-grained labels are available. Our framework was observed to work better than classic methods and clinical practice.
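As an illustration of the context restoration idea described above (corrupting an image so that a CNN can be trained to restore it, learning semantic features with no manual annotation), here is a minimal sketch of the corruption step. The patch size, number of swaps, and the `corrupt_context` name are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def corrupt_context(image, patch=8, n_swaps=4, rng=None):
    """Swap pairs of randomly chosen patches so that local intensity
    statistics are kept but the spatial context is broken. A CNN trained
    to restore the original image from this input must learn semantic
    features, with no manual annotation required."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape
    for _ in range(n_swaps):
        # two random top-left corners (patches may overlap in this sketch)
        y1, x1 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        y2, x2 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        a = out[y1:y1 + patch, x1:x1 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = out[y2:y2 + patch, x2:x2 + patch]
        out[y2:y2 + patch, x2:x2 + patch] = a
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in for a CT slice
corrupted = corrupt_context(img)
```

The restoration network would then take `corrupted` as input and regress the original `img`, turning unlabelled scans into a self-supervised training signal.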

    Assessing emphysema in CT scans of the lungs: Using machine learning, crowdsourcing and visual similarity


    Online learning of personalised human activity recognition models from user-provided annotations

    PhD Thesis. In Human Activity Recognition (HAR), supervised and semi-supervised training are important tools for devising parametric activity models. For the best modelling performance, large amounts of annotated personalised sample data are typically required. Annotation often represents the bottleneck in the overall modelling process, as it usually involves retrospective analysis of experimental ground truth, such as video footage. These approaches typically neglect that prospective users of HAR systems are themselves key sources of ground truth for their own activities. This research therefore involves the users of HAR monitors in the annotation process. The process relies solely on users' short-term memory and engages with them to parsimoniously provide annotations for their own activities as they unfold. The effect of user input is optimised by using Online Active Learning (OAL) to identify the most critical annotations, i.e. those expected to yield the largest HAR model performance gains. Personalised HAR models are trained from user-provided annotations as part of the evaluation, focusing mainly on objective model accuracy. The OAL approach is contrasted with Random Selection (RS), a naive method which makes uninformed annotation requests. A range of simulation-based annotation scenarios demonstrates that using OAL brings benefits in terms of HAR model performance over RS. Additionally, a mobile application is implemented and deployed in a naturalistic context to collect annotations from a panel of human participants. The deployment is proof that the method can truly run in online mode, and it also shows that considerable HAR model performance gains can be registered even under realistic conditions. The findings from this research point to the conclusion that online learning from user-provided annotations is a valid solution to the problem of constructing personalised HAR models.
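The OAL decision described above, asking the user for a label only when the annotation is expected to be most valuable, is often implemented with an uncertainty criterion. The sketch below uses margin-based sampling with an interruption budget; the threshold value, budget mechanism, and function name are illustrative assumptions, not the thesis's actual strategy.

```python
import numpy as np

def request_annotation(probs, budget_left, threshold=0.2):
    """Margin-based online active learning decision.

    probs: class-probability vector from the current HAR model for the
    incoming window of sensor data. An annotation prompt is issued only
    when the model is uncertain (small margin between the two most
    likely activities) and the interruption budget allows it."""
    if budget_left <= 0:
        return False
    top2 = np.sort(probs)[-2:]
    margin = top2[1] - top2[0]  # confidence gap between best two classes
    return margin < threshold

# confident prediction: no prompt; ambiguous prediction: prompt the user
assert not request_annotation(np.array([0.9, 0.05, 0.05]), budget_left=3)
assert request_annotation(np.array([0.4, 0.35, 0.25]), budget_left=3)
```

Random Selection, by contrast, would ignore `probs` entirely and prompt with a fixed probability, which is why it serves as the uninformed baseline.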

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th Medical Image Computing and Computer Assisted Intervention Conference, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Low-Cost Indoor Localisation Based on Inertial Sensors, Wi-Fi and Sound

    The average life expectancy has been increasing over the last decades, creating the need for new technologies to improve the quality of life of the elderly. In the Ambient Assisted Living scope, indoor location systems have emerged as a promising technology capable of supporting the elderly, providing them a safer environment to live in, and promoting their autonomy. Current indoor location technologies are divided into two categories, depending on their need for additional infrastructure. Infrastructure-based solutions require expensive deployment and maintenance. On the other hand, most infrastructure-free systems rely on a single source of information, being highly dependent on its availability. Such systems will hardly be deployed in real-life scenarios, as they cannot handle the absence of their source of information. An efficient solution must, thus, guarantee the continuous indoor positioning of the elderly. This work proposes a new room-level low-cost indoor location algorithm. It relies on three information sources: inertial sensors, to reconstruct users' trajectories; environmental sound, to exploit the unique characteristics of each home division; and Wi-Fi, to estimate the distance to the Access Point in the neighbourhood. Two data collection protocols were designed to resemble a real living scenario, and a data processing stage was applied to the collected data. Then, each source was used to train individual Machine Learning (including Deep Learning) algorithms to identify room-level positions. As each source provides different information to the classification, the data were merged to produce a more robust localization. Three data fusion approaches (input-level, early, and late fusion) were implemented for this goal, providing a final output containing complementary contributions from all data sources. 
Experimental results show that the performance improved when more than one source was used, attaining a weighted F1-score of 81.8% in the localization between seven home divisions. In conclusion, the evaluation of the developed algorithm shows that it can achieve accurate room-level indoor localization and is thus suitable for application in Ambient Assisted Living scenarios.
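Of the three fusion approaches mentioned, late fusion is the simplest to sketch: each source produces its own class-probability vector and the decisions are combined afterwards. The room names, probability values, and equal source weights below are hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

# seven hypothetical home divisions (the paper does not name them)
ROOMS = ["kitchen", "living room", "bedroom", "bathroom",
         "hall", "office", "balcony"]

def late_fusion(per_source_probs, weights=None):
    """Late fusion: each source (inertial, sound, Wi-Fi) contributes a
    class-probability vector; the fused decision is a weighted average."""
    probs = np.asarray(per_source_probs, dtype=float)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)  # equal trust per source
    fused = np.average(probs, axis=0, weights=weights)
    return ROOMS[int(np.argmax(fused))], fused

inertial = np.array([0.10, 0.60, 0.10, 0.05, 0.05, 0.05, 0.05])
sound    = np.array([0.70, 0.10, 0.05, 0.05, 0.05, 0.03, 0.02])
wifi     = np.array([0.20, 0.50, 0.10, 0.05, 0.05, 0.05, 0.05])
room, fused = late_fusion([inertial, sound, wifi])
```

Here two of the three sources favour the second room, so the fused estimate overrides the sound classifier's vote, which is exactly the robustness to a single unreliable source that motivates multi-sensor fusion.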

    Pacific Symposium on Biocomputing 2023

    The Pacific Symposium on Biocomputing (PSB) 2023 is an international, multidisciplinary conference for the presentation and discussion of current research in the theory and application of computational methods in problems of biological significance. Presentations are rigorously peer reviewed and are published in an archival proceedings volume. PSB 2023 will be held on January 3-7, 2023 in Kohala Coast, Hawaii. Tutorials and workshops will be offered prior to the start of the conference. PSB 2023 will bring together top researchers from the US, the Asian Pacific nations, and around the world to exchange research results and address open issues in all aspects of computational biology. It is a forum for the presentation of work in databases, algorithms, interfaces, visualization, modeling, and other computational methods, as applied to biological problems, with emphasis on applications in data-rich areas of molecular biology. The PSB has been designed to be responsive to the need for critical mass in sub-disciplines within biocomputing. For that reason, it is the only meeting whose sessions are defined dynamically each year in response to specific proposals. PSB sessions are organized by leaders of research in biocomputing's 'hot topics.' In this way, the meeting provides an early forum for serious examination of emerging methods and approaches in this rapidly changing field.

    Learning Human Behaviour Patterns by Trajectory and Activity Recognition

    The world’s population is ageing, increasing awareness of the neurological and behavioural impairments that may arise from human ageing. These impairments may manifest as cognitive conditions or reduced mobility, and are difficult to detect in time when relying only on periodic medical appointments. The resulting lack of routine screening demands the development of solutions to better assist and monitor human behaviour. The available technologies to monitor human behaviour are limited to indoor use and require the installation of sensors around users’ homes, with high installation and maintenance costs. With the widespread use of smartphones, it is possible to take advantage of their sensing information to better assist the elderly population. This study investigates what we can learn about human behaviour patterns from this rich and pervasive mobile sensing data. A data collection campaign over a period of 6 months was designed to measure three different human routines through human trajectory analysis and activity recognition, comprising indoor and outdoor environments. A framework for modelling human behaviour was developed using human motion features, extracted in both an unsupervised and a supervised manner. The unsupervised feature extraction measures mobility properties such as step length, user points of interest, and locomotion activities inferred from a user-independent classifier. The supervised feature extraction was designed to be user-dependent, as each user may have specific behaviours that are common to his/her routine. The human patterns were modelled through probability density functions and clustering approaches. Using the learned patterns, inferences about the current human behaviour were continuously quantified by an anomaly detection algorithm, where distance measurements were used to detect significant changes in behaviour. 
Experimental results demonstrate the effectiveness of the proposed framework, which showed an increased potential to learn behaviour patterns and detect anomalies.
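The distance-based anomaly detection step described above can be sketched as follows: model the learned routine by the per-feature mean and spread of past days, then score a new day by its normalised distance from that routine. The feature choices (step length, visits to points of interest, activity count) and the Gaussian toy data are illustrative assumptions, not the study's actual features or model.

```python
import numpy as np

def fit_routine(feature_days):
    """Summarise the learned routine as per-feature mean and std
    over past daily feature vectors (e.g. step length in cm, visits
    to points of interest, hours of locomotion activity)."""
    x = np.asarray(feature_days, dtype=float)
    return x.mean(axis=0), x.std(axis=0) + 1e-9  # avoid division by zero

def anomaly_score(day, mean, std):
    """Normalised Euclidean distance of one day's features from the routine;
    large values flag a significant change in behaviour."""
    return float(np.linalg.norm((np.asarray(day, dtype=float) - mean) / std))

# toy routine: 60 days of synthetic daily features
rng = np.random.default_rng(1)
routine = rng.normal(loc=[70.0, 3.0, 12.0], scale=[2.0, 0.3, 1.0], size=(60, 3))
mean, std = fit_routine(routine)

typical_day = [70.5, 3.1, 12.2]
unusual_day = [55.0, 1.0, 4.0]   # much less walking and activity
```

A threshold on `anomaly_score` (for instance, a high percentile of the scores seen during training) would then decide when a change in behaviour is reported.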

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-path effects in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches 98.54%, 94.25%, and 95.09% across those same environments.
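The attention mechanism in an attention-based BiLSTM typically pools the hidden states over time, weighting the time steps of the CSI sequence that are most informative for the activity. The NumPy sketch below shows only that additive-attention pooling step with random stand-in values; the matrix shapes, parameter names, and initialisation are illustrative assumptions, not the paper's ABiLSTM architecture.

```python
import numpy as np

def attention_pool(hidden_states, w, v):
    """Additive attention over BiLSTM hidden states for one CSI sequence.

    hidden_states: (T, d) array of per-time-step BiLSTM outputs.
    Scores are softmax-normalised over time; the pooled context vector
    emphasises the time steps most relevant to the activity."""
    scores = np.tanh(hidden_states @ w) @ v          # (T,) unnormalised scores
    weights = np.exp(scores - scores.max())          # stable softmax
    weights /= weights.sum()
    context = weights @ hidden_states                # (d,) weighted sum
    return context, weights

T, d = 50, 16                                        # illustrative sizes
rng = np.random.default_rng(0)
h = rng.normal(size=(T, d))                          # stand-in BiLSTM outputs
w = rng.normal(size=(d, d)) * 0.1                    # attention parameters
v = rng.normal(size=d)
context, weights = attention_pool(h, w, v)
```

In a full model, `context` would feed a softmax classifier over the 12 activities; in the CNN-ABiLSTM variant, a convolutional front-end would produce the features entering the BiLSTM.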

    Report from Dagstuhl Seminar 23031: Frontiers of Information Access Experimentation for Research and Education

    This report documents the program and the outcomes of Dagstuhl Seminar 23031, "Frontiers of Information Access Experimentation for Research and Education", which brought together 37 participants from 12 countries. The seminar addressed technology-enhanced information access (information retrieval, recommender systems, natural language processing) and specifically focused on developing more responsible experimental practices leading to more valid results, both for research and for scientific education. The seminar brought together experts from various sub-fields of information access, namely IR, RS, NLP, information science, and human-computer interaction, to create a joint understanding of the problems and challenges presented by next-generation information access systems, from both the research and the experimentation points of view, to discuss existing solutions and impediments, and to propose next steps to be pursued in the area in order to improve not only our research methods and findings but also the education of the new generation of researchers and developers. The seminar featured a series of long and short talks delivered by participants, which helped to establish common ground and let topics of interest emerge to be explored as the main output of the seminar. This led to the definition of five groups which investigated challenges, opportunities, and next steps in the following areas: reality check, i.e. conducting real-world studies; human-machine-collaborative relevance judgment frameworks; overcoming methodological challenges in information retrieval and recommender systems through awareness and education; results-blind reviewing; and guidance for authors.