
    A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom

    Multimodal medical data fusion has emerged as a transformative approach in smart healthcare, enabling a comprehensive understanding of patient health and personalized treatment plans. In this paper, the journey from data to information to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart healthcare. We present a comprehensive review of multimodal medical data fusion focused on the integration of various data modalities. The review covers different approaches, such as feature selection, rule-based systems, machine learning, deep learning, and natural language processing, for fusing and analyzing multimodal data. This paper also highlights the challenges associated with multimodal fusion in healthcare. By synthesizing the reviewed frameworks and theories, it proposes a generic framework for multimodal medical data fusion that aligns with the DIKW model. Moreover, it discusses future directions related to the four pillars of healthcare: Predictive, Preventive, Personalized, and Participatory approaches. The components of the comprehensive survey presented in this paper form the foundation for more successful implementation of multimodal fusion in smart healthcare. Our findings can guide researchers and practitioners in leveraging the power of multimodal fusion with state-of-the-art approaches to revolutionize healthcare and improve patient outcomes. Comment: This work has been submitted to Elsevier for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
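
    The survey above is narrative, but the fusion strategies it reviews can be illustrated concretely. Below is a minimal, self-contained sketch contrasting feature-level (early) and decision-level (late) fusion of two synthetic modalities; the data, feature shapes, and logistic-regression models are assumptions for illustration, not the framework proposed in the paper.

```python
# Illustrative sketch: early vs. late fusion of two synthetic "modalities".
# All data and model choices are hypothetical, not the survey's DIKW framework.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
imaging = rng.normal(size=(n, 32))      # e.g. features extracted from scans
clinical = rng.normal(size=(n, 8))      # e.g. lab values / vitals
y = (imaging[:, 0] + clinical[:, 0] > 0).astype(int)  # synthetic outcome

Xi_tr, Xi_te, Xc_tr, Xc_te, y_tr, y_te = train_test_split(
    imaging, clinical, y, test_size=0.3, random_state=0)

# Early fusion: concatenate modality features before a single classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([Xi_tr, Xc_tr]), y_tr)
print("early fusion acc:", early.score(np.hstack([Xi_te, Xc_te]), y_te))

# Late fusion: train one model per modality, then average their probabilities.
m_img = LogisticRegression(max_iter=1000).fit(Xi_tr, y_tr)
m_cli = LogisticRegression(max_iter=1000).fit(Xc_tr, y_tr)
p = (m_img.predict_proba(Xi_te)[:, 1] + m_cli.predict_proba(Xc_te)[:, 1]) / 2
print("late fusion acc:", np.mean((p > 0.5).astype(int) == y_te))
```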

    Fall Prediction and Prevention Systems: Recent Trends, Challenges, and Future Research Directions.

    Fall prediction is a multifaceted problem that involves complex interactions between physiological, behavioral, and environmental factors. Existing fall detection and prediction systems mainly focus on physiological factors such as gait, vision, and cognition, and do not address the multifactorial nature of falls. In addition, these systems lack efficient user interfaces and feedback for preventing future falls. Recent advances in the internet of things (IoT) and mobile technologies offer ample opportunities for integrating contextual information about patient behavior and environment along with physiological health data for predicting falls. This article reviews the state-of-the-art in fall detection and prediction systems. It also describes the challenges, limitations, and future directions in the design and implementation of effective fall prediction and prevention systems.
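
    As a rough illustration of the multifactorial approach argued for above, the sketch below combines physiological, behavioural, and environmental features in a single fall-risk classifier. All feature names, distributions, and the synthetic label are hypothetical assumptions and do not come from any of the reviewed systems.

```python
# Hypothetical multifactorial fall-risk classifier on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "gait_speed_mps": rng.normal(1.0, 0.2, n),      # physiological
    "sway_area_cm2": rng.normal(3.0, 1.0, n),       # physiological
    "night_bathroom_visits": rng.poisson(1.5, n),   # behavioural (IoT-derived)
    "ambient_lux": rng.normal(150, 50, n),          # environmental sensor
    "loose_rugs": rng.integers(0, 2, n),            # environmental survey
})
# Synthetic label: slower gait, more sway, and darker rooms raise fall risk.
risk = (1.0 - df.gait_speed_mps) + 0.2 * df.sway_area_cm2 - 0.002 * df.ambient_lux
fell = (risk + rng.normal(0, 0.3, n) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0)
print("CV AUC:", cross_val_score(clf, df, fell, cv=5, scoring="roc_auc").mean())
```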

    Decision support by machine learning systems for acute management of severely injured patients: A systematic review

    Introduction Treating severely injured patients requires numerous critical decisions within short intervals in a highly complex situation. The coordination of a trauma team in this setting has been shown to be associated with multiple procedural errors, even of experienced care teams. Machine learning (ML) is an approach that estimates outcomes based on past experiences and data patterns using a computer-generated algorithm. This systematic review aimed to summarize the existing literature on the value of ML for the initial management of severely injured patients. Methods We conducted a systematic review of the literature with the goal of finding all articles describing the use of ML systems in the context of acute management of severely injured patients. A MeSH search of PubMed/MEDLINE and Web of Science was conducted. Studies including fewer than 10 patients were excluded. Studies were divided into the following main prediction groups: (1) injury pattern, (2) hemorrhage/need for transfusion, (3) emergency intervention, (4) ICU/length of hospital stay, and (5) mortality. Results Thirty-six articles met the inclusion criteria; among these were two prospective and thirty-four retrospective case series. Publication dates ranged from 2000 to 2020 and included 32 different first authors. A total of 18,586,929 patients were included in the prediction models. Mortality was the most represented main prediction group (n = 19). ML models used were artificial neural network (n = 15), support vector machine (n = 3), Bayesian network (n = 7), random forest (n = 6), natural language processing (n = 2), stacked ensemble classifier [SuperLearner (SL), n = 3], k-nearest neighbor (n = 1), belief system (n = 1), and sequential minimal optimization (n = 2) models. Thirty articles assessed results as positive, five showed moderate results, and one article described negative results for their implementation of the respective prediction model. Conclusions While the majority of articles show a generally positive result with high accuracy and precision, several requirements need to be met before such models can be implemented in daily clinical work. Furthermore, experience with on-site implementation and more clinical trials are necessary before the implementation of ML techniques in clinical care can become a reality.
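
    For readers unfamiliar with the model families listed above, the following sketch shows the general shape of one such prediction task: a random forest estimating mortality from a few admission variables. The variables, coefficients, and data are synthetic assumptions and do not reproduce any model covered by the review.

```python
# Illustrative only: a random-forest mortality model on synthetic admission data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(110, 25, n),   # systolic blood pressure (mmHg)
    rng.normal(95, 20, n),    # heart rate (bpm)
    rng.integers(3, 16, n),   # Glasgow Coma Scale (3-15)
    rng.normal(45, 20, n),    # age (years)
    rng.normal(20, 12, n),    # Injury Severity Score
])
# Synthetic risk: hypotension, tachycardia, low GCS, age, and ISS raise mortality.
logit = -0.03 * X[:, 0] + 0.02 * X[:, 1] - 0.3 * X[:, 2] + 0.03 * X[:, 3] + 0.08 * X[:, 4]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```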

    Learning a Joint Embedding of Multiple Satellite Sensors: A Case Study for Lake Ice Monitoring

    Fusing satellite imagery acquired with different sensors has been a long-standing challenge of Earth observation, particularly across different modalities such as optical and synthetic aperture radar (SAR) images. Here, we explore the joint analysis of imagery from different sensors in the light of representation learning: we propose to learn a joint embedding of multiple satellite sensors within a deep neural network. Our application problem is the monitoring of lake ice on Alpine lakes. To reach the temporal resolution requirement of the Swiss Global Climate Observing System (GCOS) office, we combine three image sources: Sentinel-1 SAR (S1-SAR), Terra moderate resolution imaging spectroradiometer (MODIS), and Suomi-NPP visible infrared imaging radiometer suite (VIIRS). The large gaps between the optical and SAR domains and between the sensor resolutions make this a challenging instance of the sensor fusion problem. Our approach can be classified as a late fusion that is learned in a data-driven manner. The proposed network architecture has separate encoding branches for each image sensor, which feed into a single latent embedding, i.e., a common feature representation shared by all inputs, such that subsequent processing steps deliver comparable output irrespective of which sort of input image was used. By fusing satellite data, we map lake ice at the temporal resolution targeted by GCOS; the model reaches scores >91% [respectively, mean per-class Intersection-over-Union (mIoU) scores >60%] and generalizes well across different lakes and winters. Moreover, it sets a new state-of-the-art for determining the important ice-on and ice-off dates for the target lakes, in many cases meeting the GCOS requirement.
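
    The branch-per-sensor, shared-embedding design described above can be sketched in a few lines of PyTorch. Channel counts, layer choices, and the latent dimension below are assumptions chosen for brevity, not the architecture used in the paper.

```python
# Minimal sketch of a multi-sensor network: one encoder branch per sensor,
# all mapping into a shared latent space, followed by a common head.
import torch
import torch.nn as nn

def branch(in_ch, latent_ch):
    # Per-sensor encoder: maps sensor-specific channels to the shared latent space.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, latent_ch, 3, padding=1), nn.ReLU(),
    )

class JointEmbeddingNet(nn.Module):
    def __init__(self, latent_ch=64, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "s1_sar": branch(2, latent_ch),   # assumed: VV + VH backscatter
            "modis": branch(5, latent_ch),    # assumed: 5 optical bands
            "viirs": branch(5, latent_ch),    # assumed: 5 optical bands
        })
        # Shared head: identical processing regardless of the input sensor.
        self.head = nn.Conv2d(latent_ch, n_classes, 1)

    def forward(self, x, sensor):
        z = self.encoders[sensor](x)   # sensor-specific encoding
        return self.head(z)            # common latent -> ice / no-ice map

net = JointEmbeddingNet()
sar = torch.randn(1, 2, 128, 128)
print(net(sar, "s1_sar").shape)   # torch.Size([1, 2, 128, 128])
```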

    Clinical validation of a public health policy-making platform for hearing loss (EVOTION): protocol for a big data study

    INTRODUCTION: The holistic management of hearing loss (HL) requires an understanding of factors that predict hearing aid (HA) use and benefit beyond the acoustics of listening environments. Although several predictors have been identified, no study has explored the role of audiological, cognitive, behavioural and physiological data, nor has any study collected real-time HA data. This study will collect ‘big data’, including retrospective HA logging data, prospective clinical data and real-time data via smart HAs, a mobile application and biosensors. The main objective is to enable the validation of the EVOTION platform as a public health policy-making tool for HL. METHODS AND ANALYSIS: This will be a big data international multicentre study consisting of retrospective and prospective data collection. Existing data from approximately 35 000 HA users will be extracted from clinical repositories in the UK and Denmark. For the prospective data collection, 1260 HA candidates will be recruited across four clinics in the UK and Greece. Participants will complete a battery of audiological and other assessments (measures of patient-reported HA benefit, mood, cognition, quality of life). Patients will be offered smart HAs and a mobile phone application, and a subset will also be given wearable biosensors, to enable the collection of dynamic real-life HA usage data. Big data analytics will be used to detect correlations between contextualised HA usage and effectiveness, and different factors and comorbidities affecting HL, with a view to informing public health decision-making. ETHICS AND DISSEMINATION: Ethical approval was received from the London South East Research Ethics Committee (17/LO/0789), the Hippokrateion Hospital Ethics Committee (1847) and the Athens Medical Center’s Ethics Committee (KM140670). Results will be disseminated through national and international events in Greece and the UK, scientific journals, newsletters, magazines and social media. Target audiences include HA users, clinicians, policy-makers and the general public. TRIAL REGISTRATION NUMBER: NCT03316287; Pre-results.
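
    The planned analytics step, relating contextualised HA usage to outcomes, might look roughly like the pandas sketch below. The column names, data, and simple Spearman correlation are illustrative assumptions, not the EVOTION analysis pipeline.

```python
# Hypothetical sketch: join HA usage logs with patient-reported outcomes and
# look at rank correlations. Synthetic data; not the EVOTION platform.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 200
usage = pd.DataFrame({
    "patient_id": range(n),
    "daily_use_hours": rng.normal(8, 3, n).clip(0, 16),
    "noisy_env_fraction": rng.uniform(0, 1, n),  # from HA sound-scene logging
})
outcomes = pd.DataFrame({
    "patient_id": range(n),
    "reported_benefit": rng.normal(2, 1, n),     # patient-reported benefit score
})

merged = usage.merge(outcomes, on="patient_id")
print(merged[["daily_use_hours", "noisy_env_fraction", "reported_benefit"]]
      .corr(method="spearman"))
```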