
    Cross-Modal Data Programming Enables Rapid Medical Machine Learning

    Labeling training datasets has become a key barrier to building medical machine learning models. One strategy is to generate training labels programmatically, for example by applying natural language processing pipelines to text reports associated with imaging studies. We propose cross-modal data programming, which generalizes this intuitive strategy in a theoretically-grounded way that enables simpler, clinician-driven input, reduces required labeling time, and improves with additional unlabeled data. In this approach, clinicians generate training labels for models defined over a target modality (e.g. images or time series) by writing rules over an auxiliary modality (e.g. text reports). The resulting technical challenge consists of estimating the accuracies and correlations of these rules; we extend a recent unsupervised generative modeling technique to handle this cross-modal setting in a provably consistent way. Across four applications in radiography, computed tomography, and electroencephalography, and using only several hours of clinician time, our approach matches or exceeds the efficacy of physician-months of hand-labeling with statistical significance, demonstrating a fundamentally faster and more flexible way of building machine learning models in medicine.
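
    A minimal sketch of this labeling workflow, using the open-source snorkel library's label-model machinery that this line of work builds on; the rules, class names, and report fields below are illustrative assumptions rather than the paper's actual code:

```python
# Clinicians write simple rules over the auxiliary modality (text reports);
# their unknown accuracies and correlations are estimated without ground truth,
# and the resulting probabilistic labels train a model over the target modality.
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABNORMAL, NORMAL, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_mentions_opacity(report):          # hypothetical rule over a "text" column
    return ABNORMAL if "opacity" in report.text.lower() else ABSTAIN

@labeling_function()
def lf_no_acute_findings(report):
    return NORMAL if "no acute" in report.text.lower() else ABSTAIN

def weak_labels(reports_df):
    """Return probabilistic labels for the images paired with each report."""
    L = PandasLFApplier([lf_mentions_opacity, lf_no_acute_findings]).apply(reports_df)
    label_model = LabelModel(cardinality=2)
    label_model.fit(L, n_epochs=500, seed=0)
    return label_model.predict_proba(L)   # used as training targets for an image model
```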

    An Unsupervised Learning Model for Deformable Medical Image Registration

    We present a fast learning-based algorithm for deformable, pairwise 3D medical image registration. Current registration methods optimize an objective function independently for each pair of images, which can be time-consuming for large data. We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest. Given a new pair of scans, we can quickly compute a registration field by directly evaluating the function using the learned parameters. We model this function using a convolutional neural network (CNN), and use a spatial transform layer to reconstruct one image from another while imposing smoothness constraints on the registration field. The proposed method does not require supervised information such as ground truth registration fields or anatomical landmarks. We demonstrate registration accuracy comparable to state-of-the-art 3D image registration, while operating orders of magnitude faster in practice. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is available at https://github.com/balakg/voxelmorph. Comment: 9 pages, in CVPR 2018.
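
    A minimal PyTorch sketch of the unsupervised objective described above, i.e., a spatial transform that reconstructs one image from the other plus a smoothness penalty on the predicted field; the actual implementation at the linked repository differs in details:

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    # Spatial transform layer: resample `moving` at locations shifted by `flow`.
    # moving: (N, 1, D, H, W); flow: (N, 3, D, H, W) voxel displacements in (z, y, x) order.
    n, _, d, h, w = moving.shape
    base = torch.stack(torch.meshgrid(torch.arange(d), torch.arange(h), torch.arange(w),
                                      indexing="ij"), dim=0).float().to(moving.device)
    coords = base.unsqueeze(0) + flow
    sizes = torch.tensor([d, h, w], dtype=torch.float32, device=moving.device).view(1, 3, 1, 1, 1)
    coords = 2.0 * coords / (sizes - 1) - 1.0             # normalize to [-1, 1] for grid_sample
    grid = coords.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]  # grid_sample wants (x, y, z) last
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(moving, fixed, flow, lam=0.01):
    # Image similarity: reconstruct the fixed image by warping the moving one.
    similarity = F.mse_loss(warp(moving, flow), fixed)
    # Smoothness: penalize finite-difference gradients of the displacement field.
    dz = flow[:, :, 1:] - flow[:, :, :-1]
    dy = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    smoothness = dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()
    return similarity + lam * smoothness   # no ground-truth fields or landmarks needed
```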

    Neuro-critical multimodal Edge-AI monitoring algorithm and IoT system design and development

    In recent years, with the continuous development of neurocritical medicine, the success rate of treating patients with traumatic brain injury (TBI) has continued to increase, and prognoses have also improved. The condition of TBI patients is usually very complicated, and after treatment they often need an extended period to recover; the degree of recovery is also related to prognosis. However, as a young discipline, neurocritical medicine still has many shortcomings. In particular, in most hospitals the conditions of the Neuro-intensive Care Unit (NICU) are uneven, the equipment has limited functionality, and there is no unified data specification. Most of the instruments are cumbersome and expensive, and patients often face high medical expenses. Recent years have seen rapid development of big data and artificial intelligence (AI) technology, which is advancing the medical IoT field; however, further development and a wider range of applications are needed to achieve widespread adoption. Based on these premises, the main contributions of this thesis are the following. First, the design and development of a multi-modal brain monitoring system that acquires 8-channel electroencephalography (EEG) signals, dual-channel NIRS signals, and intracranial pressure (ICP) signals. Furthermore, an integrated platform for displaying and analyzing multi-modal physiological data in real time was designed. The thesis also describes the use of the Qt signal/slot event-processing mechanism and multi-threading to improve the real-time performance of data processing. In addition, storage and processing of multi-modal electrophysiological data were implemented on a cloud server. The system also includes a custom-built Django cloud server that provides real-time transmission between the server and a WeChat applet; based on the WebSocket protocol, the data transmission delay is less than 10 ms. The analysis platform can be equipped with deep learning models to monitor patients for epileptic seizures and to assess the level of consciousness of patients with Disorders of Consciousness (DOC). This thesis combines the standard open-source CHB-MIT data set, a clinical data set provided by Huashan Hospital, and additional data collected by the system described here; these data sets are merged to build deep learning models and develop related applications for automatic disease diagnosis in smart medical IoT systems. This mainly includes using the clinical data to analyze the characteristics of EEG signals from DOC patients and building a CNN model to automatically evaluate a patient's level of consciousness. Epilepsy is also a common condition in neurocritical care; in this regard, the thesis compares the performance of various deep learning models on the CHB-MIT data set and the clinical data set for epilepsy monitoring, in order to select the most appropriate model for the system being designed and developed. Finally, the AI-assisted analysis models are validated: the CNN model for assessing disorders of consciousness reaches 82% accuracy on the clinical data set, and the CNN+STFT model for epilepsy monitoring reaches 90% accuracy on clinical data. The multi-modal brain monitoring system is also fully verified: the EEG signals it collects have a high signal-to-noise ratio and strong resistance to interference, and the system performs well in terms of real-time behavior and stability.
    Keywords: TBI, neurocritical care, multi-modal, consciousness assessment, seizure detection, deep learning, CNN, IoT
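
    As an illustration of the "CNN+STFT" approach named above, a minimal sketch that converts multichannel EEG windows to STFT spectrograms and classifies them with a small 2D CNN; the sampling rate, window length, and layer sizes are assumptions, not the thesis's actual configuration:

```python
import numpy as np
from scipy.signal import stft
import torch
import torch.nn as nn

def eeg_to_spectrogram(window, fs=256, nperseg=128):
    # window: (channels, samples) -> (channels, freq_bins, time_bins) log-magnitude STFT
    _, _, Z = stft(window, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Z)).astype(np.float32)

class SeizureCNN(nn.Module):
    """Small 2D CNN over per-channel spectrograms (8 EEG channels assumed)."""
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, channels, freq, time)
        return self.classifier(self.features(x).flatten(1))

# Example: one 4-second, 8-channel window sampled at 256 Hz.
spec = eeg_to_spectrogram(np.random.randn(8, 4 * 256))
logits = SeizureCNN()(torch.from_numpy(spec).unsqueeze(0))
```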

    Inviwo -- A Visualization System with Usage Abstraction Levels

    Full text link
    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, which would be important to achieve optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, as an example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, as well as layer-independent development through cross-layer documentation and debugging capabilities.
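
    A schematic toy of the data-flow network abstraction that such systems expose at the editor level, with lower layers free to swap in optimized implementations; this is an illustrative sketch, not Inviwo's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Processor:
    # A node in a data-flow network: named upstream outputs feed a compute function.
    name: str
    compute: Callable[[Dict[str, object]], object]
    inputs: List["Processor"] = field(default_factory=list)

    def evaluate(self):
        # Pull-based evaluation: resolve upstream processors first.
        return self.compute({p.name: p.evaluate() for p in self.inputs})

# Network-editor level: wire a source into a filter without touching low-level code.
source = Processor("volume_source", lambda _: list(range(10)))
threshold = Processor("threshold",
                      lambda ins: [v for v in ins["volume_source"] if v > 5],
                      inputs=[source])
print(threshold.evaluate())  # a lower layer could replace either node with a GPU version
```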