
    A configurable deep network for high-dimensional clinical trial data

    Clinical studies provide interesting case studies for data mining researchers, given the often high dimensionality and long-term nature of these studies. In areas such as dementia, accurate predictions provide vital input into understanding how certain features (representing lifestyle) can predict outcomes such as dementia. Most research to date has used traditional or shallow data mining approaches, which offer varying degrees of accuracy on datasets with high dimensionality. In this research, we explore the use of deep learning architectures, as they have shown high predictive capability on image and audio datasets. The purpose of our research is to build a framework which allows easy reconfiguration for running experiments across a number of deep learning approaches. In this paper, we present our framework for a configurable deep learning machine, together with our evaluation and analysis of two shallow approaches, regression and the multi-layer perceptron, as a platform for a deep belief network, using a dataset created over the course of 12 years by researchers in the area of dementia.
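    The framework itself is not reproduced in the abstract; the following is a minimal, hypothetical sketch of a configuration-driven experiment runner in the spirit described above, using scikit-learn stand-ins (LogisticRegression, MLPClassifier) for the shallow baselines. The function name run_experiment and the configuration keys are assumptions, not the authors' code.

    # Illustrative sketch only: swap algorithms by editing a configuration
    # dictionary rather than the experiment code.
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    MODELS = {
        "regression": lambda cfg: LogisticRegression(max_iter=cfg.get("max_iter", 1000)),
        "mlp": lambda cfg: MLPClassifier(hidden_layer_sizes=cfg.get("hidden_layers", (64, 32)),
                                         max_iter=cfg.get("max_iter", 500)),
    }

    def run_experiment(X, y, cfg):
        """Build the model named in cfg["algorithm"] and report cross-validated accuracy."""
        model = MODELS[cfg["algorithm"]](cfg)
        scores = cross_val_score(model, X, y, cv=cfg.get("folds", 5))
        return scores.mean(), scores.std()

    # Example: mean_acc, std_acc = run_experiment(X, y, {"algorithm": "mlp", "hidden_layers": (128, 64)})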

    A framework for selecting deep learning hyper-parameters

    Recent research has found that deep learning architectures show significant improvements over traditional shallow algorithms when mining high-dimensional datasets. When the choice of algorithm, hyper-parameter settings, number of hidden layers, and number of nodes within each layer are combined, identifying an optimal configuration can be a lengthy process. Our work provides a framework for building deep learning architectures via a stepwise approach, together with an evaluation methodology to quickly identify poorly performing architectural configurations. Using a dataset with high dimensionality, we illustrate how different architectures perform and how one algorithm configuration can provide input for fine-tuning more complex models.
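    As an illustration of the stepwise idea, the hedged sketch below screens candidate layer configurations with short training runs and discards clear under-performers; the helper name screen_configs and the thresholds are assumptions, not taken from the paper.

    # Screen cheaply first, then spend training budget only on survivors.
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    def screen_configs(X, y, candidate_layer_sizes, min_accuracy=0.6):
        """Quickly rank layer configurations on a held-out split with few epochs."""
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
        survivors = []
        for layers in candidate_layer_sizes:
            probe = MLPClassifier(hidden_layer_sizes=layers, max_iter=50)  # deliberately short run
            probe.fit(X_tr, y_tr)
            acc = probe.score(X_val, y_val)
            if acc >= min_accuracy:            # keep only promising architectures
                survivors.append((layers, acc))
        return sorted(survivors, key=lambda t: -t[1])

    # survivors = screen_configs(X, y, [(32,), (64, 32), (128, 64, 32)])
    # The best survivors would then seed longer, fine-tuned training runs.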

    Clinical application of a template-guided automated planning routine

    PURPOSE: To determine the dosimetric quality and the planning time reduction achieved when using a template-based automated planning application. METHODS: A software application integrated through the treatment planning system application programming interface, QuickPlan, was developed to facilitate automated planning using configurable templates for contouring, knowledge-based planning structure matching, field design, and algorithm settings. Validations are performed at various levels of the planning procedure and assist in evaluating the readiness of the CT image, structure set, and plan layout for automated planning. QuickPlan is evaluated dosimetrically against 22 hippocampal-avoidance whole-brain radiotherapy patients. The times required to generate a treatment plan are compared for the validation set as well as for 10 prospective patients whose plans were automated by QuickPlan. RESULTS: The 22 automated treatment plans are compared against manual replanning using an identical process, resulting in dosimetric differences of minor clinical significance. The target dose to 2% volume and the homogeneity index are significantly decreased for automated plans, whereas other dose metric evaluations are nonsignificant. The time to generate the treatment plans is reduced for all automated plans, with a median difference of 9′ 50″ ± 4′ 33″. CONCLUSIONS: Template-based automated planning allows for reduced treatment planning time with consistent optimization structure creation, treatment field creation, plan optimization, and dose calculation, with similar dosimetric quality. This process has potential for expansion to numerous disease sites.
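    QuickPlan is built on the treatment planning system's own scripting interface, which is not shown here; the following is a purely illustrative sketch of what a planning template and a structure-set readiness check might look like. All class, field, and structure names are hypothetical.

    # Hypothetical template structure and readiness check, not the actual QuickPlan API.
    from dataclasses import dataclass, field

    @dataclass
    class PlanTemplate:
        site: str
        required_structures: list          # structures the template expects in the CT structure set
        field_arrangement: str             # e.g. a named beam/arc layout
        optimization_objectives: dict = field(default_factory=dict)  # would come from the knowledge-based model

    def validate_structure_set(structure_names, template):
        """Return the structures the template needs but the structure set lacks."""
        present = {s.lower() for s in structure_names}
        return [s for s in template.required_structures if s.lower() not in present]

    hawb = PlanTemplate(
        site="HA-WBRT",
        required_structures=["PTV_Brain", "Hippocampus_L", "Hippocampus_R"],
        field_arrangement="VMAT_2arc",
    )
    # missing = validate_structure_set(["PTV_Brain", "Hippocampus_L"], hawb)  # -> ["Hippocampus_R"]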

    Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications

    In an era when the market segment of the Internet of Things (IoT) tops the charts in various business reports, it is widely envisioned that the field of medicine stands to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare must overcome many barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy-hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computation, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection. (29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area Network, Body Sensor Network, Edge Computing, Fog Computing, Medical Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment, Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in Smart Healthcare (2017), Springer.)
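    As a hedged illustration of the fog-node role (not the authors' implementation), the sketch below conditions a raw ECG window locally and forwards only a compact heart-rate summary toward the cloud; the function name summarise_ecg and the simple threshold-based peak picker are assumptions.

    # Local signal conditioning and summarisation on the fog node.
    import numpy as np

    def summarise_ecg(samples, fs=250, threshold_ratio=0.6):
        """Estimate heart rate from a raw ECG window and return a small summary dict."""
        x = samples - np.mean(samples)                     # remove baseline offset
        threshold = threshold_ratio * np.max(x)
        above = x > threshold
        # Rising edges of the thresholded signal approximate R-peak locations.
        peaks = np.flatnonzero(above[1:] & ~above[:-1])
        if len(peaks) < 2:
            return {"hr_bpm": None, "n_beats": int(len(peaks))}
        rr_seconds = np.diff(peaks) / fs
        return {"hr_bpm": float(60.0 / np.mean(rr_seconds)), "n_beats": int(len(peaks))}

    # Only this summary (tens of bytes) would be uploaded from the embedded node,
    # rather than the continuous raw waveform.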

    A Deep learning toolkit for high dimensional sequential data

    Deep learning is a more recent form of machine learning based on a set of algorithms that attempt to learn using a deep graph with multiple processing layers, where layers are composed of multiple linear and non-linear transformational nodes. While research in this area has been shown to improve predictive accuracy in a number of domains, deep learning systems are highly complex and experiments can be hard to manage. In this dissertation, we present a deep learning system, built from scratch, which enables fully configurable deep learning experiments. By configurable, we mean selecting the overall learning algorithm, the number of layers within the deep network, the nodes within the network layers, and the propagation functions deployed at each node. We use a range of deep network configurations together with different datasets to illustrate the potential of this system, but also to highlight the difficulties in tuning the model and hyper-parameters to maximise accuracy. Our research also provides a conceptual data model to capture all aspects of deep learning experiments. By specifying a conceptual model, we provide a platform for the storage and management of experimental snapshots, a key support for experiment and parameter optimisation and analysis. In addition, we developed a toolkit which supports the management and analysis of deep learning experiments and provides a new method for pausing and calibrating experiments. It also offers possibilities for interchanging experiment setups and results between deep learning researchers. Our validation takes the form of a series of case studies built from the requirements of end users and demonstrates the effectiveness of our toolkit in building deep learning algorithms.
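    The toolkit's actual schema is not given in the abstract; the following is a hedged sketch of what a conceptual data model for experiment snapshots might look like, with entity and field names chosen purely for illustration.

    # Hypothetical snapshot model: enough state to store, compare, and resume experiments.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class LayerSpec:
        nodes: int
        activation: str                    # propagation function deployed at each node

    @dataclass
    class ExperimentSnapshot:
        experiment_id: str
        algorithm: str                     # e.g. "deep_belief_network"
        layers: List[LayerSpec]
        hyper_parameters: Dict[str, float]
        epoch: int
        metrics: Dict[str, float] = field(default_factory=dict)
        captured_at: datetime = field(default_factory=datetime.utcnow)

    # Snapshots like this could be persisted, compared across runs, and reloaded
    # to pause and later recalibrate a long-running experiment.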

    Hardware acceleration using FPGAs for adaptive radiotherapy

    Get PDF
    Adaptive radiotherapy (ART) seeks to improve the accuracy of radiotherapy by adapting the treatment based on up-to-date images of the patient's anatomy captured at the time of treatment delivery. The amount of image data, combined with the clinical time requirements for ART, necessitates automatic image analysis to adapt the treatment plan. Currently, the computational effort of the image processing and plan adaptation means they cannot be completed in a clinically acceptable timeframe. This thesis aims to investigate the use of hardware acceleration on Field Programmable Gate Arrays (FPGAs) to accelerate algorithms for segmenting bony anatomy in Computed Tomography (CT) scans, in order to reduce the plan adaptation time for ART. An assessment was made of the overhead incurred by transferring image data to an FPGA-based hardware accelerator using the industry-standard DICOM protocol over an Ethernet connection. The achievable rate was found to be likely to limit the performance of hardware accelerators for ART, highlighting the need for an alternative method of integrating hardware accelerators with existing radiotherapy equipment. A clinically validated segmentation algorithm was adapted for implementation in hardware. This was shown to process three-dimensional CT images up to 13.81 times faster than the original software implementation, and the segmentations produced by the two implementations showed strong agreement. Modifications to the hardware implementation were proposed for segmenting four-dimensional CT scans. These were shown to process image volumes 14.96 times faster than the original software implementation, and the segmentations produced by the two implementations showed strong agreement in most cases. A second, novel, method for segmenting four-dimensional CT data was also proposed. Its hardware implementation executed 1.95 times faster than the software implementation. However, the algorithm was found to be unsuitable for the global segmentation task examined here, although it may be suitable as a refining segmentation within a larger ART algorithm.
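    As a back-of-the-envelope illustration of why the DICOM transfer rate can cap the benefit of an FPGA accelerator, the sketch below applies an Amdahl-style estimate with assumed (not measured) figures: the 60 s software compute time and 20 s transfer time are hypothetical, while the 13.81x kernel speedup is the figure reported above.

    # Transfer time is not accelerated by the FPGA, so it bounds the end-to-end gain.
    def end_to_end_speedup(sw_compute_s, hw_speedup, transfer_s):
        """Amdahl-style estimate of the overall speedup including data transfer."""
        hw_compute_s = sw_compute_s / hw_speedup
        return sw_compute_s / (hw_compute_s + transfer_s)

    # Hypothetical example: a 60 s software segmentation, a 13.81x kernel speedup,
    # and 20 s spent streaming the CT volume over DICOM/Ethernet.
    print(end_to_end_speedup(60.0, 13.81, 20.0))   # ~2.5x overall, far below 13.81x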