927 research outputs found

    Investigation of the Effects of Image Signal-to-Noise Ratio on TSPO PET Quantification of Neuroinflammation

Neuroinflammation may be imaged using positron emission tomography (PET) and the tracer [11C]-PK11195. Accurate and precise quantification of 18 kilodalton Translocator Protein (TSPO) binding parameters in the brain has proven difficult with this tracer, due to an unfavourable combination of low target concentration in tissue, low brain uptake of the tracer and relatively high non-specific binding, all of which lead to high levels of relative image noise. To address these limitations, research into new radioligands for the TSPO, with higher brain uptake and lower non-specific binding relative to [11C]-PK11195, is being conducted worldwide. However, factors other than radioligand properties are known to influence the signal-to-noise ratio in quantitative PET studies, including scanner sensitivity, image reconstruction algorithms and data analysis methodology. The aim of this thesis was to investigate and validate computational tools for predicting image noise in dynamic TSPO PET studies, and to employ those tools to investigate the factors that affect image SNR and the reliability of TSPO quantification in the human brain. The feasibility of performing multiple (n≥40) independent Monte Carlo simulations for each dynamic [11C]-PK11195 frame, with realistic modelling of the radioactivity source, attenuation and PET tomograph geometries, was investigated. A Beowulf-type high-performance computer cluster, constructed from commodity components, was found to be well suited to this task. Timing tests on a single desktop computer system indicated that a computer cluster capable of simulating an hour-long dynamic [11C]-PK11195 PET scan, with 40 independent repeats and a total simulation time of less than 6 weeks, could be constructed for less than 10,000 Australian dollars. A computer cluster containing 44 computing cores was therefore assembled, and a peak simulation rate of 2.84×10^5 photon pairs per second was achieved using the GEANT4 Application for Tomographic Emission (GATE) Monte Carlo simulation software. A simulated PET tomograph was developed in GATE that closely modelled the performance characteristics of several real-world clinical PET systems in terms of spatial resolution, sensitivity, scatter fraction and counting rate performance. The simulated PET system was validated using adaptations of the National Electrical Manufacturers Association (NEMA) quality assurance procedures within GATE. Image noise in dynamic TSPO PET scans was estimated by performing n=40 independent Monte Carlo simulations of an hour-long [11C]-PK11195 scan, and of an hour-long dynamic scan for a hypothetical TSPO ligand with double the brain activity concentration of [11C]-PK11195. From these data an analytical noise model was developed that allowed image noise to be predicted for any combination of brain tissue activity concentration and scan duration. The noise model was validated for the purpose of determining the precision of kinetic parameter estimates for TSPO PET. An investigation was made into the effects of activity concentration in tissue, radionuclide half-life, injected dose and compartmental model complexity on the reproducibility of kinetic parameters. Injecting 555 MBq of a carbon-11 labelled TSPO tracer produced binding parameter precision similar to that of 185 MBq of a fluorine-18 labelled tracer, and a moderate (20%) reduction in precision was observed for the reduced carbon-11 dose of 370 MBq.
Results indicated that a factor of 2 increase in frame count level (relative to [11C]-PK11195, due for example to higher ligand uptake, injected dose or absolute scanner sensitivity) is required to obtain reliable binding parameter estimates for small regions of interest when fitting a two-tissue compartment, four-parameter compartmental model. However, compartmental model complexity had a similarly large effect: reducing model complexity from the two-tissue compartment, four-parameter model to a one-tissue compartment, two-parameter model produced a 78% reduction in the coefficient of variation of the binding parameter estimates at every tissue activity level and region size studied. In summary, this thesis describes the development and validation of Monte Carlo methods for estimating image noise in dynamic TSPO PET scans, and analytical methods for predicting relative image noise over a wide range of tissue activity concentrations and acquisition durations. The findings suggest that a broader consideration of the kinetic properties of novel TSPO radioligands, with a view to selecting ligands that are amenable to analysis with a simple one-tissue compartment model, is at least as important in the search for the next generation of TSPO PET tracers as efforts directed towards reducing image noise, such as achieving higher brain uptake.
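The precision experiment described above can be illustrated with a toy version: repeatedly fit a one-tissue compartment model to noisy synthetic time-activity curves and compute the coefficient of variation of the binding parameter V_T = K1/k2. The sketch below is illustrative only; the plasma input function, rate constants and noise level are assumptions, not values from the thesis.

```python
# Minimal sketch of kinetic-parameter precision estimation with a one-tissue
# compartment (1TCM) model; all numerical values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 120)              # minutes, hour-long dynamic scan
cp = 10.0 * t * np.exp(-t / 4.0)         # assumed plasma input (arbitrary units)

def one_tissue(t, K1, k2):
    """C_T(t) = K1 * exp(-k2*t) convolved with C_p(t), by discrete convolution."""
    dt = t[1] - t[0]
    return K1 * np.convolve(np.exp(-k2 * t), cp)[: t.size] * dt

rng = np.random.default_rng(0)
true_K1, true_k2 = 0.1, 0.05             # illustrative rate constants
clean = one_tissue(t, true_K1, true_k2)

vt_estimates = []
for _ in range(40):                      # 40 repeats, mirroring the n=40 simulations
    noisy = clean + rng.normal(0, 0.05 * clean.max(), t.size)
    (K1, k2), _ = curve_fit(one_tissue, t, noisy, p0=[0.2, 0.1],
                            bounds=([1e-3, 1e-3], [1.0, 1.0]))
    vt_estimates.append(K1 / k2)         # V_T = K1/k2, the binding parameter

cov = np.std(vt_estimates) / np.mean(vt_estimates)
print(f"V_T coefficient of variation: {100 * cov:.1f}%")
```

Raising the noise level or swapping in a four-parameter model in this sketch qualitatively reproduces the precision loss the thesis quantifies.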

    Evaluating Attentional Impulsivity: A Biomechatronic Approach

Executive function, also known as executive control, is a multifaceted construct encompassing several cognitive abilities, including working memory, attention, impulse control, and cognitive flexibility. Accurately measuring executive functioning skills requires assessment tools and strategies that can quantify the behaviors associated with cognitive control. Impulsivity, which encompasses a range of cognitive control deficits, is typically evaluated using conventional neuropsychological tests. This study instead proposes a biomechatronic approach to assess impulsivity as a behavioral construct, in line with traditional neuropsychological assessments. Thirty-four healthy adults first completed the Barratt Impulsiveness Scale (BIS-11). A low-cost biomechatronic system was then developed, and an approach based on standard neuropsychological tests, including the trail-making test and serial subtraction-by-seven, was used to evaluate impulsivity. Three tests were conducted: WTMT-A (numbers only), WTMT-B (numbers and letters), and a dual task combining WTMT-A with serial subtraction-by-seven. The preliminary findings suggest that the proposed instrument and experiments successfully generated an attentional impulsivity score and differentiated between participants with high and low attentional impulsivity. (10 pages, 5 figures, 5 tables)
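For illustration only, the sketch below shows one plausible way completion times from such tests could be combined into simple derived indices; the formulas are hypothetical and are not the instrument's actual scoring method.

```python
# Hypothetical derived scores from trail-making completion times (seconds).
# These formulas are common in the neuropsychology literature but are NOT
# claimed to be the scoring used by the paper's biomechatronic instrument.
def trail_making_ratio(time_b: float, time_a: float) -> float:
    """WTMT-B / WTMT-A ratio, a common index of set-shifting cost."""
    return time_b / time_a

def dual_task_cost(dual: float, single: float) -> float:
    """Relative slowdown when serial subtraction-by-seven is added to WTMT-A."""
    return (dual - single) / single

print(trail_making_ratio(61.2, 28.4))    # example completion times
print(dual_task_cost(44.9, 28.4))
```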

    Non-invasive wearable sensing systems for continuous health monitoring and long-term behavior modeling

Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. Includes bibliographical references (p. 212-228). Deploying new healthcare technologies for proactive health and elder care will become a major priority over the next decade, as medical care systems worldwide become strained by aging populations. This thesis presents LiveNet, a distributed mobile system based on low-cost commodity hardware that can be deployed for a variety of healthcare applications. LiveNet embodies a flexible infrastructure platform intended for long-term ambulatory health monitoring with real-time data streaming and context classification capabilities. Using LiveNet, we are able to continuously monitor a wide range of physiological signals together with the user's activity and context, to develop a personalized, data-rich health profile of a user over time. Most existing clinical sensing technologies have focused on accuracy and reliability at the expense of cost-effectiveness, burden on the patient, and portability. Future proactive health technologies, on the other hand, must be affordable, unobtrusive, and non-invasive if the general population is to adopt them. In this thesis, we focus on the potential of using features derived from minimally invasive physiological and contextual sensors such as motion, speech, heart rate, skin conductance, and temperature/heat flux, which can be used in combination with mobile technology to create powerful context-aware systems that are transparent to the user. In many cases, these non-invasive sensing technologies can completely replace more invasive diagnostic sensing for applications in long-term monitoring, behavior and physiology trending, and real-time proactive feedback and alert systems. Non-invasive sensing technologies are particularly important in ambulatory and continuous monitoring applications, where the more cumbersome sensing equipment typically found in medical and clinical research settings is not usable. The research in this thesis demonstrates that it is possible to use simple non-invasive physiological and contextual sensing with the LiveNet system to accurately classify a variety of physiological conditions. We demonstrate that non-invasive sensing can be correlated with a variety of important physiological and behavioral phenomena, and thus can substitute for more invasive and unwieldy forms of medical monitoring devices while still providing a high level of diagnostic power. From this foundation, the LiveNet system is deployed in a number of studies to quantify physiological and contextual state. First, a number of classifiers for important health and general contextual cues, such as activity state and stress level, are developed from basic non-invasive physiological sensing. We then demonstrate that the LiveNet system can be used to develop systems that classify clinically significant physiological and pathological conditions and are robust in the presence of noise, motion artifacts, and other adverse conditions found in real-world situations. This is highlighted in a cold exposure and core body temperature study conducted in collaboration with the U.S. Army Research Institute of Environmental Medicine, in which we show that real-time implementations of these classifiers can serve as proactive health monitors providing instantaneous feedback relevant to soldier monitoring applications.
This thesis also demonstrates that the LiveNet platform can be used for long-term continuous monitoring applications to study physiological trends that vary slowly with time. In a clinical study with the Psychiatry Department at the Massachusetts General Hospital, the LiveNet platform is used to continuously monitor clinically depressed patients during their stays on an in-patient ward for treatment. We show that we can accurately correlate physiology and behavior with depression state, as well as track changes in depression state over time through the course of treatment. This study demonstrates how long-term physiological and behavioral changes can be captured to objectively measure the efficacy of medical treatment and medication. In another long-term monitoring study, the LiveNet platform is used to collect data on people's everyday behavior as they go through daily life. By collecting long-term behavioral data, we demonstrate the possibility of modeling and predicting high-level behavior using simple physiological and contextual information derived solely from ambulatory mobile sensing technology. (by Michael Sung, Ph.D.)
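As a rough illustration of the kind of context classification described above (not the thesis pipeline itself), a standard classifier can be trained on simple non-invasive features; the synthetic data, feature choices, and classifier below are all assumptions for demonstration.

```python
# Minimal sketch: classifying a rest vs. stress context from non-invasive
# features (motion variance, heart rate, skin conductance) on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
# columns: accelerometer variance, mean heart rate (bpm), skin conductance (uS)
rest   = np.column_stack([rng.normal(0.1, 0.05, n), rng.normal(65, 5, n), rng.normal(2, 0.5, n)])
stress = np.column_stack([rng.normal(0.2, 0.08, n), rng.normal(88, 8, n), rng.normal(6, 1.0, n)])
X = np.vstack([rest, stress])
y = np.array([0] * n + [1] * n)          # 0 = rest, 1 = stressed

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # held-out classification accuracy
```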

    Computational design and optimization of electro-physiological sensors

Electro-physiological sensing devices are becoming increasingly common in diverse applications. However, designing such sensors in compact form factors and for high-quality signal acquisition is a challenging task even for experts; it is typically done using heuristics and requires extensive training. Our work proposes a computational approach for designing multi-modal electro-physiological sensors. By employing an optimization-based approach alongside an integrated predictive model for multiple modalities, compact sensors can be created that offer an optimal trade-off between high signal quality and small device size. The task is assisted by a graphical tool that allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. They demonstrate that generated designs can achieve an optimal balance between the size of the sensor and its signal acquisition capability, outperforming expert-generated solutions.
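A minimal sketch of the underlying optimization idea, assuming a stand-in quality model with diminishing returns in electrode area and spacing (the paper's actual multi-modal predictive model is not reproduced here):

```python
# Sketch: trade off predicted signal quality against sensor footprint with a
# weighted objective. The quality and size models are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def predicted_quality(x):
    area, spacing = x                    # cm^2, cm (hypothetical design variables)
    return np.log1p(3 * area) + np.log1p(2 * spacing)   # diminishing returns

def device_size(x):
    area, spacing = x
    return area + 0.5 * spacing

w = 0.6                                  # designer-chosen preference weight
objective = lambda x: -w * predicted_quality(x) + (1 - w) * device_size(x)
res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.1, 4.0), (0.1, 3.0)])
print(res.x)                             # electrode area and spacing at the trade-off
```

In a designer-in-the-loop setting, the weight w would be adjusted interactively while the optimizer re-solves and the tool re-renders candidate designs.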

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and the associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is evaluated on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to a straightforward implementation of the Alternating Minimization (AM) algorithm of O'Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of three NVIDIA TITAN X graphics processing unit (GPU) devices with CUDA programming tools, achieving an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU based, voxel-driven, multislice analytical reconstruction algorithm, Feldkamp-Davis-Kress (FDK) [2], and achieve an average overall speedup of 1382X over the baseline CPU implementation using the 3 TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep convolutional neural network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with the proposed deep CNN-based algorithm compared against widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically sized datasets.
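For orientation, the sketch below shows a generic MLEM-style multiplicative update for Poisson data on a toy problem. It illustrates the class of statistical iterative algorithms discussed, not the O'Sullivan-Benac AM algorithm or its GPU implementation.

```python
# Toy statistical iterative reconstruction: MLEM-style update for y ~ Poisson(Ax).
# The system matrix, sizes, and iteration count are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((64, 32))                 # toy system (projection) matrix
x_true = rng.random(32) * 5
y = rng.poisson(A @ x_true)              # noisy measured counts (toy sinogram)

x = np.ones(32)                          # uniform non-negative initial image
sensitivity = A.T @ np.ones(64)          # backprojection of ones
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12) # measured / predicted counts
    x *= (A.T @ ratio) / sensitivity     # multiplicative MLEM update

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```

In practice, the projection (A @ x) and backprojection (A.T @ ratio) steps dominate the cost, which is why they are the operations parallelized on the GPUs.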

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing the anatomical variations that present clinically, failing to challenge users' higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but one subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation, an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcome. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.

    Trustworthy Wireless Personal Area Networks

In the Internet of Things (IoT), everyday objects are equipped with the ability to compute and communicate. These smart things have invaded the lives of everyday people: they are constantly carried or worn on our bodies, and have entered our homes, our healthcare, and beyond. This has given rise to wireless networks of smart, connected, always-on, personal things that are constantly around us and have unfettered access to our most personal data as well as to all of the other devices that we own and encounter throughout our day. It should, therefore, come as no surprise that our personal devices and data are frequent targets of ever-present threats. Securing these devices and networks, however, is challenging. In this dissertation, we outline three critical problems in the context of Wireless Personal Area Networks (WPANs) and present our solutions to them. First, I present our Trusted I/O solution (BASTION-SGX) for protecting sensitive user data transferred between wirelessly connected (Bluetooth) devices. This work shows how in-transit data can be protected from privileged threats, such as a compromised OS, on commodity systems. I present insights into the Bluetooth architecture, Intel's Software Guard Extensions (SGX), and how a Trusted I/O solution can be engineered on commodity devices equipped with SGX. Second, I present our work on AMULET and how we successfully built a wearable health hub that can run multiple health applications, provide strong security properties, and operate on a single charge for weeks or even months at a time. I present the design and evaluation of our highly efficient event-driven programming model, the design of our low-power operating system, and developer tools for profiling ultra-low-power applications at compile time. Third, I present a new approach (VIA) that helps devices at the center of WPANs (e.g., smartphones) verify the authenticity of interactions with other devices. This work builds on past work in anomaly detection techniques and shows how these techniques can be applied to Bluetooth network traffic. Specifically, we show how to create normality models based on fine- and coarse-grained insights from network traffic, which can be used to verify the authenticity of future interactions.
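A minimal sketch of the normality-model idea follows, using an illustrative single-feature Gaussian model and z-score threshold; the actual VIA feature set and models are richer than this assumption.

```python
# Sketch: flag Bluetooth interactions whose traffic statistics fall far from a
# normality model learned on historical traffic. Feature and threshold are
# illustrative assumptions, not VIA's actual design.
import numpy as np

rng = np.random.default_rng(3)
train = rng.normal(50.0, 5.0, 1000)      # e.g. historical inter-packet intervals (ms)
mu, sigma = train.mean(), train.std()

def is_authentic(observed: np.ndarray, z_max: float = 3.0) -> bool:
    """Accept an interaction only if its mean feature lies within z_max sigma."""
    z = abs(observed.mean() - mu) / sigma
    return z < z_max

print(is_authentic(rng.normal(50, 5, 40)))   # typical timing -> True
print(is_authentic(rng.normal(80, 5, 40)))   # anomalous timing -> False
```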

    Activity Report: Automatic Control 2013


    Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-Ring PET Insert System

X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use come increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their significantly high computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical x-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Both of the imaging modalities investigated in this work have their own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and taking advantage of the specific symmetry found in helical CT. Basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves performance comparable or superior to the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing reconstruction methods in PET. We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired by a PET insert prototype system. Overall, we have developed new, computationally efficient methods to perform fully 3D statistical reconstructions on clinically sized datasets.
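The "underlying statistical model for the measured data" referred to above is typically posed as a penalized-likelihood objective. A generic form, in illustrative notation that may differ from the thesis's own formulation, is:

```latex
% Generic penalized-likelihood reconstruction objective (illustrative notation):
% A is the system matrix, y the measured data, L the negative Poisson
% log-likelihood of y given the forward projection Ax, R an edge-preserving
% regularizer, and beta its strength.
\hat{x} \;=\; \operatorname*{arg\,min}_{x \ge 0} \; L(y \mid Ax) \;+\; \beta\, R(x)
```

The common framework arises because CT and PET differ mainly in the form of L and of A, while the non-negativity constraint, regularization, and iterative minimization machinery carry over between the two modalities.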