
    Phase-based regional oxygen metabolism in magnetic resonance imaging at high field

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 45-48).
    Venous oxygen saturation (Yv) in cerebral veins and the cerebral metabolic rate of oxygen (CMRO₂) are important indicators of brain function and disease. Phase-susceptibility measurements in magnetic resonance imaging (MRI) have been used to quantify Yv in candidate cerebral veins. However, there is currently no method to quantify regional CMRO₂ using MRI. Here we propose a novel technique to quantify CMRO₂ from independent MRI estimates of Yv and cerebral blood flow (CBF). Our approach uses standard gradient-echo (GRE) and arterial spin labeling (ASL) acquisitions to make these measurements. In vivo Yv and CMRO₂ estimates on human subjects are presented from application of our technique at 3 Tesla (3T). We also extended our method to high-field human imaging at 7 Tesla (7T), which allows us to exploit the improved signal-to-noise ratio (SNR) at the same scan duration to achieve higher-resolution analysis of vessels of interest. While the higher field strength poses additional challenges, such as increased main field and excitation field inhomogeneities as well as more severe susceptibility artifacts, initial results suggest that substantial benefits can be realized with phase-based regional oxygen metabolism in MRI at high field.
    by Audrey Peiwen Fan. S.M.
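    The abstract describes combining independent Yv and CBF estimates into CMRO₂. A minimal sketch of the underlying Fick-principle relation, CMRO₂ = CBF · Ca · (Ya − Yv), is below; the function name and the default arterial oxygen-carrying capacity `ca_umol_per_ml` are illustrative assumptions, not values from the thesis.

```python
def cmro2_fick(cbf_ml_per_100g_min, y_a, y_v, ca_umol_per_ml=8.0):
    """Estimate CMRO2 (umol O2 / 100 g / min) from CBF and oxygen saturations
    via the Fick principle: CMRO2 = CBF * Ca * (Ya - Yv).

    cbf_ml_per_100g_min : cerebral blood flow, e.g. from ASL
    y_a, y_v            : arterial and venous oxygen saturation (0..1)
    ca_umol_per_ml      : assumed oxygen-carrying capacity of blood
    """
    return cbf_ml_per_100g_min * ca_umol_per_ml * (y_a - y_v)
```

    For example, with CBF = 50 ml/100 g/min, Ya = 0.98, and a phase-derived Yv = 0.60, the sketch yields a CMRO₂ on the order of 150 µmol/100 g/min, in the physiological range for gray matter.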

    Convolutional Neural Net Learning Can Achieve Production-Level Brain Segmentation in Structural Magnetic Resonance Imaging

    Deep learning implementations using convolutional neural nets have recently demonstrated promise in many areas of medical imaging. In this article, we lay out the methods by which we have achieved consistently high-quality, high-throughput computation of intracranial segmentation from whole-head magnetic resonance images, an essential but typically time-consuming bottleneck for brain image analysis. We refer to this output as "production-level" because it is suitable for routine use in processing pipelines. Training and testing with an extremely large archive of structural images, our segmentation algorithm performs uniformly well over a wide variety of separate national imaging cohorts, giving Dice metric scores exceeding those of other recent deep learning brain extractions. We describe the components involved to achieve this performance, including size, variety, and quality of ground truth, and appropriate neural net architecture. We demonstrate the crucial role of appropriately large and varied datasets, suggesting a less prominent role for algorithm development beyond a threshold of capability.
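    The abstract evaluates segmentations with the Dice metric. A minimal sketch of that overlap score on binary masks is below; the function name and the empty-mask convention are assumptions for illustration.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

    A score of 1.0 means identical masks; brain-extraction methods of the kind described typically report Dice values well above 0.9.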

    Fast image reconstruction with L2-regularization

    Purpose: We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction.
    Materials and Methods: We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization on numerical phantom and in vivo data in three applications: (i) fast Quantitative Susceptibility Mapping (QSM), (ii) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and (iii) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two-to-three order-of-magnitude speed-up is demonstrated with similar reconstruction quality.
    Results: The closed-form solution developed for regularized QSM allows processing of a three-dimensional volume in under 5 s, the proposed lipid suppression algorithm takes under 1 s to reconstruct single-slice MRSI data, and the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 s, all running in Matlab on a standard workstation.
    Conclusion: For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or to L1-based iterative algorithms, without compromising image quality.
    Funding: National Institute for Biomedical Imaging and Bioengineering (U.S.) (NIBIB K99EB012107); National Institutes of Health (U.S.) (NIH R01 EB007942); National Institute for Biomedical Imaging and Bioengineering (U.S.) (NIBIB R01EB006847); Grant K99/R00 EB008129; National Center for Research Resources (U.S.) (NCRR P41RR14075); National Institutes of Health (U.S.) (Blueprint for Neuroscience Research U01MH093765); Siemens Corporation; Siemens-MIT Alliance; MIT Center for Integration of Medicine and Innovative Technology (Medical Engineering Fellowship).
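    The closed-form L2 (Tikhonov) solution the abstract refers to, x = (AᵀA + λI)⁻¹Aᵀb, can be sketched as below. This is a generic dense-matrix illustration, not the paper's implementation; in applications like QSM the operator is diagonalized (e.g. by FFT), which is what makes whole-volume reconstruction possible in seconds.

```python
import numpy as np

def tikhonov_closed_form(A, b, lam):
    """Closed-form L2-regularized least squares:
    x = argmin ||A x - b||^2 + lam * ||x||^2 = (A^T A + lam I)^{-1} A^T b.
    """
    n = A.shape[1]
    # Solve the normal equations directly instead of iterating.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

    With λ = 0 this reduces to ordinary least squares; increasing λ shrinks the solution toward zero, trading data fidelity for regularity.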

    Fitting Age-Period-Cohort Models Using the Intrinsic Estimator: Assumptions and Misapplications

    We thank Demography's editorial office for the opportunity to respond to te Grotenhuis et al.'s commentary regarding the methods used and the results presented in our earlier paper (Masters et al. 2014). In this response, we briefly reply to three general themes raised in the commentary: (1) the presentation and discussion of APC results, (2) the fitting of full APC models to data for which a simpler model holds, and (3) the variation in the estimated age, period, and cohort coefficients produced by the intrinsic estimator (IE), i.e., the "non-uniqueness property" of the IE, as referred to by Pelzer et al. (2015).
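    The identification problem that motivates the intrinsic estimator comes from the exact linear identity cohort = period − age, which makes a naive APC design matrix rank-deficient. A toy numerical check of that collinearity (with made-up ages and periods, purely for illustration):

```python
import numpy as np

# Toy data: one linear term each for age, period, and cohort.
ages = np.array([20, 30, 40, 20, 30, 40], dtype=float)
periods = np.array([1990, 1990, 1990, 2000, 2000, 2000], dtype=float)
cohorts = periods - ages  # exact identity: cohort = period - age

# Design matrix with intercept, age, period, and cohort columns.
X = np.column_stack([np.ones_like(ages), ages, periods, cohorts])

# The cohort column is a linear combination of the others, so the
# matrix has 4 columns but rank 3: no unique least-squares solution.
print(X.shape[1], np.linalg.matrix_rank(X))
```

    Any APC estimator, the IE included, resolves this rank deficiency only by imposing an additional constraint, which is why the choice of constraint (and its assumptions) is central to the debate the response addresses.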

    Which microbial factors really are important in Pseudomonas aeruginosa infections?

    Over the last two decades, tens of millions of dollars have been invested in understanding virulence in the human pathogen Pseudomonas aeruginosa. However, the top 'hits' obtained in a recent TnSeq analysis aimed at identifying those genes that are conditionally essential for infection did not include most of the known virulence factors identified in these earlier studies. Instead, it seems that P. aeruginosa faces metabolic challenges in vivo, and unless it can overcome these, it fails to thrive and is cleared from the host. In this review, we look at the kinds of metabolic pathways that the pathogen seems to find essential, and comment on how this knowledge might be therapeutically exploited.
    Funding: Work in the MW laboratory is funded by the BBSRC (grant BB/M019411/1) and the EU (Marie Curie Educational Training Network "INTEGRATE"). AC is supported by the Cambridge Trusts. EM is funded by a studentship from the MRC. SB is supported by a Hershel Smith studentship. E-FU is a clinical research fellow funded by the CF Trust (UK), Papworth Hospital NHS Trust and the Wellcome Trust. YA is supported by a scholarship from the Yosef Jameel Foundation. YB is an EPSRC-funded PhD student. Work in the laboratory of AF is supported by the Wellcome Trust. Work in the DRS laboratory is supported by the EPSRC.
    This is the author accepted manuscript. The final version is available from Future Science Group via http://dx.doi.org/10.2217/fmb.15.10

    Multitask Learning for Time Series Data with 2D Convolution

    Multitask learning (MTL) aims to develop a unified model that can handle a set of closely related tasks simultaneously. By optimizing the model across multiple tasks, MTL generally surpasses its non-MTL counterparts in terms of generalizability. Although MTL has been extensively researched in various domains such as computer vision, natural language processing, and recommendation systems, its application to time series data has received limited attention. In this paper, we investigate the application of MTL to the time series classification (TSC) problem. However, when we integrate the state-of-the-art 1D convolution-based TSC model with MTL, the performance of the TSC model actually deteriorates. By comparing the 1D convolution-based models with the Dynamic Time Warping (DTW) distance function, it appears that the underwhelming results stem from the limited expressive power of the 1D convolutional layers. To overcome this challenge, we propose a novel design for a 2D convolution-based model that enhances the model's expressiveness. Leveraging this advantage, our proposed method outperforms competing approaches on both the UCR Archive and an industrial transaction TSC dataset.
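    The DTW distance used as the comparison baseline in the abstract is a standard dynamic-programming alignment; a minimal sketch (classic O(n·m) version with absolute-difference cost, no windowing or normalization) is below for readers unfamiliar with it.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1D sequences.

    Fills a cumulative-cost table D where D[i, j] is the cost of the best
    warping path aligning x[:i] with y[:j]; cost is |x[i-1] - y[j-1]|.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    Because DTW aligns sequences elastically in time, it captures shape similarity that rigid pointwise comparisons miss, which is what makes it a strong reference point for judging the expressiveness of convolutional TSC models.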

    Toward a Foundation Model for Time Series Data

    A foundation model is a machine learning model trained on a large and diverse set of data, typically using self-supervised learning-based pre-training techniques, that can be adapted to various downstream tasks. However, current research on time series pre-training has predominantly focused on models trained exclusively on data from a single domain. As a result, these models possess domain-specific knowledge that may not be easily transferable to time series from other domains. In this paper, we aim to develop an effective time series foundation model by leveraging unlabeled samples from multiple domains. To achieve this, we repurposed the publicly available UCR Archive and evaluated four existing self-supervised learning-based pre-training methods, along with a novel method, on the datasets. We tested these methods using four popular neural network architectures for time series to understand how the pre-training methods interact with different network designs. Our experimental results show that pre-training improves downstream classification tasks by enhancing the convergence of the fine-tuning process. Furthermore, we found that the proposed pre-training method, when combined with the Transformer model, outperforms the alternatives.
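    The abstract does not specify the pretext tasks; a common self-supervised setup for time series is masked reconstruction, where contiguous segments are hidden and the model learns to fill them in. The sketch below only generates the masked inputs and targets; it is a generic illustration, not the paper's method, and all names in it are assumptions.

```python
import numpy as np

def mask_segments(series, mask_ratio=0.25, segment_len=4, rng=None):
    """Zero out random contiguous segments of a 1D series.

    Returns (masked_series, mask), where mask is True at the positions a
    reconstruction model would be trained to predict.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    series = np.asarray(series, dtype=float)
    mask = np.zeros(series.shape[0], dtype=bool)
    n_segments = max(1, int(mask_ratio * series.shape[0] / segment_len))
    for _ in range(n_segments):
        start = rng.integers(0, max(1, series.shape[0] - segment_len + 1))
        mask[start:start + segment_len] = True
    masked = series.copy()
    masked[mask] = 0.0  # hide the segment; original values become targets
    return masked, mask
```

    A pre-training loop would then minimize reconstruction error only on the masked positions, so the encoder must model temporal structure rather than copy its input.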

    Improving hypertension control through a collaboration between an academic medical center and a chain community pharmacy

    Introduction: Approximately one-third of adults in the United States have hypertension (HTN), leading to increased morbidity and mortality.
    Objectives: This quality improvement intervention was designed to improve HTN control among community-dwelling adults through collaboration between patient-centered medical homes (PCMH) within an academic medical center and chain community pharmacies.
    Methods: Four PCMH sites in Ann Arbor, Michigan that were in close proximity to two Meijer pharmacies participated in this study between September 2016 and March 2017. The study compared HTN outcomes for patients seen at the two community pharmacies, where the pharmacists received training on HTN management, with outcomes for patients who received usual care at their PCMH. The primary outcome was the percentage of patients who met their blood pressure (BP) goal of either <140/90 mmHg or <150/90 mmHg compared with matched controls who received usual care at the PCMH. Secondary outcomes included the number of medication recommendations made, the percentage of recommendations accepted by the primary care provider (PCP), and patient satisfaction.
    Results: Patients who received care at the community pharmacy (n = 155) had a higher rate of BP control at 3 months than matched controls (61.8% vs 47.7%, P = 0.013). A total of 29 medication recommendations were made by community pharmacists, and 26 were accepted by the PCP. Nearly 95% of patients rated the care they received as excellent or very good, and over 95% stated that they would recommend the pharmacist at the Meijer pharmacy to their family and friends.
    Conclusion: Patients who received HTN management services as part of a collaboration between an academic medical center and a chain community pharmacy were significantly more likely to have controlled BP at 3 months compared with matched controls who received standard care. This model shows promise as a strategy to expand access to care for patients while being mutually beneficial for community pharmacies and health systems.
    Peer Reviewed
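    The reported comparison (61.8% vs 47.7% controlled, P = 0.013) is the kind of result a two-proportion z-test produces. A sketch of that test is below; the abstract does not state its statistical method or the control group size, so equal group sizes of 155 are assumed here purely for illustration.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test on success rates p1, p2
    with group sizes n1, n2 (pooled-variance form)."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value
```

    Plugging in the abstract's rates with the assumed group sizes gives a p-value below 0.05, consistent with the reported statistically significant difference.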

    An Efficient Content-based Time Series Retrieval System

    A Content-based Time Series Retrieval (CTSR) system is an information retrieval system for users to interact with time series emerging from multiple domains, such as finance, healthcare, and manufacturing. For example, users seeking to learn more about the source of a time series can submit the time series as a query to the CTSR system and retrieve a list of relevant time series with associated metadata. By analyzing the retrieved metadata, users can gather more information about the source of the time series. Because the CTSR system is required to work with time series data from diverse domains, it needs a high-capacity model to effectively measure the similarity between different time series. On top of that, the model within the CTSR system has to compute the similarity scores in an efficient manner, as the users interact with the system in real time. In this paper, we propose an effective and efficient CTSR model that outperforms alternative models, while still providing reasonable inference runtimes. To demonstrate the capability of the proposed method in solving business problems, we compare it against alternative models using our in-house transaction data. Our findings reveal that the proposed model is the most suitable solution compared to others for our transaction data problem.
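    The query-then-rank workflow the abstract describes can be sketched with a simple baseline similarity, z-normalized Euclidean distance; the paper's actual model is a learned one, so everything below (names, corpus structure, distance choice) is an illustrative assumption.

```python
import numpy as np

def znorm(x):
    """Z-normalize a series so retrieval is invariant to offset and scale."""
    x = np.asarray(x, dtype=float)
    sd = x.std()
    return (x - x.mean()) / sd if sd > 0 else x - x.mean()

def retrieve(query, corpus, top_k=3):
    """Rank stored series (name -> series) by distance to the query."""
    q = znorm(query)
    dists = [(name, float(np.linalg.norm(q - znorm(s))))
             for name, s in corpus.items()]
    return sorted(dists, key=lambda kv: kv[1])[:top_k]
```

    In a real CTSR system the returned names would carry metadata about each series' source; a learned similarity model replaces the fixed distance while keeping this same retrieve-and-rank shape.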

    HDAC1 modulates OGG1-initiated oxidative DNA damage repair in the aging brain and Alzheimer’s disease

    DNA damage contributes to brain aging and neurodegenerative diseases. However, the factors stimulating DNA repair to stave off functional decline remain obscure. We show that HDAC1 modulates OGG1-initiated 8-oxoguanine (8-oxoG) repair in the brain. HDAC1-deficient mice display age-associated DNA damage accumulation and cognitive impairment. HDAC1 stimulates OGG1, a DNA glycosylase known to remove 8-oxoG lesions that are associated with transcriptional repression. HDAC1 deficiency causes impaired OGG1 activity, 8-oxoG accumulation at the promoters of genes critical for brain function, and transcriptional repression. Moreover, we observe elevated 8-oxoG along with reduced HDAC1 activity and downregulation of a similar gene set in the 5XFAD mouse model of Alzheimer's disease. Notably, pharmacological activation of HDAC1 alleviates the deleterious effects of 8-oxoG in aged wild-type and 5XFAD mice. Our work uncovers important roles for HDAC1 in 8-oxoG repair and highlights the therapeutic potential of HDAC1 activation to counter functional decline in brain aging and neurodegeneration.