    Can adding Ephedrine to Admixture of Propofol & Lidocaine Overcome Propofol Associated Hemodynamic Changes and Injection Pain?

    Purpose: Numerous studies have investigated ways to alleviate propofol injection pain. In this study, we compared a propofol-lidocaine admixture with a propofol-lidocaine admixture combined with ephedrine, with respect to the vascular pain and hemodynamic changes associated with propofol. Methods: This double-blind, prospective, randomised study was performed on 100 ASA I-II patients who were divided into two groups. The first received an admixture of 20 mg lidocaine and 20 ml of 1% propofol (Group L); the other received an admixture of 20 mcg ephedrine, 20 mg lidocaine, and 20 ml of 1% propofol (Group LE). Heart rate, mean arterial pressure, and rate pressure product (RPP) were recorded at baseline and then every minute after induction. Vascular pain was evaluated with a verbal rating scale. Results: Data from 40 patients in Group L and 39 patients in Group LE were evaluated. The incidence of pain was 90% in Group L and 38.4% in Group LE. Mild pain was observed significantly more often in Group L than in Group LE (p<0.05). Mean blood pressure and RPP immediately after induction and 1 min after intubation were significantly higher in Group LE than in Group L (p<0.05). Heart rate was also higher in Group LE immediately after induction and during the first 4 minutes after intubation. Conclusion: Our study demonstrated a significantly lower rate of vascular pain and greater hemodynamic stability in patients receiving 20 mg ephedrine added to an admixture of 20 ml of 1% propofol and 20 mg lidocaine, compared with those who received the lidocaine-propofol admixture alone.
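The rate pressure product (RPP) recorded in this study is a simple derived index of myocardial workload. A minimal sketch of its computation, assuming the conventional definition of heart rate times systolic blood pressure (the abstract does not spell out which pressure it used):

```python
def rate_pressure_product(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
    """Rate pressure product (RPP), a common proxy for myocardial oxygen demand.

    Conventionally computed as heart rate (beats/min) times systolic
    blood pressure (mmHg).
    """
    return heart_rate_bpm * systolic_bp_mmhg


# Example: HR 80 bpm with SBP 120 mmHg gives RPP 9600
print(rate_pressure_product(80, 120))
```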

    Supervised Nonparametric Image Parcellation

    Author manuscript, 25 August 2010. In: 12th International Conference, London, UK, September 20-24, 2009, Proceedings, Part II. Segmentation of medical images is commonly formulated as a supervised learning problem, where manually labeled training data are summarized using a parametric atlas. Summarizing the data alleviates the computational burden at the expense of possibly losing valuable information on inter-subject variability. This paper presents a novel framework for Supervised Nonparametric Image Parcellation (SNIP). SNIP models the intensity and label images as samples of a joint distribution estimated from the training data in a non-parametric fashion. By capitalizing on recently developed fast and robust pairwise image alignment tools, SNIP employs the entire training data to segment a new image via Expectation Maximization. The use of multiple registrations increases robustness to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with manual labels for the white matter, cortex, and subcortical structures. SNIP yields better segmentation than state-of-the-art algorithms in multiple regions of interest. Supported by: NAMIC (NIH NIBIB U54-EB005149); NAC (NIH NCRR P41-RR13218); mBIRN (NIH NCRR U24-RR021382); NIH NINDS (grant R01-NS051826); National Science Foundation (CAREER grant 0642971); NCRR (P41-RR14075); NCRR (R01 RR16594-01A1); NIBIB (R01 EB001550); NIBIB (R01 EB006758); NINDS (R01 NS052585-01); Mind Research Institute; Ellison Medical Foundation; Singapore Agency for Science, Technology and Research.
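The central idea of SNIP, segmenting a new image by letting every registered training pair vote instead of collapsing the training set into one parametric atlas, can be illustrated with a toy weighted label fusion (a simplified sketch, not the authors' EM algorithm; the Gaussian intensity-agreement weighting and all names here are assumptions):

```python
import numpy as np

def fuse_labels(target, warped_intensities, warped_labels, sigma=10.0):
    """Toy nonparametric label fusion over a registered training set.

    target: (H, W) intensity image to segment.
    warped_intensities: (N, H, W) training images registered to the target.
    warped_labels: (N, H, W) integer label maps warped with the same transforms.
    Each training subject casts a per-voxel vote, weighted by how well its
    intensity agrees with the target under a Gaussian likelihood.
    """
    diffs = warped_intensities - target[None]            # (N, H, W)
    weights = np.exp(-0.5 * (diffs / sigma) ** 2)        # intensity agreement
    labels = np.unique(warped_labels)
    votes = np.stack([(weights * (warped_labels == lab)).sum(axis=0)
                      for lab in labels])                # (L, H, W)
    return labels[votes.argmax(axis=0)]                  # winning label per voxel
```

Keeping all training subjects, rather than averaging them into one atlas, is what makes occasional registration failures survivable: a badly aligned subject simply receives a low intensity-agreement weight.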

    Medical Image Imputation from Image Collections

    We present an algorithm for creating high-resolution, anatomically plausible images consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large data sets of clinical images contain a wealth of information, time constraints during acquisition result in sparse scans that fail to capture much of the anatomy. These characteristics often render computational analysis impractical, as many image analysis algorithms tend to fail when applied to such images. Highly specialized algorithms that explicitly handle sparse slice spacing do not generalize well across problem domains. In contrast, we aim to enable the application of existing algorithms that were originally developed for high-resolution research scans to significantly undersampled scans. We introduce a generative model that captures fine-scale anatomical structure across subjects in clinical image collections and derive an algorithm for filling in the missing data in scans with large inter-slice spacing. Our experimental results demonstrate that the resulting method outperforms state-of-the-art upsampling super-resolution techniques, and promises to facilitate subsequent analysis not previously possible with scans of this quality. Our implementation is freely available at https://github.com/adalca/papago. Comment: Accepted at IEEE Transactions on Medical Imaging (© 2018 IEEE).
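To make the problem setting concrete: with large inter-slice spacing, most slice positions are simply missing, and naive approaches interpolate between acquired slices along the through-plane axis. The hypothetical helper below sketches that linear-interpolation baseline, the kind of upsampling the paper's generative model is designed to beat, not the paper's method:

```python
import numpy as np

def interpolate_missing_slices(volume, acquired_idx, full_depth):
    """Linearly interpolate a sparsely sampled volume along the slice axis.

    volume: (K, H, W) array of the K acquired slices.
    acquired_idx: sorted slice positions (length K) of those slices.
    full_depth: number of slices in the reconstructed volume.
    Returns a (full_depth, H, W) volume.
    """
    zs = np.arange(full_depth)
    # np.interp handles one 1-D signal at a time, so apply it voxel
    # column by voxel column along the slice axis
    return np.apply_along_axis(lambda col: np.interp(zs, acquired_idx, col),
                               0, volume)
```

Such interpolation blurs anatomy that changes between acquired slices, which is exactly the fine-scale structure the paper's cross-subject generative model tries to restore.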

    Bayesian model reveals latent atrophy factors with dissociable cognitive trajectories in Alzheimer’s disease

    We used a data-driven Bayesian model to automatically identify distinct latent factors of overlapping atrophy patterns from voxelwise structural MRIs of late-onset Alzheimer’s disease (AD) dementia patients. Our approach estimated the extent to which multiple distinct atrophy patterns were expressed within each participant, rather than assuming that each participant expressed a single atrophy factor. The model revealed a temporal atrophy factor (medial temporal cortex, hippocampus, and amygdala), a subcortical atrophy factor (striatum, thalamus, and cerebellum), and a cortical atrophy factor (frontal, parietal, lateral temporal, and lateral occipital cortices). To explore the influence of each factor in early AD, atrophy factor compositions were inferred in beta-amyloid-positive (Aβ+) participants with mild cognitive impairment (MCI) and in cognitively normal (CN) participants. All three factors were associated with memory decline across the entire clinical spectrum, whereas the cortical factor was associated with executive function decline in Aβ+ MCI participants and AD dementia patients. Direct comparison between factors revealed that the temporal factor showed the strongest association with memory, whereas the cortical factor showed the strongest association with executive function. The subcortical factor was associated with the slowest decline for both memory and executive function compared with the temporal and cortical factors. These results suggest that distinct patterns of atrophy influence decline across different cognitive domains. Quantification of this heterogeneity may enable the computation of individual-level predictions relevant for disease monitoring and customized therapies. Factor compositions of participants and the code used in this article are publicly available for future research. Supported by the United States National Institutes of Health (grants 1K25EB013649-01, 1R21AG050122-01A1, P01AG036694, and F32AG044054).
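The model's key output is a per-participant composition over the latent atrophy factors. As a toy illustration of that idea (not the paper's Bayesian model), one can estimate convex mixture weights that reconstruct a participant's atrophy map from fixed, non-negative factor maps using NMF-style multiplicative updates; all names here are assumptions:

```python
import numpy as np

def factor_composition(atrophy, factor_maps, n_iter=200):
    """Estimate weights w (w >= 0, sum(w) == 1) with atrophy ~= factor_maps.T @ w.

    atrophy: (V,) non-negative voxelwise atrophy for one participant.
    factor_maps: (K, V) non-negative per-factor atrophy maps.
    Returns the participant's factor composition as a length-K vector.
    """
    k = factor_maps.shape[0]
    w = np.full(k, 1.0 / k)                       # start from a uniform mix
    for _ in range(n_iter):
        recon = factor_maps.T @ w + 1e-12         # current reconstruction
        # NMF-style multiplicative update keeps the weights non-negative
        w = w * (factor_maps @ (atrophy / recon)) / (factor_maps.sum(axis=1) + 1e-12)
        w = w / w.sum()                           # renormalise to a composition
    return w
```

Expressing each participant as a mixture, rather than assigning a single dominant factor, is what lets the analysis relate graded factor loadings to distinct cognitive trajectories.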

    Insight into the Spatial Arrangement of the Lysine Tyrosylquinone and Cu2+ in the Active Site of Lysyl Oxidase-like 2

    Lysyl oxidase-like 2 (LOXL2) is a Cu2+- and lysine tyrosylquinone (LTQ)-dependent amine oxidase that catalyzes the oxidative deamination of peptidyl lysine and hydroxylysine residues to promote crosslinking of extracellular matrix proteins. LTQ is post-translationally derived from Lys653 and Tyr689, but its biogenesis mechanism remains elusive. A 2.4 Å Zn2+-bound precursor structure lacking LTQ (PDB: 5ZE3) has become available, in which Lys653 and Tyr689 are 16.6 Å apart; thus, a substantial conformational rearrangement is expected to take place during LTQ biogenesis. However, we have recently shown that the overall structures of the precursor (no LTQ) and the mature (LTQ-containing) LOXL2s are very similar and that the disulfide bonds are conserved. In this study, we aim to gain insight into the spatial arrangement of LTQ and the active-site Cu2+ in the mature LOXL2 using a recombinant LOXL2 that is inhibited by 2-hydrazinopyridine (2HP). Comparative UV-vis and resonance Raman spectroscopic studies of the 2HP-inhibited LOXL2 and the corresponding model compounds, together with an EPR study of the latter, support the conclusion that 2HP-modified LTQ serves as a tridentate ligand to the active-site Cu2+. We propose that LTQ resides within 2.9 Å of the active-site Cu2+ in the mature LOXL2, and that both LTQ and Cu2+ are solvent-exposed.

    Atmospheric Channel Characteristics for Quantum Communication with Continuous Polarization Variables

    We investigate the properties of an atmospheric channel for free-space quantum communication with continuous polarization variables. In our prepare-and-measure setup, coherent polarization states are transmitted through an atmospheric quantum channel 100 m in length on the roof of our institute's building. The signal states are measured by homodyne detection with the help of a local oscillator (LO) that propagates in the same spatial mode as the signal, orthogonally polarized to it. Thus the interference of signal and LO is excellent, and atmospheric fluctuations are autocompensated. The LO also acts as a spatial and spectral filter, which allows for unrestrained daylight operation. Important characteristics of our system are the atmospheric channel influences that could cause polarization, intensity, and position excess noise; we therefore study these influences in detail. Our results indicate that the channel is suitable for our quantum communication system in most weather conditions. Comment: 6 pages, 4 figures; submitted to Applied Physics B following an invitation for the special issue "Selected Papers Presented at the 2009 Spring Meeting of the Quantum Optics and Photonics Section of the German Physical Society".

    Quantum optical coherence can survive photon losses: a continuous-variable quantum erasure correcting code

    A fundamental requirement for enabling fault-tolerant quantum information processing is an efficient quantum error-correcting code (QECC) that robustly protects the involved fragile quantum states from their environment. Just as classical error-correcting codes are indispensable in today's information technologies, it is believed that QECCs will play a similarly crucial role in tomorrow's quantum information systems. Here, we report the first experimental demonstration of a quantum erasure-correcting code that overcomes the devastating effect of photon losses. Whereas errors correspond, in information-theoretic language, to the noise affecting a transmission line, erasures correspond to the in-line probabilistic loss of photons. Our quantum code protects a four-mode entangled mesoscopic state of light against erasures, and its associated encoding and decoding operations require only linear optics and Gaussian resources. Since in-line attenuation is generally the strongest limitation to quantum communication, much more so than noise, such an erasure-correcting code provides a new tool for establishing quantum optical coherence over longer distances. We investigate two approaches for circumventing in-line losses using this code, and demonstrate that both approaches exhibit transmission fidelities beyond what is possible by classical means. Comment: 5 pages, 4 figures.
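The erasure/error distinction is the crux: for an erasure the position of the loss is known, which makes recovery much cheaper than for an unlocated error. A toy classical analogue with one XOR parity share (not the paper's continuous-variable quantum code) makes this concrete:

```python
def encode_with_parity(data):
    """Append one XOR parity share, so any single share lost at a KNOWN
    position (an erasure) can be reconstructed from the survivors."""
    parity = 0
    for d in data:
        parity ^= d
    return data + [parity]


def recover_erasure(shares, lost_index):
    """Rebuild the share at lost_index by XOR-ing all surviving shares."""
    value = 0
    for i, s in enumerate(shares):
        if i != lost_index:
            value ^= s
    return value


shares = encode_with_parity([5, 9, 12])   # parity share is 5 ^ 9 ^ 12
print(recover_erasure(shares, 1))         # recovers the lost value 9
```

With a single parity share, one located loss is always recoverable, whereas an error at an unknown position cannot even be localized; this asymmetry is why erasure correction needs less redundancy than general error correction.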
