
    Black holes as mirrors: quantum information in random subsystems

    We study information retrieval from evaporating black holes, assuming that the internal dynamics of a black hole is unitary and rapidly mixing, and assuming that the retriever has unlimited control over the emitted Hawking radiation. If the evaporation of the black hole has already proceeded past the "half-way" point, where half of the initial entropy has been radiated away, then additional quantum information deposited in the black hole is revealed in the Hawking radiation very rapidly. Information deposited prior to the half-way point remains concealed until the half-way point, and then emerges quickly. These conclusions hold because typical local quantum circuits are efficient encoders for quantum error-correcting codes that nearly achieve the capacity of the quantum erasure channel. Our estimate of a black hole's information retention time, based on speculative dynamical assumptions, is just barely compatible with the black hole complementarity hypothesis. Comment: 18 pages, 2 figures. (v2): discussion of decoding complexity clarified.
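    For context on the erasure-channel claim in this abstract, the quantum capacity of the erasure channel is a standard result (it is not spelled out in the abstract itself); a short LaTeX gloss, under that assumption:

        % Quantum capacity of the erasure channel with erasure probability p
        % (standard result, quoted only to unpack "nearly achieve the capacity"):
        \[
          Q(\mathcal{E}_p) = \max\{0,\, 1 - 2p\},
        \]
        % so a near-capacity encoder protects close to 1 - 2p logical qubits
        % per physical qubit whenever the erasure probability p < 1/2.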

    Outage capacity and source distortion analysis for NOMA users in 5G systems


    Reproducibility of Deep Gray Matter Atrophy Rate Measurement in a Large Multicenter Dataset

    BACKGROUND AND PURPOSE: Precise in vivo measurement of deep GM volume change is an essential prerequisite for adequate evaluation of disease progression and new treatments. However, quantitative data on the reproducibility of deep GM structure volumetry are not yet available. In this paper we investigate this reproducibility using a large multicenter dataset. MATERIALS AND METHODS: We assessed the reproducibility of 2 automated segmentation software packages (FreeSurfer and the FMRIB Integrated Registration and Segmentation Tool) by quantifying the volume changes of deep GM structures using back-to-back MR imaging scans from the Alzheimer Disease Neuroimaging Initiative's multicenter dataset. Five hundred sixty-two subjects with scans at baseline and 1 year were included. Reproducibility was investigated in the bilateral caudate nucleus, putamen, amygdala, globus pallidus, and thalamus by carrying out descriptive, multilevel, and variance component analyses. RESULTS: Median absolute back-to-back differences varied between GM structures, ranging from 59.6 to 156.4 μL for volume change and from 1.26% to 8.63% for percentage volume change. FreeSurfer performed better for longitudinal volume change for the bilateral amygdala, putamen, left caudate nucleus (P < .005), and right thalamus (P < .001). For longitudinal percentage volume change, FreeSurfer performed better for the left amygdala, bilateral caudate nucleus, and left putamen (P < .001). Smaller limits of agreement were found for FreeSurfer for both outcomes for all GM structures except the globus pallidus. Our results showed that back-to-back differences in 1-year percentage volume change were approximately 1.5-3.5 times larger than the mean measured 1-year volume change of those structures. CONCLUSIONS: Longitudinal deep GM atrophy measures should be interpreted with caution. Furthermore, deep GM atrophy measurement techniques require substantially improved reproducibility, specifically when aiming for personalized medicine.
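    As an illustration of the reproducibility metrics reported above (median absolute back-to-back difference and limits of agreement), here is a minimal Python sketch on simulated volumes; the subject count is taken from the abstract, but all volume and noise figures are hypothetical and not from the paper:

        # Minimal sketch, not the authors' pipeline: back-to-back (BTB)
        # reproducibility of a longitudinal volume-change measurement,
        # computed on simulated per-subject thalamus volumes.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 562  # subject count from the abstract; the data below are simulated

        # Hypothetical "true" volumes (microliters) at baseline and 1 year,
        # with roughly 1% annual atrophy.
        true_t0 = rng.normal(7500, 600, n)
        true_t1 = true_t0 * (1 - rng.normal(0.01, 0.005, n))

        # Two BTB measurements per time point, each with segmentation noise.
        a_t0 = true_t0 + rng.normal(0, 60, n)
        b_t0 = true_t0 + rng.normal(0, 60, n)
        a_t1 = true_t1 + rng.normal(0, 60, n)
        b_t1 = true_t1 + rng.normal(0, 60, n)

        # Longitudinal volume change from each member of the BTB pair.
        change_a = a_t1 - a_t0
        change_b = b_t1 - b_t0
        btb_diff = change_a - change_b

        # Median absolute BTB difference (first outcome in the abstract).
        print("median |BTB difference| (uL):", np.median(np.abs(btb_diff)))

        # Bland-Altman-style 95% limits of agreement between the two measurements.
        loa = np.mean(btb_diff) + np.array([-1.96, 1.96]) * np.std(btb_diff, ddof=1)
        print("95% limits of agreement (uL):", loa)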

    Loss bounds for online category ranking

    Category ranking is the task of ordering labels with respect to their relevance to an input instance. In this paper we describe and analyze several algorithms for online category ranking, where the instances are revealed in a sequential manner. We describe additive and multiplicative updates which constitute the core of the learning algorithms. The updates are derived by solving a constrained optimization problem for each new instance. We derive loss bounds for the algorithms by using the properties of the dual solution while imposing additional constraints on the dual form. Finally, we outline and analyze the convergence of a general update that can be employed with any Bregman divergence.
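    To make the additive-update idea concrete, here is a minimal sketch of a perceptron-style online category ranker with one prototype vector per label; it is written in the spirit of the additive updates described above and is not a verbatim reproduction of the paper's algorithms (the class and parameter names are invented for illustration):

        # Minimal sketch of an additive online category-ranking update:
        # mis-ordered (relevant, irrelevant) label pairs pull their prototype
        # vectors toward / away from the instance, splitting the step evenly.
        import numpy as np

        class AdditiveCategoryRanker:
            def __init__(self, n_labels, n_features, lr=1.0):
                self.W = np.zeros((n_labels, n_features))  # one prototype per label
                self.lr = lr

            def rank(self, x):
                # Order labels by decreasing score <w_r, x>.
                return np.argsort(-self.W @ x)

            def update(self, x, relevant):
                scores = self.W @ x
                relevant = set(relevant)
                irrelevant = [s for s in range(len(scores)) if s not in relevant]
                # Every relevant label should outrank every irrelevant one.
                errors = [(r, s) for r in relevant for s in irrelevant
                          if scores[r] <= scores[s]]
                if not errors:
                    return
                tau = self.lr / len(errors)
                for r, s in errors:
                    self.W[r] += tau * x
                    self.W[s] -= tau * x

        # Usage: feed (instance, relevant-label-set) pairs one at a time.
        ranker = AdditiveCategoryRanker(n_labels=5, n_features=10)
        rng = np.random.default_rng(1)
        x = rng.normal(size=10)
        ranker.update(x, relevant={0, 3})
        print(ranker.rank(x))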

    Assessing the reproducibility of the SienaX and Siena brain atrophy measures using the ADNI back-to-back MP-RAGE MRI scans

    SienaX and Siena are widely used and fully automated algorithms for measuring whole brain volume and volume change in cross-sectional and longitudinal MRI studies and are particularly useful in studies of brain atrophy. The reproducibility of the algorithms was assessed using the 3D T1-weighted MP-RAGE scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The back-to-back (BTB) MP-RAGE scans in the ADNI data set make it a valuable benchmark against which to assess the performance of algorithms for measuring atrophy in the human brain with MRI scans. A total of 671 subjects were included for SienaX and 385 subjects for Siena. The annual percentage brain volume change (PBVC) rates were -0.65 ± 0.82%/year for the healthy controls, -1.15 ± 1.21%/year for mild cognitive impairment (MCI) and -1.84 ± 1.33%/year for AD, in line with previous findings. The median of the absolute value of the reproducibility of SienaX's normalized brain volume (NBV) was 0.96%, while the 90th percentile was 5.11%. The reproducibility of Siena's PBVC had a median of 0.35% and a 90th percentile of 1.37%. While the median reproducibility for SienaX's NBV was in line with the values previously reported in the literature, the median reproducibility of Siena's PBVC was about twice that reported. Also, the 90th percentiles for both SienaX and Siena were about twice the size that would be expected for a Gaussian distribution. Because of the natural variation of the disease among patients over a year, a perfectly reproducible whole brain atrophy algorithm would reduce the estimated group size needed to detect a specified treatment effect by only 30% to 40% as compared to Siena's. © 2011 Elsevier Ireland Ltd.
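    The closing sample-size remark can be unpacked with the standard two-sample power formula; the sketch below uses hypothetical standard deviations and effect size (none of the numbers come from the paper) to show how measurement error inflates the required group size:

        # Minimal sketch, hypothetical numbers: subjects per arm needed to detect
        # a treatment effect on annual PBVC, with and without measurement error.
        from math import ceil, sqrt
        from statistics import NormalDist

        def group_size(sd_total, effect, alpha=0.05, power=0.8):
            # Standard two-sample formula: n = 2 * ((z_{1-a/2} + z_power) * sd / effect)^2
            z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
            return ceil(2 * (z * sd_total / effect) ** 2)

        sd_biological = 1.0    # hypothetical between-subject SD of true PBVC (%/year)
        sd_measurement = 0.6   # hypothetical SD added by measurement error (%/year)
        effect = 0.5           # hypothetical treatment effect: 0.5 %/year slowing

        n_measured = group_size(sqrt(sd_biological**2 + sd_measurement**2), effect)
        n_perfect = group_size(sd_biological, effect)
        print(n_measured, n_perfect, f"reduction: {1 - n_perfect / n_measured:.0%}")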

    The SIENA/FSL whole brain atrophy algorithm is no more reproducible at 3 T than 1.5 T for Alzheimer's disease

    The back-to-back (BTB) acquisition of MP-RAGE MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI1) provides an excellent data set with which to check the reproducibility of brain atrophy measures. As part of ADNI1, 131 subjects received BTB MP-RAGEs at multiple time points and two field strengths of 3 T and 1.5 T. As a result, high-quality data from 200 subject-visit pairs were available to compare the reproducibility of brain atrophies measured with FSL/SIENA over 12 to 18 month intervals at both 3 T and 1.5 T. Although several publications have reported on the differing performance of brain atrophy measures at 3 T and 1.5 T, no formal comparison of reproducibility has been published to date. Another goal was to check whether tuning SIENA options, including -B, -S, -R and the fractional intensity threshold (f), had a significant impact on the reproducibility. The BTB reproducibility for SIENA was quantified by the 50th percentile of the absolute value of the difference in the percentage brain volume change (PBVC) for the BTB MP-RAGEs. At both 3 T and 1.5 T the SIENA option combination of "-B f=0.2", which is different from the default value of f=0.5, yielded the best reproducibility, with 50th percentiles of 0.28 (0.23-0.39)% and 0.26 (0.20-0.32)%. These results demonstrated that in general 3 T had no advantage over 1.5 T for the whole brain atrophy measure, at least for SIENA. While 3 T MRI is superior to 1.5 T for many types of measurements, and thus worth the additional cost, brain atrophy measurement does not seem to be one of them. © 2014 Elsevier Ireland Ltd.
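    The reproducibility metric used here, the 50th percentile of |ΔPBVC| between BTB scans with a bracketed interval, can be illustrated with a short simulation; all data and noise levels below are hypothetical, and the bootstrap interval is only one plausible way to obtain such ranges, not necessarily the authors' method:

        # Minimal sketch, simulated data: median |PBVC difference| between
        # back-to-back scans with a bootstrap 95% CI, for hypothetical
        # 3 T and 1.5 T subject-visit pairs.
        import numpy as np

        rng = np.random.default_rng(2)

        def median_abs_btb(pbvc_a, pbvc_b, n_boot=2000):
            diffs = np.abs(pbvc_a - pbvc_b)
            boots = [np.median(rng.choice(diffs, size=diffs.size, replace=True))
                     for _ in range(n_boot)]
            lo, hi = np.percentile(boots, [2.5, 97.5])
            return np.median(diffs), (lo, hi)

        # Hypothetical true PBVC (%) over a 12-18 month interval for 100 pairs,
        # plus per-measurement error at each field strength.
        true_pbvc = rng.normal(-0.8, 0.9, 100)
        meas_sd = 0.25  # hypothetical per-measurement error SD (%)
        pbvc_3t_a = true_pbvc + rng.normal(0, meas_sd, 100)
        pbvc_3t_b = true_pbvc + rng.normal(0, meas_sd, 100)
        pbvc_15t_a = true_pbvc + rng.normal(0, meas_sd, 100)
        pbvc_15t_b = true_pbvc + rng.normal(0, meas_sd, 100)

        print("3 T  :", median_abs_btb(pbvc_3t_a, pbvc_3t_b))
        print("1.5 T:", median_abs_btb(pbvc_15t_a, pbvc_15t_b))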