166 research outputs found
Generalizability of electroencephalographic interpretation using artificial intelligence: An external validation study
The automated interpretation of clinical electroencephalograms (EEGs) using artificial intelligence (AI) holds the potential to bridge the treatment gap in resource-limited settings and reduce the workload at specialized centers. However, to facilitate broad clinical implementation, it is essential to establish generalizability across diverse patient populations and equipment. We assessed whether SCORE-AI demonstrates diagnostic accuracy comparable to that of experts when applied to a geographically different patient population, recorded with distinct EEG equipment and technical settings. To this end, we assessed the diagnostic accuracy of a "fixed-and-frozen" AI model, using an independent dataset and external gold standard, and benchmarked it against three experts blinded to all other data. The dataset comprised 50% normal and 50% abnormal routine EEGs, equally distributed among the four major classes of EEG abnormalities (focal epileptiform, generalized epileptiform, focal nonepileptiform, and diffuse nonepileptiform). To assess diagnostic accuracy, we computed the sensitivity, specificity, and accuracy of the AI model and of the experts against the external gold standard. We analyzed EEGs from 104 patients (64 females, median age = 38.6 [range = 16-91] years). SCORE-AI performed as well as the experts, with an overall accuracy of 92% (95% confidence interval [CI] = 90%-94%) versus 94% (95% CI = 92%-96%). There was no significant difference between SCORE-AI and the experts for any metric or category. SCORE-AI performed well independently of vigilance state (false classification during wakefulness: 5/41 [12.2%]; false classification during sleep: 2/11 [18.2%]; p = .63) and of normal variants (false classification in the presence of normal variants: 4/14 [28.6%]; false classification in the absence of normal variants: 3/38 [7.9%]; p = .07). SCORE-AI achieved diagnostic performance equal to that of human experts on an EEG dataset independent of the development dataset, from a geographically distinct patient population, recorded with equipment and technical settings different from those of the development dataset.
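As an illustrative aside, the accuracy metrics reported above (sensitivity, specificity, and overall accuracy, each with a 95% CI) can be computed as in the following sketch. The binary label encoding, function names, and the Wilson score interval are assumptions for illustration; the paper does not specify its CI method here.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion k/n (assumed CI method)."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

def diagnostic_metrics(pred: list[int], gold: list[int]) -> dict:
    """Sensitivity, specificity, and accuracy against a gold standard.

    Labels are assumed binary: 1 = abnormal EEG, 0 = normal EEG.
    """
    tp = sum(1 for p, g in zip(pred, gold) if p == 1 and g == 1)
    tn = sum(1 for p, g in zip(pred, gold) if p == 0 and g == 0)
    fp = sum(1 for p, g in zip(pred, gold) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gold) if p == 0 and g == 1)
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "accuracy": ((tp + tn) / len(gold), wilson_ci(tp + tn, len(gold))),
    }
```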
Interrater agreement on classification of photoparoxysmal electroencephalographic response
Our goal was to assess the interrater agreement (IRA) of the photoparoxysmal response (PPR) using the classification proposed by a task force of the International League Against Epilepsy (ILAE) and a simplified classification system proposed by our group. In addition, we evaluated the IRA of epileptiform discharges (EDs) and of the diagnostic significance of the electroencephalographic (EEG) abnormalities. We used EEG recordings from the European Reference Network (EpiCARE) and Standardized Computer-based Organized Reporting of EEG (SCORE). Six raters independently scored EEG recordings from 30 patients. We calculated the agreement coefficient (AC) for each feature. IRA of the PPR using the classification proposed by the ILAE task force was only fair (AC = 0.38). This improved to moderate agreement with the simplified classification (AC = 0.56; P = .004). IRA of EDs was almost perfect (AC = 0.98), and IRA of scoring the diagnostic significance was moderate (AC = 0.51). Our results suggest that the simplified classification of the PPR is suitable for implementation in clinical practice.
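For illustration, the agreement coefficient (AC) used above can be computed as in the sketch below, under the assumption that the AC is Gwet's AC1 (a common choice in EEG interrater studies; the paper itself defines the exact statistic). Here ratings[i][j] is the hypothetical category assigned by rater j to recording i.

```python
from collections import Counter

def gwet_ac1(ratings: list[list]) -> float:
    """Gwet's AC1 for multiple raters and nominal categories (assumed form)."""
    categories = {c for row in ratings for c in row}
    q, n = len(categories), len(ratings)
    # Observed agreement: pairwise rater agreement, averaged over subjects.
    pa = 0.0
    for row in ratings:
        r, counts = len(row), Counter(row)
        pa += sum(c * (c - 1) for c in counts.values()) / (r * (r - 1))
    pa /= n
    # Chance agreement from the mean classification probabilities pi_k.
    pi = {k: sum(Counter(row)[k] / len(row) for row in ratings) / n
          for k in categories}
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)
```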
Cosmoglobe DR1. III. First full-sky model of polarized synchrotron emission from all WMAP and Planck LFI data
We present the first model of full-sky polarized synchrotron emission that is
derived from all WMAP and Planck LFI frequency maps. The basis of this analysis
is the set of end-to-end reprocessed Cosmoglobe Data Release 1 sky maps
presented in a companion paper, which have significantly lower instrumental
systematics than the legacy products from each experiment. We find that the
resulting polarized synchrotron amplitude map, evaluated at 30 GHz and a
common smoothing scale, has an average noise rms that is 30% lower than that
of the recently released BeyondPlanck model, which included only LFI+WMAP
Ka-V data, and 29% lower than that of the WMAP K-band map alone. The mean
B-to-E power spectrum ratio has an amplitude consistent with those measured
previously by Planck and QUIJOTE. Assuming a power-law model for the
synchrotron spectral energy distribution, and using the T-T plot method, we
find a full-sky inverse noise-variance weighted mean spectral index β_s
between Cosmoglobe DR1 K-band and 30 GHz that is in good agreement with
previous estimates. In summary, the novel Cosmoglobe DR1
synchrotron model is both more sensitive and systematically cleaner than
similar previous models, and it has a more complete error description that is
defined by a set of Monte Carlo posterior samples. We believe that these
products are preferable to previous Planck and WMAP products for all
synchrotron-related scientific applications, including simulation, forecasting
and component separation.
Comment: 15 pages, 15 figures, submitted to A&A
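As a sketch of the T-T plot method named above: for a power-law SED, T(ν₂) = T(ν₁)(ν₂/ν₁)^β in brightness temperature, so the slope of a scatter fit between the two maps yields β. The plain least-squares fit and the toy data are illustrative assumptions; the analysis in the paper is noise-weighted over the full sky.

```python
import numpy as np

def tt_plot_beta(map_nu1, map_nu2, nu1_ghz=23.0, nu2_ghz=30.0):
    """Estimate a spectral index from two brightness-temperature maps."""
    # Fit map_nu2 = slope * map_nu1 + offset; the offset absorbs
    # zero-level differences between the two maps.
    slope, _offset = np.polyfit(map_nu1, map_nu2, 1)
    return np.log(slope) / np.log(nu2_ghz / nu1_ghz)

# Toy usage: synthesize correlated maps with beta = -3.1 and recover it.
rng = np.random.default_rng(0)
t1 = rng.uniform(10.0, 100.0, 5000)                # K-band amplitudes (uK)
t2 = t1 * (30.0 / 23.0) ** -3.1 + rng.normal(0.0, 0.5, 5000)
print(tt_plot_beta(t1, t2))                        # approximately -3.1
```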
BEYONDPLANCK
We discuss the treatment of bandpass and beam leakage corrections in the Bayesian BEYONDPLANCK cosmic microwave background (CMB) analysis pipeline as applied to the Planck LFI measurements. As a preparatory step, we first applied three corrections to the nominal LFI bandpass profiles, including the removal of a known systematic effect in the ground measuring equipment at 61 GHz, along with a smoothing of standing-wave ripples and edge regularization. The main net impact of these modifications is an overall shift in the 70 GHz bandpass of +0.6 GHz. We argue that any analysis of LFI data products, either from Planck or BEYONDPLANCK, should use these new bandpasses. In addition, we fit a single free bandpass parameter for each radiometer of the form Δ_i = Δ_0 + δ_i, where Δ_0 represents an absolute frequency shift per frequency band and δ_i is a relative shift per detector. The absolute correction is fitted only at 30 GHz, with a full χ²-based likelihood, resulting in a correction of Δ_30 = 0.24 ± 0.03 GHz. The relative corrections were fitted using a spurious-map approach that is fundamentally similar to the method pioneered by the WMAP team, but without introducing many additional degrees of freedom. All the bandpass parameters were sampled using a standard Metropolis sampler within the main BEYONDPLANCK Gibbs chain, and the bandpass uncertainties were thus propagated to all other data products in the analysis. In summary, we find that our bandpass model significantly reduces leakage effects. For beam leakage corrections, we adopted the official Planck LFI beam estimates without any additional degrees of freedom, and we only marginalized over the underlying sky model. We note that this is the first time that leakage from beam mismatch has been included for Planck LFI maps.
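A minimal sketch of the Metropolis step described above, for a single bandpass shift parameter sampled conditionally on the rest of the model. The chi2 callable stands in for the full data-model misfit, and the proposal width and names are illustrative assumptions.

```python
import numpy as np

def metropolis_bandpass(chi2, delta0, n_steps=1000, prop_sigma=0.05, seed=0):
    """Sample p(delta | rest) ∝ exp(-chi2(delta)/2) by random-walk Metropolis."""
    rng = np.random.default_rng(seed)
    delta, c2, chain = delta0, chi2(delta0), []
    for _ in range(n_steps):
        prop = delta + prop_sigma * rng.standard_normal()
        c2_prop = chi2(prop)
        # Accept with probability min(1, exp(-(chi2_prop - chi2_cur) / 2)).
        if np.log(rng.uniform()) < -(c2_prop - c2) / 2:
            delta, c2 = prop, c2_prop
        chain.append(delta)
    return np.asarray(chain)
```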
BeyondPlanck II. CMB map-making through Gibbs sampling
We present a Gibbs sampling solution to the map-making problem for CMB
measurements, building on existing destriping methodology. Gibbs sampling
breaks the computationally heavy destriping problem into two separate steps:
noise filtering and map binning. Considered as two separate steps, both are
computationally much cheaper than solving the combined problem. This provides a
huge performance benefit as compared to traditional methods, and allows us for
the first time to bring the destriping baseline length to a single sample. We
apply the Gibbs procedure to simulated Planck 30 GHz data. We find that gaps in
the time-ordered data are handled efficiently by filling them with simulated
noise as part of the Gibbs process. The Gibbs procedure yields a chain of map
samples, from which we may compute the posterior mean as a best-estimate map.
The variation in the chain provides information on the correlated residual
noise, without need to construct a full noise covariance matrix. However, if
only a single maximum-likelihood frequency map estimate is required, we find
that traditional conjugate gradient solvers converge much faster than a Gibbs
sampler in terms of total number of iterations. The conceptual advantages of
the Gibbs sampling approach lie in statistically well-defined error
propagation and systematic error correction, and this methodology forms the
conceptual basis for the map-making algorithm employed in the BeyondPlanck
framework, which implements the first end-to-end Bayesian analysis pipeline for
CMB observations.
Comment: 11 pages, 10 figures. All BeyondPlanck products and software will be
released publicly at http://beyondplanck.science during the online release
conference (November 18-20, 2020). Connection details will be made available
at the same website. Registration is mandatory for the online tutorial, but
optional for the conference.
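The two-step structure described above can be illustrated with a toy model in which d = m[pix] + a[chunk] + white noise: baselines are sampled given the current map (noise filtering), and the map is sampled given the baselines (binning). All names, and the one-offset-per-chunk noise model, are simplifying assumptions rather than the Planck pipeline.

```python
import numpy as np

def gibbs_mapmaker(d, pix, chunk, npix, nchunk, sigma, n_iter=200, seed=0):
    """Toy Gibbs map-maker alternating baseline and map sampling."""
    rng = np.random.default_rng(seed)
    m, a, maps = np.zeros(npix), np.zeros(nchunk), []
    for _ in range(n_iter):
        # Step 1: sample per-chunk baselines conditional on the current map.
        r = d - m[pix]
        for j in range(nchunk):
            sel = chunk == j
            a[j] = r[sel].mean() + rng.normal(0.0, sigma / np.sqrt(sel.sum()))
        a -= a.mean()  # fix the baseline/map monopole degeneracy
        # Step 2: sample the map conditional on the baselines (binning).
        r = d - a[chunk]
        for p in range(npix):
            sel = pix == p
            m[p] = r[sel].mean() + rng.normal(0.0, sigma / np.sqrt(sel.sum()))
        maps.append(m.copy())
    # Posterior mean over the post-burn-in samples as the best-estimate map.
    return np.mean(maps[n_iter // 2:], axis=0)
```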
BeyondPlanck VII. Bayesian estimation of gain and absolute calibration for CMB experiments
We present a Bayesian calibration algorithm for CMB observations as
implemented within the global end-to-end BeyondPlanck (BP) framework, and apply
this to the Planck Low Frequency Instrument (LFI) data. Following the most
recent Planck analysis, we decompose the full time-dependent gain into a sum of
three orthogonal components: one absolute calibration term, common to all
detectors; one time-independent term that can vary between detectors; and one
time-dependent component that is allowed to vary between one-hour pointing
periods. Each term is then sampled conditionally on all other parameters in the
global signal model through Gibbs sampling. The absolute calibration is sampled
using only the orbital dipole as a reference source, while the two relative
gain components are sampled using the full sky signal, including the orbital
and Solar CMB dipoles, CMB fluctuations, and foreground contributions. We
discuss various aspects of the data that influence gain estimation, including
the dipole/polarization quadrupole degeneracy and anomalous jumps in the
instrumental gain. Comparing our solution to previous pipelines, we find good
agreement in general, with relative deviations of -0.84% (-0.67%) for 30 GHz,
-0.14% (0.02%) for 44 GHz and -0.69% (-0.08%) for 70 GHz, compared to Planck
2018 (NPIPE). The deviations we find are within expected error bounds, and we
attribute them to differences in data usage and general approach between the
pipelines. In particular, the BP calibration is performed globally, resulting
in better inter-frequency consistency. Additionally, WMAP observations are used
actively in the BP analysis, which breaks degeneracies in the Planck data set
and results in better agreement with WMAP. Although our presentation and
algorithm are currently oriented toward LFI processing, the procedure is fully
generalizable to other experiments.
Comment: 18 pages, 15 figures. All BeyondPlanck products and software will be
released publicly at http://beyondplanck.science during the online release
conference (November 18-20, 2020). Connection details will be made available
at the same website. Registration is mandatory for the online tutorial, but
optional for the conference.
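A minimal sketch of the absolute calibration step described above: with a TOD model d = g·s_dip + n, where s_dip is the orbital dipole template and n is white noise with rms sigma, the conditional posterior for g under a flat prior is Gaussian. The names are illustrative.

```python
import numpy as np

def sample_gain(d, s_dip, sigma, rng=None):
    """Draw one Gibbs sample of the gain g | d, s_dip, sigma."""
    if rng is None:
        rng = np.random.default_rng()
    g_hat = np.dot(d, s_dip) / np.dot(s_dip, s_dip)  # maximum-likelihood gain
    g_std = sigma / np.sqrt(np.dot(s_dip, s_dip))    # conditional posterior width
    return g_hat + g_std * rng.standard_normal()
```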
BeyondPlanck XII. Cosmological parameter constraints with end-to-end error propagation
We present cosmological parameter constraints as estimated using the Bayesian
BeyondPlanck (BP) analysis framework. This method supports seamless end-to-end
error propagation from raw time-ordered data to final cosmological parameters.
As a first demonstration of the method, we analyze time-ordered Planck LFI
observations, combined with selected external data (WMAP 33-61 GHz, Planck
HFI DR4 353 and 857 GHz, and Haslam 408 MHz) in the form of pixelized maps,
which are
used to break critical astrophysical degeneracies. Overall, the results are
in good agreement with previously reported values from Planck 2018 and WMAP,
with the largest relative difference for any parameter being about 1σ when
considering only temperature multipoles in the range 29 < ℓ < 601. In cases
where there are differences, we note that the BP results are generally slightly
closer to the high-ℓ HFI-dominated Planck 2018 results than previous analyses,
suggesting slightly less tension between low and high multipoles. Using low-ℓ
polarization information from LFI and WMAP, we find a best-fit value of
τ = 0.066 ± 0.013, which is higher than the low value of τ = 0.051 ± 0.006
derived from Planck 2018 and slightly lower than the value of 0.069 ± 0.011
derived from joint analysis of official LFI and WMAP products. Most
importantly, however, we find that the uncertainty derived in the BP processing
is about 30% larger than when analyzing the official products, after taking
into account the different sky coverage. We argue that this is due to
marginalizing over a more complete model of instrumental and astrophysical
parameters, and this results in both more reliable and more rigorously defined
uncertainties. We find that about 2000 Monte Carlo samples are required to
achieve robust convergence for a low-resolution CMB covariance matrix with
225 independent modes.
Comment: 13 pages, 10 figures
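For illustration, the Monte Carlo covariance estimate referred to above amounts to the sample covariance over the chain of low-resolution CMB map samples, and convergence can be monitored by comparing estimates from disjoint halves of the chain. The names below are assumptions.

```python
import numpy as np

def mc_covariance(samples):
    """Sample covariance of an (n_samples, n_modes) array of map samples."""
    x = samples - samples.mean(axis=0)
    return x.T @ x / (len(samples) - 1)

def halfsplit_convergence(samples):
    """Relative Frobenius difference between the two chain-half covariances."""
    half = len(samples) // 2
    c1, c2 = mc_covariance(samples[:half]), mc_covariance(samples[half:])
    return np.linalg.norm(c1 - c2) / np.linalg.norm(0.5 * (c1 + c2))
```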
BEYONDPLANCK
We constrained the level of polarized anomalous microwave emission (AME) on large angular scales using Planck Low-Frequency Instrument (LFI) and WMAP polarization data within a Bayesian cosmic microwave background (CMB) analysis framework. We modeled synchrotron emission with a power-law spectral energy distribution, as well as the sum of AME and thermal dust emission through linear regression with the Planck High-Frequency Instrument (HFI) 353 GHz data. This template-based dust emission model allowed us to constrain the level of polarized AME while making minimal assumptions on its frequency dependence. We neglected CMB fluctuations, but show through simulations that these fluctuations have a minor impact on the results. We find that the resulting AME polarization fraction confidence limit is sensitive to the polarized synchrotron spectral index prior. In particular, for prior means β_s < −3.1 we find an upper limit of p_AME^max ≲ 0.6% (95% confidence). In contrast, for prior means of β_s = −3.0, we find a nominal detection of p_AME = 2.5 ± 1.0% (95% confidence). These data are thus not strong enough to simultaneously and robustly constrain both polarized synchrotron emission and AME, and our main result is therefore a constraint on the AME polarization fraction explicitly as a function of β_s. Combining the current Planck and WMAP observations with measurements from high-sensitivity low-frequency experiments such as C-BASS and QUIJOTE will be critical for improving these limits further.
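The template regression described above reduces, for each frequency map, to fitting one scalar coefficient against the HFI 353 GHz polarization template. A weighted least-squares sketch is shown below; the inverse-variance weighting and the names are illustrative assumptions.

```python
import numpy as np

def fit_template_coeff(map_nu, template_353, inv_var):
    """Coefficient a minimizing sum(inv_var * (map_nu - a * template_353)**2)."""
    num = np.sum(inv_var * template_353 * map_nu)
    den = np.sum(inv_var * template_353 ** 2)
    return num / den
```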
Cosmoglobe: Towards end-to-end CMB cosmological parameter estimation without likelihood approximations
We implement support for a cosmological parameter estimation algorithm as
proposed by Racine et al. (2016) in Commander, and quantify its computational
efficiency and cost. For a semi-realistic simulation similar to Planck LFI 70
GHz, we find that the computational cost of producing a single sample is
about 60 CPU-hours and that the typical Markov chain correlation length is
roughly 100 samples. The net effective cost per independent sample is
therefore about 6000 CPU-hours, compared with 812 CPU-hours for all low-level
processing of Planck LFI and WMAP in Cosmoglobe Data Release 1. Thus, although
technically possible to run already in its current state, future work should
aim to reduce the effective cost per independent sample by at least one order
of magnitude to avoid excessive runtimes, for instance through multi-grid
preconditioners and/or derivative-based Markov chain sampling schemes. This
work demonstrates the computational feasibility of true Bayesian cosmological
parameter estimation with end-to-end error propagation for high-precision CMB
experiments without likelihood approximations, but it also highlights the need
for additional optimizations before it is ready for full production-level
analysis.
Comment: 10 pages, 8 figures. Submitted to A&A
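The cost arithmetic in the abstract above, made explicit (all numbers taken from the text):

```python
cost_per_sample_cpu_h = 60       # CPU-hours per Gibbs sample
correlation_length = 100         # Markov chain correlation length, in samples
cost_per_independent = cost_per_sample_cpu_h * correlation_length
print(cost_per_independent)      # 6000 CPU-hours per independent sample,
                                 # vs. 812 CPU-hours for DR1 low-level processing
```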