Quantifying and containing the curse of high resolution coronal imaging
Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe
aim at approaching the Sun closer than ever before, carrying on board
high-resolution imagers (HRI) with a subsecond cadence and a pixel area of
about … at the Sun during perihelion. In order to guarantee their scientific
success, it is necessary to evaluate whether the photon counts available at this
resolution and cadence will provide a sufficient signal-to-noise ratio (SNR).
We perform a first step in this direction by analyzing and characterizing the
spatial intermittency of Quiet Sun images by means of a multifractal analysis.
We identify the parameters that specify the scale-invariance behavior. This
identification then allows us to select a family of multifractal processes,
namely the Compound Poisson Cascades, that can synthesize artificial images
having some of the scale-invariance properties observed in the recorded images.
The prevalence of self-similarity in Quiet Sun coronal images makes it
relevant to study the ratio between the SNR present in SoHO/EIT images and in
coarsened versions of them. The SoHO/EIT images thus play the role of
'high-resolution' images, whereas the 'low-resolution' coarsened images are
rebinned so as to simulate a smaller angular resolution and/or a larger
distance to the Sun. For a fixed difference in angular resolution and in
spacecraft-Sun distance, we determine the proportion of pixels whose SNR is
preserved at high resolution for a given increase in effective area. If scale
invariance continues to prevail at smaller scales, the conclusions reached with
SoHO/EIT images can be transposed to the situation where the resolution is
increased from the SoHO/EIT to the SO/HRI resolution at perihelion.
Comment: 25 pages, 1 table, 7 figures
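The rebinning experiment described above can be sketched numerically. Assuming pure Poisson photon noise, summing counts over n x n pixel blocks multiplies the expected counts per pixel by n^2, so the per-pixel SNR (the square root of the expected counts) grows by a factor n. The image below is synthetic toy data, not SoHO/EIT data, and all sizes are illustrative assumptions.

```python
import numpy as np

def rebin(image, factor):
    """Sum photon counts over factor x factor blocks to simulate coarser pixels."""
    h, w = image.shape
    h2, w2 = h // factor, w // factor
    blocks = image[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.sum(axis=(1, 3))

def poisson_snr(mean_counts):
    """Under pure Poisson noise, per-pixel SNR = mean / sqrt(mean) = sqrt(mean)."""
    return np.sqrt(mean_counts)

rng = np.random.default_rng(0)
# Toy 'high resolution' image: on average 4 photons per pixel.
hi = rng.poisson(4.0, size=(64, 64)).astype(float)
lo = rebin(hi, 4)  # 4 x 4 rebinning collects 16x the photons per pixel
snr_gain = poisson_snr(lo.mean()) / poisson_snr(hi.mean())
print(round(snr_gain, 2))  # 4.0: SNR grows linearly with the rebinning factor
```

Read in the other direction, shrinking the pixels by a factor n while preserving the SNR requires roughly an n^2-fold increase in collected photons, e.g. via effective area or exposure time, which is the trade-off the paper quantifies pixel by pixel.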
Sparse component separation for accurate CMB map estimation
The Cosmic Microwave Background (CMB) is of premier importance for
cosmologists studying the birth of our universe. Unfortunately, most CMB
experiments such as COBE, WMAP, or Planck do not provide a direct measure of
the cosmological signal: the CMB is mixed up with galactic foregrounds and
point sources. For the sake of scientific exploitation, measuring the CMB
requires extracting several different astrophysical components (CMB,
Sunyaev-Zel'dovich clusters, galactic dust) from multi-wavelength
observations. Mathematically
speaking, the problem of disentangling the CMB map from the galactic
foregrounds amounts to a component or source separation problem. In the field
of CMB studies, a very large range of source separation methods has been
applied, which all differ from each other in the way they model the data and the
criteria they rely on to separate components. Two main difficulties are that
i) the instrument's beam varies across frequencies and ii) the emission laws of
most astrophysical components vary across pixels. This paper introduces a very
accurate modeling of CMB data, based on sparsity, that accounts for beam
variability across frequencies as well as for spatial variations of the
components' spectral characteristics. Based on this new sparse modeling of the
data, a sparsity-based component separation method coined Local-Generalized
Morphological Component Analysis (L-GMCA) is described. Extensive numerical
experiments have been carried out with simulated Planck data. These experiments
show the high efficiency of the proposed component separation method in
estimating a clean CMB map with very low foreground contamination, which makes
L-GMCA of prime interest for CMB studies.
Comment: submitted to A&A
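The alternating sparse-update scheme underlying GMCA-type methods can be illustrated with a minimal sketch: given multi-channel observations X ~ A S, alternate a soft-thresholded least-squares update of the sources S with a least-squares update of the mixing matrix A. This toy version (random mixing, sources sparse in the direct domain, fixed threshold) omits the local, multiscale, and beam-aware refinements that distinguish L-GMCA; the function name `gmca_sketch` and all parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Sparsity-promoting shrinkage: keeps large coefficients, zeroes small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def gmca_sketch(X, n_sources, n_iter=50, thresh=0.1):
    """Alternating estimation of mixing matrix A and sparse sources S, X ~ A @ S."""
    rng = np.random.default_rng(1)
    A = rng.standard_normal((X.shape[0], n_sources))
    for _ in range(n_iter):
        # Sources: least squares followed by soft thresholding (sparsity prior).
        S = soft_threshold(np.linalg.pinv(A) @ X, thresh)
        # Mixing matrix: least squares, columns renormalized to fix the scale.
        A = X @ np.linalg.pinv(S)
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    S = soft_threshold(np.linalg.pinv(A) @ X, thresh)  # final consistent sources
    return A, S

# Toy data: two sparse sources seen through a random 4-channel mixing matrix.
rng = np.random.default_rng(2)
S_true = rng.standard_normal((2, 500)) * (rng.random((2, 500)) < 0.05)
A_true = rng.standard_normal((4, 2))
X = A_true @ S_true
A_est, S_est = gmca_sketch(X, n_sources=2)
print(np.linalg.norm(X - A_est @ S_est) / np.linalg.norm(X))  # relative residual
```

The column normalization resolves the scale indeterminacy between A and S; without it, the sparsity threshold would lose its meaning as the scale of S drifted across iterations.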
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. The article concludes with
a brief discussion of potential areas of future development in this field.
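A minimal scalar-on-function regression illustrates the basis-expansion idea named above: expanding the coefficient function beta(t) on a small basis reduces the functional model y_i = ∫ x_i(t) beta(t) dt + eps_i to ordinary least squares on the basis coefficients, with regularization supplied by truncating the basis. The cosine basis and all sizes below are illustrative choices, not from the article.

```python
import numpy as np

# Scalar-on-function regression: y_i = ∫ x_i(t) beta(t) dt + noise, with beta
# expanded on a small cosine basis (regularization via basis truncation).
rng = np.random.default_rng(0)
n, m, k = 200, 100, 5                 # curves, grid points, basis functions
t = np.linspace(0.0, 1.0, m)
basis = np.array([np.cos(j * np.pi * t) for j in range(k)]).T   # shape (m, k)

beta_true = np.cos(np.pi * t) - 0.5 * np.cos(2 * np.pi * t)     # lies in the span
X = rng.standard_normal((n, m))       # functional predictors sampled on the grid
y = X @ beta_true / m + 0.01 * rng.standard_normal(n)           # Riemann-sum integral

# The functional model reduces to OLS on basis coefficients: Z_ij = ∫ x_i phi_j dt.
Z = X @ basis / m
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = basis @ coef               # estimated coefficient function on the grid
print(float(np.max(np.abs(beta_hat - beta_true))))  # small estimation error
```

This is the replication/regularization interplay in miniature: information is pooled across the n curves (replication) while the k-term basis keeps the estimated beta(t) smooth (regularization); penalized variants add an explicit roughness penalty instead of truncating.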
Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images
We present a novel kernel regression framework for smoothing scalar surface
data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel
constructed from the eigenfunctions, we formulate a new bivariate kernel
regression framework as a weighted eigenfunction expansion with the heat kernel
as the weights. The new kernel regression is mathematically equivalent to
isotropic heat diffusion, kernel smoothing and recently popular diffusion
wavelets. Unlike many previous partial differential equation based approaches
involving diffusion, our approach represents the solution of diffusion
analytically, reducing numerical inaccuracy and slow convergence. The numerical
implementation is validated on a unit sphere using spherical harmonics. As an
illustration, we have applied the method in characterizing the localized growth
pattern of mandible surfaces obtained in CT images from subjects between ages 0
and 20 years by regressing the length of displacement vectors with respect to
the template surface.
Comment: Accepted in Medical Image Analysis
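The equivalence between heat diffusion and eigenfunction-weighted smoothing is easy to see in a simplified setting. On the unit circle, the Laplacian eigenfunctions are the Fourier modes with eigenvalues k^2, so diffusing for time t simply rescales each Fourier coefficient by exp(-k^2 t), analytically and without time stepping. The sketch below uses this 1D analogue rather than Laplace-Beltrami eigenfunctions on a mandible surface; the signal and parameters are illustrative.

```python
import numpy as np

def heat_kernel_smooth(signal, t):
    """Heat-kernel smoothing on the unit circle. The Laplacian eigenfunctions are
    the Fourier modes with eigenvalues k**2, so diffusing for time t multiplies
    each Fourier coefficient by exp(-k**2 * t): an exact, analytic solution that
    needs no iterative time stepping."""
    n = signal.size
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer mode numbers on the circle
    coeffs = np.fft.fft(signal)
    return np.real(np.fft.ifft(coeffs * np.exp(-k**2 * t)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
rng = np.random.default_rng(0)
noisy = np.sin(x) + 0.3 * rng.standard_normal(256)
smooth = heat_kernel_smooth(noisy, t=0.05)
# The k = 1 signal mode is damped only by exp(-0.05); high-k noise far more.
print(np.mean((smooth - np.sin(x)) ** 2) < np.mean((noisy - np.sin(x)) ** 2))  # True
```

On a general surface the Fourier modes are replaced by numerically computed Laplace-Beltrami eigenfunctions, but the weighting by exp(-lambda t) and the avoidance of iterative PDE solvers carry over unchanged.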
Strong support for the millisecond pulsar origin of the Galactic center GeV excess
Using gamma-ray data from the Fermi Large Area Telescope, various groups have
identified a clear excess emission in the Inner Galaxy, at energies around a
few GeV. This excess resembles remarkably well a signal from dark-matter
annihilation. One of the most compelling astrophysical interpretations is that
the excess is caused by the combined effect of a previously undetected
population of dim gamma-ray sources. Because of their spectral similarity, the
best candidates are millisecond pulsars. Here, we search for this hypothetical
source population, using a novel approach based on wavelet decomposition of the
gamma-ray sky and the statistics of Gaussian random fields. Using almost seven
years of Fermi-LAT data, we detect a clustering of photons, as predicted for
the hypothetical population of millisecond pulsars, with a statistical significance
of 10.0 sigma. For plausible values of the luminosity function, this population
explains 100% of the observed excess emission. We argue that other
extragalactic or Galactic sources, a mismodeling of Galactic diffuse emission,
or the thick-disk population of pulsars are unlikely to account for this
observation.
Comment: 6+10 pages, 3+10 figures, 1 table; v2 updated to Pass 8 Fermi data,
additional supplemental material with extended discussion (conclusions
unchanged); v3 matches PRL version with further checks (conclusions unchanged)
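The wavelet-based search can be illustrated in miniature: convolve a photon-count map with a zero-mean Mexican-hat kernel (which suppresses smooth diffuse emission), normalize to a crude significance map, and look for peaks. The kernel scale, map size, and source brightness below are illustrative assumptions; the actual analysis operates on the Fermi-LAT sky and calibrates its peak statistics against Gaussian random fields rather than against the map's own standard deviation.

```python
import numpy as np

def mexican_hat_2d(size, sigma):
    """Zero-mean 2D Mexican-hat wavelet kernel (negative Laplacian of a Gaussian)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx ** 2 + yy ** 2) / sigma ** 2
    kernel = (2.0 - r2) * np.exp(-r2 / 2.0)
    return kernel - kernel.mean()   # zero mean: flat diffuse backgrounds drop out

def wavelet_transform(counts, sigma):
    """Circular FFT convolution of a square photon-count map with the wavelet."""
    kern = mexican_hat_2d(counts.shape[0], sigma)
    return np.real(np.fft.ifft2(np.fft.fft2(counts) * np.fft.fft2(np.fft.ifftshift(kern))))

rng = np.random.default_rng(0)
n = 128
counts = rng.poisson(5.0, size=(n, n)).astype(float)  # diffuse Poisson background
counts[32, 32] += 80.0                                # one bright point source
w = wavelet_transform(counts, sigma=2.0)
snr = (w - w.mean()) / w.std()                        # crude significance map
peak = np.unravel_index(np.argmax(snr), snr.shape)
print(peak, float(snr[peak]) > 5.0)                   # source found at (32, 32)
```

A population of many dim sources, as hypothesized for the GeV excess, would show up not as individual high peaks but as a shifted distribution of low-significance wavelet peaks relative to the Gaussian-field expectation.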