Compensated isocurvature perturbations in the curvaton model
Primordial fluctuations in the relative number densities of particles, or
isocurvature perturbations, are generally well constrained by cosmic microwave
background (CMB) data. A less probed mode is the compensated isocurvature
perturbation (CIP), a fluctuation in the relative number densities of cold dark
matter and baryons. In the curvaton model, a subdominant field during inflation
later sets the primordial curvature fluctuation. In some curvaton-decay
scenarios, the baryon and cold dark matter isocurvature fluctuations nearly
cancel, leaving a large CIP correlated with the curvature fluctuation. This correlation can be
used to probe these CIPs more sensitively than the uncorrelated CIPs considered
in past work, essentially by measuring the squeezed bispectrum of the CMB for
triangles whose shortest side is limited by the sound horizon. Here, the
sensitivity of existing and future CMB experiments to correlated CIPs is
assessed, with an eye towards testing specific curvaton-decay scenarios. The
planned CMB Stage 4 experiment could detect the largest CIPs attainable in
curvaton scenarios with more than 3σ significance. The significance
could improve if small-scale CMB polarization foregrounds can be effectively
subtracted. As a result, future CMB observations could discriminate between
some curvaton-decay scenarios in which baryon number and dark matter are
produced during different epochs relative to curvaton decay. Independent of the
specific motivation for the origin of a correlated CIP perturbation,
cross-correlation of CIP reconstructions with the primary CMB can improve the
signal-to-noise ratio of a CIP detection. For fully correlated CIPs the
improvement is a factor of 2-3.
Comment: 20 pages, 8 figures, minor changes matching publication
Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
In this paper, we study a class of stochastic optimization problems, referred
to as the \emph{Conditional Stochastic Optimization} (CSO), of the form
$\min_{x \in \mathcal{X}} \mathbb{E}_{\xi} f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big)$,
which finds a wide spectrum of applications including portfolio selection,
reinforcement learning, robust learning, and causal inference. Assuming
availability of samples from the distribution $\mathbb{P}(\xi)$ and samples
from the conditional distribution $\mathbb{P}(\eta|\xi)$, we establish the
sample complexity of the sample average approximation (SAA) for CSO under a
variety of structural assumptions, such as Lipschitz continuity, smoothness,
and error bound conditions. We show that the total sample complexity improves
from $\mathcal{O}(d/\epsilon^4)$ to $\mathcal{O}(d/\epsilon^3)$ when the outer
function is assumed smooth, and further to $\mathcal{O}(1/\epsilon^2)$ when
the empirical function satisfies the quadratic growth condition. We also
establish the sample complexity of a modified SAA when $\xi$ and $\eta$ are
independent. Several numerical experiments further support our theoretical
findings.
Keywords: stochastic optimization, sample average approximation, large
deviations theory
Comment: Typo corrected. Reference added. Revision comments handled
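The nested SAA construction described in the abstract can be illustrated with a minimal sketch: draw $n$ outer samples $\xi_i$, and for each $\xi_i$ draw $m$ conditional samples $\eta_{ij}$, then minimize the doubly averaged objective. The specific choices below ($f_\xi(y)=y^2$, $g_\eta(x,\xi)=x-\xi+\eta$ with $\eta\,|\,\xi \sim \mathcal{N}(0,1)$, and the sample sizes) are hypothetical toy inputs, not from the paper; under them the true objective is $\mathbb{E}_\xi[(x-\xi)^2]$, minimized at $x^\ast=\mathbb{E}[\xi]$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy instance (hypothetical, for illustration):
#   f_xi(y) = y**2,  g_eta(x, xi) = x - xi + eta,  eta | xi ~ N(0, 1)
n, m = 2000, 50                              # outer / inner sample sizes
xi = rng.normal(loc=1.0, scale=1.0, size=n)  # samples from P(xi)
eta = rng.normal(size=(n, m))                # conditional samples from P(eta | xi_i)

def saa_objective(x):
    # Inner SAA: (1/m) * sum_j g_eta(x, xi_i) approximates E_{eta|xi}[g]
    inner_mean = (x - xi[:, None] + eta).mean(axis=1)   # shape (n,)
    # Outer SAA: (1/n) * sum_i f(inner_mean_i) with f(y) = y**2
    return np.mean(inner_mean ** 2)

res = minimize_scalar(saa_objective, bounds=(-5.0, 5.0), method="bounded")
# With these samples the SAA minimizer should be close to x* = E[xi] = 1.0.
print(res.x)
```

Note that the biased inner average is exactly why the basic complexity scales as $\mathcal{O}(d/\epsilon^4)$ rather than the $\mathcal{O}(d/\epsilon^2)$ familiar from classical SAA: the inner sample size $m$ must also grow with the target accuracy.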