Fast Pixel Space Convolution for CMB Surveys with Asymmetric Beams and Complex Scan Strategies: FEBeCoP
Precise measurement of the angular power spectrum of the Cosmic Microwave
Background (CMB) temperature and polarization anisotropy can tightly constrain
many cosmological models and parameters. However, accurate measurements can
only be realized in practice provided all major systematic effects have been
taken into account. Beam asymmetry, coupled with the scan strategy, is a major
source of systematic error in scanning CMB experiments such as Planck, the
focus of our current interest. We envision Monte Carlo methods to rigorously
study and account for the systematic effect of beams in CMB analysis. Toward
that goal, we have developed a fast pixel space convolution method that can
simulate sky maps observed by a scanning instrument, taking into account real
beam shapes and scan strategy. The essence is to pre-compute the "effective
beams" using a computer code, "Fast Effective Beam Convolution in Pixel space"
(FEBeCoP), that we have developed for the Planck mission. The code computes
effective beams given the focal plane beam characteristics of the Planck
instrument and the full history of actual satellite pointing, and performs very
fast convolution of sky signals using the effective beams. In this paper, we
describe the algorithm and the computational scheme that has been implemented.
We also outline a few applications of the effective beams in the precision
analysis of Planck data, for characterizing the CMB anisotropy and for
detecting and measuring properties of point sources.
Comment: 26 pages, 15 figures. New subsection on beam/PSF statistics, new and better figures, more explicit algebra for polarized beams, added explanatory text at many places following referees' comments. [Accepted for publication in ApJS]
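The central idea can be sketched in a toy 1-D model (illustrative assumptions throughout: the Gaussian beam, the random per-hit offsets standing in for scan-angle variation, and all names are ours, not FEBeCoP's actual implementation): the effective beam of each pixel is the average of the instrument beam over that pixel's observations, after which convolving any sky signal reduces to a matrix-vector product.

```python
import numpy as np

# Toy 1-D sketch of the effective-beam idea (illustrative only, not the
# FEBeCoP code): the effective beam of an output pixel is the average of
# the instrument beam over all observations ("hits") of that pixel, each
# with its own scan orientation -- modeled here as a small random offset.

rng = np.random.default_rng(0)
npix = 64
pixels = np.arange(npix)

def instrument_beam(center):
    """Toy Gaussian beam on the 1-D 'sky', normalized to unit sum."""
    b = np.exp(-0.5 * ((pixels - center) / 2.0) ** 2)
    return b / b.sum()

# Pre-compute the effective beam of every pixel by averaging the beam
# over that pixel's hits (varying offsets mimic scan-angle variation).
B_eff = np.zeros((npix, npix))
for p in range(npix):
    nhits = rng.integers(5, 20)
    for _ in range(nhits):
        B_eff[p] += instrument_beam(p + rng.normal(0, 0.5))
    B_eff[p] /= nhits

def convolve(sky):
    """Observed map = effective beam applied to the sky."""
    return B_eff @ sky

obs = convolve(np.ones(npix))  # a normalized beam preserves a constant sky
```

Because each row of `B_eff` averages unit-normalized beams, a constant sky is left unchanged; a real implementation would store the effective beams sparsely, since each one has compact support around its pixel.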
On the Quantitative Impact of the Schechter-Valle Theorem
We evaluate the Schechter-Valle (Black Box) theorem quantitatively by
considering the most general Lorentz invariant Lagrangian consisting of
point-like operators for neutrinoless double beta decay. It is well known that
the Black Box operators induce Majorana neutrino masses at four-loop level.
This warrants the statement that an observation of neutrinoless double beta
decay guarantees the Majorana nature of neutrinos. We calculate these
radiatively generated masses and find that they are many orders of magnitude
smaller than the observed neutrino masses and splittings. Thus, some lepton
number violating New Physics (which need not be related to neutrino masses at
tree level) may induce Black Box operators that can explain an observed
rate of neutrinoless double beta decay. Although these operators guarantee
finite Majorana neutrino masses, the smallness of the Black Box contributions
implies that other neutrino mass terms (Dirac or Majorana) must exist. If
neutrino masses have a significant Majorana contribution then this will become
the dominant part of the Black Box operator. However, neutrinos might also be
predominantly Dirac particles, while other lepton number violating New Physics
dominates neutrinoless double beta decay. Translating an observed rate of
neutrinoless double beta decay into neutrino masses would then be completely
misleading. Although the principal statement of the Schechter-Valle theorem
remains valid, we conclude that the Black Box diagram itself generates
radiatively only mass terms which are many orders of magnitude too small to
explain neutrino masses. Therefore, other operators must give the leading
contributions to neutrino masses, which could be of Dirac or Majorana nature.
Comment: 18 pages, 4 figures; v2: minor corrections, reference added, matches journal version; v3: typo corrected, physics result and conclusions unchanged
The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths
We present the Planck Sky Model (PSM), a parametric model for the generation
of all-sky, few arcminute resolution maps of sky emission at submillimetre to
centimetre wavelengths, in both intensity and polarisation. Several options are
implemented to model the cosmic microwave background, Galactic diffuse emission
(synchrotron, free-free, thermal and spinning dust, CO lines), Galactic H-II
regions, extragalactic radio sources, dusty galaxies, and thermal and kinetic
Sunyaev-Zeldovich signals from clusters of galaxies. Each component is
simulated by means of educated interpolations/extrapolations of data sets
available at the time of the launch of the Planck mission, complemented by
state-of-the-art models of the emission. Distinctive features of the
simulations are: spatially varying spectral properties of synchrotron and dust;
different spectral parameters for each point source; modeling of the clustering
properties of extragalactic sources and of the power spectrum of fluctuations
in the cosmic infrared background. The PSM enables the production of random
realizations of the sky emission, constrained to match observational data
within their uncertainties, and is implemented in a software package that is
regularly updated with incoming information from observations. The model is
expected to serve as a useful tool for optimizing planned microwave and
sub-millimetre surveys and to test data processing and analysis pipelines. It
is, in particular, used for the development and validation of data analysis
pipelines within the Planck Collaboration. A version of the software that can
be used for simulating the observations for a variety of experiments is made
available on a dedicated website.
Comment: 35 pages, 31 figures
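As a rough illustration of how such a parametric model composes a frequency map (the component amplitudes, reference frequency, and spectral indices below are toy assumptions, not the PSM's actual parameters), each component can be scaled from a template at a reference frequency by its spectral law and summed:

```python
import numpy as np

# Toy sketch of component summation in a parametric sky model: each
# component has a template map at a reference frequency and a power-law
# spectral index scaling it to the observing frequency. All numbers here
# are illustrative, not the PSM's actual parameters.

def scale_power_law(amp_ref, nu, nu_ref, beta):
    """Scale a template from nu_ref to nu with spectral index beta."""
    return amp_ref * (nu / nu_ref) ** beta

npix = 128
rng = np.random.default_rng(1)
components = {
    "cmb":         (rng.normal(0, 100, npix), 0.0),  # flat (toy units)
    "synchrotron": (rng.normal(0, 30, npix), -3.0),  # steep falling spectrum
    "dust":        (rng.normal(0, 10, npix), 1.6),   # rising (sub)mm spectrum
}

def simulate_sky(nu, nu_ref=100.0):
    """Total toy sky at frequency nu (GHz): sum of scaled components."""
    return sum(scale_power_law(m, nu, nu_ref, b) for m, b in components.values())

sky30 = simulate_sky(30.0)    # synchrotron-dominated
sky353 = simulate_sky(353.0)  # dust-dominated
```

The real model additionally varies the spectral parameters across the sky and per source, which is what makes the simulations non-trivial; this sketch only shows the composition step.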
WIMP-nucleus scattering in chiral effective theory
We discuss long-distance QCD corrections to the WIMP-nucleon(s) interactions
in the framework of chiral effective theory. For scalar-mediated WIMP-quark
interactions, we calculate all the next-to-leading-order corrections to the
WIMP-nucleus elastic cross-section, including two-nucleon amplitudes and
recoil-energy dependent shifts to the single-nucleon scalar form factors. As a
consequence, the scalar-mediated WIMP-nucleus cross-section cannot be
parameterized in terms of just two quantities, namely the neutron and proton
scalar form factors at zero momentum transfer, but additional parameters
appear, depending on the short-distance WIMP-quark interaction. Moreover,
multiplicative factorization of the cross-section into particle, nuclear and
astro-particle parts is violated. In practice, while the new effects are of the
natural size expected by chiral power counting, they become very important in
those regions of parameter space where the leading order WIMP-nucleus amplitude
is suppressed, including the so-called "isospin-violating dark matter" regime.
In these regions of parameter space we find order-of-magnitude corrections to
the total scattering rates and qualitative changes to the shape of recoil
spectra.
Comment: 23 pages, 6 figures, 1 table
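For orientation, the leading-order factorization whose breakdown is at issue can be written schematically (standard spin-independent notation, reproduced for reference rather than taken from this paper's equations):

```latex
% Leading-order spin-independent WIMP-nucleus cross-section (schematic):
% a single coherence factor multiplies the particle-physics couplings,
\sigma^{\mathrm{SI}} \propto \mu^2 \left[ Z f_p + (A - Z) f_n \right]^2 ,
% with f_{p,n} the nucleon scalar form factors at zero momentum transfer
% and \mu the WIMP-nucleus reduced mass. The ``isospin-violating dark
% matter'' regime is the near-cancellation
\frac{f_n}{f_p} \approx -\frac{Z}{A - Z} \quad (\approx -0.7 \ \text{for xenon}),
% where the leading amplitude is suppressed and the NLO chiral
% corrections discussed above can dominate.
```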
Making maps from Planck LFI 30 GHz data with asymmetric beams and cooler noise
The Planck satellite will observe the full sky at nine frequencies from 30 to 857 GHz. Temperature and polarization frequency maps made from these observations are prime deliverables of the Planck mission. The goal of this paper is to examine the effects of four realistic instrument systematics in the 30 GHz frequency maps: non-axially-symmetric beams, sample integration, sorption cooler noise, and pointing errors. We simulated one-year long observations of four 30 GHz detectors. The simulated timestreams contained cosmic microwave background (CMB) signal, foreground components (both galactic and extra-galactic), instrument noise (correlated and white), and the four instrument systematic effects. We made maps from the timelines and examined the magnitudes of the systematic effects in the maps and their angular power spectra. We also compared the maps of different mapmaking codes to see how they performed. We used five mapmaking codes (two destripers and three optimal codes). None of our mapmaking codes makes any attempt to deconvolve the beam from its output map. Therefore all our maps had similar smoothing due to beams and sample integration. This is a complicated smoothing, because each map pixel has its own effective beam. Temperature-to-polarization cross-coupling due to beam mismatch causes a detectable bias in the TE spectrum of the CMB map. The effects of cooler noise and pointing errors did not appear to be major concerns for the 30 GHz channel. The only essential difference found so far between mapmaking codes that affects accuracy (in terms of residual root-mean-square) is baseline length. All optimal codes give essentially indistinguishable results. A destriper gives the same result as the optimal codes when the baseline is set short enough (Madam). For longer baselines, destripers (Springtide and Madam) require less computing resources but deliver a noisier map.
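The destriping approach compared above can be sketched minimally (this is a generic one-offset-per-baseline toy, not Madam or Springtide; the pointing, chunk length, and noise levels are invented for illustration): model the correlated noise as one constant offset per baseline-length chunk of samples, then alternately estimate the binned map and the offsets.

```python
import numpy as np

# Minimal destriping sketch: correlated noise is modeled as one unknown
# constant offset per chunk ("baseline") of the time-ordered data (TOD),
# estimated by alternating between binning a map and fitting offsets.

rng = np.random.default_rng(2)
npix, nsamp, blen = 16, 4096, 64
pix = rng.integers(0, npix, nsamp)            # toy pointing: pixel per sample
sky = rng.normal(0, 1, npix)                  # input sky map
offsets = np.repeat(rng.normal(0, 5, nsamp // blen), blen)  # 1/f-like drifts
tod = sky[pix] + offsets + rng.normal(0, 0.1, nsamp)        # simulated TOD

base = np.zeros(nsamp)
for _ in range(10):                           # alternate map / baseline fits
    resid = tod - base
    # Bin the baseline-cleaned TOD into a map (hit-weighted average).
    m = np.bincount(pix, resid, npix) / np.bincount(pix, minlength=npix)
    # Re-fit one offset per chunk from the map-cleaned TOD.
    base = np.repeat((tod - m[pix]).reshape(-1, blen).mean(axis=1), blen)

# The map and baselines share a global-offset degeneracy, so compare
# mean-removed maps: the recovered sky should match the input closely.
err = (m - m.mean()) - (sky - sky.mean())
```

Shorter baselines track the correlated noise more closely (approaching the optimal solution) at higher computational cost, which is the accuracy/resources trade-off the paper reports.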
Constraining New Physics with a Positive or Negative Signal of Neutrino-less Double Beta Decay
We investigate numerically how accurately one could constrain the strengths
of different short-range contributions to neutrino-less double beta decay in
effective field theory. Depending on the outcome of near-future experiments
yielding information on the neutrino masses, the corresponding bounds or
estimates can be stronger or weaker. A particularly interesting case, resulting
in strong bounds, would be a positive signal of neutrino-less double beta decay
that is consistent with complementary information from neutrino oscillation
experiments, kinematical determinations of the neutrino mass, and measurements
of the sum of light neutrino masses from cosmological observations. The keys to
more robust bounds are improvements of the knowledge of the nuclear physics
involved and a better experimental accuracy.
Comment: 23 pages, 3 figures. Minor changes. Matches version published in JHEP
Future Directions in Parity Violation: From Quarks to the Cosmos
I discuss the prospects for future studies of parity-violating (PV)
interactions at low energies and the insights they might provide about open
questions in the Standard Model as well as physics that lies beyond it. I cover
four types of parity-violating observables: PV electron scattering; PV hadronic
interactions; PV correlations in weak decays; and searches for the permanent
electric dipole moments of quantum systems.
Comment: Talk given at PAVI 06 workshop on parity-violating interactions, Milos, Greece (May 2006); 10 pages
Neutrinoless double beta decay in seesaw models
We study the general phenomenology of neutrinoless double beta decay in
seesaw models. In particular, we focus on the dependence of the neutrinoless
double beta decay rate on the mass of the extra states introduced to account
for the Majorana masses of light neutrinos. For this purpose, we compute the
nuclear matrix elements as functions of the mass of the mediating fermions and
estimate the associated uncertainties. We then discuss what can be inferred on
the seesaw model parameters in the different mass regimes and clarify how the
contribution of the light neutrinos should always be taken into account when
deriving bounds on the extra parameters. Conversely, the extra states can also
have a significant impact, cancelling the Standard Model neutrino contribution
for masses lighter than the nuclear scale and leading to vanishing neutrinoless
double beta decay amplitudes even if neutrinos are Majorana particles. We also
discuss how seesaw models could reconcile large rates of neutrinoless double
beta decay with more stringent cosmological bounds on neutrino masses.
Comment: 34 pages, 5 eps figures and 1 axodraw figure. Final version published in JHEP. NME results available in Appendix
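Schematically, the mass dependence studied here follows from the neutrino propagator in the decay amplitude (standard light and heavy limits; the ~100 MeV scale is the typical virtual momentum in the nucleus, quoted for orientation):

```latex
% Dependence of the exchange amplitude on the Majorana mediator mass m_i:
\mathcal{A} \;\propto\; \sum_i \frac{U_{ei}^2\, m_i}{\langle p^2 \rangle + m_i^2},
\qquad \langle p^2 \rangle \sim (100\ \mathrm{MeV})^2 ,
% so light states contribute as  U_{ei}^2 m_i / \langle p^2 \rangle  and
% heavy states as  U_{ei}^2 / m_i ,  with the crossover at the nuclear
% scale -- which is where the extra seesaw states can cancel the light
% neutrino contribution, as described in the abstract.
```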
Probing New Physics Models of Neutrinoless Double Beta Decay with SuperNEMO
The possibility to probe new physics scenarios of light Majorana neutrino
exchange and right-handed currents at the planned next generation neutrinoless
double beta decay experiment SuperNEMO is discussed. Its ability to study
different isotopes and track the outgoing electrons provides the means to
discriminate different underlying mechanisms for the neutrinoless double beta
decay by measuring the decay half-life and the electron angular and energy
distributions.
Comment: 17 pages, 14 figures, to be published in E.P.J.
Double Beta Decay
We review recent developments in double-beta decay, focusing on what can be
learned about the three light neutrinos in future experiments. We examine the
effects of uncertainties in already measured neutrino parameters and in
calculated nuclear matrix elements on the interpretation of upcoming
double-beta decay measurements. We then review a number of proposed
experiments.
Comment: Some typos corrected, references corrected and added. A less blurry version of figure 3 is available from the authors. 41 pages, 5 figures, submitted to J. Phys.
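The standard light-neutrino-exchange relations connecting a measured half-life to the neutrino parameters discussed in the review are (textbook expressions, reproduced for reference):

```latex
% Half-life for light Majorana neutrino exchange:
\left( T^{0\nu}_{1/2} \right)^{-1}
  = G^{0\nu} \, \bigl| M^{0\nu} \bigr|^2 \, \frac{|m_{\beta\beta}|^2}{m_e^2},
% with the effective Majorana mass
m_{\beta\beta} = \sum_i U_{ei}^2\, m_i
  = c_{12}^2 c_{13}^2\, m_1
  + s_{12}^2 c_{13}^2\, m_2\, e^{i\alpha_1}
  + s_{13}^2\, m_3\, e^{i\alpha_2} ,
% where G^{0\nu} is the phase-space factor, M^{0\nu} the nuclear matrix
% element (the dominant theoretical uncertainty emphasized above), and
% \alpha_{1,2} the Majorana phases.
```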