POTENT Reconstruction from Mark III Velocities
We present an improved POTENT method for reconstructing the velocity and mass
density fields from radial peculiar velocities, test it with mock catalogs, and
apply it to the Mark III Catalog. Method improvements: (a) inhomogeneous
Malmquist bias is reduced by grouping and corrected in forward or inverse
analyses of inferred distances, (b) the smoothing into a radial velocity field
is optimized to reduce window and sampling biases, (c) the density is derived
from the velocity using an improved nonlinear approximation, and (d) the
computational errors are made negligible. The method is tested and optimized
using mock catalogs based on an N-body simulation that mimics our cosmological
neighborhood, and the remaining errors are evaluated quantitatively. The Mark
III catalog, with ~3300 grouped galaxies, allows a reliable reconstruction with
fixed Gaussian smoothing of 10-12 Mpc/h out to ~60 Mpc/h. We present maps of
the 3D velocity and mass-density fields and the corresponding errors. The
typical systematic and random errors in the density fluctuations inside 40
Mpc/h are \pm 0.13 and \pm 0.18. The recovered mass distribution resembles in
its gross features the galaxy distribution in redshift surveys and the mass
distribution in a similar POTENT analysis of a complementary velocity catalog
(SFI), including the Great Attractor, Perseus-Pisces, and the void in between.
The reconstruction inside ~40 Mpc/h is not affected much by a revised
calibration of the distance indicators (VM2, tailored to match the velocities
from the IRAS 1.2Jy redshift survey). The bulk velocity within the sphere of
radius 50 Mpc/h about the Local Group is V_50=370 \pm 110 km/s (including
systematic errors), and is shown to be mostly generated by external mass
fluctuations. With the VM2 calibration, V_50 is reduced to 305 \pm 110 km/s.

Comment: 60 pages, LaTeX, 3 tables and 27 figures incorporated (one may print
the most crucial figures only by commenting out one line in the LaTeX source).
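The core POTENT step can be illustrated with a short sketch. Below is a minimal Python version of the potential-flow reconstruction the abstract describes: integrate a smoothed radial velocity field along radial rays to obtain a velocity potential, differentiate it for the full 3D velocity, and estimate the density contrast from the velocity divergence. The grid sizes, the toy infall flow, the linear (rather than the paper's improved nonlinear) density approximation, and f(Omega) = Omega^0.6 with Omega = 0.3 are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Spherical grid out to ~60 Mpc/h (sizes are illustrative assumptions).
nr, nth, nph = 64, 32, 64
r = np.linspace(1e-3, 60.0, nr)                 # Mpc/h
theta = np.linspace(1e-3, np.pi - 1e-3, nth)    # avoid the poles
phi = np.linspace(0.0, 2 * np.pi, nph, endpoint=False)
R, TH, PH = np.meshgrid(r, theta, phi, indexing="ij")

# Toy smoothed radial peculiar velocity field u (km/s): infall toward a
# Great Attractor-like overdensity; purely invented for the example.
u = -200.0 * np.exp(-((R - 40.0) / 15.0) ** 2) * np.cos(TH)

# Potential flow: Phi(r) = -int_0^r u dr' along each radial ray.
dr = r[1] - r[0]
Phi = -np.cumsum(u, axis=0) * dr

# Full 3D velocity from the potential: v = -grad(Phi) in spherical coords.
v_r = -np.gradient(Phi, dr, axis=0)
v_th = -np.gradient(Phi, theta, axis=1) / R
v_ph = -np.gradient(Phi, phi, axis=2) / (R * np.sin(TH))

# Density contrast from the divergence, linear approximation
# delta = -div(v) / (f * H0); the paper uses an improved nonlinear one.
div_v = (np.gradient(R**2 * v_r, dr, axis=0) / R**2
         + np.gradient(np.sin(TH) * v_th, theta, axis=1) / (R * np.sin(TH))
         + np.gradient(v_ph, phi, axis=2) / (R * np.sin(TH)))
f, H0 = 0.3**0.6, 100.0        # f = Omega^0.6, H0 in km/s per Mpc/h
delta = -div_v / (f * H0)
print("recovered delta range:", float(delta.min()), float(delta.max()))
```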
The LSST DESC data challenge 1: Generation and analysis of synthetic images for next-generation surveys
Data Challenge 1 (DC1) is the first synthetic data set produced by the Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). DC1 is designed to develop and validate data reduction and analysis, and to study the impact of systematic effects that will affect the LSST data set. DC1 comprises r-band observations of 40 deg^2 to 10-yr LSST depth. We present each stage of the simulation and analysis process: (a) generation, by synthesizing sources from cosmological N-body simulations in individual sensor-visit images with different observing conditions; (b) reduction using a development version of the LSST Science Pipelines; and (c) matching to the input cosmological catalogue for validation and testing. We verify that testable LSST requirements pass within the fidelity of DC1. We establish a selection procedure that produces a sufficiently clean extragalactic sample for clustering analyses, and we discuss residual sample contamination, including contributions from inefficiency in star-galaxy separation and imperfect deblending. We compute the galaxy power spectrum on the simulated field and conclude that: (i) survey properties have an impact of 50 per cent of the statistical uncertainty for the scales and models used in DC1; (ii) a selection to eliminate artefacts in the catalogues is necessary to avoid biases in the measured clustering; and (iii) the presence of bright objects has a significant impact (2-6σ) in the estimated power spectra at small scales (ℓ > 1200), highlighting the impact of blending in studies at small angular scales in LSST.
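As an illustration of stage (c), here is a minimal Python sketch of positional catalogue matching: each detected source is associated with the nearest input-catalogue source within a match radius. The column layout, the 1 arcsec radius, the injected astrometric scatter, and the flat-sky approximation are assumptions for a toy example, not the collaboration's actual validation pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Toy "truth" and "detected" catalogues over a small field (degrees).
ra_true = rng.uniform(0.0, 1.0, 5000)
dec_true = rng.uniform(0.0, 1.0, 5000)
idx = rng.choice(5000, 4500, replace=False)            # some sources undetected
ra_det = ra_true[idx] + rng.normal(0, 0.1 / 3600, 4500)  # ~0.1 arcsec scatter
dec_det = dec_true[idx] + rng.normal(0, 0.1 / 3600, 4500)

# Flat-sky match: scale RA by cos(dec) so distances are roughly angular.
scale = np.cos(np.deg2rad(dec_true.mean()))
tree = cKDTree(np.column_stack([ra_true * scale, dec_true]))
dist, match = tree.query(np.column_stack([ra_det * scale, dec_det]))

radius = 1.0 / 3600.0                                  # 1 arcsec (assumed)
good = dist < radius
print(f"matched {good.sum()}/{len(ra_det)} detections within 1 arcsec")
```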
Optimization of Planck/LFI on-board data handling
To assess stability against 1/f noise, the Low Frequency Instrument (LFI)
onboard the Planck mission will acquire data at a rate much higher than the
data rate allowed by its telemetry bandwidth of 35.5 kbps. The data are
processed by an onboard pipeline, followed on the ground by a reversing step. This
paper illustrates the LFI scientific onboard processing used to fit the allowed
data rate. This is a lossy process tuned by a set of five parameters (Naver,
r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level
of distortion introduced by the onboard processing, EpsilonQ, as a function of
these parameters. It describes the method of optimizing the onboard processing
chain. The tuning procedure is based on an optimization algorithm applied to
unprocessed and uncompressed raw data provided by simulations, pre-launch
tests, or data taken from LFI operating in diagnostic mode. All the needed
optimization steps are performed by an automated tool, OCA2, which delivers the
optimized parameters and produces a set of statistical indicators, among them
the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr =
2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed
up the process, an analytical model is developed that extracts most of
the relevant information on EpsilonQ and Cr as a function of the signal
statistics and the processing parameters. This model will be of interest for
the instrument data analysis. The method was applied during ground tests when
the instrument was operating in conditions representative of flight. Optimized
parameters were obtained and the performance was verified: the required data
rate of 35.5 kbps was achieved while keeping EpsilonQ at 3.8% of the white
noise rms, well within the requirements.

Comment: 51 pages, 13 figures, 3 tables, pdflatex, needs JINST.csl, graphicx,
txfonts, rotating; Issue 1.0, 10 Nov 2009; Sub. to JINST 23 Jun 09, Accepted
10 Nov 09, Pub. 29 Dec 09; This is a preprint, not the final version.
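A toy sketch can make the roles of the tuning parameters concrete. The Python below implements an average/mix/requantize chain loosely modeled on the abstract's description, then measures an EpsilonQ-like quantization distortion and an entropy-based proxy for the compression rate Cr. The mixing form, the 16-bit raw-sample baseline, and all parameter values are illustrative assumptions, not the actual LFI on-board pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
sky = rng.normal(0.0, 1.0, 2**16)      # toy sky stream, unit white-noise rms
ref = rng.normal(0.0, 1.0, 2**16)      # toy reference-load stream

def process(sky, ref, naver, r1, r2, q, O):
    """Average naver samples, mix sky/ref, requantize with step q, offset O."""
    n = len(sky) // naver * naver
    s = sky[:n].reshape(-1, naver).mean(axis=1)
    r = ref[:n].reshape(-1, naver).mean(axis=1)
    mixed = np.concatenate([s - r1 * r, s - r2 * r])   # assumed mixing form
    codes = np.round((mixed - O) / q).astype(np.int64)
    return mixed, codes

def entropy_bits(codes):
    """Shannon entropy per code: a proxy for the losslessly compressed size."""
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

naver, r1, r2, q, O = 4, 0.9, 1.1, 0.3, 0.0    # illustrative parameter values
mixed, codes = process(sky, ref, naver, r1, r2, q, O)
recon = codes * q + O                          # on-ground "reversing" step
eps_q = (recon - mixed).std() / sky.std()      # distortion vs. white-noise rms
cr = 16.0 / entropy_bits(codes)                # vs. an assumed 16-bit raw sample
print(f"EpsilonQ ~ {eps_q:.3f}, Cr ~ {cr:.2f}")
```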
Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems
Semidefinite programming is a powerful tool in the design and analysis of
approximation algorithms for combinatorial optimization problems. In
particular, the random hyperplane rounding method of Goemans and Williamson has
been extensively studied for more than two decades, resulting in various
extensions to the original technique and beautiful algorithms for a wide range
of applications. Despite the fact that this approach yields tight approximation
guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and
Max-DiCut, the tight approximation ratio is still unknown. One of the main
reasons for this is the fact that very few techniques for rounding semidefinite
relaxations are known.
In this work, we present a new general and simple method for rounding
semidefinite programs, based on Brownian motion. Our approach is inspired by
recent results in algorithmic discrepancy theory. We develop and present tools
for analyzing our new rounding algorithms, utilizing mathematical machinery
from the theory of Brownian motion, complex analysis, and partial differential
equations. Focusing on constraint satisfaction problems, we apply our method to
several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and
derive new algorithms that are competitive with the best known results. To
illustrate the versatility and general applicability of our approach, we give
new approximation algorithms for the Max-Cut problem with side constraints that
crucially utilize measure concentration results for the Sticky Brownian Motion,
a feature missing from hyperplane rounding and its generalizations.
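One plausible reading of the rounding scheme, for Max-Cut, can be sketched in a few lines of Python: each vertex i carries a walk driven by a shared Brownian motion projected onto its SDP vector v_i, and the walk freezes ("sticks") when it first hits +1 or -1; the final signs define the cut. The SDP vectors below are random stand-ins (a real run would take them from a Max-Cut SDP solver), and the step size and toy graph are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
V = rng.normal(size=(n, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit vectors v_i (stand-ins)

def sticky_brownian_round(V, dt=1e-3, rng=rng):
    """Round SDP vectors to +/-1 via a shared Brownian motion that sticks."""
    num, dim = V.shape
    x = np.zeros(num)                  # every walk starts at the origin
    alive = np.ones(num, dtype=bool)   # walks not yet stuck at +/-1
    while alive.any():
        dB = rng.normal(scale=np.sqrt(dt), size=dim)  # shared Brownian step
        x[alive] += V[alive] @ dB                     # dX_i = <v_i, dB>
        hit = np.abs(x) >= 1.0
        x[hit] = np.sign(x[hit])                      # stick at the boundary
        alive &= ~hit
    return np.sign(x)                  # the +/-1 assignment, i.e. the cut

assignment = sticky_brownian_round(V)

# Toy evaluation on a random graph (unrelated to V, so ~half the edges cut).
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.2]
cut = sum(assignment[i] != assignment[j] for i, j in edges)
print(f"cut {cut}/{len(edges)} edges")
```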