
    Overviews of Optimization Techniques for Geometric Estimation

    We summarize techniques for optimal geometric estimation from noisy observations for computer vision applications. We first discuss the interpretation of optimality and point out that geometric estimation differs from standard statistical estimation. We also describe our noise modeling and a theoretical accuracy limit called the KCR lower bound. Then, we formulate estimation techniques based on minimization of a given cost function: least squares (LS), maximum likelihood (ML), which includes reprojection error minimization as a special case, and Sampson error minimization. We describe bundle adjustment and the FNS scheme for numerically solving them, and the hyperaccurate correction that improves the accuracy of ML. Next, we formulate estimation techniques not based on minimization of any cost function: iterative reweight, renormalization, and hyper-renormalization. Finally, we show numerical examples demonstrating that hyper-renormalization achieves higher accuracy than ML, which has widely been regarded as the most accurate method of all. We conclude that hyper-renormalization is robust to noise and is currently the best method.
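    A minimal Python sketch (my illustration, not the survey's code) of the simplest member of the cost-minimization family above, algebraic least squares: each datum contributes a vector xi, and LS minimizes the sum of (xi . theta)^2 subject to ||theta|| = 1, which is solved by the eigenvector of the moment matrix with the smallest eigenvalue. The line-fitting setup and all numbers are invented for illustration.

```python
import numpy as np

# Illustrative sketch only: algebraic least squares (LS) for a line
# a*x + b*y + c = 0 through noisy points. Each point gives
# xi = (x, y, 1); LS minimizes sum((xi . theta)^2) subject to
# ||theta|| = 1, i.e. the smallest-eigenvalue eigenvector of M.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.01, size=x.size)  # true slope 2

xi = np.stack([x, y, np.ones_like(x)], axis=1)
M = xi.T @ xi                              # moment matrix sum(xi xi^T)
eigvals, eigvecs = np.linalg.eigh(M)       # eigenvalues in ascending order
theta = eigvecs[:, 0]                      # minimizer of the LS cost
a, b, c = theta
slope = -a / b                             # recover slope from a*x + b*y + c = 0
```

    ML and the iterative schemes the survey compares replace the uniform weighting above with covariance-dependent weights; this unweighted LS step is the baseline they improve on.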

    Imfit: A Fast, Flexible New Program for Astronomical Image Fitting

    I describe a new, open-source astronomical image-fitting program called Imfit, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design which allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include the usual suspects for galaxy decompositions (Sersic, exponential, Gaussian), along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components which perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-S/N galaxy images using chi^2 minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
    Comment: pdflatex, 27 pages, 19 figures. Revised version, accepted by ApJ. Programs, source code, and documentation available at: http://www.mpe.mpg.de/~erwin/code/imfit
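    The low-count bias the abstract describes can be reproduced in a few lines. This toy Python fragment (my own illustration, not Imfit code) fits a constant "sky" level to Poisson counts two ways: chi^2 with per-pixel Gaussian variances estimated from the data, and Poisson maximum likelihood, for which the constant-model solution is simply the sample mean.

```python
import numpy as np

# Toy illustration (not Imfit code): estimate a constant "sky" level
# from Poisson counts in the low-count regime.
rng = np.random.default_rng(1)
true_level = 5.0
data = rng.poisson(true_level, size=100_000).astype(float)

# chi^2 with per-pixel Gaussian variances taken from the data
# (sigma_i^2 = max(n_i, 1)): the weighted mean over-weights pixels
# that fluctuated low, biasing the fitted level low.
var = np.clip(data, 1.0, None)
chi2_fit = np.sum(data / var) / np.sum(1.0 / var)

# Poisson maximum likelihood for a constant model: the sample mean,
# which is unbiased.
ml_fit = data.mean()
```

    With a true level of 5 counts per pixel, the data-weighted chi^2 estimate comes out noticeably below the Poisson-ML estimate, which is the qualitative effect the paper quantifies for galaxy profile parameters.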

    Systematic effects on dark energy from 3D weak shear

    We present an investigation into the potential effect of systematics inherent in multi-band wide field surveys on the dark energy equation of state determination for two 3D weak lensing methods. The weak lensing methods are a geometric shear-ratio method and 3D cosmic shear. The analysis here uses an extension of the Fisher matrix framework to jointly include photometric redshift systematics, shear distortion systematics and intrinsic alignments. We present results for the DUNE and Pan-STARRS surveys. We show that assuming systematic parameters are fixed, but possibly biased, results in potentially large biases in dark energy parameters. We quantify any potential bias by defining a Bias Figure of Merit. We also show the effect on the dark energy Figure of Merit of marginalising over each systematic parameter individually. We find that the largest effect on the Figure of Merit comes from uncertainty in the photometric redshift systematic parameters. These can reduce the Figure of Merit by up to a factor of 2 to 4 in both 3D weak lensing methods, if no informative prior on the systematic parameters is applied. Shear distortion systematics have a smaller overall effect. Intrinsic alignment effects can reduce the Figure of Merit by up to a further factor of 2. This, however, is a worst case scenario. By including prior information on systematic parameters the Figure of Merit can be recovered to a large extent. We conclude that, as a rule of thumb, given a realistic current understanding of intrinsic alignments and photometric redshifts, then including all three primary systematic effects reduces the Figure of Merit by at most a factor of 2, but that in reality this factor should be much less. [abridged]
    Comment: 20 pages, 11 figures, submitted to MNRAS
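    The fix/marginalize/prior comparison described above can be illustrated with a toy Fisher matrix. All numbers here are invented for illustration, not taken from the paper; the point is only the mechanics: marginalizing over a systematic parameter degrades the Figure of Merit, and an informative prior on that parameter recovers much of it.

```python
import numpy as np

# Toy Fisher matrix for (w0, wa, s), where s is a single systematic
# nuisance parameter. The entries are illustrative only.
F = np.array([[ 40.0, -12.0,  8.0],
              [-12.0,   6.0, -3.0],
              [  8.0,  -3.0,  4.0]])

def fom(fisher, keep):
    """Figure of Merit 1/sqrt(det C) for the marginalized covariance
    of the parameters indexed by `keep`."""
    cov = np.linalg.inv(fisher)[np.ix_(keep, keep)]
    return 1.0 / np.sqrt(np.linalg.det(cov))

fom_fixed = fom(F[:2, :2], [0, 1])   # systematic fixed at its fiducial value
fom_marg  = fom(F, [0, 1])           # fully marginalized over the systematic

# Adding a Gaussian prior of width sigma_s on s adds 1/sigma_s^2 to F[2, 2]
# and recovers most of the lost constraining power.
F_prior = F.copy()
F_prior[2, 2] += 1.0 / 0.1**2
fom_prior = fom(F_prior, [0, 1])
```

    The ordering fom_marg < fom_prior < fom_fixed mirrors the paper's conclusion that informative priors on systematic parameters largely restore the dark energy Figure of Merit.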

    The X-ray Cluster Normalization of the Matter Power Spectrum

    The number density of galaxy clusters provides tight statistical constraints on the matter fluctuation power spectrum normalization, traditionally phrased in terms of sigma_8, the root mean square mass fluctuation in spheres with radius 8 h^-1 Mpc. We present constraints on sigma_8 and the total matter density Omega_m0 from local cluster counts as a function of X-ray temperature, taking care to incorporate and minimize systematic errors that plagued previous work with this method. In particular, we present new determinations of the cluster luminosity - temperature and mass - temperature relations, including their intrinsic scatter, and a determination of the Jenkins mass function parameters for the same mass definition as the mass - temperature calibration. Marginalizing over the 12 uninteresting parameters associated with this method, we find that the local cluster temperature function implies sigma_8 (Omega_m0/0.32)^alpha = 0.86+/-0.04 with alpha = 0.30 (0.41) for Omega_m0 < 0.32 (Omega_m0 > 0.32) (68% confidence for two parameters). This result agrees with a wide range of recent independent determinations, and we find no evidence of any additional sources of systematic error for the X-ray cluster temperature function determination of the matter power spectrum normalization. The joint WMAP5 + cluster constraints are: Omega_m0 = 0.30+0.03/-0.02 and sigma_8 = 0.85+0.04/-0.02 (68% confidence for two parameters).
    Comment: 31 pages, 16 figures, accepted for publication in ApJ 609, Jan. 10, 200
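    The quoted degeneracy relation can be inverted directly to read off the central value of sigma_8 at any assumed matter density. The helper below is hypothetical (the function name is mine); it simply encodes sigma_8 (Omega_m0/0.32)^alpha = 0.86 with the piecewise alpha stated in the abstract, ignoring the +/-0.04 uncertainty.

```python
def sigma8_from_cluster_constraint(omega_m0):
    """Central sigma_8 implied by the quoted degeneracy
    sigma_8 * (Omega_m0 / 0.32)**alpha = 0.86, with alpha = 0.30
    for Omega_m0 < 0.32 and alpha = 0.41 for Omega_m0 > 0.32."""
    alpha = 0.30 if omega_m0 < 0.32 else 0.41
    return 0.86 / (omega_m0 / 0.32) ** alpha
```

    At the pivot Omega_m0 = 0.32 this returns 0.86 by construction; lower matter densities require a higher sigma_8 to match the observed cluster abundance, and vice versa.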

    Unbiased Cosmological Parameter Estimation from Emission Line Surveys with Interlopers

    The galaxy catalogs generated from low-resolution emission line surveys often contain both foreground and background interlopers due to line misidentification, which can bias the cosmological parameter estimation. In this paper, we present a method for correcting the interloper bias by using the joint analysis of auto- and cross-power spectra of the main and the interloper samples. In particular, we can measure the interloper fractions from the cross-correlation between the interlopers and survey galaxies, because the true cross-correlation must be negligibly small. The estimated interloper fractions, in turn, remove the interloper bias in the cosmological parameter estimation. For example, in the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), low-redshift (z < 0.5) [O II] λ3727Å emitters contaminate high-redshift (1.9 < z < 3.5) Lyman-α line emitters. We demonstrate that the joint-analysis method yields a high signal-to-noise ratio measurement of the interloper fractions while only marginally increasing the uncertainties in the cosmological parameters relative to the case without interlopers. We also show the same is true for the high-latitude spectroscopic survey of the Wide-Field Infrared Survey Telescope (WFIRST) mission, where contamination occurs between the Balmer-α line emitters at lower redshifts (1.1 < z < 1.9) and oxygen ([O III] λ5007Å) line emitters at higher redshifts (1.7 < z < 2.8).
    Comment: 36 pages, 26 figures
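    The key idea, that a vanishing true cross-correlation lets the measured cross-power isolate the interloper fraction, can be sketched with a noise-free toy mixing model. This is my illustration of the principle, not the paper's full joint likelihood: amplitudes are arbitrary, and the cross-power is taken between the contaminated sample and a clean tracer at the interloper redshift.

```python
# Toy mixing model (illustrative, not the paper's estimator).
# A fraction f of the catalog are interlopers. Because the true
# main/interloper cross-power is negligibly small, any measured
# cross-power with a clean tracer of the interloper redshift comes
# from the interloper fraction alone.
f_true = 0.1
P_main, P_int = 1000.0, 400.0              # true auto-power amplitudes

# Observed auto-power of the contaminated sample (cross-term dropped
# because the true main x interloper power is ~0):
P_auto = (1 - f_true)**2 * P_main + f_true**2 * P_int

# Observed cross-power: contaminated sample x clean interloper tracer.
P_cross = f_true * P_int

# Recover the interloper fraction, then de-bias the auto-power.
f_hat = P_cross / P_int
P_main_hat = (P_auto - f_hat**2 * P_int) / (1 - f_hat)**2
```

    In the real analysis both samples are contaminated and everything is noisy, so the fractions and power spectra are fit jointly, but the information content is the same as in this algebra.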

    Photometric Supernova Cosmology with BEAMS and SDSS-II

    Supernova cosmology without spectroscopic confirmation is an exciting new frontier which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of supernovae with their probabilities derived from their multi-band lightcurves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 supernovae, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric supernova cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples which have been cut using typical selection criteria. The latter typically are either biased due to contamination or have significantly larger contours in the cosmological parameters due to small data-sets. We then apply BEAMS to the 792 SDSS-II photometric supernovae with host spectroscopic redshifts. In this case, BEAMS reduces the area of the (Omega_m, Omega_Lambda) contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 supernovae). In the case of flatness, the constraint obtained on the matter density by applying BEAMS to the photometric SDSS-II data is Omega_m(BEAMS) = 0.194 +/- 0.07. This illustrates the potential power of BEAMS for future large photometric supernova surveys such as LSST.
    Comment: 25 pages, 15 figures, submitted to ApJ
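    The core of BEAMS, weighting each object's likelihood by its type probability, can be sketched in a toy one-parameter problem. Everything here is my illustration, not the paper's pipeline: two Gaussian populations stand in for Ia and non-Ia supernovae, the per-object probabilities are given, and a grid search replaces MCMC.

```python
import numpy as np

# Toy BEAMS-style mixture fit (illustrative only). We infer the mean mu
# of the "Ia-like" population from a catalog contaminated by a second
# population, using imperfect per-object type probabilities p_i.
rng = np.random.default_rng(2)
n = 2000
is_ia = rng.random(n) < 0.7
mu_true, mu_con = 0.0, 2.0
x = np.where(is_ia,
             rng.normal(mu_true, 0.1, n),     # "Ia-like" population
             rng.normal(mu_con, 0.5, n))      # contaminant population
# Informative but imperfect type probabilities, as from lightcurve fits.
p = np.clip(np.where(is_ia, 0.9, 0.1) + rng.normal(0.0, 0.03, n), 0.01, 0.99)

def neg_loglike(mu):
    """BEAMS mixture: each object contributes p_i * L_Ia + (1 - p_i) * L_con."""
    l_ia  = np.exp(-0.5 * ((x - mu) / 0.1) ** 2) / 0.1
    l_con = np.exp(-0.5 * ((x - mu_con) / 0.5) ** 2) / 0.5
    return -np.sum(np.log(p * l_ia + (1 - p) * l_con))

grid = np.linspace(-0.5, 0.5, 1001)
mu_hat = grid[np.argmin([neg_loglike(m) for m in grid])]
```

    A hard probability cut would either discard good objects or let contaminants bias mu; the mixture likelihood uses every object with its appropriate weight, which is why BEAMS recovers tighter, unbiased contours in the paper.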

    The Hyper Suprime-Cam Software Pipeline

    In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs, as well as low-level detrending and image characterization.
    Comment: 39 pages, 21 figures, 2 tables. Submitted to Publications of the Astronomical Society of Japan