
    Reconstruction Analysis of Galaxy Redshift Surveys: A Hybrid Reconstruction Method

    In reconstruction analysis of galaxy redshift surveys, one works backwards from the observed galaxy distribution to the primordial density field in the same region, then evolves the primordial fluctuations forward in time with an N-body code. This incorporates assumptions about the cosmological parameters, the properties of primordial fluctuations, and the biasing relation between galaxies and mass. These can be tested by comparing the reconstruction to the observed galaxy distribution and to peculiar velocity data. This paper presents a hybrid reconstruction method that combines the "Gaussianization" technique of Weinberg (1992) with the dynamical schemes of Nusser & Dekel (1992) and Gramann (1993). We test the method on N-body simulations and on N-body mock catalogs that mimic the depth and geometry of the Point Source Catalog Redshift Survey and the Optical Redshift Survey. This method is more accurate than Gaussianization or dynamical reconstruction alone. Matching the observed morphology of clustering can limit the bias factor b, independent of Omega. Matching the cluster velocity dispersions and z-space distortions of the correlation function xi(s,mu) constrains the parameter beta = Omega^{0.6}/b. Relative to linear or quasi-linear approximations, a fully non-linear reconstruction makes more accurate predictions of xi(s,mu) for a given beta, thus reducing the systematic biases of beta measurements and offering further scope for breaking the degeneracy between Omega and b. It also circumvents the cosmic variance noise that limits conventional analyses of xi(s,mu), and it can improve the determination of Omega and b from joint analyses of redshift and peculiar velocity surveys, since it predicts the fully non-linear peculiar velocity distribution at each point in z-space.
    Comment: 72 pages including 33 figures, submitted to Ap
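    As a point of reference for the Gaussianization step, here is a minimal sketch of the rank-order mapping of Weinberg (1992): monotonically remap the smoothed galaxy density field onto a Gaussian one-point distribution while preserving the rank ordering of the cells. The function name and the SciPy-based implementation are our own illustration, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(delta_g, sigma=1.0):
    """Rank-order map a density field onto a Gaussian one-point PDF of
    dispersion `sigma`, preserving the rank ordering of the cells (the
    monotonic mapping at the heart of the Gaussianization technique)."""
    flat = delta_g.ravel()
    order = np.argsort(flat)                 # cells sorted by density
    n = flat.size
    # Gaussian quantiles at mid-rank positions (avoids +/- infinity)
    quantiles = sigma * norm.ppf((np.arange(n) + 0.5) / n)
    gauss = np.empty_like(flat, dtype=float)
    gauss[order] = quantiles                 # assign quantile by rank
    return gauss.reshape(delta_g.shape)
```

    In the hybrid scheme, such a Gaussianized field would then be combined with a dynamical (Nusser & Dekel / Gramann-style) reconstruction step.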

    Converted wave imaging and velocity analysis using elastic reverse-time migration

    Master's thesis in petroleum geosciences engineering.
    Along the continuous evolution of exploration seismology, the main objective has been to produce better subsurface seismic images that lead to lower-risk exploration and enhanced production. The unique characteristics of converted (P-S) waves enable retrieving more accurate subsurface information, which has given them a complementary role in hydrocarbon seismic exploration where the primary method, conventional compressional-wave (P-P) data, has limited capabilities. Conventional processing techniques for P-S data are based on approximations that respect neither the elastic nature of the subsurface nor the vector nature of the recorded wavefields, which underscores the need for accurate modeling of subsurface velocity fields and for an elastic imaging algorithm that can overcome the shortcomings of the conventional approximations. In this thesis we present a novel workflow for accurate depth imaging and velocity analysis of multicomponent data. The workflow is based on elastic reverse-time migration as a robust migration algorithm, together with automatic wave-equation migration velocity analysis techniques. We tested novel imaging conditions for elastic reverse-time migration in order to overcome the polarity-reversal problem, and investigated the cross-talk between wave modes. For velocity analysis we applied stack-power maximization to produce improved velocity fields that enhance image coherency; we then applied a co-depthing technique, based on a novel Born modeling/demigration method and a target image-fitting procedure, to produce a shear-wave velocity model that results in depth-consistent P-S and P-P images. We successfully implemented the workflow on synthetic and field datasets. The results show the robustness and practicality of the workflow in producing enhanced velocity models and accurate subsurface elastic images.
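    A core ingredient in such a workflow is the imaging condition of elastic reverse-time migration. The sketch below shows the simplest variant, a zero-lag cross-correlation of the source-side P wavefield with the receiver-side S wavefield; it assumes the wavefields have already been separated by mode (e.g. via Helmholtz decomposition) and does not address the polarity-reversal corrections the thesis investigates. Array layout and names are our own.

```python
import numpy as np

def converted_wave_image(src_p, rec_s):
    """Zero-lag cross-correlation imaging condition for converted (P-S)
    waves: correlate the forward-propagated source-side P wavefield with
    the backward-propagated receiver-side S wavefield at every image
    point, summing over time. Both arrays have shape (nt, nz, nx).
    A minimal sketch; production elastic RTM adds mode separation and
    polarity-reversal handling."""
    return np.einsum('tzx,tzx->zx', src_p, rec_s)
```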

    Observational biases in Lagrangian reconstructions of cosmic velocity fields

    Lagrangian reconstruction of large-scale peculiar velocity fields can be strongly affected by observational biases. We develop a thorough analysis of these systematic effects by relying on specially selected mock catalogues. For the purpose of this paper, we use the MAK reconstruction method, although any other Lagrangian reconstruction method should be sensitive to the same problems. We extensively study the uncertainty in the mass-to-light assignment due to luminosity incompleteness and the poorly determined relation between mass and luminosity. The impact of redshift-distortion corrections is analyzed in the context of MAK, and we check the importance of edge and finite-volume effects on the reconstructed velocities. Using three mock catalogues with different average densities, we also study the effect of cosmic variance. In particular, one of them presents the same global features as found in observational catalogues that extend to 80 Mpc/h scales. We give recipes, checked against the aforementioned mock catalogues, for handling these particular observational effects, after having introduced them into the mock catalogues so as to quantitatively mimic the most densely sampled galaxy catalogue of the nearby universe currently available. Once biases have been taken care of, the error in reconstructed velocities is typically about a quarter of the overall velocity dispersion, with no significant bias. We finally model our reconstruction errors to propose an improved Bayesian approach for measuring Omega_m in an unbiased way by comparing the reconstructed velocities to the measured ones in distance space, even though the latter may be plagued by large errors. We show that, in the context of observational data, a nearly unbiased estimator of Omega_m may be built using MAK reconstruction.
    Comment: 29 pages, 21 figures, 6 tables. Accepted by MNRAS on 2007 October 2; received 2007 September 30; in original form 2007 July 2
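    For context, the MAK (Monge-Ampère-Kantorovich) reconstruction the paper relies on reduces, in its discrete form, to an assignment problem: pair each observed position with a point on a uniform initial Lagrangian grid so that the total squared displacement is minimal. The sketch below uses SciPy's Hungarian-algorithm solver purely for illustration; production MAK codes use faster dedicated solvers, and the O(N^3) cost here limits it to small N.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mak_displacements(q, x):
    """Discrete MAK reconstruction: find the pairing between initial
    grid positions q (N, 3) and observed positions x (N, 3) that
    minimizes the total squared displacement, and return the
    displacement of each observed point from its assigned origin.
    Under the Zel'dovich approximation, peculiar velocities are
    proportional to these displacements."""
    cost = np.sum((q[:, None, :] - x[None, :, :])**2, axis=-1)  # (N, N)
    qi, xj = linear_sum_assignment(cost)   # optimal pairing
    return x[xj] - q[qi]                   # displacement field Psi
```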

    Measurability of kinetic temperature from metal absorption-line spectra formed in chaotic media

    We present a new method for recovering the kinetic temperature of intervening diffuse gas to an accuracy of 10%. The method is based on the comparison of unsaturated absorption-line profiles of two species with different atomic weights. The species are assumed to share the same temperature and bulk motion within the absorbing region. The computational technique involves the Fourier transform of the absorption profiles and a subsequent entropy-regularized chi^2 minimization (ERM) to estimate the model parameters. The procedure is tested using synthetic spectra of CII, SiII and FeII ions. A comparison with the standard Voigt fitting analysis is performed, and it is shown that Voigt deconvolution of complex absorption-line profiles may yield estimated temperatures which are not physical. We also successfully analyze Keck telescope spectra of the CII 1334 and SiII 1260 lines observed at redshift z = 3.572 toward the quasar Q1937-1009 by Tytler et al.
    Comment: 25 pages, 6 Postscript figures, aaspp4.sty file, submit. Ap
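    The physical basis of the method is the decomposition of the Doppler width into thermal and turbulent parts. Since both species share the temperature and the bulk motion, the standard two-species relation below isolates the kinetic temperature. This is a textbook sketch of the underlying idea (the paper itself fits the full profiles in Fourier space with entropy regularization), and the example b-values are hypothetical.

```python
# Kinetic temperature from Doppler b-parameters of two co-spatial species:
#   b_i^2 = 2 k T / m_i + b_turb^2
# Subtracting the two relations eliminates the common turbulent term:
#   T = m1 * m2 * (b1^2 - b2^2) / (2 k (m2 - m1))
K_B = 1.380649e-16   # Boltzmann constant, erg/K
AMU = 1.660539e-24   # atomic mass unit, g

def kinetic_temperature(b1_kms, m1_amu, b2_kms, m2_amu):
    b1, b2 = b1_kms * 1e5, b2_kms * 1e5          # km/s -> cm/s
    m1, m2 = m1_amu * AMU, m2_amu * AMU          # amu  -> g
    return m1 * m2 * (b1**2 - b2**2) / (2.0 * K_B * (m2 - m1))

# e.g. CII (12 amu) vs SiII (28 amu) with hypothetical widths:
print(kinetic_temperature(10.0, 12.0, 8.0, 28.0))   # ~4.5e4 K
```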

    MECI: A Method for Eclipsing Component Identification

    We describe an automated method for assigning the most probable physical parameters to the components of an eclipsing binary, using only its photometric light curve and combined colors. With traditional methods, one attempts to optimize a multi-parameter model over many iterations so as to minimize the chi-squared value. We suggest an alternative method, in which one selects pairs of coeval stars from a set of theoretical stellar models and compares their simulated light curves and combined colors with the observations. This approach greatly reduces the parameter space over which one needs to search, and allows one to estimate the components' masses, radii and absolute magnitudes without spectroscopic data. We have implemented this method in an automated program using published theoretical isochrones and limb-darkening coefficients. Since it is easy to automate, this method lends itself to systematic analyses of datasets consisting of photometric time series of large numbers of stars, such as those produced by OGLE, MACHO, TrES, HAT, and many other surveys.
    Comment: 25 pages, 7 figures, accepted for publication in Ap
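    The search the abstract describes can be expressed very compactly: enumerate pairs of coeval model stars from one isochrone and keep the pair whose simulated observables best match the data. In the sketch below, `simulate_lc` and `predict_colors` stand in for a light-curve synthesis routine and isochrone photometry; they, and all parameter names, are hypothetical placeholders rather than MECI's actual interfaces.

```python
import itertools
import numpy as np

def best_coeval_pair(obs_lc, obs_colors, isochrone,
                     simulate_lc, predict_colors,
                     sigma_lc=1.0, sigma_color=1.0):
    """Grid search in the spirit of MECI: score every pair of coeval
    stars from a theoretical isochrone by the combined chi-squared of
    its simulated light curve and colors against the observations."""
    best_pair, best_chi2 = None, np.inf
    for s1, s2 in itertools.combinations_with_replacement(isochrone, 2):
        chi2 = (np.sum(((obs_lc - simulate_lc(s1, s2)) / sigma_lc)**2) +
                np.sum(((obs_colors - predict_colors(s1, s2)) / sigma_color)**2))
        if chi2 < best_chi2:
            best_pair, best_chi2 = (s1, s2), chi2
    return best_pair, best_chi2
```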

    The Frontier Fields Lens Modeling Comparison Project

    Gravitational lensing by clusters of galaxies offers a powerful probe of their structure and mass distribution. Deriving a lens magnification map for a galaxy cluster is a classic inversion problem, and many methods have been developed over the past two decades to solve it. Several research groups have independently developed techniques to map the predominantly dark matter distribution in cluster lenses. While these methods have all provided remarkably high-precision mass maps, particularly with exquisite imaging data from the Hubble Space Telescope (HST), the reconstructions themselves have never been directly compared. In this paper, we report the results of comparing the various independent lens modeling techniques employed by individual research groups in the community, presenting for the first time a detailed and robust comparison of methodologies in terms of fidelity, accuracy and precision. For this collaborative exercise, the lens modeling community was provided simulated images of two clusters, Ares and Hera, that mimic the depth and resolution of the ongoing HST Frontier Fields. Here we compare the submitted reconstructions against the unblinded true mass profiles of these two clusters. Parametric, free-form and hybrid techniques have been deployed by the participating groups, and we detail the strengths and trade-offs in accuracy and systematics that arise for each methodology. We conclude that lensing reconstruction methods produce reliable mass distributions that enable the use of clusters as extremely valuable astrophysical laboratories and cosmological probes.
    Comment: 38 pages, 25 figures, submitted to MNRAS; a version with full-resolution images can be found at http://pico.bo.astro.it/~massimo/papers/FFsims.pd
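    As one concrete way to quantify fidelity when the truth is known, a reconstructed convergence map can be compared cell by cell with the simulated one. The metric below is our illustrative choice, not necessarily the statistic used in the paper.

```python
import numpy as np

def median_fractional_error(kappa_model, kappa_true, floor=1e-3):
    """Median fractional deviation of a reconstructed convergence map
    from the true (simulated) one, evaluated where the true map is
    above a small floor to avoid division by ~zero."""
    mask = kappa_true > floor
    frac = np.abs(kappa_model[mask] - kappa_true[mask]) / kappa_true[mask]
    return np.median(frac)
```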

    Measuring the galaxy power spectrum with multiresolution decomposition -- II. diagonal and off-diagonal power spectra of the LCRS galaxies

    The power spectrum estimator based on the discrete wavelet transform (DWT) for 3-dimensional samples has been studied. The DWT estimator for multi-dimensional samples provides two types of spectra, with respect to diagonal and off-diagonal modes, which are very flexible for dealing with configuration-related problems in power spectrum detection. With simulation samples and mock catalogues of the Las Campanas Redshift Survey (LCRS), we show that (1) the slice-like geometry of the LCRS does not affect the off-diagonal power spectrum with "slice-like" mode; (2) Poisson sampling with the LCRS selection function does not cause more than a 1-σ error in the DWT power spectrum; and (3) the powers of peculiar velocity fluctuations, which cause the redshift distortion, are approximately scale-independent. These results ensure that the uncertainties of the power spectrum measurement are under control. The scatter of the DWT power spectra of the six strips of the LCRS survey is found to be rather small: it is less than 1 σ of the cosmic variance of mock samples in the wavenumber range 0.1 < k < 2 h Mpc^{-1}. Fitting the detected LCRS diagonal DWT power spectrum with CDM models, we find that the best-fitting redshift-distortion parameter β is about the same as that obtained from the Fourier power spectrum. The velocity dispersions σ_v for the SCDM and ΛCDM models are also consistent with other σ_v detections with the LCRS. A systematic difference between the best-fitting parameters of the diagonal and off-diagonal power spectra has been significantly measured. This indicates that the off-diagonal power spectra are capable of providing information about the power spectrum of the galaxy velocity field.
    Comment: AAS LaTeX file, 41 pages, 10 figures included, accepted for publication in Ap
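    The diagonal DWT power spectrum is, in one dimension, just the mean squared wavelet coefficient per scale. The sketch below illustrates that with PyWavelets on a 1-D field; the paper works in three dimensions, where mixing different scales along different axes yields the off-diagonal spectra. Function and variable names are ours.

```python
import numpy as np
import pywt

def dwt_power_1d(delta, wavelet='db4', level=5):
    """Diagonal DWT power spectrum of a 1-D density field: the mean
    squared detail coefficient at each dyadic scale, ordered from
    coarse to fine."""
    coeffs = pywt.wavedec(delta, wavelet, level=level)
    # coeffs[0] is the coarsest approximation; coeffs[1:] are detail
    # coefficients from coarse to fine scales
    return [float(np.mean(c**2)) for c in coeffs[1:]]
```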

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes, and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
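    For readers who want a non-neural point of reference: under pure observer translation, every optic-flow vector points radially away from the focus of expansion, whose image position is the heading. The least-squares estimate below is that classical geometric baseline, not the paper's neural circuit.

```python
import numpy as np

def focus_of_expansion(x, y, u, v):
    """Least-squares focus of expansion from a purely translational
    flow field: each flow vector (u, v) at image point (x, y) satisfies
    (x - x0) * v - (y - y0) * u = 0, a linear system in (x0, y0).
    Inputs are 1-D arrays over sampled image points."""
    A = np.column_stack([v, -u])
    b = v * x - u * y
    (x0, y0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0
```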