
    How does the grouping scheme affect the Wiener Filter reconstruction of the local Universe?

    High-quality reconstructions of the three-dimensional velocity and density fields of the local Universe are essential to the study of the local Large Scale Structure. In this paper, the Wiener Filter reconstruction technique is applied to galaxy radial peculiar velocity catalogs to understand how the Hubble constant (H0) value and the grouping scheme affect the reconstructions. While H0 is used to derive radial peculiar velocities from galaxy distance measurements and total velocities, the grouping scheme serves to remove non-linear motions. Two different grouping schemes (one based on the literature, one on a systematic algorithm) and five H0 values ranging from 72 to 76 km/s/Mpc are selected, and the Wiener Filter is applied to the resulting catalogs. Whichever grouping scheme is used, the larger H0 is, the larger the infall onto the Local Volume. This conclusion must be strongly mitigated, however: a bias-minimization scheme applied to the catalogs after grouping suppresses the effect. At fixed H0, reconstructions obtained with catalogs grouped under either scheme exhibit structures at the proper locations, but the structures are more contrasted under the less aggressive scheme: retaining more constraints permits an infall onto the structures from both sides, reinforcing their overdensity. These findings highlight the importance of balancing grouping, which suppresses non-linear motions, against preserving constraints, which produces an infall onto structures expected to be large overdensities. This observation is promising for constrained simulations of the local Universe including its massive clusters.
    Comment: Accepted for publication in MNRAS, 10 pages, 6 figures, 3 tables
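The Wiener Filter estimate described above can be illustrated with a toy sketch (not the paper's pipeline): for data d = s + n with known signal and noise covariances S and N, the minimum-variance linear estimate is s_WF = S (S + N)^{-1} d. All quantities below (field size, variances) are invented for illustration.

```python
import numpy as np

# Toy 1-D Wiener Filter sketch, assuming known diagonal signal and noise
# covariances; the "radial velocities" here are simulated, not catalog data.
rng = np.random.default_rng(0)
n_pts = 100
sig_var, noise_var = 4.0, 1.0
s = rng.normal(0.0, np.sqrt(sig_var), n_pts)           # underlying signal
d = s + rng.normal(0.0, np.sqrt(noise_var), n_pts)     # noisy observations

S = sig_var * np.eye(n_pts)
N = noise_var * np.eye(n_pts)
# Wiener Filter: s_WF = S (S + N)^{-1} d, the minimum-variance linear estimate
s_wf = S @ np.linalg.solve(S + N, d)

# With diagonal covariances the filter simply shrinks the data toward zero
# by the signal-to-(signal+noise) ratio:
print(np.allclose(s_wf, d * sig_var / (sig_var + noise_var)))  # True
```

In the realistic case S and N are dense covariance matrices built from an assumed power spectrum and the catalog's distance errors, but the algebra is the same.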

    Tests of Inference for Dummy Variables in Regressions with Logarithmic Transformed Dependent Variables

    The interpretation of dummy variables in regressions where the dependent variable is subject to a log transformation has been of continuing interest in economics. In the main, however, earlier papers do not deal with the inferential aspects of the estimated parameters. In this paper we compare the inference implied by hypotheses tested on the linear parameter estimated in the model with the tests applied to the proportional change that this parameter implies. An important element in this analysis is the asymmetry introduced by the log transformation. Suggestions are made for the appropriate test procedure in this case. Examples are presented from some common econometric applications of this model: the estimation of hedonic price models and wage equations.
    Keywords: hypothesis tests; lognormal distribution; measures of proportional change; wage equation; hedonic price model
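The asymmetry the abstract mentions is easy to see numerically (a hedged sketch, not the paper's own example; the coefficient and standard error below are hypothetical): in log(y) = a + b*D + e, the percentage effect of the dummy is 100*(exp(b) - 1), which differs in magnitude under a sign flip of b.

```python
import numpy as np

# Hypothetical estimated dummy coefficient and its standard error
b = 0.25
se_b = 0.10

pct_effect = 100 * (np.exp(b) - 1)       # ~28.4%, not the naive 25%
pct_reverse = 100 * (np.exp(-b) - 1)     # ~-22.1%: asymmetric under sign flip

# Kennedy's (1981) approximately unbiased point estimate shrinks the
# coefficient by half its estimated variance before exponentiating
pct_kennedy = 100 * (np.exp(b - 0.5 * se_b**2) - 1)

print(round(pct_effect, 1), round(pct_reverse, 1), round(pct_kennedy, 1))
# 28.4 -22.1 27.8
```

This asymmetry is why tests on the linear parameter b and tests on the implied proportional change exp(b) - 1 need not coincide.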

    Clustering in a Data Envelopment Analysis Using Bootstrapped Efficiency Scores

    This paper explores the insight gained from applying cluster analysis to the results of a Data Envelopment Analysis (DEA) of productive behaviour. Cluster analysis identifies groups among a set of different objects (individuals or characteristics). This is done via the definition of a distance matrix describing the relationships between the objects, from which the most similar objects can be grouped into clusters. In the case of DEA, cluster analysis can be used to determine how sensitive the efficiency score of a particular decision-making unit (DMU) is to the presence of the other DMUs in the sample that make up the reference technology for that DMU. Using the bootstrapped values of the efficiency measures, we construct two types of distance matrices. The first is defined as a function of the variance-covariance matrix of the scores with respect to each other: the covariance of one DMU's score with another's measures the degree to which the efficiency measure of a single DMU is influenced by the efficiency level of the other. The second distance measure is defined as a function of the ranks of the bootstrapped efficiencies. An example is provided using both measures as the clustering distance, for both a one-input one-output case and a two-input two-output case.
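The rank-based distance described above can be sketched as follows (an illustration under invented data, not the paper's DEA output): rank each DMU's bootstrapped scores across replications, turn the rank correlations into distances, and feed them to a hierarchical clustering routine.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Simulated bootstrapped efficiency scores for 6 hypothetical DMUs,
# built so that DMUs 0-2 and 3-5 form two co-moving groups.
rng = np.random.default_rng(1)
n_dmu, n_boot = 6, 200
base = rng.normal(size=(2, n_boot))
scores = np.vstack([base[i // 3] + 0.3 * rng.normal(size=n_boot)
                    for i in range(n_dmu)])

# Rank each DMU's scores across bootstrap replications; correlating the
# rank rows gives a Spearman-style similarity, converted to a distance.
ranks = scores.argsort(axis=1).argsort(axis=1)
corr = np.corrcoef(ranks)
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)

# Average-linkage hierarchical clustering on the condensed distance matrix
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

The covariance-based distance in the abstract would replace the rank correlation with the variance-covariance matrix of the raw scores; the clustering step is unchanged.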

    Inferences for the Extremum of Quadratic Regression Models

    Quadratic functions are often used in regression to infer the existence of an extremum in a relationship, although tests of the location of the extremum are rarely performed. We investigate the construction of confidence intervals by the following methods: Delta, Fieller, estimated first derivative, bootstrapping, Bayesian, and likelihood ratio. We propose interpretations for the unbounded intervals that some of these methods may generate. The coverage of the confidence intervals is assessed by Monte Carlo simulation; the Delta and studentized bootstrap intervals can perform quite poorly. Of all the methods, the first derivative method is the easiest to implement.
    Keywords: inverted U-shape; turning point; Fieller method; Delta method; first derivative function; Bayesian; likelihood ratio; bootstrap
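The Delta-method interval studied above can be sketched for a quadratic fit (hypothetical coefficients and covariance matrix; not the paper's data): for y = b0 + b1*x + b2*x^2 the extremum sits at x* = -b1/(2*b2), and the Delta method propagates the coefficient covariance through the gradient of x*.

```python
import numpy as np

# Hypothetical estimates for an inverted-U relationship
b1, b2 = 4.0, -1.0
V = np.array([[0.04, -0.01],
              [-0.01, 0.01]])           # hypothetical cov of (b1, b2)

x_star = -b1 / (2 * b2)                 # extremum location: 2.0

# Gradient of x* with respect to (b1, b2) for the Delta method
grad = np.array([-1 / (2 * b2),         # d x*/d b1
                 b1 / (2 * b2**2)])     # d x*/d b2
se = np.sqrt(grad @ V @ grad)           # Delta-method standard error
ci = (x_star - 1.96 * se, x_star + 1.96 * se)

print(round(x_star, 2), round(se, 3))
# 2.0 0.173
```

When b2 is imprecisely estimated the ratio -b1/(2*b2) is badly behaved, which is why the abstract reports poor Delta coverage and why the Fieller and first-derivative methods, which avoid the explicit ratio, are attractive alternatives.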
