
    The Cosmic Mach Number: Comparison from Observations, Numerical Simulations and Nonlinear Predictions

    We calculate the cosmic Mach number M - the ratio of the bulk flow of the velocity field on scale R to the velocity dispersion within regions of scale R. M is effectively a measure of the ratio of large-scale to small-scale power and can be a useful tool to constrain the cosmological parameter space. Using a compilation of existing peculiar velocity surveys, we calculate M and compare it to that estimated from mock catalogues extracted from the LasDamas (a ΛCDM cosmology) numerical simulations. We find agreement with expectations for the LasDamas cosmology at the ~1.5 sigma confidence level. We also show that our Mach estimates for the mocks are not biased by selection function effects. To achieve this, we extract dense and nearly isotropic distributions using Gaussian selection functions with the same width as the characteristic depth of the real surveys, and show that the Mach numbers estimated from the mocks are very similar to the values based on Gaussian profiles of the corresponding widths. We discuss the importance of the survey window functions in estimating their effective depths. We investigate the nonlinear matter power spectrum interpolator PkANN as an alternative to numerical simulations in the study of the Mach number. Comment: 12 pages, 9 figures, 3 tables
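    As a concrete illustration of the quantity defined above, the sketch below computes M as the ratio of a weighted bulk-flow amplitude to the velocity dispersion about that bulk flow, for a set of 3D peculiar velocities. This is a minimal reading of the definition stated in the abstract, not the paper's estimator: the function name, the uniform default weights, and the toy data are our own assumptions, and the survey window-function treatment discussed in the paper is not reproduced.

```python
import numpy as np

def cosmic_mach_number(velocities, weights=None):
    """Estimate M = |v_bulk| / sigma_v for peculiar velocities drawn
    from a region of scale R (hypothetical helper, not the paper's code).

    velocities : (N, 3) array of peculiar-velocity vectors [km/s]
    weights    : optional (N,) weights (e.g. a survey selection function);
                 uniform if omitted
    """
    v = np.asarray(velocities, dtype=float)
    w = np.ones(len(v)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()

    # Bulk flow: weighted mean velocity of the region.
    v_bulk = (w[:, None] * v).sum(axis=0)

    # Velocity dispersion: rms of the residual velocities about the bulk flow.
    residuals = v - v_bulk
    sigma_v = np.sqrt((w * (residuals ** 2).sum(axis=1)).sum())

    return np.linalg.norm(v_bulk) / sigma_v


# Toy usage: 1000 tracers with a ~300 km/s bulk flow and ~500 km/s dispersion,
# so the returned Mach number should come out near 0.6.
rng = np.random.default_rng(0)
v_toy = rng.normal(0.0, 500.0 / np.sqrt(3.0), size=(1000, 3)) + np.array([300.0, 0.0, 0.0])
print(cosmic_mach_number(v_toy))
```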

    Cosmological Constraints from Measurements of Type Ia Supernovae discovered during the first 1.5 years of the Pan-STARRS1 Survey

    We present griz light curves of 146 spectroscopically confirmed Type Ia Supernovae ($0.03 < z < 0.65$) discovered during the first 1.5 years of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the HST Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light-curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only SNe and assuming a constant dark energy equation of state and flatness, yields $w = -1.120^{+0.360}_{-0.206}\textrm{(Stat)}\,^{+0.269}_{-0.291}\textrm{(Sys)}$. When combined with BAO+CMB(Planck)+$H_0$, the analysis yields $\Omega_{\rm M} = 0.280^{+0.013}_{-0.012}$ and $w = -1.166^{+0.072}_{-0.069}$ including all identified systematics (see also Scolnic et al. 2014). The value of $w$ is inconsistent with the cosmological constant value of $-1$ at the $2.3\sigma$ level. Tension endures after removing either the BAO or the $H_0$ constraint, though it is strongest when including the $H_0$ constraint. If we include WMAP9 CMB constraints instead of those from Planck, we find $w = -1.124^{+0.083}_{-0.065}$, which diminishes the discord to $<2\sigma$. We cannot conclude whether the tension with flat $\Lambda$CDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 supernova sample, with $\sim\!3$ times as many SNe, should provide more conclusive results. Comment: 38 pages, 16 figures, 14 tables, ApJ in press
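    The SN-only fit quoted above assumes a flat universe with a constant dark-energy equation of state $w$. Below is a minimal sketch of the distance-redshift relation that underlies such a Hubble-diagram fit; the Hubble constant value, the function names, and the example parameters are illustrative assumptions, and the paper's actual likelihood (light-curve standardization, systematics covariance) is not reproduced.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def distance_modulus(z, omega_m, w, h0=70.0):
    """Distance modulus mu(z) for a flat universe with constant dark-energy
    equation of state w (the model assumed in the SN-only fit above).
    h0 = 70 km/s/Mpc is an illustrative choice, not the paper's value."""
    def inv_E(zp):
        # E(z) = H(z)/H0 for flat wCDM
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3
                             + (1 - omega_m) * (1 + zp) ** (3 * (1 + w)))
    d_c = (C_KM_S / h0) * quad(inv_E, 0.0, z)[0]   # comoving distance [Mpc]
    d_l = (1 + z) * d_c                            # luminosity distance [Mpc]
    return 5.0 * np.log10(d_l) + 25.0

# Example: compare the quoted combined best fit to a cosmological constant.
print(distance_modulus(0.5, omega_m=0.280, w=-1.166))
print(distance_modulus(0.5, omega_m=0.280, w=-1.0))
```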

    Data Mining and Machine Learning in Astronomy

    We review the current state of data mining and machine learning in astronomy. 'Data Mining' can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data, promising great scientific advance. However, if misused, it can be little more than the black-box application of complex computing algorithms that may give little physical insight and provide questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines; applications from a broad range of astronomy, emphasizing those where data mining techniques directly resulted in improved science; and important current and future directions, including probability density functions, parallel algorithms, petascale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can be a very powerful tool, and not a questionable black box. Comment: Published in IJMPD. 61 pages, uses ws-ijmpd.cls. Several extra figures, some minor additions to the text
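    To make one of the algorithms named above concrete, the sketch below trains a support vector machine on a synthetic two-class "catalog" with scikit-learn. It is purely illustrative and not taken from the review: the feature construction, class labels, and hyperparameters are our own assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic two-class "catalog": each row is a feature vector (e.g. colors),
# each label a class (e.g. star vs. galaxy). Purely illustrative data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),
               rng.normal(1.5, 1.0, size=(500, 4))])
y = np.concatenate([np.zeros(500), np.ones(500)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # a common default configuration
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```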

    Determining rules for closing customer service centers: A public utility company's fuzzy decision

    In the present work, we consider the general problem of knowledge acquisition under uncertainty. A commonly used method is to learn by examples: we observe how the expert solves specific cases and from this infer some rules by which the decision was made. Unique to this work is the fuzzy set representation of the conditions, or attributes, upon which the decision maker may base his fuzzy set decision. From our examples, we infer certain and possible rules containing fuzzy terms. It should be stressed that the procedure determines how closely the expert follows the conditions under consideration in making his decision. We offer two examples pertaining to the possible decision to close a customer service center of a public utility company. In the first example, the decision maker does not follow the conditions very closely. In the second example, the conditions are much more relevant to the decision of the expert.

    Certain and possible rules for decision making using rough set theory extended to fuzzy sets

    Uncertainty may be caused by the ambiguity in the terms used to describe a specific situation. It may also be caused by skepticism of the rules used to describe a course of action, or by missing and/or erroneous data. To deal with uncertainty, techniques other than classical logic need to be developed. Although statistics may be the best tool available for handling likelihood, it is not always adequate for dealing with knowledge acquisition under uncertainty. Inadequacies caused by estimating probabilities in statistical processes can be alleviated through use of the Dempster-Shafer theory of evidence. Fuzzy set theory is another tool used to deal with uncertainty where ambiguous terms are present. Other methods include rough sets, the theory of endorsements, and nonmonotonic logic. J. Grzymala-Busse has defined the concept of lower and upper approximations of a (crisp) set and has used that concept to extract rules from a set of examples. We will define the fuzzy analogs of lower and upper approximations and use these to obtain certain and possible rules from a set of examples where the data are fuzzy. Central to these concepts will be the degree to which a fuzzy set A is contained in another fuzzy set B, and the degree of intersection of a set A with a set B. These concepts will also give meaning to the statement "A implies B" in two senses: (1) if x is certainly in A then it is certainly in B, and (2) if x is possibly in A then it is possibly in B. Next, classification will be looked at, and it will be shown that if a classification is well externally definable then it is well internally definable, and if it is poorly externally definable then it is poorly internally definable, thus generalizing a result of Grzymala-Busse. Finally, some ideas of how to define consensus and group options to form clusters of rules will be given.
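    To make the two central degrees concrete, the sketch below uses one common set of fuzzy operators: the degree of inclusion via the Kleene-Dienes implication aggregated by the infimum, and the degree of intersection via the sup-min composition. These are standard choices from the fuzzy-set literature, assumed here for illustration; the paper's exact operators, and the certain and possible rules built on them, may differ.

```python
import numpy as np

def inclusion_degree(a, b):
    """Degree to which fuzzy set A is contained in fuzzy set B
    (Kleene-Dienes implication aggregated by the infimum) -- a common
    choice assumed for illustration; the paper's operator may differ."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.min(np.maximum(1.0 - a, b)))

def intersection_degree(a, b):
    """Degree to which fuzzy sets A and B intersect (sup-min composition),
    i.e. the possibility that an element lies in both."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.max(np.minimum(a, b)))

# Membership grades of two fuzzy conditions over the same five examples.
A = [0.9, 0.7, 0.2, 0.0, 0.5]
B = [1.0, 0.8, 0.4, 0.1, 0.6]

# "If x is certainly in A then it is certainly in B" -> inclusion degree 0.6
print(inclusion_degree(A, B))
# "If x is possibly in A then it is possibly in B"   -> intersection degree 0.9
print(intersection_degree(A, B))
```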