
    Cluster versus POTENT Density and Velocity Fields: Cluster Biasing and Omega

    The density and velocity fields as extracted from the Abell/ACO clusters are compared to the corresponding fields recovered by the POTENT method from the Mark III peculiar velocities of galaxies. In order to minimize non-linear effects and to deal with ill-sampled regions we smooth both fields using a Gaussian window with radii ranging between $12$ and $20\,h^{-1}$ Mpc. The density and velocity fields within $70\,h^{-1}$ Mpc exhibit similarities, qualitatively consistent with gravitational instability theory and a linear biasing relation between clusters and mass. The random and systematic errors are evaluated with the help of mock catalogs. Quantitative comparisons within a volume containing $\sim 12$ independent samples yield $\beta_c \equiv \Omega^{0.6}/b_c = 0.22 \pm 0.08$, where $b_c$ is the cluster biasing parameter at $15\,h^{-1}$ Mpc. If $b_c \sim 4.5$, as indicated by the cluster correlation function, our result is consistent with $\Omega \sim 1$.
    Comment: 18 pages, latex, 2 ps figures, 6 gif figures. Accepted for publication in MNRAS
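    For orientation, these are the textbook linear-theory relations behind the quoted $\beta_c$ (standard gravitational-instability results; the notation is assumed, not quoted from the paper):

        \[
          \nabla \cdot \vec{v} \;=\; -\,\Omega^{0.6}\,\delta_m
          \qquad \text{(linear gravitational instability)}
        \]
        \[
          \delta_c \;=\; b_c\,\delta_m
          \qquad \text{(linear cluster biasing)}
        \]
        \[
          \Rightarrow\quad \nabla \cdot \vec{v} \;=\; -\,\beta_c\,\delta_c,
          \qquad \beta_c \;\equiv\; \Omega^{0.6}/b_c .
        \]

    Since the comparison of the cluster density field with the POTENT velocity field constrains only the combination $\beta_c$, an external estimate of $b_c$ (here from the cluster correlation function) is needed to infer $\Omega$.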

    A practical guide and software for analysing pairwise comparison experiments

    Most popular strategies to capture subjective judgments from humans involve the construction of a unidimensional relative measurement scale, representing order preferences or judgments about a set of objects or conditions. This information is generally captured by means of direct scoring, either in the form of a Likert or cardinal scale, or by comparative judgments in pairs or sets. In this sense, the use of pairwise comparisons is becoming increasingly popular because of the simplicity of this experimental procedure. However, this strategy requires non-trivial data analysis to aggregate the comparison ranks into a quality scale and analyse the results, in order to take full advantage of the collected data. This paper explains the process of translating pairwise comparison data into a measurement scale, discusses the benefits and limitations of such scaling methods, and introduces publicly available software written in Matlab. We improve on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing, and introducing a prior which reduces estimation error when the number of observers is low. Most of our examples focus on image quality assessment.
    Comment: Code available at https://github.com/mantiuk/pwcm
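    As a rough illustration of how pairwise-comparison counts are aggregated into a unidimensional quality scale, here is a minimal Bradley-Terry maximum-likelihood sketch in Python. It is a simplification, not the paper's toolbox: the released Matlab code implements a Thurstonian-style model with a prior, outlier analysis and confidence intervals, none of which is replicated here, and all names below are placeholders of mine.

        # Scale pairwise-comparison counts with a Bradley-Terry model:
        # P(i preferred over j) = sigmoid(s_i - s_j), fitted by maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize

        def bradley_terry_scale(wins):
            """wins[i, j] = number of times condition i was preferred over j."""
            n = wins.shape[0]

            def neg_log_lik(s_free):
                s = np.concatenate(([0.0], s_free))     # fix s[0] = 0 for identifiability
                diff = s[:, None] - s[None, :]          # s_i - s_j for every pair
                log_p = diff - np.logaddexp(0.0, diff)  # log sigmoid(s_i - s_j)
                return -np.sum(wins * log_p)

            res = minimize(neg_log_lik, np.zeros(n - 1), method="L-BFGS-B")
            return np.concatenate(([0.0], res.x))       # scale values, anchored at s[0] = 0

        # Toy example: 3 conditions, each pair judged 20 times
        wins = np.array([[0, 15, 18],
                         [5,  0, 12],
                         [2,  8,  0]])
        print(bradley_terry_scale(wins))                # larger score = higher quality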

    New heuristic algorithm to improve the Minimax for Gomoku artificial intelligence

    Back in the 1990s, after IBM developed Deep Blue to defeat human chess players, people tried to solve all kinds of board game problems using computers. Gomoku is a popular board game in Asia and Europe, and it too has been simulated and attacked with computer algorithms. The traditional and effective strategy for Gomoku AI is tree search, and the minimax algorithm is one of the most common game-tree search methods used in AI strategy. Its biggest problem is that as the number of stones on the board and the search depth increase, even computers must spend a great deal of time calculating each move: the exponential growth in the number of nodes makes deep searches impractical. In this paper, we discuss in detail how to improve the basic minimax algorithm, focusing on how to increase its efficiency and maximize the search depth. The most common technique is to prune irrelevant branches of the search tree through alpha-beta pruning. In addition, we offer a new heuristic algorithm that substantially increases the achievable search depth. The core idea is to sort and reduce the candidate moves at each node and depth, and to return the best path recursively. Finally, we compare the traditional minimax algorithm against the new heuristic minimax algorithm in an experimental evaluation. Based on an API developed for this work, the paper explains the back-end algorithms and the program's user interface in detail.
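    The abstract's two main ingredients, alpha-beta pruning and heuristic candidate ordering/reduction, can be sketched generically in Python as follows. The board methods (`is_terminal`, `evaluate`, `candidate_moves`, `move_score`, `play`, `undo`) are placeholder stubs of mine, not the paper's API, and the top-10 candidate cap is an arbitrary illustrative choice.

        # Minimax with alpha-beta pruning and heuristic move ordering.
        # Ordering promising moves first makes cutoffs happen earlier, and
        # keeping only the top-k candidates bounds the branching factor,
        # which is what allows deeper searches.
        def alphabeta(board, depth, alpha, beta, maximizing):
            if depth == 0 or board.is_terminal():
                return board.evaluate(), None

            best_move = None
            moves = sorted(board.candidate_moves(),
                           key=board.move_score, reverse=maximizing)[:10]

            for move in moves:
                board.play(move)
                score, _ = alphabeta(board, depth - 1, alpha, beta, not maximizing)
                board.undo(move)

                if maximizing:
                    if score > alpha:
                        alpha, best_move = score, move
                    if alpha >= beta:    # beta cutoff: opponent will avoid this branch
                        break
                else:
                    if score < beta:
                        beta, best_move = score, move
                    if beta <= alpha:    # alpha cutoff
                        break

            return (alpha if maximizing else beta), best_move

    Returning the chosen move alongside the score at each level is the hook for reconstructing a principal variation recursively, in the spirit of the abstract's "best path".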

    Computational phylogenetics and the classification of South American languages

    In recent years, South Americanist linguists have embraced computational phylogenetic methods to resolve the numerous outstanding questions about the genealogical relationships among the languages of the continent. We provide a critical review of the methods and language classification results that have accumulated thus far, emphasizing the superiority of character-based methods over distance-based ones and the importance of developing adequate comparative datasets for producing well-resolved classifications

    Population toxicokinetics of benzene.

    In assessing the distribution and metabolism of toxic compounds in the body, measurements are not always feasible for ethical or technical reasons. Computer modeling offers a reasonable alternative, but the variability and complexity of biological systems pose unique challenges in model building and adjustment. Recent tools from population pharmacokinetics, Bayesian statistical inference, and physiological modeling can be brought together to solve these problems. As an example, we modeled the distribution and metabolism of benzene in humans. We derive statistical distributions for the parameters of a physiological model of benzene on the basis of existing data. The model adequately fits both prior physiological information and experimental data. An estimate of the relationship between benzene exposure (up to 10 ppm) and fraction metabolized in the bone marrow is obtained and is shown to be linear for the subjects studied. Our median population estimate for the fraction of benzene metabolized, independent of exposure levels, is 52% (90% confidence interval, 47-67%). At levels approaching occupational inhalation exposure (continuous 1 ppm exposure), the estimated quantity metabolized in the bone marrow ranges from 2 to 40 mg/day.
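    To make the modeling idea concrete, here is a deliberately tiny sketch of the kind of physiological model involved: one body compartment with constant uptake and saturable (Michaelis-Menten) metabolism. All parameter values are illustrative placeholders of mine, not the paper's population estimates; the actual model has multiple tissue compartments and Bayesian-fitted parameter distributions.

        # Toy one-compartment toxicokinetic model: constant uptake, saturable metabolism.
        # At low body burdens, metabolism is approximately linear in the amount present,
        # consistent in spirit with the abstract's finding of linearity at low exposures.
        from scipy.integrate import solve_ivp

        def toy_model(t, y, uptake, vmax, km):
            """y[0] = benzene in body (mg); y[1] = cumulative amount metabolized (mg)."""
            amount, _ = y
            metabolism = vmax * amount / (km + amount)   # Michaelis-Menten kinetics
            return [uptake - metabolism, metabolism]

        # 24 h of continuous low-level uptake (all numbers illustrative)
        sol = solve_ivp(toy_model, (0.0, 24.0), [0.0, 0.0], args=(0.5, 2.0, 1.0))
        inhaled = 0.5 * 24.0                             # total uptake in mg
        print(f"fraction metabolized: {sol.y[1, -1] / inhaled:.2f}")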

    When, where and how to perform efficiency estimation

    In this paper we compare two flexible estimators of technical efficiency in a cross-sectional setting: the nonparametric kernel SFA estimator of Fan, Li and Weersink (1996) and the nonparametric bias-corrected DEA estimator of Kneip, Simar and Wilson (2008). We assess the finite sample performance of each estimator via Monte Carlo simulations and empirical examples. We find that the reliability of efficiency scores critically hinges upon the ratio of the variation in efficiency to the variation in noise. These results should be a valuable resource to both academic researchers and practitioners.
    Keywords: bootstrap, nonparametric kernel, technical efficiency
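    For readers unfamiliar with the DEA side of the comparison, here is a minimal input-oriented, constant-returns-to-scale DEA score computed by linear programming in Python. This sketch of mine shows only the plain DEA construction; it does not implement the bias correction of Kneip, Simar and Wilson (2008) or the kernel SFA estimator that the paper actually evaluates.

        # Input-oriented CRS DEA: for unit o, find the largest proportional input
        # contraction theta such that a nonnegative combination of observed units
        # still dominates it. theta = 1 means the unit lies on the frontier.
        import numpy as np
        from scipy.optimize import linprog

        def dea_input_efficiency(X, Y, o):
            """X: (n, p) inputs, Y: (n, q) outputs, o: index of the evaluated unit."""
            n, p = X.shape
            q = Y.shape[1]
            c = np.zeros(n + 1)
            c[0] = 1.0                                   # minimize theta
            A_in = np.hstack([-X[[o]].T, X.T])           # sum_j lam_j x_ij <= theta x_io
            A_out = np.hstack([np.zeros((q, 1)), -Y.T])  # sum_j lam_j y_rj >= y_ro
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.concatenate([np.zeros(p), -Y[o]]),
                          bounds=[(0, None)] * (n + 1))
            return res.x[0]

        rng = np.random.default_rng(0)
        X = rng.uniform(1, 2, (20, 2))
        Y = rng.uniform(1, 2, (20, 1))
        print([round(dea_input_efficiency(X, Y, o), 3) for o in range(20)])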
