4,113 research outputs found

    Understanding distributions of chess performances

    This paper presents evidence for several features of the population of chess players, and the distribution of their performances measured in terms of Elo ratings and by computer analysis of moves. Evidence that ratings have remained stable since the inception of the Elo system in the 1970s is given in several forms: by showing that the population of strong players fits a simple logistic-curve model without inflation, by plotting players' average error against the FIDE category of tournaments over time, and by showing that skill parameters from a model based on computer analysis keep a nearly constant relation to Elo rating across that time. The distribution of the model's Intrinsic Performance Ratings can hence be used to compare populations that have limited interaction, such as players in a national chess federation and FIDE, and to ascertain relative drift in their respective rating systems.
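    The logistic-curve population model mentioned in the abstract can be sketched in a few lines. The parameters below (carrying capacity K, growth rate r, midpoint year t0) are purely illustrative and are not fitted to actual FIDE data:

```python
import math

def logistic(t, K, r, t0):
    """Logistic growth curve: approaches carrying capacity K,
    grows at rate r, and crosses K/2 at the midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Hypothetical parameters for the number of players rated above
# some Elo threshold (illustrative only, not fitted to FIDE data).
K, r, t0 = 2000.0, 0.15, 1995.0

counts = {year: logistic(year, K, r, t0) for year in (1975, 1995, 2015)}
```

    Under a no-inflation hypothesis, the observed counts of strong players over the years would track such a curve without a systematic upward residual.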

    Deep learning investigation for chess player attention prediction using eye-tracking and game data

    This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article was created to generate saliency maps that capture hierarchical and spatial features of the chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer architecture of an autoencoder, with a unified decoder, we are able to use multiscale features to predict saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6600 saliency maps associated with the corresponding chess piece configurations. This corpus is completed with synthetically generated data from actual games gathered from an online chess platform. Experiments using both scan paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features, pretrained on natural images, were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
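    The first stage of such a pipeline, turning recorded fixation points into ground-truth saliency maps, is commonly done by placing a Gaussian at each fixation and normalising. A minimal sketch (the grid size, sigma, and fixation coordinates below are assumptions for illustration, not values from the article):

```python
import numpy as np

def fixation_saliency(fixations, shape=(64, 64), sigma=4.0):
    """Build a saliency map by placing an isotropic Gaussian at each
    (row, col) fixation point, then normalising the map so it sums
    to 1, i.e. a probability distribution over pixels."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    smap = np.zeros(shape)
    for fy, fx in fixations:
        smap += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return smap / smap.sum()

# Two hypothetical fixations on a 64x64 grid.
saliency = fixation_saliency([(16, 16), (48, 40)])
```

    Maps like these serve both as training targets for the network and as references for the standard saliency metrics mentioned above.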

    Characterization of light production and transport in tellurium dioxide crystals

    Simultaneous measurement of phonon and light signatures is an effective way to reduce the backgrounds and increase the sensitivity of CUPID, a next-generation bolometric neutrinoless double-beta decay (0νββ) experiment. Light emission in tellurium dioxide (TeO2) crystals, one of the candidate materials for CUPID, is dominated by faint Cherenkov radiation, and the high refractive index of TeO2 complicates light collection. Positive identification of 0νββ events therefore requires high-sensitivity light detectors and careful optimization of light transport. A detailed microphysical understanding of the optical properties of TeO2 crystals is essential for such optimization. We present a set of quantitative measurements of light production and transport in a cubic TeO2 crystal, verified with a complete optical model and calibrated against a UVT acrylic standard. We measure the optical surface properties of the crystal, and set stringent limits on the amount of room-temperature scintillation in TeO2 for β and α particles of 5.3 and 8 photons/MeV, respectively, at 90% confidence.
The techniques described here can be used to optimize and verify the particle identification capabilities of CUPID.
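    The shape of such a limit can be illustrated with a classical Poisson upper limit for a counting measurement. This is only a sketch of the general technique, not the paper's actual analysis, and the efficiency and deposited-energy numbers below are hypothetical:

```python
import math

def poisson_upper_limit(n_obs, cl=0.90):
    """Smallest Poisson mean mu with P(N <= n_obs | mu) <= 1 - cl,
    found by a coarse scan (adequate for a sketch)."""
    mu = 0.0
    while True:
        p = sum(math.exp(-mu) * mu ** k / math.factorial(k)
                for k in range(n_obs + 1))
        if p <= 1.0 - cl:
            return mu
        mu += 1e-3

# Hypothetical numbers: zero excess photons observed, 5% photon
# detection efficiency, 10 MeV of total deposited beta energy.
mu_up = poisson_upper_limit(0)        # ~2.30 photons at 90% CL
yield_limit = mu_up / (0.05 * 10.0)   # photons/MeV at 90% CL
```

    The limit on the mean photon count divides out by efficiency and deposited energy to give a scintillation yield limit in photons/MeV.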

    Automatic Bayesian Density Analysis

    Full text link
    Making sense of a dataset in an automatic and unsupervised fashion is a challenging problem in statistics and AI. Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference. As a result, supervision from statisticians is usually needed to find the right model for the data. However, since domain experts are not necessarily also experts in statistics, we propose Automatic Bayesian Density Analysis (ABDA) to make exploratory data analysis accessible at large. Specifically, ABDA allows for automatic and efficient missing value estimation, statistical data type and likelihood discovery, anomaly detection and dependency structure mining, on top of providing accurate density estimation. Extensive empirical evidence shows that ABDA is a suitable tool for automatic exploratory analysis of mixed continuous and discrete tabular data. Comment: In proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
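    The flavour of likelihood discovery can be conveyed with a crude maximum-likelihood stand-in: fit each candidate likelihood to a column and keep the best-scoring one. This is an illustration of the general idea only; ABDA itself performs Bayesian inference over likelihood models, not this point comparison:

```python
import math
import statistics

def gaussian_loglik(xs):
    """Log-likelihood of the data under a fitted Gaussian."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs) or 1e-9
    return sum(-0.5 * math.log(2 * math.pi * sd ** 2)
               - (x - mu) ** 2 / (2 * sd ** 2) for x in xs)

def poisson_loglik(xs):
    """Log-likelihood under a fitted Poisson (non-negative ints only)."""
    lam = statistics.fmean(xs) or 1e-9
    return sum(-lam + x * math.log(lam) - math.lgamma(x + 1) for x in xs)

def discover_likelihood(xs):
    """Pick the best-fitting candidate likelihood for one column --
    a crude stand-in for ABDA's Bayesian likelihood discovery."""
    cands = {"gaussian": gaussian_loglik(xs)}
    if all(isinstance(x, int) and x >= 0 for x in xs):
        cands["poisson"] = poisson_loglik(xs)
    return max(cands, key=cands.get)
```

    The same per-datum log-likelihoods also support simple anomaly detection: points whose likelihood falls far below the rest of the column are flagged.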

    The (Sport) Performer-Environment System as the Base Unit in Explanations of Expert Performance

    In this article we propose that expertise can best be explained as the interaction of the varying constraints/characteristics of the environment and of the individual, framed by the ecological dynamics approach. This rationale of expert performance is contrasted with the typical way that science has approached the study of expertise: i.e., by looking for constraints located in the individual, either nurture- or nature-based, and related to high performance levels. In ecological dynamics, the base unit of analysis for understanding expertise is the individual-environment system. Illustrating this perspective with Bob Beamon's 8.90 m long jump, a 1968 world record substantially longer than any previous jump, we argue that expert performers should not be seen as an agglomeration of genes, traits, or mental dispositions and capacities. Rather, expert performance can be captured by the dynamically varying, functional relationship between the constraints imposed by the environment and the resources of each individual performer.

    Finding the True Frequent Itemsets

    Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It requires identifying all itemsets appearing in at least a fraction θ of a transactional dataset D. Often though, the ultimate goal of mining D is not an analysis of the dataset per se, but the understanding of the underlying process that generated it. Specifically, in many applications D is a collection of samples obtained from an unknown probability distribution π on transactions, and by extracting the FIs in D one attempts to infer itemsets that are frequently (i.e., with probability at least θ) generated by π, which we call the True Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the generative process, the set of FIs is only a rough approximation of the set of TFIs, as it often contains a huge number of false positives, i.e., spurious itemsets that are not among the TFIs. In this work we design and analyze an algorithm to identify a threshold θ̂ such that the collection of itemsets with frequency at least θ̂ in D contains only TFIs with probability at least 1−δ, for some user-specified δ. Our method uses results from statistical learning theory involving the (empirical) VC-dimension of the problem at hand. This allows us to identify almost all the TFIs without including any false positive. We also experimentally compare our method with the direct mining of D at frequency θ and with techniques based on widely-used standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and show that our algorithm outperforms these methods and achieves even better results than what is guaranteed by the theoretical analysis. Comment: 13 pages, extended version of work appeared in SIAM International Conference on Data Mining, 201
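    The overall scheme, mine at a raised threshold θ + ε so that only likely-true frequent itemsets survive, can be sketched as follows. For ε, this sketch uses a simple Hoeffding-plus-union bound over the observed itemsets as a stand-in for the paper's tighter VC-dimension-based bound, and the toy dataset is invented for illustration:

```python
import math
from itertools import combinations

def itemset_frequencies(dataset):
    """Empirical frequency of every itemset occurring in a list of
    transactions (each transaction is a set of items)."""
    n = len(dataset)
    freq = {}
    for t in dataset:
        items = sorted(t)
        for k in range(1, len(items) + 1):
            for s in combinations(items, k):
                freq[s] = freq.get(s, 0) + 1
    return {s: c / n for s, c in freq.items()}

def true_frequent_candidates(dataset, theta, delta):
    """Keep itemsets with empirical frequency >= theta + eps, where
    eps is a Hoeffding + union bound over the m observed itemsets --
    a crude stand-in for the VC-dimension-based threshold."""
    freqs = itemset_frequencies(dataset)
    n, m = len(dataset), len(freqs)
    eps = math.sqrt(math.log(2 * m / delta) / (2 * n))
    return {s for s, f in freqs.items() if f >= theta + eps}
```

    Raising the threshold trades a few missed borderline TFIs for a guarantee (with probability at least 1−δ) that no false positive is reported; the paper's VC-dimension machinery shrinks ε, and hence the number of misses.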