
    Computer Chess: From Idea to DeepMind

    Computer chess has stimulated the human imagination for some two hundred and fifty years. In 1769 Baron Wolfgang von Kempelen publicly promised Empress Maria Theresia: “I will invent a machine for a more compelling spectacle [than the magnetism tricks by Pelletier] within half a year.” The idea of an intelligent chess machine was born, and in 1770 the first demonstration was given. The real development of artificial intelligence (AI) began in 1950 and involves many well-known names, such as Turing and Shannon. Chess was one of the first AI research areas. In 1997 a high point was reached: world champion Garry Kasparov was defeated by Deep Blue. The techniques used included searching, knowledge representation, parallelism, and distributed systems. Adaptivity, machine learning, and the recently developed deep learning mechanism were added to the computer chess research techniques only later. The major breakthrough for games in general (including chess) took place in 2017, when (1) the AlphaGo Zero program defeated the world championship program AlphaGo by 100-0 and (2) the technique of deep learning also proved applicable to chess. In the autumn of 2017 the Stockfish program was beaten by AlphaZero by 28-0 (with 72 draws, resulting in a 64-36 victory). However, the end of this disruptive advance is not yet in sight; in fact, we have only just started. The next milestone will be to determine the theoretical game value of chess (win, draw, or loss). This achievement will certainly be followed by other surprising developments.

    An Off-center Density Peak in the Milky Way's Dark Matter Halo?

    We show that the position of the central dark matter density peak may be expected to differ from the dynamical center of the Galaxy by several hundred parsec. In Eris, a high-resolution cosmological hydrodynamics simulation of a realistic Milky-Way-analog disk galaxy, this offset is 300-400 pc (~3 gravitational softening lengths) after z=1. In its dissipationless dark-matter-only twin simulation ErisDark, as well as in the Via Lactea II and GHalo simulations, the offset remains below one softening length for most of its evolution. The growth of the DM offset coincides with a flattening of the central DM density profile in Eris inwards of ~1 kpc, and the direction from the dynamical center to the point of maximum DM density is correlated with the orientation of the stellar bar, suggesting a bar-halo interaction as a possible explanation. A dark matter density offset of several hundred parsec greatly affects expectations of the dark matter annihilation signals from the Galactic Center. It may also support a dark matter annihilation interpretation of recent reports by Weniger (2012) and Su & Finkbeiner (2012) of highly significant 130 GeV gamma-ray line emission from a region 1.5 degrees (~200 parsec projected) away from Sgr A* in the Galactic plane. Comment: 12 pages, 11 figures, replaced with version accepted for publication in ApJ.

    Artificial intelligence and its application in architectural design

    No abstract available.

    Temporal Difference Learning in Complex Domains

    Submitted to the University of London for the Degree of Doctor of Philosophy in Computer Science.

    Temporal Difference Learning in Complex Domains

    This thesis adapts and improves on the methods of TD(λ) (Sutton 1988) that were successfully used for backgammon (Tesauro 1994) and applies them to other complex games that are less amenable to simple pattern-matching approaches. The games investigated are chess and shogi, both of which (unlike backgammon) require significant amounts of computational effort to be expended on search in order to achieve expert play. The improved methods are also tested in a non-game domain. In the chess domain, the adapted TD(λ) method is shown to successfully learn the relative values of the pieces, and matches using these learnt piece values indicate that they perform at least as well as piece values widely quoted in elementary chess books. The adapted TD(λ) method is also shown to work well in shogi, considered by many researchers to be the next challenge for computer game-playing, and for which there is no standardised set of piece values. An original method to automatically set and adjust the major control parameters used by TD(λ) is presented. The main performance advantage comes from the learning-rate adjustment, which is based on a new concept called temporal coherence. Experiments in both chess and a random-walk domain show that the temporal coherence algorithm produces both faster learning and more stable values than either human-chosen parameters or an earlier method for learning-rate adjustment. The methods presented in this thesis allow programs to learn with as little input of external knowledge as possible, exploring the domain on their own rather than by being taught. Further experiments show that the method is capable of handling many hundreds of weights, and that it is not necessary to perform deep searches during the learning phase in order to learn effective weights.
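
    The learning mechanism described in this abstract, a TD(λ) update applied to a linear evaluation function whose weights play the role of piece values, can be summarised in a short sketch. The Python below is a minimal illustration in the spirit of Sutton (1988); the class, method, and parameter names are assumptions made for illustration, not code from the thesis, and the per-weight learning-rate adjustment via temporal coherence is only indicated in a comment.

    import numpy as np

    class LinearTDLambda:
        # TD(lambda) with a linear evaluation function (after Sutton 1988).
        def __init__(self, num_features, alpha=0.01, lam=0.7):
            self.w = np.zeros(num_features)  # weights, e.g. relative piece values
            self.alpha = alpha               # learning rate (the thesis adjusts this automatically)
            self.lam = lam                   # eligibility-trace decay parameter lambda

        def value(self, x):
            # Linear evaluation of a position described by feature vector x.
            return float(np.dot(self.w, x))

        def train_game(self, positions, outcome):
            # positions: feature vectors of one game's positions, in order of play.
            # outcome: terminal result from the learner's viewpoint (e.g. +1 win, 0 draw, -1 loss).
            trace = np.zeros_like(self.w)
            for t, x_t in enumerate(positions):
                v_next = outcome if t == len(positions) - 1 else self.value(positions[t + 1])
                delta = v_next - self.value(x_t)      # temporal-difference error
                trace = self.lam * trace + x_t        # accumulate eligibility trace
                self.w += self.alpha * delta * trace  # TD(lambda) weight update

    With chess features such as material counts per piece type, the learned weights correspond to piece values. Per the abstract, the fixed learning rate alpha in this sketch would instead be set and adjusted automatically by the temporal coherence method rather than chosen by hand.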

    Acquiring and using knowledge in computer chess
