Development of a Large-Scale Integrated Neurocognitive Architecture - Part 2: Design and Architecture
In Part 1 of this report, we outlined a framework for creating an intelligent agent
based upon modeling the large-scale functionality of the human brain. Building on
those results, we begin Part 2 by specifying the behavioral requirements of a
large-scale neurocognitive architecture. The core of our long-term approach remains
focused on creating a network of neuromorphic regions that provide the mechanisms
needed to meet these requirements. However, for the short term of the next few years,
it is likely that optimal results will be obtained by using a hybrid design that
also includes symbolic methods from AI/cognitive science and control processes from the
field of artificial life. We accordingly propose a three-tiered architecture that
integrates these different methods, and describe an ongoing computational study of a
prototype 'mini-Roboscout' based on this architecture. We also examine the implications
of some non-standard computational methods for developing a neurocognitive agent.
This examination includes computational experiments assessing the effectiveness of
genetic programming as a design tool for recurrent neural networks for sequence
processing, and experiments measuring the speed-up obtained for adaptive neural
networks when they are executed on a graphical processing unit (GPU) rather than a
conventional CPU. We conclude that the implementation of a large-scale neurocognitive
architecture is feasible, and outline a roadmap for achieving this goal.
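The abstract does not detail these experiments. As a rough, invented illustration of the general idea of evolving recurrent networks for sequence processing, the sketch below uses a simple generational GA to evolve the weights of a tiny tanh RNN on a toy bit-parity task. Note that this is weight evolution (neuroevolution), a simplification of full genetic programming, which would also evolve network structure; every name and parameter here is made up.

```python
import math, random

random.seed(0)

H = 4                                    # hidden units
N_W = H + H*H + H + H + 1                # in->hid, hid->hid, hid bias, hid->out, out bias

def rnn_predict(w, seq):
    """Run a tiny tanh RNN over a bit sequence; return 1 if the final output > 0."""
    wi = w[0:H]
    wh = [w[H + i*H : H + (i + 1)*H] for i in range(H)]
    bh = w[H + H*H : H + H*H + H]
    wo = w[H + H*H + H : H + H*H + H + H]
    bo = w[-1]
    h = [0.0] * H
    for x in seq:
        h = [math.tanh(wi[j]*x + sum(wh[j][k]*h[k] for k in range(H)) + bh[j])
             for j in range(H)]
    y = sum(wo[j]*h[j] for j in range(H)) + bo
    return 1 if y > 0 else 0

# toy sequence task: predict the parity of the bits seen so far
seqs = [[random.randint(0, 1) for _ in range(8)] for _ in range(40)]
DATA = [(s, sum(s) % 2) for s in seqs]

def fitness(w):
    return sum(rnn_predict(w, s) == t for s, t in DATA) / len(DATA)

def evolve(pop_size=30, gens=25, sigma=0.4):
    """Elitist generational GA: keep the top third, refill with mutated copies."""
    pop = [[random.gauss(0, 1) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 3]
        pop = elite + [[g + random.gauss(0, sigma) for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
print(f"best accuracy: {fitness(best):.2f}")
```

With elitism the best fitness never decreases, so even this crude setup reliably beats chance on the toy task.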
Opponent modeling and exploitation in poker using evolved recurrent neural networks
Poker, and in particular Heads-Up No-Limit Texas Hold'em (HUNL), is a classic example of an imperfect-information game and has been studied extensively in recent years. A number of computer poker agents of increasingly high quality have been built. While agents based on approximated Nash equilibria have been successful, they lack the ability to exploit their opponents effectively. In addition, the performance of equilibrium strategies cannot be guaranteed in games with more than two players or with multiple Nash equilibria. This dissertation focuses on devising an evolutionary method to discover opponent models based on recurrent neural networks.
A series of computer poker agents called Adaptive System for Hold’Em (ASHE) were evolved for HUNL. ASHE models the opponent explicitly using Pattern Recognition Trees (PRTs) and LSTM estimators. The default and board-texture-based PRTs maintain statistical data on the opponent strategies at different game states. The Opponent Action Rate Estimator predicts the opponent’s moves, and the Hand Range Estimator evaluates the showdown value of ASHE’s hand. Recursive Utility Estimation is used to evaluate the expected utility/reward for each available action.
Experimental results show that (1) ASHE exploits opponents with high to moderate levels of exploitability more effectively than Nash-equilibrium-based agents, and (2) ASHE can defeat top-ranking equilibrium-based poker agents. Thus, the dissertation introduces an effective new method for building high-performance computer agents for poker and other imperfect-information games. It also provides a promising direction for future research in imperfect-information games beyond the equilibrium-based approach.
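ASHE's actual estimators are learned LSTMs and are not reproduced in the abstract. The following is a minimal, invented sketch of what recursive utility estimation over opponent action rates might look like: plain numbers stand in for the Opponent Action Rate Estimator's fold/call/raise probabilities and the Hand Range Estimator's showdown equity, and the betting model is deliberately toy-sized.

```python
def expected_utility(pot, to_call, equity, opp_rates, depth=2):
    """Return a dict of action -> expected utility (in chips) for fold/call/raise.

    equity    - estimated probability of winning at showdown
                (in ASHE this would come from the Hand Range Estimator)
    opp_rates - (p_fold, p_call, p_raise) for the opponent's next move
                (in ASHE, from the LSTM-based Opponent Action Rate Estimator)
    """
    utils = {"fold": 0.0}
    # call: the pot is settled at showdown with our estimated equity
    utils["call"] = equity * (pot + to_call) - (1 - equity) * to_call
    if depth > 0:
        bet = pot                          # pot-sized raise, for simplicity
        p_fold, p_call, p_raise = opp_rates
        u_fold = pot                       # opponent folds: we take the pot
        # opponent calls: showdown on the inflated pot
        u_call = equity * (pot + 2*bet) - (1 - equity) * (to_call + bet)
        # opponent re-raises: recurse one level on the bigger pot,
        # subtracting the chips we have already committed this street
        u_raise = max(expected_utility(pot + 2*bet, bet, equity,
                                       opp_rates, depth - 1).values()) \
                  - (to_call + bet)
        utils["raise"] = p_fold*u_fold + p_call*u_call + p_raise*u_raise
    return utils

# strong hand vs. an opponent who folds often: raising dominates
print(expected_utility(100, 0, 0.8, (0.5, 0.4, 0.1)))
# weak hand facing a bet from a sticky opponent: folding beats calling
print(expected_utility(100, 50, 0.1, (0.05, 0.9, 0.05)))
```

The recursion bottoms out at a fixed depth, mirroring the idea that each node's utility is a rate-weighted average over the opponent's possible responses.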
An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources
Land use changes have become a major contributor to anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their scale, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While some scholars date its onset back to the Neolithic revolution, it is commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources, such as remote sensing imagery and in particular historical geotopographic maps, can give evidence of the dynamic land use change during this crucial period.
In this context, this thesis set out to explore the potential of retrospective geoinformation for monitoring, communicating, modeling and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large amounts of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the use of historical land use and land cover information in research is still often hindered by laborious visual interpretation, limiting the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis. Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent to both the data source itself and its utilization for land change detection.
To address the former challenge, image segmentation is treated as a global non-linear optimization problem: the segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach is developed for modeling the positional, thematic, and temporal uncertainty inherent to both data and processing. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potential applications and directions for further research are given.
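As a toy illustration of treating segmentation as an optimization problem driven by an evolutionary metaheuristic (not the thesis's actual method, which operates on real map imagery and richer parameter sets), the sketch below tunes a single binarization threshold on a synthetic 1-D scanline using a (1+4) evolution strategy, with intersection-over-union against a reference mask as the fitness function.

```python
import random

random.seed(1)

# synthetic 1-D "scanline": dark map symbols (low values) on bright paper
image = [0.9]*10 + [0.2]*5 + [0.85]*8 + [0.25]*4 + [0.95]*13
truth = [0]*10   + [1]*5   + [0]*8    + [1]*4    + [0]*13

def segment(threshold):
    """Binarize: pixels darker than the threshold become foreground."""
    return [1 if v < threshold else 0 for v in image]

def iou(pred, ref):
    """Intersection-over-union between two binary masks."""
    inter = sum(p and t for p, t in zip(pred, ref))
    union = sum(p or t for p, t in zip(pred, ref))
    return inter / union if union else 1.0

def evolve_threshold(gens=30, sigma=0.1):
    """(1+4) evolution strategy: keep the parent unless a mutant beats it."""
    best = random.random()
    for _ in range(gens):
        kids = [min(1.0, max(0.0, best + random.gauss(0, sigma)))
                for _ in range(4)]
        best = max([best] + kids, key=lambda t: iou(segment(t), truth))
    return best

t = evolve_threshold()
print(f"threshold={t:.2f}, IoU={iou(segment(t), truth):.2f}")
```

Because the strategy is elitist, fitness never regresses; any threshold strictly between the dark and bright intensity bands recovers the reference mask exactly.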
Emergent Rhythmic Structures as Cultural Phenomena Driven by Social Pressure in a Society of Artificial Agents
This thesis studies rhythm from an evolutionary computation perspective. Rhythm is the most fundamental dimension of music and can be used as a ground to describe the evolution of music. More specifically, the main goal of the thesis is to investigate how complex rhythmic structures evolve, subject to the cultural transmission between individuals in a society. The study is developed by means of computer modelling and simulations informed by evolutionary computation and artificial life (A-Life). In this process, self-organisation plays a fundamental role. The evolutionary process is steered by the evaluation of rhythmic complexity and by the exposure to rhythmic material.
In this thesis, composers and musicologists will find the description of a system named A-Rhythm, which explores the emergent behaviours of a community of artificial autonomous agents that interact in a virtual environment. The interaction between the agents takes the form of imitation games.
A set of necessary criteria was established for the construction of a compositional system in which cultural transmission is observed. These criteria allowed the comparison with related work in the field of evolutionary computation and music.
In the development of the system, rhythmic representation is discussed. The proposed representation enabled the development of complexity- and similarity-based measures, and the recombination of rhythms in a creative manner. A-Rhythm produced results in the form of simulation data, which were evaluated in terms of the coherence of the agents' repertoires. The data show how rhythmic sequences are changed and sustained in the population, displaying synchronic and diachronic diversity. Finally, this tool was used as a generative mechanism for composition, and several examples are presented.
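The following invented sketch illustrates the basic shape of such an imitation game. It is far simpler than A-Rhythm: rhythms are plain bit strings, similarity is onset agreement, and a failed imitation simply adds the heard rhythm to the listener's repertoire, so shared material spreads through the population over repeated rounds.

```python
import random

random.seed(2)

def similarity(a, b):
    """Fraction of matching onsets between two equal-length binary rhythms."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def imitation_game(speaker, listener, threshold=0.8):
    """One round: the speaker plays a rhythm from its repertoire; the listener
    answers with its closest repertoire item. On failure the listener
    memorises the heard rhythm."""
    rhythm = random.choice(speaker)
    answer = max(listener, key=lambda r: similarity(r, rhythm))
    if similarity(answer, rhythm) >= threshold:
        return True
    listener.append(list(rhythm))      # failed imitation: learn the rhythm
    return False

def random_rhythm(n=8):
    return [random.randint(0, 1) for _ in range(n)]

# five agents, each starting with a single random rhythm
agents = [[random_rhythm()] for _ in range(5)]
successes = []
for _ in range(300):
    s, l = random.sample(range(5), 2)
    successes.append(imitation_game(agents[s], agents[l]))

early, late = sum(successes[:100]), sum(successes[-100:])
print(f"successes: first 100 rounds {early}, last 100 rounds {late}")
```

Since each listener can fail at most once per distinct rhythm, failures concentrate in the early rounds and the population's repertoires converge, which is the crudest form of the cultural transmission the thesis studies.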
Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009
Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence, by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.
Building a poker playing agent based on game logs using supervised learning
Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
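The paper's actual MR streaming algorithms and the strip-partitioning optimization are not given in the abstract. The sketch below shows only the standard map/reduce decomposition of one Game-of-Life generation, simulated in memory: the mapper emits one record per live cell plus one neighbour-count contribution per adjacent cell, and the reducer applies Conway's rules per cell after grouping. In real MR streaming, the mapper and reducer would read and write tab-separated lines on stdin/stdout and the framework would perform the shuffle.

```python
from collections import defaultdict

def mapper(live_cells):
    """Emit (cell, contribution) pairs for a sparse set of live cells."""
    for (x, y) in live_cells:
        yield (x, y), "LIVE"                    # mark the cell itself as alive
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    yield (x + dx, y + dy), 1   # neighbour-count contribution

def reducer(grouped):
    """Apply Conway's rules per key after the shuffle phase."""
    next_gen = set()
    for cell, values in grouped.items():
        n = sum(v for v in values if v != "LIVE")
        alive = "LIVE" in values
        if n == 3 or (alive and n == 2):        # birth, or survival with 2 or 3
            next_gen.add(cell)
    return next_gen

def step(live_cells):
    grouped = defaultdict(list)
    for k, v in mapper(live_cells):
        grouped[k].append(v)                    # in-memory stand-in for shuffle/sort
    return reducer(grouped)

# a blinker oscillates with period 2 between a vertical and a horizontal bar
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)           # True
```

Strip partitioning, per the abstract, would additionally assign contiguous strips of rows to workers so that most neighbour traffic stays local; that optimization is not shown here.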