22 research outputs found

    Controlled self-organisation using learning classifier systems

    As the complexity of technical systems increases, breakdowns occur more often. The mission of organic computing is to tame these challenges by providing degrees of freedom for self-organised behaviour. Achieving this goal requires new methods. The proposed observer/controller architecture is one way to realise controlled self-organisation. To improve its design, multi-agent scenarios are investigated, with a particular focus on learning with learning classifier systems.
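
    As a rough illustration of the observer/controller idea, a minimal Python sketch is given below; the SimpleSystem class, its methods and the intervention threshold are illustrative assumptions, not the architecture developed in this work.

        # Minimal observer/controller loop over a toy self-organising system.
        # Everything here is a stand-in: the real architecture observes and controls
        # a productive technical system, not a single drifting variable.
        import random

        class SimpleSystem:
            """Toy system under observation with a fluctuating load imbalance."""
            def __init__(self):
                self.load_variance = 0.0
            def step(self):
                self.load_variance = max(0.0, self.load_variance + random.uniform(-0.1, 0.2))
            def intervene(self, action):
                if action == "rebalance":
                    self.load_variance *= 0.5   # controller nudges the system back towards balance

        class Observer:
            def observe(self, system):
                """Aggregate raw system state into situation parameters."""
                return {"load_variance": system.load_variance}

        class Controller:
            def __init__(self, threshold=0.8):
                self.threshold = threshold      # illustrative trigger for intervention
            def decide(self, situation):
                return "rebalance" if situation["load_variance"] > self.threshold else None

        system, observer, controller = SimpleSystem(), Observer(), Controller()
        for _ in range(100):
            system.step()                       # self-organised behaviour runs freely ...
            action = controller.decide(observer.observe(system))
            if action:
                system.intervene(action)        # ... and is only corrected when necessary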

    Parallel evaluation of Pittsburgh rule-based classifiers on GPUs

    Individuals from Pittsburgh rule-based classifiers represent a complete solution to the classification problem, and each individual is a variable-length set of rules. These systems therefore usually demand a high level of computational resources and run-time, which increases with the complexity and size of the data sets. This computational cost is known to be mainly due to the recurring evaluation of the rules and of the individuals as rule sets. In this paper we propose a parallel evaluation model for rules and rule sets on GPUs, based on the NVIDIA CUDA programming model, which significantly reduces the run-time and speeds up the algorithm. The results of the experimental study support the great efficiency and high performance of the GPU model, which is scalable to multiple GPU devices. The GPU model achieves a rule-interpreter performance of up to 64 billion operations per second, and the evaluation of the individuals is sped up by up to 3.461× compared to the CPU model. This gives the GPU model a significant advantage, especially when addressing large and complex problems within a reasonable time, where the CPU run-time is not acceptable.
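
    The core observation behind such a model is that every (rule, instance) match test is independent and can be computed in parallel. The sketch below reproduces that data-parallel structure with NumPy broadcasting on the CPU as a stand-in for the CUDA kernels; the interval-based rule encoding and all array shapes are assumptions for illustration.

        # Data-parallel evaluation of interval rules against a whole data set at once,
        # mirroring the one-thread-per-(rule, instance) layout of a GPU kernel.
        import numpy as np

        def evaluate_rules(instances, lower, upper):
            """
            instances    : (n_instances, n_attrs) numeric data
            lower, upper : (n_rules, n_attrs) interval bounds of each rule's condition
            Returns a boolean (n_rules, n_instances) match matrix.
            """
            inside = (instances[None, :, :] >= lower[:, None, :]) & \
                     (instances[None, :, :] <= upper[:, None, :])
            return inside.all(axis=2)   # a rule matches when every attribute is inside its interval

        rng = np.random.default_rng(0)
        X = rng.random((100, 4))                 # 100 instances, 4 attributes
        lo = rng.random((3, 4)) * 0.5            # 3 rules
        hi = lo + 0.5
        matches = evaluate_rules(X, lo, hi)
        print(matches.sum(axis=1))               # how many instances each rule covers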

    Learning classifier systems from first principles: A probabilistic reformulation of learning classifier systems from the perspective of machine learning

    Learning Classifier Systems (LCS) are a family of rule-based machine learning methods. They aim at the autonomous production of potentially human-readable results in the most compact generalised representation whilst also maintaining high predictive accuracy, and they have a wide range of application areas, such as autonomous robotics, economics, and multi-agent systems. Their design is mainly approached heuristically and, even though their performance is competitive in regression and classification tasks, they do not meet their expected performance in sequential decision tasks despite having been initially designed for such tasks. It is our contention that improvement is hindered by a lack of theoretical understanding of their underlying mechanisms and dynamics.
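
    For readers unfamiliar with LCS, the sketch below shows the textbook ternary rule representation and matching step that gives these systems their human-readable, generalised rules; it is generic XCS-style matching, not the probabilistic reformulation developed in the thesis.

        # A classifier rule pairs a ternary condition ('0', '1', '#' = don't care) with
        # an action; '#' symbols are what let one compact rule generalise over many inputs.
        def matches(condition: str, state: str) -> bool:
            """A rule matches when every non-'#' symbol agrees with the input bit."""
            return all(c == '#' or c == s for c, s in zip(condition, state))

        rules = [
            ("1#0#", "left"),    # covers 1000, 1001, 1100 and 1101
            ("1##1", "right"),
        ]
        state = "1101"
        match_set = [(cond, act) for cond, act in rules if matches(cond, state)]
        print(match_set)         # both rules match this state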

    Three-cornered coevolution learning classifier systems for classification

    This thesis introduces a Three-Cornered Coevolution System that is capable of addressing classification tasks through coevolution (coadaptive evolution), where three different agents (a generation agent and two classification agents) learn and adapt to changes in the problems without human involvement. In existing pattern classification systems, humans usually play a major role in creating and controlling the problem domain; in particular, humans set up and tune the problem's difficulty. A motivation for this thesis is to design and develop an automatic pattern generation and classification system that can generate various sets of exemplars to be learned from and perform the classification tasks autonomously. The system should be able to adjust the problem's difficulty automatically based on the learners' ability to learn (e.g. determining features of the problem that affect the learners' performance in order to generate classification problems at different levels of difficulty), and it should address the classification tasks through coevolution, where the participating agents learn and adapt to changes in the problems without human participation. Ultimately, the Learning Classifier System (LCS) is chosen for the participating agents, since it has several characteristics, such as interpretability, generalisation capability and variation in representation, that suit the system.

    The work can be broken down into three main phases. Phase 1 develops an automated, evolvable problem generator that autonomously produces various classification problems (image-based or artificial data) whose difficulty levels can be adjusted and tuned. Phase 2 develops the Two-Cornered Coevolution System for classification, a standard coevolution system in which two different agents evolve to adapt to changes in the problem: the classification agent evolves to learn various classification problems, while the generation agent evolves to tune and adjust the problem's difficulty based on the learner's ability to learn. Phase 3, the final research goal, develops a new coevolution system in which three different agents evolve to adapt to changes in the problem. Both classification agents evolve to learn various classification problems, while the generation agent evolves to tune and adjust the problem's difficulty based on the classification agents' ability to learn. The classification agents use different styles of learning (supervised or reinforcement learning) to learn the problems, and based on the difference in performance between them, the generation agent creates classification problems at different levels of difficulty (various 'hard' problems).

    The Three-Cornered Coevolution System offers great potential for autonomous learning and provides useful insight into coevolutionary learning beyond the standard studies of pattern recognition. The system is capable of autonomously generating various problems, learning, and providing insight into each learning system's ability by determining the problem domains in which it performs relatively well, in contrast to humans having to determine those problem domains.
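
    The control loop at the heart of the three-cornered setup can be sketched as follows; the stub learners and the accuracy threshold are illustrative assumptions standing in for the LCS-based agents and the feature-level difficulty tuning of the thesis.

        # Toy three-cornered loop: a generation agent adjusts problem difficulty from the
        # performance of two classification agents with different abilities.
        import random

        def classify(skill, difficulty, trials=100):
            """Stub classification agent: accuracy falls as the problems get harder."""
            return sum(random.random() < skill * (1.0 - difficulty) for _ in range(trials)) / trials

        difficulty = 0.1
        skills = (0.9, 0.6)                  # e.g. a supervised and a reinforcement learner
        for generation in range(50):
            acc_a = classify(skills[0], difficulty)
            acc_b = classify(skills[1], difficulty)
            # Generation agent: raise difficulty while the learners cope, lower it when they
            # struggle, so the generated problems stay informative for comparing the two.
            mean_acc = (acc_a + acc_b) / 2
            step = 0.05 if mean_acc > 0.6 else -0.05
            difficulty = min(0.9, max(0.05, difficulty + step))

        print(round(difficulty, 2))          # settles where the learners are genuinely challenged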

    Learning classifier systems from first principles

    Reinforcement Learning

    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data or directly proposing and performing actions, and learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already widely used in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers take over such technical complexity, the task of human operators becomes specifying goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of both theory and applications, and it should stimulate and encourage new research in the field.
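
    To make the "perform actions to achieve a goal" idea concrete, here is a minimal tabular Q-learning sketch on a toy five-state chain; the environment and all parameters are illustrative and not taken from the book.

        # Tabular Q-learning on a 5-state chain: the agent starts at state 0 and is
        # rewarded for reaching state 4, learning action values from experience alone.
        import random

        n_states = 5
        actions = [1, -1]                     # move right or left along the chain
        Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
        alpha, gamma, epsilon = 0.1, 0.95, 0.1

        for episode in range(200):
            s = 0
            while s != n_states - 1:          # an episode ends at the goal state
                if random.random() < epsilon:
                    a = random.choice(actions)                       # explore
                else:
                    a = max(actions, key=lambda act: Q[(s, act)])    # exploit
                s_next = min(n_states - 1, max(0, s + a))
                r = 1.0 if s_next == n_states - 1 else 0.0
                # Standard Q-learning update towards reward plus discounted future value.
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
                s = s_next

        print(max(actions, key=lambda act: Q[(0, act)]))   # greedy action at the start: 1 (right)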

    Modeling and Generating Strategy Games Mechanics

    Policy Search Based Relational Reinforcement Learning using the Cross-Entropy Method

    Relational Reinforcement Learning (RRL) is a subfield of machine learning in which a learning agent seeks to maximise a numerical reward within an environment, represented as collections of objects and relations, by performing actions that interact with the environment. The relational representation allows more dynamic environment states than an attribute-based representation of reinforcement learning, but this flexibility also creates new problems, such as a potentially infinite number of states. This thesis describes an RRL algorithm named Cerrla that creates policies directly from a set of learned relational “condition-action” rules, using the Cross-Entropy Method (CEM) to control policy creation. The CEM assigns each rule a sampling probability and gradually modifies these probabilities so that the randomly sampled policies consist of ‘better’ rules and receive larger rewards. Rule creation is guided by an inferred partial model of the environment that defines the minimal conditions needed to take an action, the possible specialisation conditions per rule, and a set of simplification rules for removing redundant and illegal rule conditions, resulting in compact, efficient, and comprehensible policies. Cerrla is evaluated on four separate environments, each with several different goals. Results show that, compared to existing RRL algorithms, Cerrla learns equal or better behaviour in less time on the standard RRL environment. On other larger, more complex environments, it learns behaviour that is competitive with specialised approaches. The simplified rules and the CEM's bias towards compact policies result in comprehensible and effective relational policies created in a relatively short amount of time.
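
    The CEM loop at the core of such an approach can be sketched as follows; the fixed rule pool, the evaluate() stub and the parameters are illustrative assumptions, not Cerrla's learned relational rules or environments.

        # Cross-Entropy Method over a pool of candidate rules: sample rule sets, keep the
        # elite fraction, and shift each rule's sampling probability towards its frequency
        # among the elites, so that later samples consist of 'better' rules.
        import random

        rules = ["rule_A", "rule_B", "rule_C", "rule_D"]
        probs = {r: 0.5 for r in rules}            # per-rule sampling probability
        elite_frac, lr = 0.2, 0.3

        def sample_policy():
            chosen = [r for r in rules if random.random() < probs[r]]
            return chosen or [random.choice(rules)]   # never return an empty policy

        def evaluate(policy):
            """Stub reward: rule_A and rule_C are useful, extra rules cost a little."""
            return sum(r in ("rule_A", "rule_C") for r in policy) - 0.2 * len(policy) + random.gauss(0, 0.1)

        for iteration in range(30):
            policies = [sample_policy() for _ in range(50)]
            elites = sorted(policies, key=evaluate, reverse=True)[: int(elite_frac * 50)]
            for r in rules:
                freq = sum(r in p for p in elites) / len(elites)
                probs[r] = (1 - lr) * probs[r] + lr * freq   # move towards elite usage

        print(sorted(probs.items(), key=lambda kv: -kv[1]))  # rule_A and rule_C should dominate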

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its broad public popularity establish a need for a comprehensive text on the subject. The book series entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. Beyond explaining each topic in depth, the two books offer useful hints and strategies for solving the problems discussed in the following chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaboration and hence lead to significant developments in the field of data mining.