21,808 research outputs found

    “A creative exchange” for enterprise and employability

    In the Artificial Bee Colony (ABC) algorithm, the employed-bee and onlooker-bee phases update candidate solutions by changing a value in a single dimension, dubbed the one-dimension update process. For problems in which the number of dimensions is very high, the one-dimension update process can cause solution quality and convergence speed to drop. This paper proposes a new algorithm, called R-ABC, which uses reinforcement learning for solution updating in the ABC algorithm. After an employed bee updates a solution, the new solution results in positive or negative reinforcement being applied to the solution's dimensions in the onlooker-bee phase. Positive reinforcement is given when the candidate solution from the employed-bee phase yields a better fitness value. The more often a dimension yields a better fitness value when changed, the higher its update value becomes in the onlooker-bee phase. Conversely, negative reinforcement is given when the candidate solution does not yield a better fitness value. The performance of the proposed algorithm is assessed on eight basic numerical benchmark functions in four categories with 100, 500, 700, and 900 dimensions, seven CEC2005 shifted functions with 100, 500, 700, and 900 dimensions, and six CEC2014 hybrid functions with 100 dimensions. The results show that the proposed algorithm provides solutions that are significantly better than those of all other algorithms for all tested dimensions on the basic benchmark functions. On the CEC2005 shifted functions, the number of solutions provided by R-ABC that are significantly better than those of the other algorithms increases as the number of dimensions increases. On the CEC2014 hybrid functions, R-ABC is at least comparable to the state-of-the-art ABC variants.
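
    As a concrete illustration of the reinforcement-weighted dimension selection described in the abstract, the sketch below layers a per-dimension credit value on top of a basic ABC loop. The objective function, the sizes of the positive and negative reinforcement, and all parameter values are assumptions chosen for illustration, not the exact update scheme of the R-ABC paper.

```python
# Minimal sketch: reinforcement-weighted dimension selection in an ABC loop.
# The objective (sphere), reward sizes, and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))        # example objective (minimisation)

D, n_sources = 30, 20
pop = rng.uniform(-5.0, 5.0, (n_sources, D))
fitness = np.array([sphere(x) for x in pop])
credit = np.ones((n_sources, D))        # per-dimension reinforcement value

def one_dimension_update(i, d):
    """Change a single dimension d of solution i (the one-dimension update)."""
    k = rng.choice([j for j in range(n_sources) if j != i])
    phi = rng.uniform(-1.0, 1.0)
    trial = pop[i].copy()
    trial[d] = pop[i, d] + phi * (pop[i, d] - pop[k, d])
    return trial

for cycle in range(100):
    # Employed-bee phase: the changed dimension receives positive or
    # negative reinforcement depending on whether fitness improved.
    for i in range(n_sources):
        d = int(rng.integers(D))
        trial = one_dimension_update(i, d)
        f = sphere(trial)
        if f < fitness[i]:
            pop[i], fitness[i] = trial, f
            credit[i, d] += 1.0                          # positive reinforcement
        else:
            credit[i, d] = max(credit[i, d] - 0.5, 0.1)  # negative reinforcement

    # Onlooker-bee phase: dimensions with more accumulated credit are
    # more likely to be chosen for the next one-dimension update.
    for i in range(n_sources):
        probs = credit[i] / credit[i].sum()
        d = int(rng.choice(D, p=probs))
        trial = one_dimension_update(i, d)
        f = sphere(trial)
        if f < fitness[i]:
            pop[i], fitness[i] = trial, f

print("best fitness:", fitness.min())
```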

    Smart Inertial Particles

    We performed a numerical study to train smart inertial particles to target specific flow regions with high vorticity through the use of reinforcement learning algorithms. The particles are able to actively change their size to modify their inertia and density. In short, using local measurements of the flow vorticity, the smart particle explores the interplay between its choices of size and its dynamical behaviour in the flow environment. This allows it to accumulate experience and learn approximately optimal strategies for how to modulate its size in order to reach the target high-vorticity regions. We consider flows of different complexities: a two-dimensional stationary Taylor-Green-like configuration, a two-dimensional time-dependent flow, and finally a three-dimensional flow given by the stationary Arnold-Beltrami-Childress helical flow. We show that smart particles are able to learn how to reach extremely intense vortical structures in all the tackled cases. (Comment: Published in Phys. Rev. Fluids, August 6, 2018.)
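
    The abstract describes learning a size-control policy from local vorticity measurements. The sketch below shows what such a setup can look like with plain tabular Q-learning; the discretisation, the toy dynamics in step(), and the reward are invented placeholders rather than the flow model and training configuration used in the study.

```python
# Minimal tabular Q-learning sketch for size control from local vorticity.
# The toy dynamics in step(), the discretisation, and the reward are
# invented placeholders, not the flow model used in the study.
import numpy as np

rng = np.random.default_rng(1)

n_bins = 10                              # discretised local |vorticity|
sizes = np.linspace(0.5, 2.0, 5)         # admissible particle radii
actions = (-1, 0, 1)                     # shrink, keep, grow (index shift)

Q = np.zeros((n_bins, len(sizes), len(actions)))
alpha, gamma, eps = 0.1, 0.99, 0.1

def step(vort_bin, size_idx, action):
    """Hypothetical environment step; a real study would integrate the
    particle's equations of motion in the chosen flow field."""
    size_idx = int(np.clip(size_idx + action, 0, len(sizes) - 1))
    drift = 1 if sizes[size_idx] < 1.0 else -1        # toy size/vorticity coupling
    vort_bin = int(np.clip(vort_bin + drift + rng.integers(-1, 2), 0, n_bins - 1))
    reward = vort_bin / (n_bins - 1)                  # reward high local vorticity
    return vort_bin, size_idx, reward

for episode in range(500):
    s, k = int(rng.integers(n_bins)), int(rng.integers(len(sizes)))
    for t in range(200):
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[s, k]))
        s2, k2, r = step(s, k, actions[a])
        Q[s, k, a] += alpha * (r + gamma * Q[s2, k2].max() - Q[s, k, a])
        s, k = s2, k2
```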

    Simple trees in complex forests: Growing Take The Best by Approximate Bayesian Computation

    How can heuristic strategies emerge from smaller building blocks? We propose Approximate Bayesian Computation as a computational solution to this problem. As a first proof of concept, we demonstrate how a heuristic decision strategy such as Take The Best (TTB) can be learned from smaller, probabilistically updated building blocks. Based on a self-reinforcing sampling scheme, different building blocks are combined and, over time, tree-like non-compensatory heuristics emerge. This new algorithm, coined Approximately Bayesian Computed Take The Best (ABC-TTB), is able to recover a data set that was generated by TTB, leads to sensible inferences about cue importance and cue directions, can outperform traditional TTB, and allows performance and computational effort to be traded off explicitly.
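
    To illustrate the Approximate Bayesian Computation idea in its simplest form (plain rejection sampling over whole cue orders, not the paper's self-reinforcing building-block scheme), the sketch below recovers a TTB-style cue order from synthetic choice data; all data, names, and parameter values are illustrative assumptions.

```python
# Minimal ABC rejection-sampling sketch for recovering a TTB-style cue order
# from choice data. Data, tolerance, and prior are illustrative assumptions;
# the paper's self-reinforcing sampling scheme is more elaborate.
import numpy as np

rng = np.random.default_rng(2)
n_cues, n_pairs = 4, 200

# Synthetic paired comparisons generated by a "true" TTB cue order.
true_order = np.array([2, 0, 3, 1])
cues_a = rng.integers(0, 2, (n_pairs, n_cues))
cues_b = rng.integers(0, 2, (n_pairs, n_cues))

def ttb_choice(order, a, b):
    """Take The Best: decide on the first cue (in the given order) that discriminates."""
    for c in order:
        if a[c] != b[c]:
            return 0 if a[c] > b[c] else 1
    return int(rng.integers(2))          # guess when no cue discriminates

observed = np.array([ttb_choice(true_order, a, b) for a, b in zip(cues_a, cues_b)])

# ABC rejection step: sample cue orders from a uniform prior, simulate choices,
# and accept orders whose simulated choices are close enough to the data.
tolerance, accepted = 0.05, []
for _ in range(5000):
    order = rng.permutation(n_cues)
    simulated = np.array([ttb_choice(order, a, b) for a, b in zip(cues_a, cues_b)])
    if np.mean(simulated != observed) <= tolerance:
        accepted.append(order)

print(len(accepted), "accepted cue orders, e.g.", accepted[0] if accepted else None)
```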

    The motivating operation and negatively reinforced problem behavior: A systematic review.

    The concept of motivational operations exerts an increasing influence on the understanding and assessment of problem behavior in people with intellectual and developmental disability. In this systematic review of 59 methodologically robust studies of the influence of motivational operations in negative reinforcement paradigms in this population, we identify themes related to situational and biological variables that have implications for assessment, intervention, and further research. There is now good evidence that motivational operations of differing origins influence negatively reinforced problem behavior, and that these might be subject to manipulation to facilitate favorable outcomes. There is also good evidence that some biological variables warrant consideration in assessment procedures, as they predispose the person's behavior to be influenced by specific motivational operations. The implications for assessment and intervention are made explicit with reference to variables that are open to manipulation or that require further research and conceptualization within causal models.

    Simulation of associative learning with the replaced elements model

    Associative learning theories can be categorised according to whether they treat the representation of stimulus compounds in an elemental or configural manner. Since it is clear that a simple elemental approach to stimulus representation is inadequate, there have been several attempts to produce more elaborate elemental models. One recent approach, the Replaced Elements Model (Wagner, 2003), reproduces many results that have until recently been uniquely predicted by Pearce’s Configural Theory (Pearce, 1994). Although it is possible to simulate the Replaced Elements Model using “standard” simulation programs, the generation of the correct stimulus representation is complex. The current paper describes a method for simulating the Replaced Elements Model and presents the results of two example simulations that show differential predictions of the Replaced Elements Model and Pearce’s Configural Theory.
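
    The sketch below shows one way a replaced-elements representation can be combined with a standard elemental delta-rule update, using negative patterning as the example design. The element counts, replacement proportion, and learning rate are illustrative assumptions and do not reproduce the representation-generation method described in the paper.

```python
# Minimal sketch of a delta-rule (Rescorla-Wagner style) simulation with a
# replaced-elements stimulus representation. Element counts, the replacement
# proportion r, and the learning rate are illustrative assumptions only.
import numpy as np

n = 20          # elements per stimulus
r = 0.5         # proportion of elements replaced when presented in compound
beta = 0.2      # learning rate

def representation(a_present, b_present):
    """Element vector: [A alone | A-in-compound | B alone | B-in-compound]."""
    x = np.zeros(4 * n)
    k = int(r * n)                       # number of replaced elements
    if a_present:
        x[:n] = 1.0
        if b_present:                    # swap some A elements for AB-specific ones
            x[:k] = 0.0
            x[n:n + k] = 1.0
    if b_present:
        x[2 * n:3 * n] = 1.0
        if a_present:
            x[2 * n:2 * n + k] = 0.0
            x[3 * n:3 * n + k] = 1.0
    return x

w = np.zeros(4 * n)

def trial(a_present, b_present, reinforced):
    """One conditioning trial: delta-rule update over the active elements."""
    global w
    x = representation(a_present, b_present)
    error = float(reinforced) - w @ x                 # prediction error
    w = w + beta * error * x / x.sum()                # normalised update

# Example design: negative patterning (A+, B+, AB-), which a replaced-elements
# representation can solve but a simple elemental representation cannot.
for _ in range(200):
    trial(True, False, True)
    trial(False, True, True)
    trial(True, True, False)

for label, (a, b) in {"A": (True, False), "B": (False, True), "AB": (True, True)}.items():
    print(label, round(float(w @ representation(a, b)), 2))
```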