7 research outputs found

    Cognitive Learning System for Sequential Aliasing Patterns of States in Multistep Decision-Making

    No full text
    Perceptual aliasing is a cognitive problem for a learning agent in which the agent cannot distinguish its state from its immediate observations alone, leading to poor decision-making. Previous work addresses this issue by storing the agent's path in order to learn the optimal policy. In particular, FoRsXCS utilises a fundamental and unique path to identify and disambiguate all aliased states and to learn optimal policies in environments with aliased states. However, it is hard to identify aliased states in sequential aliasing patterns, where the aliased states occur consecutively within a regular pattern. This work proposes a new cognitive learning system that extends FoRsXCS to identify such sequential aliasing patterns of states. The experimental results show that the proposed system performs equal to or better than existing systems on nine navigation mazes and significantly outperforms existing techniques in mazes with sequential aliasing patterns. Concretely, the proposed method improves performance by 0.48 steps compared with FoRsXCS.
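    The aliasing problem the abstract refers to can be pictured with a few lines of code. The sketch below is an illustrative example only, using a hypothetical 1-D corridor rather than FoRsXCS or the paper's mazes: two cells produce identical immediate observations yet require opposite actions, and a one-step path memory, in the spirit of the path-based approaches mentioned above, disambiguates them.

        # Minimal sketch of perceptual aliasing (hypothetical corridor, not FoRsXCS).
        CORRIDOR = "#...G...#"   # '#' = wall, 'G' = goal, '.' = empty cell

        def observe(pos):
            """Immediate observation: the contents of the left and right neighbours."""
            return CORRIDOR[pos - 1], CORRIDOR[pos + 1]

        # Cells 2 and 6 look identical to a memoryless agent ...
        assert observe(2) == observe(6) == ('.', '.')

        # ... yet their optimal actions differ: move right from cell 2, left from cell 6.
        optimal_action = {2: "right", 6: "left"}

        def observe_with_memory(path):
            """Observation augmented with one step of history tells the cells apart."""
            return tuple(observe(p) for p in path[-2:])

        print(observe_with_memory([1, 2]))   # (('#', '.'), ('.', '.'))
        print(observe_with_memory([7, 6]))   # (('.', '#'), ('.', '.'))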

    How XCS can prevent misdistinguishing rule accuracy: A preliminary study

    No full text
    In the XCS classifier system, the latest learning theory rests on an idealized assumption; in practical use this means that XCS cannot distinguish accurate rules from all other rules with a 100% success rate. This paper presents preliminary work towards removing this assumption. Furthermore, it reveals a dilemma in setting a crucial XCS parameter: to guarantee a 100% success rate, the learning rate must be greater than 0.5; however, a rule fitness updated with such a high learning rate would not converge to its true value, so rule discovery would not act properly.
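    The dilemma can be illustrated numerically. The sketch below is not the paper's analysis; it simply runs the standard Widrow-Hoff update that XCS applies to its rule parameters, p <- p + beta * (P - p), on made-up noisy payoffs, and shows that a learning rate above 0.5 leaves the estimate fluctuating rather than converging.

        # Illustrative sketch of the learning-rate dilemma (made-up numbers,
        # not the paper's experiments).
        import random

        def estimate_spread(beta, true_mean=1000.0, noise=100.0, steps=10_000, seed=0):
            """Run the Widrow-Hoff update on noisy payoffs and report the spread of
            the estimate over the last 1000 steps (smaller = closer to convergence)."""
            rng = random.Random(seed)
            p, tail = 0.0, []
            for t in range(steps):
                payoff = true_mean + rng.gauss(0.0, noise)  # noisy reward signal
                p += beta * (payoff - p)                    # Widrow-Hoff update
                if t >= steps - 1000:
                    tail.append(p)
            return max(tail) - min(tail)

        for beta in (0.1, 0.6, 0.9):
            print(f"beta={beta}: spread over last 1000 steps = {estimate_spread(beta):.1f}")
        # A higher learning rate yields a much wider spread, i.e. the estimate
        # keeps chasing the latest payoff instead of converging to its true value.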

    How should Learning Classifier Systems cover a state-action space?

    No full text
    A learning strategy in Learning Classifier Systems (LCSs) defines how classifiers cover the state-action space of a problem. Previous analyses of classification problems have empirically claimed that an adequate learning strategy can be chosen according to the type of noise in the problem. This claim is still arguable in two respects. First, learning strategies have not been compared on reinforcement learning problems with different types of noise. Second, if the claim holds, a further question is how classifiers should cover the state-action space so as to improve the stability of LCS performance under as many types of noise as possible. This paper first attempts to settle these issues empirically for one version of LCS, the XCS classifier system. That is, we present a new concept of learning strategy for LCSs and support the above claim by comparing it with the existing learning strategies on a reinforcement learning problem. Our learning strategy covers all state-action pairs but assigns more classifiers to the highest-return action at each state than to the other actions. Our results support the claim that existing learning strategies depend on the type of noise in reinforcement learning problems. However, our learning strategy improves the stability of XCS performance compared with the existing strategies under all types of noise employed in this paper.
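    As a rough illustration of the proposed covering idea, the sketch below uses a hypothetical allocation scheme (the function and parameter names are invented, not the paper's mechanism): every state-action pair keeps at least one classifier, while the highest-return action at each state receives a larger share of the population.

        # Hypothetical sketch of the covering idea described above, not the
        # paper's exact mechanism.
        def allocate_classifiers(returns_by_state, per_state=6, best_share=0.5):
            """Return the number of classifiers to keep for each (state, action) pair."""
            allocation = {}
            for state, returns in returns_by_state.items():
                best = max(returns, key=returns.get)            # highest-return action
                n_best = max(1, int(per_state * best_share))    # larger share for the best action
                n_rest = max(1, (per_state - n_best) // max(1, len(returns) - 1))
                for action in returns:
                    allocation[(state, action)] = n_best if action == best else n_rest
            return allocation

        # Toy example with made-up return estimates for two states and three actions.
        returns = {"s1": {"left": 0.2, "right": 0.9, "stay": 0.1},
                   "s2": {"left": 0.7, "right": 0.3, "stay": 0.4}}
        print(allocate_classifiers(returns))
        # e.g. {('s1', 'left'): 1, ('s1', 'right'): 3, ('s1', 'stay'): 1, ('s2', 'left'): 3, ...}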

    Analysis of Outcomes in Ischemic vs Nonischemic Cardiomyopathy in Patients With Atrial Fibrillation: A Report From the GARFIELD-AF Registry

    No full text
    IMPORTANCE: Congestive heart failure (CHF) is commonly associated with nonvalvular atrial fibrillation (AF), and their combination may affect treatment strategies and outcomes.