
    Improved Reinforcement-Based Profile Learning For Document Filtering

    Today the amount of accessible information is overwhelming. A personalized information filtering system must be able to tailor itself to the current interests of the user and to adapt as those interests change over time. Such a system has to monitor a stream of incoming documents to learn the user's information requirements, i.e. the user profile. This research proposes a content-based personal information system that learns the user's preferences by analyzing document contents and building a user profile. The system is called RePLS, an agent-based Reinforcement Profile Learning System with adaptive information filtering. The research focuses on an improved term weighting, called "purity term weighting", which measures the importance of the terms that represent each profile. The top-ranked terms are then used to filter the incoming documents against the learned user profiles. An agent approach is adopted because of its autonomous and adaptive capabilities in performing the filtering. The proposed method was evaluated against three information filtering methods: Rocchio, Okapi/BSS (Basic Search System), and Reinf, an incremental profile learning method. Based on the proposed method, a profile learning system was developed in Microsoft VC++ connected to a Microsoft Access database through ODBC. The AFC kit was used to implement the proposed agents under the RETSINA architecture. The experiments were carried out on the TREC 2002 Filtering Track dataset provided by the National Institute of Standards and Technology (NIST). The results show that RePLS is able to filter the stream of incoming documents according to the user interests (profiles) learned by the proposed purity term weighting method. In the experiments, purity weighting yields better term weighting and profile learning than the other methods; the good accuracy is mainly due to correctly weighting each profile's terms during the learning phase. This research opens a range of future work, including investigating the dependency between the selected terms of each profile, evaluating the method on different datasets, and applying the proposed method in other areas, such as recommender systems.
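    The abstract does not give the Purity formula itself, so the following is only a minimal sketch of the profile-learning loop it describes, assuming a simple purity-style weight (the fraction of a term's occurrences that fall inside one profile's relevant documents). The weight definition, function names, and the matching threshold are illustrative assumptions, not the paper's method.

        from collections import Counter

        def purity_weights(profile_docs, all_docs):
            """Illustrative 'purity-style' weight: the share of a term's
            occurrences that fall inside one profile's relevant documents.
            (The paper's actual Purity formula may differ.)"""
            in_profile = Counter(t for doc in profile_docs for t in doc)
            overall = Counter(t for doc in all_docs for t in doc)
            return {t: in_profile[t] / overall[t] for t in in_profile}

        def top_terms(weights, k=20):
            # Keep only the k highest-weighted terms to represent the profile.
            return {t for t, _ in sorted(weights.items(), key=lambda x: -x[1])[:k]}

        def filter_stream(stream, profile_terms, threshold=0.1):
            """Route each incoming document (a list of tokens) to every
            profile whose top terms it sufficiently overlaps."""
            for doc in stream:
                tokens = set(doc)
                for name, terms in profile_terms.items():
                    if len(tokens & terms) / len(terms) >= threshold:
                        yield name, doc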

    Deep Reinforcement Learning from Self-Play in Imperfect-Information Games

    Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without prior domain knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Hold'em, a poker game of real-world scale, NFSP learnt a strategy that approached the performance of state-of-the-art, superhuman algorithms based on significant domain expertise. Comment: updated version, incorporating conference feedback.
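    As a reading aid, here is a minimal skeleton of NFSP's two-learner structure: each agent trains a best-response policy by reinforcement learning (e.g. DQN) and an average policy by supervised learning on its own best-response behaviour, mixing the two with an anticipatory parameter. The rl_learner/sl_learner interfaces, buffer sizes, and eta value are placeholders, not the authors' settings.

        import random
        from collections import deque

        class NFSPAgent:
            """Skeleton of a Neural Fictitious Self-Play agent."""

            def __init__(self, rl_learner, sl_learner, eta=0.1):
                self.rl = rl_learner                    # approximates a best response
                self.sl = sl_learner                    # approximates the average policy
                self.rl_memory = deque(maxlen=200_000)  # circular replay buffer
                self.sl_memory = []                     # reservoir buffer (simplified here)
                self.eta = eta                          # anticipatory parameter

            def act(self, state):
                # Play the best response with probability eta, else the average policy.
                if random.random() < self.eta:
                    action = self.rl.greedy_action(state)
                    # Only best-response behaviour is imitated by the average policy.
                    self.sl_memory.append((state, action))
                else:
                    action = self.sl.sample_action(state)
                return action

            def observe(self, transition):
                self.rl_memory.append(transition)
                self.rl.train(self.rl_memory)   # e.g. a DQN update
                self.sl.train(self.sl_memory)   # supervised cross-entropy update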

    Comparing policy gradient and value function based reinforcement learning methods in simulated electrical power trade

    Get PDF
    In electrical power engineering, reinforcement learning algorithms can be used to model the strategies of electricity market participants. However, traditional value function based reinforcement learning algorithms suffer from convergence issues when used with value function approximators. Function approximation is nevertheless required in this domain to capture the characteristics of the complex and continuous multivariate problem space. The contribution of this paper is a comparison of policy gradient reinforcement learning methods, using artificial neural networks for policy function approximation, with traditional value function based methods in simulations of electricity trade. The methods are compared using an AC optimal power flow based power exchange auction market model and a reference electric power system model.
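    To illustrate the policy-gradient side of the comparison, the sketch below implements plain REINFORCE with a linear-Gaussian policy over a continuous bid. The paper uses neural-network policies and an AC optimal power flow market simulation, so the linear policy, state/reward structure, and gains here are simplifying assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical setup: the agent picks a continuous bid from a
        # linear-Gaussian policy; the simulated market returns profit as reward.
        W = rng.normal(scale=0.1, size=4)   # policy weights (stand-in for a neural net)
        SIGMA = 0.5                         # fixed exploration noise
        ALPHA = 1e-3                        # learning rate

        def act(state):
            # Sample a bid around the policy mean W @ state.
            return rng.normal(W @ state, SIGMA)

        def reinforce_update(episodes):
            """REINFORCE: shift the policy mean toward bids that earned
            above-baseline profit. No value function is learned, which is
            what sidesteps the divergence issues the paper attributes to
            value-based methods under function approximation."""
            global W
            baseline = np.mean([r for _, _, r in episodes])
            for state, bid, reward in episodes:
                grad_logp = (bid - W @ state) / SIGMA**2 * state
                W = W + ALPHA * (reward - baseline) * grad_logp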

    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks with rapidly changing task constraints over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error. Comment: Under review.
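    For orientation, here is a minimal sketch of a single discrete-DMP integration step with two couplings in the spirit of those described above: force feedback that slows the motion as contact force builds up, and a goal that shifts toward the current position once the desired force is reached. The coupling forms, gains, and function signature are assumptions, not the authors' formulation.

        def dmp_step(y, z, g, x, forcing, f_sensed, f_desired,
                     dt=0.002, tau=1.0, alpha=25.0, beta=6.25,
                     k_goal=0.01, k_fb=0.05):
            """One Euler step of a discrete DMP transformation system
            (Ijspeert-style), with illustrative force couplings. The caller
            advances the canonical phase variable x separately."""
            # Force feedback: attenuate progress as the sensed force grows,
            # softening the transition from free-space to in-contact motion.
            slow = 1.0 / (1.0 + k_fb * max(0.0, f_sensed))
            # Shifting goal: stop driving deeper into the surface once the
            # desired contact force is achieved.
            if f_sensed >= f_desired:
                g = g + k_goal * (y - g)
            dz = (alpha * (beta * (g - y) - z) + forcing(x)) / tau
            dy = slow * z / tau
            return y + dy * dt, z + dz * dt, g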