
    A Learning Theoretic Approach to Energy Harvesting Communication System Optimization

    A point-to-point wireless communication system in which the transmitter is equipped with an energy harvesting device and a rechargeable battery is studied. Both the energy and the data arrivals at the transmitter are modeled as Markov processes. Delay-limited communication is considered, assuming that the underlying channel is block fading with memory and that the instantaneous channel state information is available at both the transmitter and the receiver. The expected total transmitted data during the transmitter's activation time is maximized under three different sets of assumptions regarding the information available at the transmitter about the underlying stochastic processes. A learning theoretic approach is introduced, which does not assume any a priori information on the Markov processes governing the communication system. In addition, online and offline optimization problems are studied for the same setting. Full statistical knowledge and causal information on the realizations of the underlying stochastic processes are assumed in the online optimization problem, while the offline optimization problem assumes non-causal knowledge of the realizations in advance. By comparing the optimal solutions in all three frameworks, the performance loss due to the transmitter's lack of information about the behavior of the underlying Markov processes is quantified.
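
    To make the learning theoretic setting concrete, the sketch below shows a tabular Q-learning agent for a simplified energy-harvesting transmitter. The state discretization, the reward (bits per block), the i.i.d. channel stand-in, and all parameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the paper's exact formulation): a tabular Q-learning
# agent for an energy-harvesting transmitter. State = (battery, buffer,
# channel gain index); action = units of energy spent this block; reward =
# bits sent, log2(1 + e*h) as in the usual AWGN rate formula. All
# discretization levels and parameters below are illustrative assumptions.
import math
import random
from collections import defaultdict

B_MAX, D_MAX, H_LEVELS = 5, 5, 3          # battery, buffer, channel levels
GAINS = [0.5, 1.0, 2.0]                   # channel power gain per level
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1        # learning rate, discount, exploration

Q = defaultdict(float)                    # Q[(state, action)] -> value estimate

def rate(energy, h):
    """Bits deliverable in one block when spending `energy` over gain `h`."""
    return math.log2(1.0 + energy * h)

def step(state, action):
    """Toy environment: spend energy, then harvest/arrive/fade at random."""
    battery, buffer_, h_idx = state
    bits = min(rate(action, GAINS[h_idx]), buffer_)                 # can't send more than queued
    battery = min(B_MAX, battery - action + random.randint(0, 2))   # random energy harvest
    buffer_ = min(D_MAX, buffer_ - bits + random.randint(0, 2))     # random data arrivals
    h_idx = random.randrange(H_LEVELS)                              # i.i.d. stand-in for fading
    return (battery, int(buffer_), h_idx), bits

state = (B_MAX, 0, 0)
for _ in range(50_000):
    actions = range(state[0] + 1)                    # can spend at most the stored energy
    if random.random() < EPS:
        a = random.choice(list(actions))             # explore
    else:
        a = max(actions, key=lambda x: Q[(state, x)])  # exploit
    nxt, reward = step(state, a)
    best_next = max(Q[(nxt, x)] for x in range(nxt[0] + 1))
    Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
    state = nxt
```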

    Reinforcement Learning: A Survey

    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
    Comment: See http://www.jair.org/ for any accompanying file
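
    One central issue the survey names, trading off exploration and exploitation, can be pictured with an epsilon-greedy agent on a toy multi-armed bandit. The arm payoffs and the epsilon value below are arbitrary illustrative choices, not anything taken from the survey.

```python
# Illustrative sketch of the exploration/exploitation trade-off: an
# epsilon-greedy agent learning which arm of a toy bandit pays best.
import random

TRUE_MEANS = [0.2, 0.5, 0.8]      # hidden expected reward of each arm
EPSILON = 0.1                     # probability of trying a random arm

counts = [0] * len(TRUE_MEANS)
estimates = [0.0] * len(TRUE_MEANS)

total = 0.0
for t in range(10_000):
    if random.random() < EPSILON:                                 # explore
        arm = random.randrange(len(TRUE_MEANS))
    else:                                                         # exploit current estimate
        arm = max(range(len(TRUE_MEANS)), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < TRUE_MEANS[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]     # running mean update
    total += reward

print(f"average reward: {total / 10_000:.3f}, estimates: {estimates}")
```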

    Identifying the Underlying Components of Delay Discounting Using Latent Factor Modeling

    Many problematic behaviors can be conceptualized as choosing a smaller, immediate outcome over a larger, delayed outcome. For example, drug abuse involves choosing between the immediate euphoric effects of the drug and the delayed health and legal consequences of drug abuse. Individuals who consistently choose the smaller outcome are said to behave “impulsively.” The goal of this dissertation was to understand how to change impulsive choice. Chapters 2 and 3 successfully demonstrate that impulsive choice can be altered by reframing how the choice is presented. For example, framing a delayed outcome using a specific date instead of a duration of time (e.g., 1 year) reduced impulsive choice. However, these findings do not explain why impulsive choice changed. The goal of Chapter 4 was to identify the underlying processes that result in impulsive choice, with the hope that by understanding these processes, impulsive choice can be reduced. Latent factor modeling was used to understand the role of three proposed processes in impulsive choice: marginal utility, cardinal utility, and nonlinear time perception. The results of the latent factor model indicated that nonlinear time perception relates to how delayed outcomes are valued, but marginal utility and cardinal utility do not.
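
    For intuition only, the snippet below uses the hyperbolic discounting form V = A / (1 + kD), a standard model in the delay discounting literature rather than the dissertation's own analysis, to show how a delayed outcome can be devalued enough that the smaller, immediate option is preferred. The discount rate k and the dollar amounts are made-up examples.

```python
# Minimal numeric illustration (not the dissertation's model) of delay
# discounting: the subjective value of a delayed outcome under the common
# hyperbolic form V = A / (1 + k*D). The k and amounts below are arbitrary.
def hyperbolic_value(amount, delay_days, k=0.02):
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1.0 + k * delay_days)

smaller_now = 50.0                             # immediate outcome
larger_later = hyperbolic_value(100.0, 365)    # $100 delayed one year

# With k = 0.02 the delayed $100 is worth roughly $12 now, so the smaller,
# immediate $50 is chosen -- the pattern described as "impulsive" choice.
print(smaller_now, round(larger_later, 2), smaller_now > larger_later)
```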

    Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning

    Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this paper we build two kinds of reinforcement learning agents, a deep policy-gradient agent and a value-function based agent, which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, whereas the value-function based agent first estimates values for all legal control signals and then selects the control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.
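
    The value-function based agent's selection step can be sketched roughly as follows; the phase list, state encoding, network shape, and untrained weights are stand-in assumptions for illustration, not the paper's SUMO setup.

```python
# Sketch (assumed interfaces, not the paper's implementation) of the
# value-function based agent's action selection: score every legal traffic
# signal phase for the current intersection snapshot and pick the highest.
import numpy as np

PHASES = ["NS_green", "EW_green", "NS_left", "EW_left"]   # hypothetical legal signals
STATE_DIM, HIDDEN = 32, 64                                 # e.g. lane occupancy features

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))       # untrained weights standing in
b1 = np.zeros(HIDDEN)                                      # for a trained value network
W2 = rng.normal(scale=0.1, size=(HIDDEN, len(PHASES)))
b2 = np.zeros(len(PHASES))

def phase_values(state):
    """Estimate a value for each legal phase from one simulator snapshot."""
    h = np.maximum(0.0, state @ W1 + b1)        # ReLU hidden layer
    return h @ W2 + b2

def select_phase(state, epsilon=0.05):
    """Epsilon-greedy choice over phases; greedy at evaluation time."""
    if rng.random() < epsilon:
        return int(rng.integers(len(PHASES)))
    return int(np.argmax(phase_values(state)))

snapshot = rng.random(STATE_DIM)                # stand-in for a simulator observation
print(PHASES[select_phase(snapshot)])
```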

    Enhanced robot learning using Fuzzy Q-Learning & context-aware middleware
