Online Reinforcement Learning for Dynamic Multimedia Systems
In our previous work, we proposed a systematic cross-layer framework for
dynamic multimedia systems, which allows each layer to make autonomous and
foresighted decisions that maximize the system's long-term performance, while
meeting the application's real-time delay constraints. The proposed solution
solved the cross-layer optimization offline, under the assumption that the
multimedia system's probabilistic dynamics were known a priori. In practice,
however, these dynamics are unknown a priori and therefore must be learned
online. In this paper, we address this problem by allowing the multimedia
system layers to learn, through repeated interactions with each other, to
autonomously optimize the system's long-term performance at run-time. We
propose two reinforcement learning algorithms for optimizing the system under
different design constraints: the first algorithm solves the cross-layer
optimization in a centralized manner, and the second solves it in a
decentralized manner. We analyze both algorithms in terms of their required
computation, memory, and inter-layer communication overheads. After noting that
the proposed reinforcement learning algorithms learn too slowly, we introduce a
complementary accelerated learning algorithm that exploits partial knowledge
about the system's dynamics in order to dramatically improve the system's
performance. In our experiments, we demonstrate that decentralized learning can
perform as well as centralized learning, while enabling the layers to act
autonomously. Additionally, we show that existing application-independent
reinforcement learning algorithms, and existing myopic learning algorithms
deployed in multimedia systems, perform significantly worse than our proposed
application-aware and foresighted learning methods.
Comment: 35 pages, 11 figures, 10 tables
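The centralized and decentralized learners described above are reinforcement learning algorithms that estimate long-term value through repeated interaction, with the dynamics learned online rather than assumed known. As a point of reference only, the following is a minimal tabular Q-learning update sketch in Python; the layered state/action decomposition, the delay-aware reward, and the accelerated variant that exploits partial knowledge of the dynamics are specific to the paper, so all names and parameters below are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of an online tabular Q-learning update with generic
# placeholders for the environment; not the paper's cross-layer algorithm.
import random
from collections import defaultdict

def q_learning_step(Q, state, actions, reward_fn, transition_fn,
                    alpha=0.1, gamma=0.95, epsilon=0.1):
    """Perform one online update of the action-value table Q."""
    # epsilon-greedy exploration over the available actions
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    reward = reward_fn(state, action)          # e.g. negative delay/distortion (assumed)
    next_state = transition_fn(state, action)  # observed transition; dynamics stay unknown
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return next_state

Q = defaultdict(float)  # learned action values, initialized to zero
```

A single table like this corresponds more closely to the centralized case; the paper's decentralized variant instead distributes the learning across the layers, which is what its inter-layer communication overhead analysis addresses.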
DeepPR: Progressive Recovery for Interdependent VNFs with Deep Reinforcement Learning
The increasing reliance upon cloud services entails more flexible networks
that are realized by virtualized network equipment and functions. When such
advanced network systems face a massive failure caused by natural disasters or
attacks, the recovery of the entire system may be conducted in a progressive
way due to limited repair resources. The prioritization of network equipment in
the recovery phase influences the interim computation and communication
capability of the systems, since they operate under partial
functionality. Hence, finding the best recovery order is a critical problem,
which is further complicated by virtualization due to dependencies among network
nodes and layers. This paper deals with a progressive recovery problem under
limited resources in networks with VNFs, where some dependent network layers
exist. We prove the NP-hardness of the progressive recovery problem and
approach the optimum solution by introducing DeepPR, a progressive recovery
technique based on Deep Reinforcement Learning (Deep RL). Our simulation
results indicate that DeepPR can achieve near-optimal solutions in certain
networks and is more robust to adversarial failures than a baseline heuristic
algorithm.
Comment: Technical Report, 12 pages
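Concretely, progressive recovery can be framed as a sequential decision problem: at each step the agent chooses one failed node to repair, and the interim utility of the partially restored network accumulates as reward. The sketch below illustrates only that outer decision loop in Python; the VNF dependency model, the utility function, and the deep RL policy that DeepPR actually trains are not shown, so `policy` and `utility_fn` are hypothetical placeholders.

```python
# Sketch of a progressive-recovery episode: repair one failed node per step
# in the order chosen by a policy, accumulating interim network utility.
# The dependency-aware utility and the deep RL policy are assumptions here.

def recovery_episode(failed_nodes, policy, utility_fn):
    """Return the repair order and the accumulated interim utility."""
    repaired, remaining, order = set(), set(failed_nodes), []
    total_utility = 0.0
    while remaining:
        node = policy(repaired, remaining)     # DeepPR would use a learned policy here
        remaining.discard(node)
        repaired.add(node)
        order.append(node)
        total_utility += utility_fn(repaired)  # utility under partial functionality
    return order, total_utility

# A naive baseline for comparison: repair nodes in arbitrary order.
arbitrary_order = lambda repaired, remaining: next(iter(remaining))
```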
Q Learning Behavior on Autonomous Navigation of Physical Robot
Behavior-based architecture gives a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed. Subsumption architecture is a behavior coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q-learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q-learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q-learning affects the robot's performance in the learning phase. As a result, the Q-learning algorithm is successfully implemented on a physical robot in its imperfect environment.
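For a physical robot, the practical part of such a design is mapping noisy range sensors onto a small discrete state space and steering commands onto discrete actions. The sketch below shows one plausible way to do this in Python; the discretization threshold, action set, and learning parameters are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of Q-learning for an obstacle-avoidance behavior with a
# coarse sensor discretization; thresholds and parameters are assumptions.
import random
from collections import defaultdict

ACTIONS = ["forward", "turn_left", "turn_right"]
Q = defaultdict(float)  # action-value table over (state, action) pairs

def discretize(left_m, front_m, right_m, threshold=0.5):
    """Map range readings (meters) to a coarse near/far state tuple."""
    return tuple(d < threshold for d in (left_m, front_m, right_m))

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy action selection over the discrete action set."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, alpha=0.2, gamma=0.9):
    """Standard Q-learning update; the reward could penalize near-collisions."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```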
Hi-Val: Iterative Learning of Hierarchical Value Functions for Policy Generation
Task decomposition is effective in manifold applications where the global complexity of a problem makes planning and decision-making too demanding. This is true, for example, in high-dimensional robotics domains, where (1) unpredictabilities and modeling limitations typically prevent the manual specification of robust behaviors, and (2) learning an action policy is challenging due to the curse of dimensionality. In this work, we borrow the concept of Hierarchical Task Networks (HTNs) to decompose the learning procedure, and we exploit Upper Confidence Tree (UCT) search to introduce HOP, a novel iterative algorithm for hierarchical optimistic planning with learned value functions. To obtain better generalization and generate policies, HOP simultaneously learns and uses action values. These are used to formalize constraints within the search space and to reduce the dimensionality of the problem. We evaluate our algorithm both on a fetching task using a simulated 7-DOF KUKA lightweight arm and on a pick-and-delivery task with a Pioneer robot.
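HOP's planning component relies on UCT-style search, whose core is an upper-confidence selection rule over the children of a search node. The snippet below is a generic UCB1 selection step in Python, with an optional `value_prior` hook marking where a learned value estimate could be blended in; how HOP actually combines learned action values with the search is specific to the paper, so this hook and the node representation are assumptions.

```python
# Generic UCB1 child selection as used in UCT-style search; the optional
# value_prior hook is an assumed stand-in for a learned value estimate.
import math

def ucb1_select(children, exploration=1.4, value_prior=None):
    """Pick the child maximizing mean return plus an exploration bonus.

    `children` is a list of dicts with 'visits' and 'total_value' keys.
    """
    total_visits = sum(c["visits"] for c in children) or 1

    def score(c):
        if c["visits"] == 0:
            return float("inf")  # expand unvisited children first
        mean = c["total_value"] / c["visits"]
        bonus = exploration * math.sqrt(math.log(total_visits) / c["visits"])
        prior = value_prior(c) if value_prior else 0.0
        return mean + bonus + prior

    return max(children, key=score)
```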