346 research outputs found

    Hierarchical Learning Algorithms for Multi-scale Expert Problems

    In this paper, we study the multi-scale expert problem, in which the rewards of different experts lie in different ranges. The regret of existing algorithms for this problem grows linearly with the maximum reward range over all experts (or that of the best expert) and does not capture the non-uniform heterogeneity of the reward ranges across experts. In this work, we propose learning algorithms that construct a hierarchical tree structure based on the heterogeneity of the experts' reward ranges and then assign differentiated learning rates based on the reward upper bounds and cumulative empirical feedback over time. We then characterize the regret of the proposed algorithms as a function of the non-uniform reward ranges and show that they outperform prior algorithms when the experts' rewards exhibit non-uniform heterogeneity across ranges. Finally, our numerical experiments verify the efficiency of our algorithms compared to previous ones.
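    The abstract does not give the algorithm itself; the following is a minimal sketch of the general idea of per-expert learning rates in an exponential-weights (Hedge-style) scheme, where each expert's rate is scaled down by its reward range. The function name, the rate formula, and the omission of the paper's hierarchical tree construction are all illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    def hedge_nonuniform(reward_fn, ranges, T):
        """Exponential-weights sketch with per-expert learning rates
        scaled by each expert's reward range (illustrative only; the
        paper's hierarchical tree construction is not reproduced)."""
        K = len(ranges)
        ranges = np.asarray(ranges, dtype=float)
        # Larger reward range -> smaller learning rate for that expert.
        etas = np.sqrt(np.log(K) / T) / ranges
        log_w = np.zeros(K)
        total = 0.0
        for t in range(T):
            w = np.exp(log_w - log_w.max())   # stable softmax weights
            p = w / w.sum()
            r = reward_fn(t)                  # r[i] lies in [0, ranges[i]]
            total += p @ r                    # expected reward collected
            log_w += etas * r                 # per-expert exponential update
        return total, p
    ```

    With two experts where the small-range expert is consistently better, the weight mass concentrates on it quickly despite its smaller reward scale.
    
    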

    Burst reduction properties of rate-based flow control schemes : downstream queue behavior

    In this paper we consider rate-based flow-control throttles feeding a sequence of single-server, infinite-capacity queues. Specifically, we consider two types of throttles: the token bank and the leaky bucket. We show that the cell waiting times at the downstream queues are increasing functions of the token buffer capacity. These results are established both when the rate-based throttles have finite-capacity data buffers and when the buffers have infinite capacity. In the finite-capacity case, we require that the sum of the capacities of the data buffer and the token buffer be constant. Finally, we establish similar results for the loss process at the last downstream queue when the waiting buffer has finite capacity.
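    The monotonicity result can be illustrated with a small simulation, assuming an initially full token buffer and continuous token generation (function names and parameters are illustrative, not from the paper): a larger token buffer lets a burst pass through the throttle unsmoothed, which increases waiting at the downstream queue.

    ```python
    def token_bucket_departures(arrivals, rate, bucket_size):
        """Departure times from a token-bucket throttle that starts with
        a full bucket of `bucket_size` tokens, refilled at `rate`.
        Cell i departs once it has arrived, its predecessor has departed,
        and i+1 tokens have been generated in total."""
        d_prev = 0.0
        out = []
        for i, a in enumerate(sorted(arrivals)):
            eligible = (i + 1 - bucket_size) / rate  # time i+1 tokens exist
            d_prev = max(a, d_prev, eligible)
            out.append(d_prev)
        return out

    def downstream_wait(departures, service=1.0):
        """Waiting times at a FIFO single-server queue fed by the
        throttle's output stream."""
        waits, finish = [], 0.0
        for t in departures:
            start = max(t, finish)
            waits.append(start - t)
            finish = start + service
        return waits
    ```

    For a burst of five simultaneous cells, a bucket of size 1 paces the cells one per time unit and the downstream queue never builds, whereas a bucket of size 5 releases the whole burst at once and the downstream waits grow accordingly.
    
    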

    Network loss tomography using striped unicast probes


    Use coupled LSTM networks to solve constrained optimization problems

    Gradient-based iterative algorithms are widely used to solve optimization problems arising in resource sharing and network management. When system parameters change, these iterative methods must compute a new solution from scratch, independent of the previous parameter settings. We therefore propose a learning approach that can quickly produce optimal solutions to constrained optimization problems over a range of system parameters. Two Coupled Long Short-Term Memory networks (CLSTMs) are proposed to find the optimal solution. The advantages of this framework include: (1) a near-optimal solution for a given problem instance can be obtained in a few iterations at inference time, and (2) robustness is enhanced because the CLSTMs can be trained on system parameters drawn from distributions different from those seen at inference. In this work, we analyze the relationship between minimizing the loss functions and solving the original constrained optimization problem for certain parameter settings. Extensive numerical experiments using datasets from Alibaba reveal that the solutions to a set of nonconvex optimization problems obtained by the CLSTMs reach 90% or more of the corresponding optimum after 11 iterations, while the number of iterations and the CPU time are reduced by 81% and 33%, respectively, compared with gradient descent with momentum.
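    The abstract does not describe the CLSTM architecture, so no attempt is made to reproduce it here; as a point of reference, the following is a sketch of the comparison baseline it names, gradient descent with momentum, handled for constraints via projection onto the feasible set. The toy instance (a quadratic minimized over the probability simplex) and all names are illustrative assumptions.

    ```python
    import numpy as np

    def projected_gd_momentum(grad, project, x0, lr=0.1, beta=0.9, iters=200):
        """Gradient descent with momentum, projecting each iterate back
        onto the feasible set (sketch of the baseline, not the CLSTMs)."""
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)
        for _ in range(iters):
            v = beta * v + grad(x)        # momentum accumulation
            x = project(x - lr * v)       # step, then restore feasibility
        return x

    def project_simplex(y):
        """Euclidean projection onto {x : sum(x) = 1, x >= 0}."""
        u = np.sort(y)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1 - css) / (np.arange(len(y)) + 1) > 0)[0][-1]
        theta = (1 - css[rho]) / (rho + 1)
        return np.maximum(y + theta, 0)

    # Toy instance: minimize ||x - c||^2 subject to sum(x) = 1, x >= 0.
    c = np.array([0.6, 0.3, 0.9])
    grad = lambda x: 2 * (x - c)
    x_star = projected_gd_momentum(grad, project_simplex, np.ones(3) / 3)
    ```

    For this instance the constrained optimum is simply the projection of c onto the simplex, which the iteration recovers; a learned optimizer aims to reach comparable accuracy in far fewer iterations across many parameter settings c.
    
    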