
    Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: FTRL with General Regularizers and Multiple Optimal Arms

    We study the problem of designing adaptive multi-armed bandit algorithms that perform optimally in both the stochastic setting and the adversarial setting simultaneously (often known as a best-of-both-worlds guarantee). A line of recent works shows that, when configured and analyzed properly, the Follow-the-Regularized-Leader (FTRL) algorithm, originally designed for the adversarial setting, can in fact optimally adapt to the stochastic setting as well. Such results, however, critically rely on the assumption that there exists a unique optimal arm. Recently, Ito (2021) took the first step toward removing this undesirable uniqueness assumption for one particular FTRL algorithm, the one with the $\frac{1}{2}$-Tsallis entropy regularizer. In this work, we significantly improve and generalize this result, showing that uniqueness is unnecessary for FTRL with a broad family of regularizers and a new learning-rate schedule. For some regularizers, our regret bounds also improve upon prior results even when uniqueness holds. We further apply our results to the decoupled exploration and exploitation problem, demonstrating that our techniques are broadly applicable.
    Comment: Updated to the camera-ready version for NeurIPS 202
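    For reference, the $\frac{1}{2}$-Tsallis entropy instance of FTRL mentioned above (the Tsallis-INF-style update studied by Ito, 2021) admits a simple closed-form solution over the probability simplex. The sketch below is a minimal illustration of that baseline update, not the paper's new learning-rate schedule or regularizer family; the $\eta_t = 2/\sqrt{t}$ schedule, the binary-search normalization, and the plain importance-weighted loss estimator are assumptions made here for concreteness.

```python
import numpy as np

def tsallis_inf_probs(cum_loss_est, eta, iters=60):
    """FTRL step with the 1/2-Tsallis entropy regularizer (up to constants):
    minimize <Lhat, p> - (4/eta) * sum_i sqrt(p_i) over the simplex.
    The minimizer has the form p_i = 4 / (eta * (Lhat_i - z))**2, where the
    normalizer z < min_i Lhat_i is found by binary search so the p_i sum to 1."""
    K = len(cum_loss_est)
    lo = cum_loss_est.min() - 2.0 * np.sqrt(K) / eta   # here sum(p) <= 1
    hi = cum_loss_est.min() - 2.0 / eta                # here sum(p) >= 1
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.sum(4.0 / (eta * (cum_loss_est - z)) ** 2) > 1.0:
            hi = z
        else:
            lo = z
    p = 4.0 / (eta * (cum_loss_est - 0.5 * (lo + hi))) ** 2
    return p / p.sum()

def run_tsallis_inf(loss_matrix, seed=0):
    """Run the bandit loop on a T x K matrix of losses in [0, 1]."""
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    Lhat = np.zeros(K)                 # cumulative importance-weighted estimates
    pulls = []
    for t in range(1, T + 1):
        eta = 2.0 / np.sqrt(t)         # decreasing learning-rate schedule
        p = tsallis_inf_probs(Lhat, eta)
        arm = rng.choice(K, p=p)
        Lhat[arm] += loss_matrix[t - 1, arm] / p[arm]   # unbiased loss estimate
        pulls.append(arm)
    return pulls
```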

    Manipulating dc currents with bilayer bulk natural materials

    The principle of transformation optics has been applied to various wave phenomena (e.g., optics, electromagnetics, acoustics, and thermodynamics). Recently, metamaterial devices that manipulate dc currents have received increasing attention; such devices usually adopt the analogue of transformation optics, using complicated resistor networks to mimic inhomogeneous and anisotropic conductivities. We propose a distinct and general principle for manipulating dc currents by directly solving the electric conduction equations, which requires only two layers of bulk natural materials. We experimentally demonstrate a dc bilayer cloak and a fan-shaped concentrator, derived from a generalized treatment of the cloaking sensor. The proposed schemes have been validated as exact devices, which opens a facile route toward complete spatial control of dc currents. They may find broad application not only for dc currents but also in manipulating magnetic fields, heat flow, elastic mechanics, and matter waves.
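    As a rough illustration of what "directly solving electric conduction equations" looks like for a two-layer bulk device, the sketch below solves the steady conduction equation $\nabla\cdot(\sigma\nabla V)=0$ on a 2D grid containing a bilayer annulus (a nearly insulating inner shell and a highly conducting outer shell) in a uniform background. The geometry and conductivity values are arbitrary placeholders for illustration, not the derived cloak design from the paper.

```python
import numpy as np

def face_sigma(a, b):
    """Harmonic-mean conductivity on the face between two neighboring cells."""
    return 2.0 * a * b / (a + b)

def solve_dc_potential(sigma, v_left=1.0, v_right=0.0, sweeps=8000):
    """Jacobi iteration for div(sigma grad V) = 0: fixed potentials on the
    left/right edges, insulating (zero-flux) top and bottom edges."""
    ny, nx = sigma.shape
    V = np.tile(np.linspace(v_left, v_right, nx), (ny, 1))
    for _ in range(sweeps):
        Vp = np.pad(V, 1, mode="edge")       # mirrored edges -> zero flux
        Sp = np.pad(sigma, 1, mode="edge")
        sE = face_sigma(sigma, Sp[1:-1, 2:])
        sW = face_sigma(sigma, Sp[1:-1, :-2])
        sN = face_sigma(sigma, Sp[:-2, 1:-1])
        sS = face_sigma(sigma, Sp[2:, 1:-1])
        V = (sE * Vp[1:-1, 2:] + sW * Vp[1:-1, :-2]
             + sN * Vp[:-2, 1:-1] + sS * Vp[2:, 1:-1]) / (sE + sW + sN + sS)
        V[:, 0], V[:, -1] = v_left, v_right  # re-impose the driving potentials
    return V

# Toy bilayer annulus embedded in a uniform background conductor.
n = 101
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n // 2, y - n // 2)
sigma = np.ones((n, n))
sigma[(r >= 15) & (r < 20)] = 1e-3   # inner layer: nearly insulating
sigma[(r >= 20) & (r < 25)] = 5.0    # outer layer: enhanced conductivity
V = solve_dc_potential(sigma)        # currents then follow -sigma * grad V
```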

    No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions

    Existing online learning algorithms for adversarial Markov decision processes achieve $O(\sqrt{T})$ regret after $T$ rounds of interaction even if the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because adversarially chosen transition functions have been shown to make no-regret learning impossible. Despite such impossibility results, in this work we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret increasing smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys $\widetilde{O}(\sqrt{T} + C^{\textsf{P}})$ regret, where $C^{\textsf{P}}$ measures how adversarial the transition functions are and can be at most $O(T)$. While this algorithm itself requires knowledge of $C^{\textsf{P}}$, we further develop a black-box reduction approach that removes this requirement. Moreover, we show that further refinements of the algorithm not only maintain the same regret bound, but also simultaneously adapt to easier environments (where losses are generated in a certain stochastically constrained manner, as in Jin et al. [2021]) and achieve $\widetilde{O}(U + \sqrt{U C^{\textsf{L}}} + C^{\textsf{P}})$ regret, where $U$ is a standard gap-dependent coefficient and $C^{\textsf{L}}$ is the amount of corruption on the losses.
    Comment: 66 pages