5,423 research outputs found

    Open Access Scheduling: A Medical Director's View from the Trenches

    No abstract available

    Simple regret for infinitely many armed bandits

    We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples to only a certain number of arms. All previous algorithms for this setting were designed to minimize the cumulative regret of the learner. In this paper, we propose an algorithm aimed at minimizing the simple regret. As in the cumulative-regret setting of infinitely many armed bandits, the rate of the simple regret depends on a parameter $\beta$ characterizing the distribution of the near-optimal arms. We prove that, depending on $\beta$, our algorithm is minimax optimal either up to a multiplicative constant or up to a $\log(n)$ factor. We also provide extensions to several important cases: when $\beta$ is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of an unknown time horizon. Comment: in the 32nd International Conference on Machine Learning (ICML 2015)
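    The setting can be illustrated with a minimal sketch (not the paper's exact algorithm): with a budget of $n$ pulls and known $\beta$, a standard strategy for infinitely many arms is to draw roughly $n^{\beta/(\beta+1)}$ random arms from the reservoir, split the budget uniformly among them, and recommend the arm with the best empirical mean; its simple regret is the gap between the best sampled mean and the recommended arm's mean. The uniform arm-mean distribution below (for which $\beta = 1$) and all function names are illustrative assumptions.

    ```python
    import random

    def simple_regret_sketch(n_budget, beta=1.0, seed=0):
        """Hedged sketch: sample k ~ n^(beta/(beta+1)) arms, allocate the
        budget uniformly, recommend the empirically best arm."""
        rng = random.Random(seed)
        k = max(1, int(n_budget ** (beta / (beta + 1.0))))  # number of arms to try
        means = [rng.random() for _ in range(k)]            # arm means ~ Uniform(0,1), beta = 1
        pulls = max(1, n_budget // k)                       # uniform budget allocation
        emp = []
        for mu in means:
            # Bernoulli(mu) rewards; keep the empirical mean of each arm
            wins = sum(rng.random() < mu for _ in range(pulls))
            emp.append(wins / pulls)
        best = max(range(k), key=lambda i: emp[i])
        # simple regret: best sampled mean minus the recommended arm's mean
        return max(means) - means[best]

    regret = simple_regret_sketch(10_000)
    ```

    With $n = 10{,}000$ this tries about 100 arms with 100 pulls each; the trade-off between trying more arms (a better best arm in the pool) and pulling each more often (a better estimate) is exactly what $\beta$ governs.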

    Changes in the Cellular Composition of Iliac Lymph Nodes of White Rats under Long-Term Influence of Nalbuphine

    The article presents data on changes in the cellular composition of the lymph nodes of white rats, males of reproductive age, that received the opioid analgesic nalbuphine intramuscularly every day for six weeks. The weekly dose of nalbuphine was gradually increased, creating a model of physical opioid dependence according to patent of Ukraine No. 76564 U. All experimental animals were divided into 8 groups. A morphometric method was used to determine the relative numbers of cells of the lymphoid series (small, medium, and large lymphocytes, blasts, and plasmocytes) in the mantle zone and germinal centers of the secondary lymphoid nodules and in the medullary cords of the lymph nodes. Morphometric studies were performed using a system for visual analysis of histological preparations. It was established that nalbuphine causes reactive and destructive changes in the lymph nodes: the number of large lymphocytes increases in all structural components of the lymph node, with a maximum after 4 weeks; correspondingly, the relative number of small lymphocytes decreases in the germinal centers and medullary cords, while the relative number of plasmocytes in the medullary cords increases sharply. In all structural components of the lymph nodes, the hemocapillaries and venules are dilated and congested, with perivascular edema and partial damage to the walls of the microvessels. One week after the discontinuation of nalbuphine, the relative numbers of lymphoid cells in the structural components of the lymph nodes had not returned to the values of intact animals, and no reversal of the changes was noted

    Second-Order Kernel Online Convex Optimization with Adaptive Sketching

    Kernel online convex optimization (KOCO) is a framework combining the expressiveness of non-parametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only $\mathcal{O}(t)$ time and space per iteration and, when the only information on the losses is their convexity, achieve a minimax-optimal $\mathcal{O}(\sqrt{T})$ regret. Nonetheless, many common losses in kernel problems, such as the squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve $\mathcal{O}(\log(\text{Det}(\boldsymbol{K})))$ regret, which we show scales as $\mathcal{O}(d_{\text{eff}}\log T)$, where $d_{\text{eff}}$ is the effective dimension of the problem and is usually much smaller than $\mathcal{O}(\sqrt{T})$. The main drawback of second-order methods is their much higher $\mathcal{O}(t^2)$ space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves $\mathcal{O}(d_{\text{eff}}\log T)$ regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix $\boldsymbol{K}_t$, and show that for a chosen parameter $\gamma \leq 1$ our Sketched-KONS reduces the space and time complexity by a factor of $\gamma^2$, to $\mathcal{O}(t^2\gamma^2)$ space and time per iteration, while incurring only $1/\gamma$ times more regret
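    The curvature exploitation described above can be sketched in a few lines. The snippet below is an illustrative second-order online update on explicit features, not the paper's kernelized KONS or its sketching scheme: each round accumulates the outer product $x_t x_t^\top$ into a regularized curvature matrix and takes a preconditioned (Newton-style) step, which for the squared loss coincides with recursive least squares. All names and parameters are assumptions for the sketch.

    ```python
    import numpy as np

    def online_newton_sketch(features, labels, alpha=1.0):
        """Illustrative second-order online update (Online-Newton-Step style).
        For the squared loss with step size 1 this is exactly recursive least
        squares, converging to the ridge solution with regularizer alpha."""
        d = features.shape[1]
        A = alpha * np.eye(d)       # running regularized curvature matrix
        w = np.zeros(d)
        losses = []
        for x, y in zip(features, labels):
            pred = w @ x
            losses.append(0.5 * (pred - y) ** 2)  # curved (squared) loss
            g = (pred - y) * x                    # gradient at the current w
            A += np.outer(x, x)                   # accumulate curvature, O(d^2) here
            w = w - np.linalg.solve(A, g)         # preconditioned Newton-style step
        return w, losses

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true                                # noise-free linear labels
    w_hat, losses = online_newton_sketch(X, y)
    ```

    In the kernel setting the analogue of `A` grows with $t$ rather than a fixed $d$, which is the $\mathcal{O}(t^2)$ per-iteration cost that the paper's sketching of $\boldsymbol{K}_t$ is designed to reduce.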