5,423 research outputs found
Open Access Scheduling: A Medical Director's View from the Trenches
No abstract available
Simple regret for infinitely many armed bandits
We consider a stochastic bandit problem with infinitely many arms. In this
setting, the learner has no chance of trying all the arms even once and has to
dedicate its limited number of samples to a certain number of arms only. All
previous algorithms for this setting were designed to minimize the
cumulative regret of the learner. In this paper, we propose an algorithm aimed
at minimizing the simple regret. As in the cumulative regret setting of
infinitely many armed bandits, the rate of the simple regret depends on a
parameter characterizing the distribution of the near-optimal arms. We
prove that, depending on this parameter, our algorithm is minimax optimal either up to
a multiplicative constant or up to a logarithmic factor. We also provide
extensions to several important cases: when the parameter is unknown, in a natural
setting where the near-optimal arms have a small variance, and in the case of
an unknown time horizon. Comment: In the 32nd International Conference on Machine Learning (ICML 2015).
Changes in the Cellular Composition of Iliac Lymph Nodes of White Rats under Long-Term Influence of Nalbuphine
The article presents data on changes in the cellular composition of the iliac lymph nodes of white male rats of reproductive age that received daily intramuscular injections of the opioid analgesic nalbuphine for six weeks. The weekly dose of nalbuphine was gradually increased to create a model of physical opioid dependence according to Ukrainian patent No. 76564 U. All experimental animals were divided into 8 groups. A morphometric method was used to determine the relative numbers of cells of the lymphoid series (small, medium and large lymphocytes, blasts and plasmocytes) in the mantle zone and germinal centers of the secondary lymphoid nodules and in the medullary cords of the lymph nodes. Morphometric studies were performed using a visual analysis system for histological preparations. It was established that nalbuphine causes reactive and destructive changes in the lymph nodes: the number of large lymphocytes increases in all structural components of the lymph node, with a maximum after 4 weeks; accordingly, the relative number of small lymphocytes decreases in the germinal centers and medullary cords, while the relative number of plasmocytes in the medullary cords increases sharply. In all structural components of the lymph nodes, the hemocapillaries and venules are dilated and engorged, with perivascular edema and partial damage to the walls of the microvessels. One week after discontinuation of nalbuphine, the relative numbers of lymphoid cells in the structural components of the lymph nodes had not returned to the values of intact animals, and no reversal of the changes was noted
Second-Order Kernel Online Convex Optimization with Adaptive Sketching
Kernel online convex optimization (KOCO) is a framework combining the
expressiveness of non-parametric kernel models with the regret guarantees of
online learning. First-order KOCO methods such as functional gradient descent
require only O(t) time and space per iteration, and, when the only
information on the losses is their convexity, achieve a minimax optimal
O(sqrt(T)) regret. Nonetheless, many common losses in kernel
problems, such as the squared loss, logistic loss, and squared hinge loss, possess
stronger curvature that can be exploited. In this case, second-order KOCO
methods achieve logarithmic regret, which
we show scales as O(d_eff log T), where d_eff
is the effective dimension of the problem and is usually much smaller than
O(sqrt(T)). The main drawback of second-order methods is their
much higher O(t^2) space and time complexity. In this paper, we
introduce kernel online Newton step (KONS), a new second-order KOCO method that
also achieves O(d_eff log T) regret. To address the
computational complexity of second-order methods, we introduce a new matrix
sketching algorithm for the kernel matrix K_t, and show that for
a chosen parameter gamma our Sketched-KONS reduces the space and time
complexity by a factor of gamma^2 to O(t^2 gamma^2) space and
time per iteration, while incurring only 1/gamma times more regret
- …
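The first-order baseline the abstract mentions, functional gradient descent with O(t) cost per iteration, can be sketched as follows. This is a generic illustration, not KONS or Sketched-KONS; the Gaussian kernel, the learning rate, and the squared loss are my own illustrative assumptions.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

class FunctionalGD:
    """First-order KOCO: online functional gradient descent on the squared loss.

    The predictor is f(x) = sum_i alpha_i * k(x_i, x). Each round appends one
    support point, so iteration t costs O(t) time and space, as in the abstract.
    """
    def __init__(self, kernel=gaussian_kernel, lr=0.5):
        self.kernel, self.lr = kernel, lr
        self.xs, self.alphas = [], []  # support points and their coefficients

    def predict(self, x):
        return sum(a * self.kernel(xi, x) for xi, a in zip(self.xs, self.alphas))

    def update(self, x, y):
        # Functional gradient of (f(x) - y)^2 / 2 is (f(x) - y) * k(x, .),
        # so the gradient step just appends one new coefficient.
        residual = self.predict(x) - y
        self.xs.append(x)
        self.alphas.append(-self.lr * residual)
        return residual

# Example stream: fit f*(x) = sin(x) from sequentially revealed points.
model = FunctionalGD()
losses = [model.update(i * 0.1, math.sin(i * 0.1)) ** 2 for i in range(200)]
```

Note how the support set grows by one point per round; second-order methods like KONS additionally maintain curvature information, which is what drives their O(t^2) cost and motivates the sketching of the kernel matrix.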