
    Approximation Algorithms for Correlated Knapsacks and Non-Martingale Bandits

    In the stochastic knapsack problem, we are given a knapsack of size B and a set of jobs whose sizes and rewards are drawn from a known probability distribution. However, we know the actual size and reward only when the job completes. How should we schedule jobs to maximize the expected total reward? We know O(1)-approximations when we assume that (i) rewards and sizes are independent random variables, and (ii) we cannot prematurely cancel jobs. What can we say when either or both of these assumptions are changed? The stochastic knapsack problem is of interest in its own right, but techniques developed for it are applicable to other stochastic packing problems. Indeed, ideas for this problem have been useful for budgeted learning problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to pull the arms a total of B times to maximize the reward obtained. Much recent work on this problem focuses on the case when the evolution of the arms follows a martingale, i.e., when the expected reward from the future is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations, and also for budgeted learning problems where the martingale condition is not satisfied. In particular, we show that previously proposed LP relaxations have large integrality gaps. We propose new time-indexed LP relaxations, convert the fractional solutions into distributions over strategies, and then use the LP values and the time-ordering information from these strategies to devise a randomized adaptive scheduling algorithm. We hope our LP formulation and decomposition methods may provide a new way to address other correlated bandit problems with more general contexts.
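    To make the flavour of such relaxations concrete, here is a schematic time-indexed LP, given only as an illustration of the notation (the relaxation actually used in the paper carries additional constraints to handle correlated rewards and cancellations). Read x_{j,t} as the probability that job j is started at time t, r_{j,t} as its expected reward when started at t, and S_j as its random size:

\begin{align*}
\max\ & \sum_{j}\sum_{t=0}^{B-1} r_{j,t}\, x_{j,t} \\
\text{s.t.}\ & \sum_{t=0}^{B-1} x_{j,t} \le 1 && \text{for every job } j,\\
 & \sum_{j}\sum_{t' \le t} x_{j,t'}\;\mathbb{E}\!\left[\min(S_j,\, t)\right] \le 2t && \text{for every time } t \le B,\\
 & x_{j,t} \ge 0 .
\end{align*}

    The second family of constraints bounds, up to a constant factor, the expected volume of work scheduled by each time t by the capacity available up to t; converting a fractional solution of such an LP into an adaptive strategy is where the technical work lies.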

    Quenched lattice calculation of the B --> D l nu decay rate

    We calculate, in the continuum limit of quenched lattice QCD, the form factor that enters the decay rate of the semileptonic decay B --> D l nu. Making use of the step scaling method (SSM), previously introduced to handle two-scale problems in lattice QCD, and of flavour-twisted boundary conditions, we extract G(w) at finite momentum transfer and at the physical values of the heavy quark masses. Our results can be used to extract the CKM matrix element Vcb from the experimental decay rate without model-dependent extrapolations. Comment: 5 pages, 4 figures, accepted for publication in Phys. Lett. B, corrected one typo
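    For orientation, the differential decay rate is conventionally written (neglecting the lepton mass) as

\begin{equation*}
\frac{d\Gamma}{dw}(B \to D\,\ell\,\nu) \;=\; \frac{G_F^2}{48\pi^3}\,(m_B + m_D)^2\, m_D^3\,\left(w^2 - 1\right)^{3/2} |V_{cb}|^2\, \mathcal{G}(w)^2 ,
\end{equation*}

    where w = v_B \cdot v_D is the recoil variable; with G(w) supplied by the lattice calculation, a measurement of the differential rate at a given w determines |Vcb| directly.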

    Colour and stellar population gradients in galaxies

    We discuss the colour, age and metallicity gradients in a wide sample of local SDSS early- and late-type galaxies. From the fitting of stellar population models we find that metallicity is the main driver of colour gradients, and that the age in the central regions is a dominant parameter governing the scatter in both the metallicity and age gradients. We find consistency with independent observations and with a set of simulations. From the comparison with simulations and from theoretical considerations we are able to sketch a general formation scenario. Comment: 4 pages, 4 figures. Proceedings of the 54th Congresso Nazionale della SAIt, Napoli, 4-7 May 201
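    For readers unfamiliar with the convention, colour gradients of this kind are usually quantified as the slope of the colour profile against the logarithm of radius, often normalized to the effective radius. The snippet below is a minimal sketch of that measurement; the function name and interface are our own illustration, not the authors' pipeline:

import numpy as np

def colour_gradient(radii_kpc, g_minus_r, r_eff):
    """Colour gradient as the slope of colour versus log10(R / R_eff).

    radii_kpc : galactocentric radii of the annuli (same units as r_eff)
    g_minus_r : colour measured in each annulus (mag)
    r_eff     : effective radius

    Returns the gradient in mag/dex; a negative value means the galaxy
    becomes bluer outwards, as is typical for early-type galaxies.
    """
    x = np.log10(np.asarray(radii_kpc, dtype=float) / r_eff)
    y = np.asarray(g_minus_r, dtype=float)
    slope, _intercept = np.polyfit(x, y, 1)   # straight-line fit in (log R, colour)
    return slope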

    Geometry of Online Packing Linear Programs

    We consider packing LPs with m rows where all constraint coefficients are normalized to lie in the unit interval. The n columns arrive in random order, and the goal is to set the corresponding decision variables irrevocably as they arrive so as to obtain a feasible solution maximizing the expected reward. Previous (1 - \epsilon)-competitive algorithms require the right-hand side of the LP to be Omega((m/\epsilon^2) log (n/\epsilon)), a bound that worsens with the number of columns and rows. However, the dependence on the number of columns is not required in the single-row case, and known lower bounds for the general case are also independent of n. Our goal is to understand whether the dependence on n is required in the multi-row case, making it fundamentally harder than the single-row version. We refute this by exhibiting an algorithm which is (1 - \epsilon)-competitive as long as the right-hand sides are Omega((m^2/\epsilon^2) log (m/\epsilon)). Our techniques refine previous PAC-learning based approaches, which interpret the online decisions as linear classifications of the columns based on sampled dual prices. The key ingredient of our improvement is a non-standard covering argument, together with the realization that only when the columns of the LP belong to few 1-d subspaces can we obtain such small covers; bounding the size of the cover constructed also relies on the geometry of linear classifiers. General packing LPs are handled by perturbing the input columns, which can be seen as making the learning problem more robust.
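    As a concrete, simplified illustration of the dual-price idea that the abstract refines, the sketch below samples an \epsilon-fraction of the columns, solves the scaled sample LP to learn dual prices, and then accepts each later column only if its reward exceeds the priced cost of the resources it would consume. This is a generic one-shot learning variant in the random-order model, not the algorithm of the paper, and the function name and interface are hypothetical:

import numpy as np
from scipy.optimize import linprog

def online_packing_sketch(rewards, columns, b, eps, rng=None):
    """One-shot dual-price learning for an online packing LP (illustration).

    rewards : (n,) rewards r_j
    columns : (m, n) constraint coefficients, each in [0, 1]
    b       : (m,) right-hand sides
    eps     : fraction of columns observed before committing to prices
    """
    if rng is None:
        rng = np.random.default_rng(0)
    rewards, columns, b = map(np.asarray, (rewards, columns, b))
    m, n = columns.shape
    order = rng.permutation(n)            # columns arrive in uniformly random order
    k = max(1, int(eps * n))
    sample = order[:k]

    # Learn dual prices from the sample: max r_S^T x  s.t.  A_S x <= eps*b, 0 <= x <= 1.
    res = linprog(-rewards[sample], A_ub=columns[:, sample], b_ub=eps * b,
                  bounds=(0, 1), method="highs")
    prices = -res.ineqlin.marginals       # one nonnegative price per resource (row)

    used = np.zeros(m)
    total = 0.0
    for j in order[k:]:
        a_j = columns[:, j]
        # Accept only if the reward beats the priced resource cost and capacity remains.
        if rewards[j] > prices @ a_j and np.all(used + a_j <= b):
            used += a_j
            total += rewards[j]
    return total

    In the paper, the competitive analysis of this kind of rule hinges on covering the set of dual-price classifiers that can arise, which is where the geometric argument about columns lying in few 1-d subspaces enters.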