    Approximation Algorithms for Resource Allocation

    This thesis is devoted to designing new techniques and algorithms for combinatorial optimization problems arising in various applications of resource allocation. Resource allocation refers to a class of problems in which scarce resources must be distributed among competing agents subject to certain optimization criteria. Examples include scheduling jobs on one or multiple machines while maintaining system performance; assigning advertisements to bidders, or items to people, to maximize profit or social fairness; and allocating servers or channels to satisfy networking requirements. Together these comprise a wide variety of combinatorial optimization problems. However, a majority of these problems are NP-hard, and the goal is therefore to develop polynomial-time approximation algorithms that come as close to the optimal solution as possible. The thesis addresses two main directions. First, we develop several new techniques, most prominently a new linear programming rounding methodology and a constructive version of a well-known probabilistic method, the Lovász Local Lemma (LLL). Second, we apply these techniques to resource allocation problems, obtaining substantial improvements over known results. Our research also opens new directions of study: we introduce new models for achieving energy efficiency in scheduling and a novel framework for assigning advertisements in cellular networks, both of which lead to a variety of interesting questions. Our linear programming rounding methodology is a significant generalization of two major rounding approaches in the theory of approximation algorithms, namely dependent rounding and iterative relaxation. Our constructive version of the LLL yields the first algorithmic results for many combinatorial problems; in addition, it settles a major open question by giving a constant-factor approximation algorithm for the Santa Claus problem, an NP-hard resource allocation problem that has received much attention in recent years. Throughout the thesis, we study a number of applications related to scheduling jobs on unrelated parallel machines, such as provisionally shutting down machines to save energy, selectively dropping outliers to improve system performance, and handling machines with hard capacity bounds on the number of jobs they can process. Hard capacity constraints arise naturally in many other applications and often render an otherwise simple combinatorial optimization problem difficult. In this thesis, we encounter many such instances of hard capacity constraints, namely in budgeted allocation of advertisements for cellular networks, in overlay network design, and in classical problems such as vertex cover, set cover, and k-median.
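    To make the linear programming component concrete, here is a minimal sketch (illustrative only, not the thesis's rounding procedure or its LLL-based algorithm) of the standard assignment LP relaxation behind the Santa Claus problem: fractionally assign items to kids so that the minimum total value any kid receives is maximized. The value matrix V and the use of scipy.optimize.linprog are assumptions made for this demonstration.

        import numpy as np
        from scipy.optimize import linprog

        # Illustrative (hypothetical) values: V[i, j] = value of item j to kid i.
        V = np.array([[4.0, 0.0, 2.0],
                      [0.0, 3.0, 2.0]])
        n, m = V.shape  # n kids, m items

        # Variables: x[i*m + j] = fraction of item j given to kid i,
        # plus one extra variable t = value received by the least-happy kid.
        c = np.zeros(n * m + 1)
        c[-1] = -1.0  # linprog minimizes, so maximize t by minimizing -t

        A_ub, b_ub = [], []
        for i in range(n):  # coverage: t <= sum_j V[i, j] * x[i, j] for each kid i
            row = np.zeros(n * m + 1)
            row[i * m:(i + 1) * m] = -V[i]
            row[-1] = 1.0
            A_ub.append(row)
            b_ub.append(0.0)
        for j in range(m):  # packing: each item is handed out at most once
            row = np.zeros(n * m + 1)
            for i in range(n):
                row[i * m + j] = 1.0
            A_ub.append(row)
            b_ub.append(1.0)

        bounds = [(0, 1)] * (n * m) + [(0, None)]  # x in [0, 1], t >= 0
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
        print("fractional max-min value:", -res.fun)  # 4.5 for the matrix above

    Rounding such a fractional solution to an integral assignment without losing too much in the min-value objective is precisely where techniques like dependent rounding and the constructive LLL come into play.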

    Inference And Learning: Computational Difficulty And Efficiency

    In this thesis, we mainly investigate two collections of problems: statistical network inference and model selection in regression. The common feature shared by these two types of problems is that they typically exhibit an interesting phenomenon in terms of computational difficulty and efficiency. For statistical network inference, our goal is to infer the network structure based on a noisy observation of the network. Statistically, we model the network as generated from the structural information in the presence of noise, for example by the planted submatrix model (for bipartite weighted graphs), the stochastic block model, and the Watts-Strogatz model. As the relative amount of "signal-to-noise" varies, the problems exhibit different stages of computational difficulty. On the theoretical side, we investigate these stages by characterizing the transition thresholds on the "signal-to-noise" ratio for the aforementioned models. On the methodological side, we provide new computationally efficient procedures to reconstruct the network structure for each model. For model selection in regression, our goal is to learn a "good" model from a certain model class based on the observed data sequences (feature and response pairs), even when the model is misspecified. More concretely, we study two model selection problems: learning from general classes of functions based on i.i.d. data with minimal assumptions, and selecting from the sparse linear model class based on possibly adversarially chosen data in a sequential fashion. We develop new theoretical and algorithmic tools beyond empirical risk minimization to study these problems from a learning-theory point of view.
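    As a concrete illustration of the network-inference side, here is a minimal sketch (not the thesis's procedure; the size and edge probabilities are assumptions chosen for the demonstration) of spectral recovery in a two-community stochastic block model, where the sign pattern of the second eigenvector of the adjacency matrix estimates the planted partition.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p, q = 200, 0.20, 0.05           # illustrative size and edge probabilities
        labels = np.repeat([0, 1], n // 2)  # planted two-community partition

        # Sample a symmetric adjacency matrix from the stochastic block model:
        # same-community pairs connect with probability p, cross pairs with q.
        probs = np.where(labels[:, None] == labels[None, :], p, q)
        coin = rng.random((n, n)) < probs
        upper = np.triu(coin, 1)
        A = (upper | upper.T).astype(float)

        # With p > q, the eigenvector for the second-largest eigenvalue of A
        # is correlated with the partition; its signs give a community estimate.
        _, vecs = np.linalg.eigh(A)         # eigenvalues in ascending order
        guess = (vecs[:, -2] > 0).astype(int)

        # Agreement with the truth, up to swapping the two community labels.
        acc = max(np.mean(guess == labels), np.mean(guess != labels))
        print(f"recovered {acc:.1%} of the planted labels")

    Shrinking the gap between p and q weakens the spectral signal until recovery fails, which is one instance of the "signal-to-noise" transitions the abstract describes.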