
    Approximation algorithms for clustering and facility location problems

    In this thesis we design and analyze algorithms for various facility location and clustering problems. The problems we study are NP-hard; therefore, assuming P ≠ NP, no polynomial-time algorithm solves them optimally. One approach to coping with this intractability is to design approximation algorithms, which run in polynomial time and output a near-optimal solution on every instance. However, such algorithms do not always work well in practice, and heuristics with no explicit approximation guarantee often perform quite well. To bridge this gap between theory and practice, and to design algorithms tuned for instances arising in practice, there is an increasing emphasis on beyond-worst-case analysis. In this thesis we pursue both approaches. In the first part we design worst-case approximation algorithms for the Uniform Submodular Facility Location (USFL) and Capacitated k-center (CapKCenter) problems. USFL is a generalization of the well-known Uncapacitated Facility Location problem: the cost of opening a facility is a submodular function of the clients assigned to it (the function is identical for all facilities). We show that a natural greedy algorithm (which gives a constant-factor approximation for Uncapacitated Facility Location and other facility location problems) has a lower bound of log(n), where n is the number of clients. We present an O(log^2 k)-approximation algorithm, where k is the number of facilities, based on rounding a convex relaxation, and we give improved approximation bounds for several special cases. The CapKCenter problem extends the well-known k-center problem: each facility has a maximum capacity on the number of clients that can be assigned to it. We obtain a 9-approximation for this problem via a linear programming (LP) rounding procedure.
Our result, combined with previously known lower bounds, almost settles the integrality gap of a natural LP relaxation. In the second part we consider several well-known clustering problems (k-center, k-median, k-means) and their outlier variants. Given the practical relevance of these problems, we turn to beyond-worst-case analysis. In particular, we show that when input instances are 2-perturbation resilient (i.e., the optimal solution does not change when distances are perturbed by a multiplicative factor of up to 2), the LP integrality gap for k-center (and also asymmetric k-center) is 1. We further introduce a model of perturbation resilience for clustering with outliers. Under this new model, we show that previous results (including our LP integrality result) for clustering under perturbation resilience extend to clustering with outliers. This leads to a dynamic-programming-based heuristic for k-means with outliers (k-means-outlier) that returns an optimal solution when the instance is 2-perturbation resilient. We propose two more algorithms for k-means-outlier: a sampling-based algorithm that gives an O(1) approximation when the optimal clusters are not "too small", and an LP rounding algorithm that gives an O(1) approximation at the expense of violating the numbers of clusters and outliers by a small constant. We empirically evaluate the proposed algorithms on several clustering datasets.
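    As context for the k-center objective above: for the uncapacitated problem, Gonzalez's farthest-first traversal is a classical 2-approximation. A minimal sketch (illustrative only; it handles neither capacities nor the thesis's LP-based algorithms):

```python
def farthest_first_kcenter(points, k, dist):
    """Gonzalez's farthest-first traversal: a classical 2-approximation
    for uncapacitated k-center (illustrative; not the thesis's algorithm)."""
    centers = [points[0]]                      # arbitrary first center
    d = [dist(p, centers[0]) for p in points]  # distance to nearest chosen center
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: d[j])  # farthest point
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)  # chosen centers and resulting covering radius

# toy usage on points of a line with the absolute-difference metric
pts = [0.0, 1.0, 2.0, 10.0, 11.0, 20.0]
centers, radius = farthest_first_kcenter(pts, 3, lambda a, b: abs(a - b))
```

Each new center is the point currently farthest from all chosen centers, so the final covering radius is at most twice the optimal k-center radius.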

    Regulation and Capacity Competition in Health Care: Evidence from U.S. Dialysis Markets

    This paper studies entry and capacity decisions by dialysis providers in the United States. We estimate a structural model in which providers make continuous strategic capacity choices based on private information about their own costs and knowledge of the distribution of competitors' private information. We evaluate the impact on market structure and providers' profits of counterfactual regulatory policies that increase costs or reduce the payment per unit of capacity. We find that these policies reduce market capacity as measured by the number of dialysis stations; however, the downward-sloping reaction curve shields some providers from negative profit shocks in certain markets. The paper also makes a methodological contribution by proposing new estimators for Bayesian games with continuous actions.
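    To make "Bayesian game with continuous actions" concrete, consider a textbook example (not the paper's model): a Cournot duopoly with linear inverse demand and privately known marginal costs, where the symmetric Bayesian-Nash equilibrium strategy is linear in a firm's own cost. All parameter values below are illustrative:

```python
import statistics

# Cournot duopoly with linear inverse demand P = a - b*(q1 + q2) and
# privately known marginal costs: a textbook Bayesian game with
# continuous actions. All parameter values are illustrative.
a, b = 10.0, 1.0
costs = [1.0, 2.0, 3.0]          # equally likely private cost draws
Ec = statistics.mean(costs)      # common-knowledge expected cost

def bne_quantity(c):
    """Closed-form symmetric Bayesian-Nash equilibrium strategy."""
    return a / (3 * b) + Ec / (6 * b) - c / (2 * b)

def best_response(c_i):
    """Grid-search best response to an opponent playing bne_quantity."""
    def expected_profit(q):
        return statistics.mean(
            (a - b * (q + bne_quantity(c_j)) - c_i) * q for c_j in costs)
    return max((q / 1000 for q in range(10001)), key=expected_profit)

# the grid best response matches the closed form up to grid resolution
for c in costs:
    assert abs(best_response(c) - bne_quantity(c)) < 1e-2
```

The closed form follows from the first-order condition q_i = (a - c_i - b·E[q_j]) / (2b) combined with a symmetric linear guess for the opponent's strategy.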

    Data-Driven Methods for Demand-Side Flexibility in Energy Systems


    Non-cooperative facility location games and cost perception

    Advisors: Eduardo Candido Xavier, Guido Schäfer. Ph.D. thesis, Universidade Estadual de Campinas, Instituto de Computação.
    This Ph.D. thesis covers the intersection between facility location problems and non-cooperative algorithmic game theory, with emphasis on changes in players' cost perception and their effect on the quality of equilibria. The facility location problem is one of the fundamental problems in combinatorial optimization. In its classic version, there is a set of terminals and a set of facilities, and each terminal must be connected to a facility in order for goods or services to be provided. The objective is to minimize the total cost of opening facilities and connecting all terminals to them.
In practice, there are many scenarios where it is infeasible or undesirable for a single central authority to decide which facilities terminals connect to. It is therefore important to study how the independence of these terminals may affect social efficiency and computational complexity in such scenarios. Algorithmic game theory, in particular its non-cooperative branch, is well suited to this analysis: it bridges theoretical computer science and game theory, and asks how hard it is to compute equilibria, how much social welfare can be lost to player selfishness, and how to design mechanisms that align players' best interests with the social optimum. In this thesis we study non-cooperative facility location games and several of their variants. We focus on the existence of pure Nash equilibria and on the main measures of efficiency loss, the price of anarchy and the price of stability. We present a review of the most important findings for the basic variants and prove new results where none were known. For the capacitated version of these games, we show that while simultaneous play may lead to unbounded loss of efficiency, the loss becomes bounded when players move sequentially. We also investigate how changes in players' cost perception can affect the efficiency loss of these games in two ways: through altruistic players and through tolling schemes. In the former, we adapt results from fair cost-sharing games and present new results for a version with no cost-sharing rules. In the latter, we propose a model of altered cost perception in which players account for an additional toll on their connections when computing their best responses.
We present bounds on the total toll cost in the minimum toll problem, where the objective is to find the minimum amount of tolls needed to ensure that a given socially optimal strategy profile is chosen by the players. We give algorithms that find optimal tolls for this problem in special cases, and we relate it to a matching problem which we prove is NP-hard.
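    The notions of pure Nash equilibrium and price of anarchy can be illustrated on a toy fair-cost-sharing facility location instance (a generic textbook example, not one of the thesis's games or results): each player picks one facility and pays its connection cost plus an equal share of its opening cost.

```python
from itertools import product

# Toy instance: 3 players, 2 facilities. All values are illustrative.
open_cost = [4.0, 0.0]                        # opening costs of facilities 0, 1
conn = [[0.0, 2.0], [0.0, 2.0], [0.0, 2.0]]   # conn[player][facility]

def player_cost(profile, i):
    f = profile[i]
    return conn[i][f] + open_cost[f] / profile.count(f)  # equal cost sharing

def social_cost(profile):
    return sum(open_cost[f] for f in set(profile)) + \
        sum(conn[i][profile[i]] for i in range(len(profile)))

def is_pure_nash(profile):
    """No player can strictly decrease its cost by deviating unilaterally."""
    return all(
        player_cost(profile, i) <=
        player_cost(profile[:i] + (f,) + profile[i+1:], i) + 1e-12
        for i in range(len(profile)) for f in range(len(open_cost)))

profiles = list(product(range(2), repeat=3))
equilibria = [p for p in profiles if is_pure_nash(p)]
opt = min(social_cost(p) for p in profiles)
poa = max(social_cost(p) for p in equilibria) / opt  # price of anarchy
```

Here (0, 0, 0) and (1, 1, 1) are both pure Nash equilibria; the optimum costs 4 while the worst equilibrium costs 6, so the price of anarchy of this instance is 1.5.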

    Long Range Automated Persistent Surveillance

    This dissertation addresses long-range automated persistent surveillance, with focus on three topics: sensor planning, size-preserving tracking, and high-magnification imaging. In sensor planning, sufficient overlap between cameras' fields of view must be reserved so that camera handoff can be executed before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures uniform and sufficient overlap between cameras' fields of view for an optimal handoff success rate. The algorithm works in environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are demonstrated in experiments using floor plans of various scales. Size-preserving tracking automatically adjusts the camera's zoom for a consistent view of the object of interest. Target scale estimation is carried out with a paraperspective projection model that compensates for the center offset and accounts for system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed; it features direct, real-time implementation and improved flexibility in accommodating the target's 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnification and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation, and a wavelet-based enhancement algorithm with automated frame selection is developed; it considerably elevates the recognition rate for severely blurred long-range face images.
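    A simple, generic sharpness score of the kind such quality-assessment pipelines build on is the mean squared Sobel gradient magnitude (Tenengrad); the dissertation's adaptive measures and wavelet-based enhancement are more elaborate. A minimal sketch:

```python
def tenengrad_sharpness(img):
    """Mean squared Sobel gradient magnitude over interior pixels.
    Blurrier images score lower. Generic baseline, not the
    dissertation's adaptive measures. img is a 2-D list of gray levels."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            total += gx * gx + gy * gy
    return total / ((h - 2) * (w - 2))

# a sharp step edge scores higher than a blurred ramp of the same range
sharp = [[0]*4 + [255]*4 for _ in range(8)]
blurred = [[0, 32, 64, 96, 159, 191, 223, 255] for _ in range(8)]
```

Scores like this can rank frames by blur before recognition, which is the role frame selection plays in the enhancement pipeline described above.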

    Tools and analysis of spatio-temporal dynamics in heterogeneous aquifers: applications to artificial recharge and forced-gradient solute transport

    This thesis develops tools and analyses to characterize and predict artificial recharge and radial convergent solute transport processes in heterogeneous media. The goal is to provide new insight into how heterogeneity, the main natural source of uncertainty in decision-making related to groundwater applications, can be controlled and its effects predicted for practical purposes. Hydrogeological applications require accurate modeling of phenomena, but such models are inherently uncertain. Uncertainty derives from the spatio-temporal random distribution of the hydrodynamic (physical, chemical and biological) variables affecting groundwater processes, which translates into randomness in model parameters and equations. Such randomness is of two types: epistemic, when it can be reduced by increasing the sampling frequency of an experiment; and aleatory, when it cannot be reduced no matter how much additional information is analyzed. Some hydrodynamic processes occur at such small scales that they are impossible to characterize with traditional methods; from a practical perspective, this is analogous to dealing with aleatory model parameters. However, if a constitutive relationship (empirical, theoretical or physically based) can be built between processes across scales, then small-scale processes can be reproduced by equivalent large-scale model parameters. The uncertainty then becomes treatable as epistemic randomness, and large-scale characterization techniques can improve the description, interpretation and prediction of these processes. The manuscript is composed of two main parts, the first on artificial recharge and the second on solute transport, each divided into three chapters. In chapter 1 of each part, a tool is developed to obtain quantitative information for modeling a selected variable at coarse grid resolution.
In the case of artificial recharge, satellite images are used to model the spatial variability of the infiltration capacity of topsoils at metric-scale detail. In the case of solute transport, a new method to estimate density from a particle distribution is presented. Chapter 2 of each part explores which fine-scale processes can affect the interpretation of artificial recharge and solute transport at larger scales. In the first part, a method that combines satellite images and field data with a simple clogging model is used to produce equally probable spatio-temporal maps of topsoil infiltration capacity during artificial pond flooding. In the second part, three-dimensional numerical models simulate transport in heterogeneous media under convergent radial flow to a well at fine scale. It is shown that an appropriate model framework can reproduce contaminant temporal distributions at a control section similar to those observed in field tracer tests, and a physical explanation is provided for the so-called anomalous late-time behavior of breakthrough curves sometimes observed in reality at larger scales. In chapter 3 of each part, models are used to quantify the uncertainty around operating parameters for prediction and management of artificial recharge and solute transport. In the first case, a probabilistic framework defines the engineering risk of managing artificial recharge ponds due to random variability in the initial distribution of infiltration, which controls several important clogging factors, based on theoretical approaches. In the case of solute transport, it is discussed how equivalent parameters based on mass-transfer models can be related to the geometric distribution of hydraulic parameters in anisotropic formations when convergent-flow tracer tests are used.
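    On estimating density from a particle distribution: the listing does not detail the thesis's estimator, but a standard baseline is a Gaussian kernel density estimate over particle positions. A one-dimensional sketch (bandwidth and positions are illustrative):

```python
import math

def gaussian_kde(particles, x, h):
    """Gaussian kernel density estimate at x from particle positions.
    Standard baseline, not the thesis's estimator. h is the bandwidth."""
    norm = len(particles) * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in particles) / norm

# sanity check: the estimated density integrates to ~1 (Riemann sum)
parts = [0.0, 0.5, 1.0, 1.5, 2.0]   # illustrative particle positions
dx = 0.01
mass = sum(gaussian_kde(parts, -4.0 + i * dx, 0.3) * dx for i in range(1001))
```

In particle-tracking transport simulations, such a smoothed density over particle positions stands in for the solute concentration field.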

    The deep space network

    A Deep Space Network progress report is presented covering in-flight project support; tracking and data acquisition research and technology; network engineering; hardware and software implementation; and operations.

    Multiscale Simulation of Thermocline Energy Storage for Concentrating Solar Power

    Concentrating solar power (CSP) is a renewable and demonstrated technology for large-scale power generation but requires multiple engineering advancements to achieve grid parity with conventional fossil fuels. Part of this advancement includes novel and inexpensive thermal energy systems to decouple daily power production from intermittent solar collection. Dual-media thermocline tanks, composed of molten salt and solid rock filler, offer low-cost storage capability but the concept has experienced limited deployment in CSP plants due to unresolved concerns about long-term thermal and structural stability. The main objective of the present work is to advance the understanding of thermocline storage design and operation necessary for future commercial implementations. A multiscale numerical approach is conducted to investigate tank behavior at both a device level for comprehensive short-term analysis and at a system-level for reduced-order long-term analysis. A computational fluid dynamics (CFD) model is first developed to simulate molten-salt thermocline tanks in response to cyclic charge and discharge modes of operation. The model builds upon previous work in the literature with an expanded study of the internal solid filler size as well as added consideration for practical limits on tank height. Reducing the internal filler size improves thermal stratification inside the tank but decreases the bed permeability, resulting in a design tradeoff between storage performance and required pumping power. An effective rock diameter of 1 cm is found to be the most practical selection among the sizes considered. Also of interest is the structural stability of the thermocline tank wall in response to large temperature fluctuations associated with repeated charging and discharging. If sufficient hoop stress is generated from storage cycles, the tank becomes susceptible to failure via thermal ratcheting. 
The thermocline tank model is therefore extended to predict wall stress associated with operation and determine whether ratcheting is expected to occur. Analysis is first performed with a multilayer structure to identify stable tank wall designs. Including internal thermal insulation between the porous bed and the steel wall is found to best prevent thermal ratcheting, by decoupling the thermal response of the wall from the interior salt behavior. The structural modeling approach is then validated against a simulation of the 182 MWht thermocline tank installed at the historic Solar One power tower plant. The hoop stress predictions show reasonable agreement with reported strain gage data along the tank wall and verify that the tank was not susceptible to ratcheting. Commercial CFD software provides comprehensive thermocline tank solutions, but its high computing cost constrains application across different operating scenarios. A new reduced-order model of energy transport inside a thermocline tank is therefore developed to provide thermal solutions at much lower computational cost. The storage model is first validated against past experimental data and then integrated into a system model of a 100 MWe molten-salt power tower plant, so that the thermocline tank is subjected to realistic solar collection and power production processes. Results from the system-level approach verify that a thermocline tank remains an effective and viable energy storage system over long-term operation within a CSP plant. The system-level analysis is then extended with an economic assessment of thermocline storage in a power tower plant. A parametric study of the plant solar multiple and thermocline tank size highlights suitable plant designs that minimize the levelized cost of electricity.
Among the cases considered, a minimum levelized cost of 12.2 ¢/kWhe is achieved, indicating that cost reductions outside of thermal energy storage remain necessary to reach grid parity. As a sensible-heat storage method, dual-media thermocline tanks remain subject to low energy densities and require large tank volumes. One design modification to reduce tank size is substituting the internal rock filler with an encapsulated phase-change material (PCM), which adds a high-density latent heat storage mechanism to the tank assembly. The reduced-order thermocline tank model is updated to include capsules of a hypothetical PCM and reintegrated into the power tower plant system model. A single PCM inside the tank does not yield significant energy storage gains because of an inherent tradeoff between the thermodynamic quality (i.e., melting temperature and heat of fusion) of the added latent heat and its utilization during storage operations. This problem may be circumvented with a cascaded filler structure composed of multiple PCMs whose melting temperatures are tuned along the tank height; however, the benefit of a cascade is highly sensitive to the choice of PCM melting points relative to the thermocline tank's operating temperatures.
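    The flavor of a reduced-order thermocline model can be conveyed by a one-dimensional plug-flow sketch of the charging process: hot fluid enters the top of the tank and displaces cold fluid downward. This ignores fluid-filler heat exchange, wall losses, and everything else the thesis's model resolves; all parameter values are illustrative.

```python
N = 100                        # axial cells, index 0 = top of tank
T_hot, T_cold = 574.0, 290.0   # inlet / initial temperatures (deg C, illustrative)
T = [T_cold] * N               # initial tank state (fully discharged)
cfl = 0.5                      # Courant number u*dt/dz, kept <= 1 for stability

def charge_step(T):
    """One explicit upwind advection step with a fixed hot-inlet boundary."""
    new = T[:]
    new[0] = T[0] + cfl * (T_hot - T[0])
    for i in range(1, len(T)):
        new[i] = T[i] + cfl * (T[i-1] - T[i])
    return new

for _ in range(100):           # charge: the thermal front travels ~cfl cells/step
    T = charge_step(T)

# top of the tank is hot, bottom still cold, profile monotone in between
assert T[0] > 570.0 and T[-1] < 300.0
assert all(T[i] >= T[i+1] for i in range(N - 1))
```

The monotone hot-to-cold transition that forms between the inlet and outlet is the thermocline itself; upwind differencing smears it slightly, standing in for the physical mixing a full model would resolve.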

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France
