
    Unreliable point facility location problems on networks

    In this paper we study facility location problems on graphs under the most common criteria, such as median, center, and centdian, but we incorporate reliability aspects into the objective function. Assuming that facilities may become unavailable with a certain probability, the problem consists of locating facilities so as to minimize the overall or the maximum expected service cost in the long run, or a convex combination of the two. We show that the k-facility problem on general networks is NP-hard. We then provide efficient algorithms for these problems for the cases k = 1, 2, both on general networks and on trees. We also explain how our methodology extends to handle a more general class of unreliable point facility location problems related to the ordered median objective function.
    Funding: Ministerio de Ciencia y Tecnología; Junta de Andalucía.
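
    To make the objective concrete: with a uniform, independent failure probability q (an assumption here; the model allows facility-specific probabilities), the expected median cost of a set of open facilities can be evaluated by enumerating the 2^k failure scenarios. The sketch below is a brute-force illustration of the objective, not the paper's algorithm; dist, demands, and penalty are hypothetical names.

        from itertools import product, combinations

        def expected_median_cost(dist, demands, facilities, q, penalty):
            """Expected total service cost when each open facility fails
            independently with probability q; clients use the nearest
            operational facility, paying `penalty` if all have failed."""
            total = 0.0
            for c, w in enumerate(demands):
                for states in product([0, 1], repeat=len(facilities)):
                    prob = 1.0
                    for s in states:
                        prob *= (1 - q) if s else q
                    alive = [f for f, s in zip(facilities, states) if s]
                    cost = min((dist[c][f] for f in alive), default=penalty)
                    total += w * prob * cost
            return total

        def best_k_locations(dist, demands, k, q, penalty):
            """Brute-force search over all k-subsets of candidate sites."""
            n = len(dist)
            return min(combinations(range(n), k),
                       key=lambda S: expected_median_cost(dist, demands, S, q, penalty))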

    Median problems in networks

    The P-median problem is a classical location model “par excellence”. In this paper we first examine the early origins of the problem, formulated independently by Louis Hakimi and Charles ReVelle, two of the fathers of the burgeoning multidisciplinary field of research known today as Facility Location Theory and Modelling. We then examine some of the traditional heuristic and exact methods developed to solve the problem. In the third section we analyze the impact of the model on the field. We end the paper by proposing new lines of research related to this classical problem.
    Keywords: P-median, location modelling.
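
    As a reminder of the model: the P-median objective is the total weighted distance from every demand node to its nearest open facility. A minimal sketch, paired with a classical greedy-add heuristic of the kind such surveys discuss; all names are illustrative.

        def p_median_cost(dist, demands, medians):
            """Total weighted distance from every demand node to its
            nearest open facility -- the classical P-median objective."""
            return sum(w * min(dist[v][m] for m in medians)
                       for v, w in enumerate(demands))

        def greedy_p_median(dist, demands, p):
            """Greedy-add heuristic: repeatedly open the facility that
            most reduces the current objective (no optimality guarantee)."""
            medians = []
            for _ in range(p):
                best = min((v for v in range(len(dist)) if v not in medians),
                           key=lambda v: p_median_cost(dist, demands, medians + [v]))
                medians.append(best)
            return medians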

    On the Parameterized Complexity of the Expected Coverage Problem

    The MAXIMUM COVERING LOCATION PROBLEM (MCLP) is a well-studied problem in the field of operations research. Given a network with positive or negative demands on the nodes and a positive integer k, the MCLP seeks k potential facility centers in the network such that the neighborhood coverage is maximized. We study the variant of MCLP in which the edges of the network are subject to random failures due to disruptive events. One popular model capturing the unreliable nature of facility location is the linear reliability ordering (LRO) model. In this model, with every edge e of the network we associate its survival probability 0 ≤ p_e ≤ 1, or equivalently its failure probability 1 − p_e. The failure correlation in LRO is the following: if an edge e fails, then every edge e′ with p_{e′} ≤ p_e surely fails. The task is to identify the positions of k facilities that maximize the expected coverage. We refer to this problem as the EXPECTED COVERAGE problem. We study the EXPECTED COVERAGE problem from the parameterized complexity perspective and obtain the following results. 1. For the parameter pathwidth, we show that the EXPECTED COVERAGE problem is W[1]-hard. We find this result somewhat surprising, because the variant of the problem with non-negative demands is fixed-parameter tractable (FPT) parameterized by the treewidth of the input graph. 2. We complement the lower bound by proving that EXPECTED COVERAGE is FPT parameterized by the treewidth and the maximum vertex degree combined. We give an algorithm that solves the problem in time 2^{O(tw log Δ)} · n^{O(1)}, where tw is the treewidth, Δ is the maximum vertex degree, and n is the number of vertices of the input graph. In particular, since Δ ≤ n, the problem is solvable in time n^{O(tw)}, that is, it is in XP parameterized by treewidth.
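
    The LRO correlation makes expected coverage easy to evaluate for a fixed facility set: because the survival events are nested, the model can be realized by drawing one uniform threshold t and keeping exactly the edges with p_e > t, so the expectation reduces to a finite sum over the intervals between distinct probability values. A sketch under the assumption that "neighborhood coverage" means closed-neighborhood coverage; all names are illustrative.

        def expected_coverage(edges, demands, S):
            """Expected demand covered by facility set S under the LRO model.
            `edges` is a list of (u, v, p) with survival probability p.
            LRO's nested failures are realized by one uniform threshold t:
            exactly the edges with p > t survive, so the expectation is a
            weighted sum over the intervals between distinct p-values."""
            cuts = sorted({0.0, 1.0} | {p for _, _, p in edges})
            total = 0.0
            for lo, hi in zip(cuts, cuts[1:]):
                t = (lo + hi) / 2          # any point inside the interval
                alive = [(u, v) for u, v, p in edges if p > t]
                covered = set(S)           # facilities cover themselves
                for u, v in alive:
                    if u in S: covered.add(v)
                    if v in S: covered.add(u)
                total += (hi - lo) * sum(demands[x] for x in covered)
            return total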

    Certified Computation from Unreliable Datasets

    A wide range of learning tasks require human input in labeling massive data. The collected data, though, are usually of low quality and contain inaccuracies and errors. As a result, modern science and business face the problem of learning from unreliable data sets. In this work, we provide a generic approach that is based on verification of only a few records of the data set to guarantee high-quality learning outcomes for various optimization objectives. Our method identifies small sets of critical records and verifies their validity. We show that many problems need only poly(1/ε) verifications to ensure that the output of the computation is at most a factor of (1 ± ε) away from the truth. For any given instance, we provide an instance-optimal solution that verifies the minimum possible number of records to approximately certify correctness. Using this instance-optimal formulation of the problem, we then prove our main result: every function that satisfies a certain Lipschitz continuity condition can be certified with a small number of verifications. We show that the required Lipschitz continuity condition is satisfied even by some NP-complete problems, which illustrates the generality and importance of this theorem. If the certification step fails, an invalid record is identified; removing such records and repeating until success guarantees that the result is accurate and depends only on the verified records. Surprisingly, as we show, more efficient methods are possible for several computation tasks. These methods always guarantee that the produced result is not affected by the invalid records, since any invalid record that affects the output will be detected and verified.
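
    A toy instance of the verify-and-repeat idea (not the paper's general algorithm): when the task is to compute the minimum over a set of unreliable records, only the record attaining the current minimum can affect the output, so it is the only one that ever needs checking; verify(r) stands for a hypothetical validity oracle.

        def certified_min(records, verify):
            """Verify-and-repeat for a minimum over unreliable records.
            `verify(r)` returns True iff record r is valid.  Only records
            that could change the answer (the current minimum) are ever
            verified, so the output depends only on verified records."""
            data = list(records)
            while data:
                candidate = min(data)
                if verify(candidate):        # one verification certifies the output
                    return candidate
                data.remove(candidate)       # invalid record exposed; drop and retry
            raise ValueError("no valid records")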

    Locating and Protecting Facilities Subject to Random Disruptions and Attacks

    Recent events such as the 2011 Tohoku earthquake and tsunami in Japan have revealed the vulnerability of networks such as supply chains to disruptive events. In particular, it has become apparent that the failure of a few elements of an infrastructure system can cause a system-wide disruption. Thus, it is important to learn which elements of infrastructure systems are most critical and how to protect an infrastructure system from the effects of a disruption. This dissertation seeks to enhance the understanding of how to design and protect networked infrastructure systems by developing new mathematical models and solution techniques and using them to generate new decision-making insights. Several gaps exist in the body of knowledge concerning how to design and protect networks that are subject to disruptions. First, there is a lack of insight into how to make equitable design decisions for networks subject to disruptions; this matters in public-sector decision-making, where solutions must be equitable across multiple stakeholders. Second, there is a lack of models that integrate system design and system protection decisions; such models are needed to understand the benefit of integrating design and protection decisions. Finally, most of the literature makes several key assumptions: 1) protection of infrastructure elements is perfect, 2) an element is either fully protected or fully unprotected, and 3) after a disruption, facilities are either completely operational or completely failed. While these may be reasonable assumptions in some contexts, they are limiting in others. Filling these gaps presents several difficulties. This dissertation describes the mathematical formulations developed to fill these gaps, as well as the identification of appropriate solution strategies.

    Budgeted Dominating Sets in Uncertain Graphs

    We study the Budgeted Dominating Set (BDS) problem on uncertain graphs, namely, graphs with a probability distribution p associated with the edges, such that an edge e exists in the graph with probability p(e). The input to the problem consists of a vertex-weighted uncertain graph G = (V, E, p, ω) and an integer budget (or solution size) k, and the objective is to compute a vertex set S of size k that maximizes the expected total domination (or total weight) of vertices in the closed neighborhood of S. We refer to the problem as the Probabilistic Budgeted Dominating Set (PBDS) problem. In this article, we present the following results on the complexity of the PBDS problem. 1) We show that the PBDS problem is NP-complete even when restricted to uncertain trees of diameter at most four. This is in sharp contrast with the well-known fact that the BDS problem is solvable in polynomial time on trees. We further show that PBDS is W[1]-hard for the budget parameter k, and under the Exponential Time Hypothesis it cannot be solved in n^{o(k)} time. 2) We show that if one is willing to settle for a (1-ε) approximation, then there exists a PTAS for PBDS on trees. Moreover, for the scenario of uniform edge probabilities, the problem can be solved optimally in polynomial time. 3) We consider the parameterized complexity of the PBDS problem and show that Uni-PBDS (where all edge probabilities are identical) is W[1]-hard for the parameter pathwidth. On the other hand, we show that it is FPT in the combined parameters of the budget k and the treewidth. 4) Finally, we extend some of our parameterized results to planar and apex-minor-free graphs. Our first hardness proof (Thm. 1) makes use of the new problem of k-Subset Σ-Π Maximization (k-SPM), which we believe is of independent interest. We prove its NP-hardness by a reduction from the well-known k-SUM problem, establishing a close relationship between the two problems.
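
    For a fixed candidate set S, the PBDS objective has a closed form if edges are present independently (a natural reading of the model above, stated here as an assumption): a vertex outside S is dominated unless every edge joining it to S is absent. A sketch with illustrative names; the brute-force search is exponential and only meant to pin down the objective.

        from itertools import combinations

        def expected_domination(weights, adj_prob, S):
            """Expected total weight of the closed neighborhood of S in an
            uncertain graph where edge (u, v) exists independently with
            probability adj_prob[u][v].  A vertex in S is always dominated;
            any other vertex v is dominated unless every edge from v into
            S is absent."""
            total = 0.0
            for v, w in enumerate(weights):
                if v in S:
                    total += w
                else:
                    p_none = 1.0
                    for s in S:
                        p_none *= 1.0 - adj_prob[v][s]
                    total += w * (1.0 - p_none)
            return total

        def best_budgeted_set(weights, adj_prob, k):
            """Brute force over all k-subsets (exponential; illustration only)."""
            n = len(weights)
            return max(combinations(range(n), k),
                       key=lambda S: expected_domination(weights, adj_prob, set(S)))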

    05031 Abstracts Collection -- Algorithms for Optimization with Incomplete Information

    From 16.01.05 to 21.01.05, the Dagstuhl Seminar 05031 "Algorithms for Optimization with Incomplete Information" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Scheduling over Scenarios on Two Machines

    We consider scheduling problems over scenarios, where the goal is to find a single assignment of the jobs to the machines that performs well over all possible scenarios. Each scenario is a subset of jobs that must be executed in that scenario, and all scenarios are given explicitly. The two objectives that we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we give several approximation algorithms and lower bounds on their approximability. With this research into optimization problems over scenarios, we have opened a new and rich field of interesting problems.
    (To appear in COCOON 2014. The final publication is available at link.springer.com.)
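
    To fix ideas, both objectives can be evaluated directly for a given assignment, and for small instances the best single assignment can be found by brute force. A sketch with hypothetical names, not one of the paper's approximation algorithms.

        from itertools import product

        def makespan(assignment, jobs, scenario):
            """Makespan of a fixed two-machine assignment restricted to the
            jobs present in one scenario (assignment[j] is 0 or 1)."""
            loads = [0, 0]
            for j in scenario:
                loads[assignment[j]] += jobs[j]
            return max(loads)

        def best_assignment(jobs, scenarios, objective="max"):
            """Brute force over all 2^n assignments of n jobs to two machines,
            minimizing either the maximum or the sum of scenario makespans."""
            n = len(jobs)
            agg = max if objective == "max" else sum
            return min(product([0, 1], repeat=n),
                       key=lambda a: agg(makespan(a, jobs, s) for s in scenarios))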