
    The Geometric Maximum Traveling Salesman Problem

    We consider the traveling salesman problem when the cities are points in R^d for some fixed d and distances are determined by some norm. We show that for any polyhedral norm, the problem of finding a tour of maximum length can be solved in polynomial time. If arithmetic operations are assumed to take unit time, our algorithms run in time O(n^{f-2} log n), where f is the number of facets of the polyhedron determining the polyhedral norm. Thus, for example, we have O(n^2 log n) algorithms for points in the plane under the Rectilinear and Sup norms. This is in contrast to the fact that finding a minimum length tour is NP-hard in each case. Our approach can be extended to the more general case of quasi-norms with a not necessarily symmetric unit ball, where we get a complexity of O(n^{2f-2} log n). For the special case of two-dimensional metrics with f=4 (which includes the Rectilinear and Sup norms), we present a simple algorithm with O(n) running time. The algorithm does not use any indirect addressing, so its running time remains valid even in comparison-based models in which sorting requires Omega(n log n) time. The basic mechanism of the algorithm provides some intuition on why polyhedral norms allow fast algorithms. Complementing the results on simplicity for polyhedral norms, we prove that for Euclidean distances in R^d for d > 2, the Maximum TSP is NP-hard. This sheds new light on the well-studied difficulties of Euclidean distances. Comment: 24 pages, 6 figures; revised to appear in Journal of the ACM (clarified some minor points, fixed typos).
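    A polyhedral norm can be written as a maximum of finitely many linear functionals, one per facet of its unit ball, which is the structure the facet count f refers to. The following minimal Python sketch (our illustration, not the paper's algorithm) computes tour lengths this way for the two f=4 norms named above, with a brute-force reference in place of the paper's polynomial-time method.

```python
# Illustrative sketch: a polyhedral norm is ||x|| = max_i <a_i, x> over the
# outer normals a_i of the unit ball's facets. In the plane, the Sup norm's
# unit square has f = 4 facet normals (+-1, 0), (0, +-1); the Rectilinear
# norm's unit diamond has f = 4 facet normals (+-1, +-1).
from itertools import permutations

SUP_FACETS = [(1, 0), (-1, 0), (0, 1), (0, -1)]      # max(|dx|, |dy|)
RECT_FACETS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # |dx| + |dy|

def poly_dist(p, q, facets):
    """Distance under the polyhedral norm given by facet normals."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return max(a * dx + b * dy for a, b in facets)

def tour_length(tour, facets):
    """Total length of the closed tour visiting the points in order."""
    return sum(poly_dist(tour[i], tour[(i + 1) % len(tour)], facets)
               for i in range(len(tour)))

def max_tsp_brute_force(points, facets):
    """Exponential-time reference; the paper achieves O(n^{f-2} log n)."""
    first, rest = points[0], points[1:]
    return max(tour_length([first, *perm], facets)
               for perm in permutations(rest))

pts = [(0, 0), (3, 1), (1, 4), (5, 2)]
print(max_tsp_brute_force(pts, RECT_FACETS))
print(max_tsp_brute_force(pts, SUP_FACETS))
```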

    Unified Power Management in Wireless Sensor Networks, Doctoral Dissertation, August 2006

    Radio power management is of paramount concern in wireless sensor networks (WSNs) that must achieve long lifetimes on a scarce amount of energy. Previous work has treated communication and sensing separately, which is insufficient for a common class of sensor networks that must satisfy both sensing and communication requirements. Furthermore, previous approaches focused on reducing energy consumption in individual radio states, resulting in suboptimal solutions. Finally, existing power management protocols often assume simplistic models that cannot accurately reflect the sensing and communication properties of real-world WSNs. We develop a unified power management approach to address these issues. We first analyze the relationship between the sensing and communication performance of WSNs. We show that sensing coverage often leads to good network connectivity and geographic routing performance, which provides insights into unified power management under both sensing and communication performance requirements. We then develop a novel approach called Minimum Power Configuration that integrates the power consumption in different radio states into a unified optimization framework. Finally, we develop two power management protocols that account for the realistic communication and sensing properties of WSNs. Configurable Topology Control can configure a network topology to achieve desired path quality in the presence of asymmetric and lossy links. Co-Grid is a coverage maintenance protocol that adopts a probabilistic sensing model. Co-Grid can satisfy desirable sensing QoS requirements (i.e., detection probability and false alarm rate) based on a distributed data fusion model.
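    The per-radio-state accounting that the dissertation argues should be optimized jointly can be illustrated with a back-of-the-envelope lifetime model. The sketch below is ours; the power draws, battery size, and duty cycles are hypothetical placeholders, not measurements from the dissertation.

```python
# Hypothetical per-radio-state energy accounting; all numbers are made-up
# placeholders, not measured values from any particular radio.
POWER_MW = {"transmit": 52.2, "receive": 56.4, "idle": 56.4, "sleep": 0.003}
BATTERY_MWH = 2800 * 3  # hypothetical 2800 mAh cell at 3 V

def average_power_mw(duty_cycle):
    """Average radio power draw given the fraction of time in each state."""
    assert abs(sum(duty_cycle.values()) - 1.0) < 1e-9
    return sum(POWER_MW[s] * f for s, f in duty_cycle.items())

def lifetime_days(duty_cycle):
    """Node lifetime in days, assuming the radio dominates consumption."""
    return BATTERY_MWH / average_power_mw(duty_cycle) / 24.0

# A node that sleeps 99% of the time vs. one that idle-listens instead:
print(lifetime_days({"transmit": 0.002, "receive": 0.008,
                     "idle": 0.0, "sleep": 0.99}))
print(lifetime_days({"transmit": 0.002, "receive": 0.008,
                     "idle": 0.99, "sleep": 0.0}))
```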

    Power assignment problems in wireless communication

    A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network, like broadcast, multicast, point-to-point routing, and creation of a communication backbone, can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example, one problem studied before in (Vittorio Bilò et al.: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to k of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a (1+epsilon)-approximate solution for this problem from n^{(alpha/epsilon)^{O(d)}}, as reported by Bilò et al., to O(n + (k^{2d+1}/epsilon^d)^{min{2k, (alpha/epsilon)^{O(d)}}}); that is, we obtain a running time that is linear in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform k-hop multicasts.
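    In this formulation, the power a station needs to reach distance r typically scales as r^alpha for a path-loss exponent alpha, so the objective sums a power of each selected station's covering radius. The sketch below is our illustration of that objective only; the exhaustive search over k-subsets is for clarity and is exponential, whereas the paper's (1+epsilon)-approximation is linear in the network size.

```python
# Minimal sketch of the k-station power-assignment objective (ours, not the
# paper's algorithm): each selected station must reach its farthest client.
from itertools import combinations
from math import dist

def assignment_cost(stations, selected, alpha=2.0):
    """Sum over selected stations of (distance to farthest client)^alpha,
    where every non-selected station is served by its nearest selected one."""
    radius = {s: 0.0 for s in selected}
    for p in stations:
        if p in selected:
            continue
        server = min(selected, key=lambda s: dist(s, p))
        radius[server] = max(radius[server], dist(server, p))
    return sum(r ** alpha for r in radius.values())

def best_selection(stations, k, alpha=2.0):
    """Brute-force optimum over all k-subsets (illustration only)."""
    return min(combinations(stations, k),
               key=lambda sel: assignment_cost(stations, sel, alpha))

pts = [(0, 0), (1, 0), (4, 0), (5, 1), (9, 9)]
sel = best_selection(pts, 2)
print(sel, assignment_cost(pts, sel))
```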

    Approximation Algorithms for Distributionally Robust Stochastic Optimization

    Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we take first-stage actions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise. To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the two-stage distributionally robust stochastic model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D.

    However, there has been almost no prior work on developing approximation algorithms for distributionally robust problems where the underlying scenario collection is discrete, as is the case with discrete-optimization problems. We provide frameworks for designing approximation algorithms in such settings when the collection D is a ball around a central distribution, defined relative to two notions of distance between probability distributions: Wasserstein metrics (which include the L_1 metric) and the L_infinity metric. Our frameworks yield efficient algorithms even in settings with an exponential number of scenarios, where the central distribution may only be accessed via a sampling oracle.

    For distributionally robust optimization under a Wasserstein ball, we first show that one can utilize the sample average approximation (SAA) method (solve the distributionally robust problem with an empirical estimate of the central distribution) to reduce the problem to the case where the central distribution has a polynomial-size support, and is represented explicitly. This follows because we argue that a distributionally robust problem can be reduced in a novel way to a standard two-stage stochastic problem with bounded inflation factor, which enables one to use the SAA machinery developed for two-stage stochastic problems. Complementing this, we show how to approximately solve a fractional relaxation of the SAA problem (i.e., the distributionally robust problem obtained by replacing the original central distribution with its empirical estimate). Unlike in two-stage {stochastic, robust} optimization with polynomially many scenarios, this turns out to be quite challenging. We utilize a variant of the ellipsoid method for convex optimization in conjunction with several new ideas to show that the SAA problem can be approximately solved provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem (find the worst-case scenario consisting of at most k elements) encountered in two-stage robust optimization.
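    As a minimal illustration of the SAA step above (the function names and the toy oracle are ours, not the paper's): draw N samples from the sampling oracle for the central distribution and keep their empirical frequencies, which yields a polynomial-size, explicitly represented support.

```python
# Illustrative sketch of the SAA step: replace a central distribution that is
# only accessible through a sampling oracle with the empirical distribution
# of N independent samples.
import random
from collections import Counter

def empirical_central_distribution(sample_scenario, n_samples):
    """Draw scenarios from the oracle; return {scenario: empirical frequency}."""
    counts = Counter(sample_scenario() for _ in range(n_samples))
    return {scenario: c / n_samples for scenario, c in counts.items()}

# Toy oracle over three scenarios (hypothetical, for illustration only):
oracle = lambda: random.choices(["A", "B", "C"], weights=[0.5, 0.3, 0.2])[0]
p_hat = empirical_central_distribution(oracle, 10_000)
print(p_hat)
```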
    We obtain such a max-min algorithm for various discrete-optimization problems; by complementing this via rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1)-factors of the guarantees known for the deterministic version of the problem.

    For distributionally robust optimization under an L_infinity ball, we consider a fractional relaxation of the problem, and replace its objective function with a proxy function that is pointwise close to the true objective function (within a factor of 2). We then show that we can efficiently compute approximate subgradients of the proxy function, provided that we have an algorithm for the problem of computing the t worst scenarios under a given first-stage decision, given an integer t. We can then approximately minimize the proxy function via a variant of the ellipsoid method, and thus obtain an approximate solution for the fractional relaxation of the distributionally robust problem. Complementing this via rounding algorithms with local guarantees, we obtain approximation algorithms for distributionally robust versions of various covering problems, including set cover, vertex cover, edge cover, and facility location, with guarantees that are within O(1)-factors of the guarantees known for their deterministic versions.
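    The inner maximization over an L_infinity ball can be made concrete with a small sketch (our illustration, under the simplifying assumption of an explicitly listed scenario set): probability mass is shifted, within the ball and staying on the simplex, toward the scenarios that are costliest under the current first-stage decision, which is the same flavor as the "t worst scenarios" subproblem mentioned above.

```python
# Illustrative sketch (ours): maximize sum_A p_A * cost_A over the set
# {p : |p_A - p_hat_A| <= r for all A} intersected with the simplex.
# Linear objective over a box intersected with the simplex: start every
# scenario at its lower bound, then spend the remaining mass greedily on
# the costliest scenarios (fractional-knapsack argument).
def worst_case_distribution(p_hat, cost, r):
    lo = {a: max(0.0, p - r) for a, p in p_hat.items()}
    hi = {a: min(1.0, p + r) for a, p in p_hat.items()}
    p = dict(lo)
    budget = 1.0 - sum(lo.values())          # mass still to distribute
    for a in sorted(p_hat, key=cost.get, reverse=True):
        give = min(hi[a] - lo[a], budget)
        p[a] += give
        budget -= give
    return p

p_hat = {"A": 0.5, "B": 0.3, "C": 0.2}      # empirical central distribution
cost = {"A": 1.0, "B": 4.0, "C": 10.0}      # cost per scenario (hypothetical)
p_star = worst_case_distribution(p_hat, cost, r=0.1)
print(p_star, sum(p_star[a] * cost[a] for a in p_star))
```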

    Constrained Planarity and Augmentation Problems

    A clustered graph C=(G,T) consists of an undirected graph G and a rooted tree T in which the leaves of T correspond to the vertices of G=(V,E). Each node m of T corresponds to a subset of the vertices of G, namely the leaves of the subtree of T rooted at m; this subset is called a "cluster". C-planarity is a natural extension of graph planarity to clustered graphs, and plays an important role in automatic graph drawing. The complexity status of c-planarity testing is unknown. It has been shown by Dahlhaus, Eades, Feng, and Cohen that c-planarity can be tested in linear time for c-connected graphs, i.e., graphs in which the cluster-induced subgraphs are connected. In the first part of the thesis, we provide polynomial-time algorithms for c-planarity testing of specific planar clustered graphs: graphs for which all nodes corresponding to non-c-connected clusters lie on the same path in T starting at the root of T, or in which, for each non-connected cluster, its super-cluster and all its siblings in T are connected; and graphs in which, for all clusters m, G - G(m) is connected. The algorithms are based on concepts for the subgraph-induced planar connectivity augmentation problem, also presented in this thesis. Furthermore, we give some characterizations of c-planar clustered graphs using minors and dual graphs, and we introduce a c-planar augmentation method. Part II deals with edge deletion and bimodal crossing minimization. We prove that the maximum planar subgraph problem remains NP-complete even for non-planar graphs without a minor isomorphic to K_5 or to K_{3,3}, respectively. Further, we investigate the problem of finding a minimum-weight set of edges whose removal results in a graph without minors that are contractible onto a prespecified set of vertices. Finally, we investigate the problem of drawing a directed graph in two dimensions with a minimal number of crossings such that, for every node, the incoming edges and the outgoing edges each appear consecutively in the cyclic adjacency list. It turns out that the planarization method can be adapted such that the number of crossings can be expected to grow only slightly for practical instances.
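    The c-connectivity condition that enables linear-time testing is straightforward to check directly. The sketch below is our illustration using networkx, not an algorithm from the thesis; it checks that every cluster induces a connected subgraph, alongside a planarity test of the underlying graph.

```python
# Illustrative check (not the thesis's algorithm): a clustered graph is
# c-connected when every cluster induces a connected subgraph of G.
import networkx as nx

def is_c_connected(G, clusters):
    """clusters: iterable of vertex sets, one per node of the cluster tree T."""
    return all(nx.is_connected(G.subgraph(c)) for c in clusters if c)

G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1)])
planar, _ = nx.check_planarity(G)   # planarity of the underlying graph
print(planar, is_c_connected(G, [{1, 2}, {3, 4}, {1, 2, 3, 4}]))
```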

    Multiwavelength study of the transient X-ray binary IGR J01583+6713

    We have investigated the multiband optical photometric variability and the stability of the Hα line profile of the transient X-ray binary IGR J01583+6713. We set an upper limit of 0.05 mag on photometric variations in the V band over a time scale of 3 months. The Hα line is found to have a non-Gaussian profile and to be quite stable over a duration of 2 months. We have identified the spectral type of the companion star to be B2 IVe, while the distance to the source is estimated to be ~4.0 kpc. Along with the optical observations, we have also carried out an analysis of X-ray data from three short observations of the source, two with the Swift-XRT and one with the RXTE-PCA. We have detected a variation in the absorption column density, from a value of 22.0 × 10^22 cm^-2 immediately after the outburst down to 2.6 × 10^22 cm^-2 four months afterwards. In the quiescent state, the X-ray absorption is consistent with the optical reddening measurement of E(B - V) = 1.46 mag. From one of the Swift observations, during which the X-ray intensity was higher, we have a possible pulse detection with a period of 469.2 s. For a Be X-ray binary, this indicates an orbital period in the range of 216-561 days for this binary system. Comment: 22 pages, 8 figures, accepted for publication in MNRAS.