
    Shortest Distances as Enumeration Problem

    We investigate the single source shortest distance (SSSD) and all pairs shortest distance (APSD) problems as enumeration problems (on unweighted and integer weighted graphs), meaning that the elements (u, v, d(u, v)) -- where u and v are vertices with shortest distance d(u, v) -- are produced and listed one by one without repetition. The performance is measured in the RAM model of computation with respect to preprocessing time and delay, i.e., the maximum time that elapses between two consecutive outputs. This point of view reveals that specific types of output (e.g., excluding the non-reachable pairs (u, v, ∞), or excluding the self-distances (u, u, 0)) and the order of enumeration (e.g., sorted by distance, sorted row-wise with respect to the distance matrix) have a huge impact on the complexity of APSD, while they appear to have no effect on SSSD. In particular, we show for APSD that enumeration without output restrictions is possible with delay in the order of the average degree. Excluding non-reachable pairs, or requesting the output to be sorted by distance, increases this delay to the order of the maximum degree. Further, for weighted graphs, a delay in the order of the average degree is also not possible without preprocessing or considering self-distances as output. In contrast, for SSSD we find that a delay in the order of the maximum degree without preprocessing is attainable and unavoidable for any of these requirements.
    Comment: Updated version adds the study of space complexity.
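    The following is a minimal Python sketch of the enumeration interface the abstract describes: triples (u, v, d(u, v)) emitted one by one, here via a plain BFS from every source on an unweighted graph. All names are hypothetical, and this baseline is not the paper's delay-optimal algorithm; it only illustrates what enumeration with bounded delay means.

    ```python
    from collections import deque

    def enumerate_apsd(adj):
        """Yield (u, v, d(u, v)) triples for an unweighted graph given as an
        adjacency list {vertex: list of neighbours}.

        A plain BFS from every source: each triple is emitted as soon as the
        target vertex is discovered, so the time between two consecutive
        outputs is bounded by the work spent scanning one adjacency list.
        This is only a baseline illustrating the enumeration interface, not
        the delay-optimal algorithms analysed in the paper.
        """
        for u in adj:
            dist = {u: 0}
            yield (u, u, 0)  # self-distance; the paper also studies excluding these
            queue = deque([u])
            while queue:
                x = queue.popleft()
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        yield (u, y, dist[y])
                        queue.append(y)
            # non-reachable pairs (u, v, infinity) are simply never produced here

    # tiny usage example
    if __name__ == "__main__":
        graph = {0: [1], 1: [0, 2], 2: [1], 3: []}
        for triple in enumerate_apsd(graph):
            print(triple)
    ```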

    The matching relaxation for a class of generalized set partitioning problems

    This paper introduces a discrete relaxation for the class of combinatorial optimization problems that can be described by a set partitioning formulation under packing constraints. We present two combinatorial relaxations based on computing maximum weighted matchings in suitable graphs. Besides providing dual bounds, the relaxations are also used in a variable-reduction technique and a matheuristic. We show how this general method can be tailored to sample applications, and perform a successful computational evaluation with benchmark instances of a problem in maritime logistics.
    Comment: 33 pages. A preliminary (4-page) version of this paper was presented at CTW 2016 (Cologne-Twente Workshop on Graphs and Combinatorial Optimization), with proceedings in Electronic Notes in Discrete Mathematics.
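    As a rough illustration of the matching-based bound, the sketch below computes a maximum weight matching with networkx. The graph and its edge weights are made-up placeholders; the graphs actually used in the paper are derived from the specific set partitioning instance at hand.

    ```python
    import networkx as nx

    # Toy illustration only: the paper builds problem-specific graphs whose
    # maximum weighted matchings bound the set partitioning objective; here we
    # just show how such a matching is computed once the graph is in hand.
    G = nx.Graph()
    # hypothetical edge weights, e.g. the best contribution achievable by
    # serving a pair of elements together
    G.add_weighted_edges_from([
        ("a", "b", 4.0),
        ("a", "c", 2.5),
        ("b", "d", 3.0),
        ("c", "d", 1.0),
    ])

    matching = nx.max_weight_matching(G, maxcardinality=False)
    bound = sum(G[u][v]["weight"] for u, v in matching)
    print(matching, bound)  # the matching value serves as a dual bound
    ```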

    Dynamic Facility Location via Exponential Clocks

    The dynamic facility location problem is a generalization of the classic facility location problem proposed by Eisenstat, Mathieu, and Schabanel to model the dynamics of evolving social/infrastructure networks. The generalization lies in the fact that the distance metric between clients and facilities changes over time. This leads to a trade-off between optimizing the classic objective function and the "stability" of the solution: a switching cost is charged every time a client changes the facility to which it is connected. While the standard linear program (LP) relaxation for the classic problem naturally extends to this problem, traditional LP-rounding techniques do not, as they are often sensitive to small changes in the metric, resulting in frequent switches. We present a new LP-rounding algorithm for facility location problems, which yields the first constant-factor approximation algorithm for the dynamic facility location problem. Our algorithm installs competing exponential clocks on the clients and facilities, and connects every client by the path that repeatedly follows the smallest clock in the neighborhood. The use of exponential clocks gives rise to several properties that distinguish our approach from previous LP-roundings for facility location problems. In particular, we use no clustering and we allow clients to connect through paths of arbitrary lengths. In fact, the clustering-free nature of our algorithm is crucial for applying our LP-rounding approach to the dynamic problem.
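    The sketch below is a heavily simplified, one-step version of rounding with exponential clocks, written with hypothetical names: each facility draws an exponential clock with rate equal to its fractional LP opening, and every client follows the smallest clock in its neighborhood. The paper's algorithm also installs clocks on the clients and lets connections follow paths of arbitrary length, so this only conveys the flavour, not the actual rounding.

    ```python
    import random

    def round_with_exponential_clocks(y, client_neighbors, seed=0):
        """Heavily simplified sketch of LP-rounding with exponential clocks.

        y:                dict facility -> fractional opening from the LP (> 0).
        client_neighbors: dict client -> list of facilities it may use
                          (e.g. the support of its fractional assignment).

        Each facility draws an exponential clock with rate y[f]; every client
        connects to the neighbouring facility whose clock rings first.
        """
        rng = random.Random(seed)
        clocks = {f: rng.expovariate(y[f]) for f in y}
        assignment = {c: min(neigh, key=clocks.__getitem__)
                      for c, neigh in client_neighbors.items()}
        return clocks, assignment

    # usage: two facilities half-opened by the LP, three clients
    clocks, assignment = round_with_exponential_clocks(
        y={"f1": 0.5, "f2": 0.5},
        client_neighbors={"c1": ["f1"], "c2": ["f1", "f2"], "c3": ["f2"]},
    )
    print(clocks, assignment)
    ```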

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and the resulting methods have been utilized by other sciences such as the space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other.
    Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".

    Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits a polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity-theoretic assumption. As a notable exception, we show that the consistency problem for the AtMost-NValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems.
    Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541, arXiv:1104.556
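    To make the AtMost-NValue constraint concrete, here is a brute-force consistency check for tiny instances (all names hypothetical): the constraint is consistent exactly when some set of at most N values hits every variable domain. This says nothing about the kernelization itself, which is about shrinking the instance before any such reasoning.

    ```python
    from itertools import combinations

    def atmost_nvalue_consistent(domains, n):
        """Brute-force consistency check for AtMost-NValue on tiny instances.

        domains: list of sets, one per variable.
        n:       maximum number of distinct values allowed.

        Consistent iff some set S of at most n values intersects every domain,
        so that each variable can pick its value from S.
        """
        values = set().union(*domains)
        for k in range(1, min(n, len(values)) + 1):
            for chosen in combinations(values, k):
                if all(dom & set(chosen) for dom in domains):
                    return True
        return False

    # usage: three variables, at most two distinct values allowed
    print(atmost_nvalue_consistent([{1, 2}, {2, 3}, {3, 4}], 2))  # True, e.g. S = {2, 3}
    ```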

    Grad and Classes with Bounded Expansion II. Algorithmic Aspects

    Classes of graphs with bounded expansion are a generalization of both proper minor-closed classes and degree-bounded classes. Such classes are based on a new invariant, the greatest reduced average density (grad) of G with rank r, ∇_r(G). These classes are also characterized by the existence of several partition results, such as the existence of low tree-width and low tree-depth colorings. These results lead to several new linear-time algorithms, such as an algorithm for counting all the isomorphic copies of a fixed graph in an input graph, or an algorithm for checking whether there exists a subset of vertices of a priori bounded size such that the subgraph induced by this subset satisfies some arbitrary but fixed first-order sentence. We also show that, for fixed p, computing the distance between two vertices, provided it is at most p, may be performed in constant time per query after a linear-time preprocessing. We also show, extending several earlier results, that a class of graphs has sublinear separators if it has sub-exponential expansion. This result is best possible in general.
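    A naive baseline for the distance-query result might look like the Python sketch below (hypothetical names): precompute all pairs at distance at most p by truncated BFS, then answer queries by table lookup. For bounded-expansion classes the paper achieves linear-time preprocessing via low tree-depth colorings, which this naive table does not match; it only exposes the same query interface.

    ```python
    from collections import deque

    def preprocess_short_distances(adj, p):
        """Precompute all pairwise distances up to p by truncated BFS.

        Naive baseline, not the paper's method: the table built here can be
        super-linear in general, but queries afterwards take constant
        expected time, which is the interface the paper's result provides.
        """
        table = {}
        for s in adj:
            dist = {s: 0}
            queue = deque([s])
            while queue:
                x = queue.popleft()
                if dist[x] == p:
                    continue
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        queue.append(y)
            for t, d in dist.items():
                table[(s, t)] = d
        return table

    def query(table, u, v):
        """Return d(u, v) if it is at most p, else None."""
        return table.get((u, v))

    # usage on a path 0 - 1 - 2 - 3 with p = 2
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    table = preprocess_short_distances(adj, 2)
    print(query(table, 0, 2), query(table, 0, 3))  # 2, None
    ```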