437 research outputs found

    The Inverse 1-Median Problem on Tree Networks with Variable Real Edge Lengths

    Location problems deal with finding optimal locations for facilities in a network, such as servers, hospitals, and shopping centers. Inverse location problems also arise frequently in practice and have been intensively investigated in the literature. As a typical example, this paper studies the inverse 1-median problem on tree networks with variable real edge lengths: modify the edge lengths at minimum total cost so that a given vertex becomes a 1-median of the tree network with respect to the new edge lengths. First, the problem is shown to be solvable in linear time when the variable edge lengths are restricted to be nonnegative. For the case when negative edge lengths are allowed, NP-hardness is proved under the Hamming distance, and strongly polynomial-time algorithms are presented under the l1 and l∞ norms, respectively.
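
    As context for the problem statement, the sketch below only checks the forward notion used in the abstract: for every vertex v of a small tree with given nonnegative edge lengths, it computes f(v), the sum of distances from v to all other vertices (unit vertex weights for simplicity), and reports which vertices are 1-medians. It is not the paper's inverse algorithm, and the toy tree is hypothetical.

```python
from collections import defaultdict

def median_costs(edges):
    """For each vertex of a tree with nonnegative edge lengths, return
    f(v) = sum of distances from v to all other vertices (unit vertex weights)."""
    adj = defaultdict(list)
    for u, v, length in edges:
        adj[u].append((v, length))
        adj[v].append((u, length))
    costs = {}
    for root in adj:
        # DFS from root, accumulating distances (a tree has no cycles).
        total, stack = 0.0, [(root, None, 0.0)]
        while stack:
            node, parent, dist = stack.pop()
            total += dist
            for nxt, length in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, dist + length))
        costs[root] = total
    return costs

# Hypothetical toy tree given as (u, v, edge length) triples.
tree = [("a", "b", 2.0), ("b", "c", 1.0), ("b", "d", 4.0)]
f = median_costs(tree)
medians = {v for v, c in f.items() if c == min(f.values())}
print(f)                       # {'a': 11.0, 'b': 7.0, 'c': 9.0, 'd': 15.0}
print("1-median(s):", medians)  # {'b'}
```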

    Studies in Efficient Discrete Algorithms

    This thesis consists of five papers within the design and analysis of efficient algorithms. In the first paper, we consider the problem of computing all-pairs shortest paths in a directed graph with real weights assigned to vertices. We develop a combinatorial randomized algorithm that runs in subcubic time for a special class of graphs. In the second paper, we present a polynomial-time dynamic programming algorithm for optimal partitions of a complete edge-weighted graph, where each edge is weighted by the length of the unique shortest path connecting its endpoints in an a priori given tree (the shortest-path metric induced by a tree). Our result resolves, in particular, the complexity status of the optimal partition problems in the one-dimensional geometric (Euclidean) setting. In the third paper, we study the NP-hard problem of partitioning an orthogonal polyhedron P into a minimum number of 3D rectangles. We present an approximation algorithm with approximation ratio 4 for the special case in which P is a so-called 3D histogram. We then apply it to compute the exact arithmetic matrix product of two matrices with non-negative integer entries. The computation is time-efficient if the 3D histograms induced by the input matrices can be partitioned into relatively few 3D rectangles. In the fourth paper, we present the first quasi-polynomial approximation schemes for the base of the number of triangulations of a planar point set and for the base of the number of crossing-free spanning trees on a planar point set, respectively. In the fifth paper, we study the complexity of detecting monomials with special properties in the sum-product expansion of a polynomial represented by an arithmetic circuit of size polynomial in the number of input variables and using only multiplication and addition. We present a fixed-parameter tractable algorithm for detecting monomials having at least k distinct variables, parameterized with respect to k. Furthermore, we derive several hardness results on the detection of monomials with such properties within exact, parameterized, and approximation complexity.
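
    The first paper's subcubic randomized algorithm is not reproduced here. As a baseline illustration of vertex-weighted all-pairs shortest paths, the sketch below adapts the cubic Floyd-Warshall recurrence, under the assumption that a path's cost is the sum of the real weights of the vertices on it and that there are no negative-cost cycles; the toy graph is hypothetical.

```python
import math

def vertex_weighted_apsp(weights, arcs):
    """Cubic Floyd-Warshall-style baseline for all-pairs shortest paths in a
    directed graph with real vertex weights, where a path's cost is the sum of
    the weights of the vertices on it (assumes no negative-cost cycles)."""
    n = len(weights)
    INF = math.inf
    dist = [[INF] * n for _ in range(n)]
    for v in range(n):
        dist[v][v] = weights[v]                      # a single-vertex path costs w(v)
    for u, v in arcs:
        dist[u][v] = min(dist[u][v], weights[u] + weights[v])
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Joining the paths i->k and k->j counts w(k) twice, so subtract it once.
                cand = dist[i][k] + dist[k][j] - weights[k]
                if cand < dist[i][j]:
                    dist[i][j] = cand
    return dist

# Hypothetical 4-vertex example.
w = [1.0, 5.0, 2.0, 3.0]
arcs = [(0, 1), (1, 2), (0, 2), (2, 3)]
D = vertex_weighted_apsp(w, arcs)
print(D[0][3])   # 6.0, via the path 0 -> 2 -> 3 with vertex weights 1 + 2 + 3
```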

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    Hypercube-Based Topologies With Incremental Link Redundancy.

    Hypercube structures have received a great deal of attention due to the attractive properties inherent to their topology. Parallel algorithms targeted at this topology can be partitioned into many tasks, each of which runs on one node processor. A high degree of performance is achievable by running every task individually and concurrently on each node processor available in the hypercube. Nevertheless, performance can be greatly degraded if the node processors spend much of their time just communicating with one another. The goal in designing hypercubes is, therefore, to achieve a high ratio of computation time to communication time. This dissertation primarily addresses ways to enhance system performance by minimizing the communication time among processors. The need for improving the performance of hypercube networks is clearly explained, and three novel topologies related to hypercubes with improved performance are proposed and analyzed. First, the Bridged Hypercube (BHC) is introduced. It is shown that this design is remarkably more efficient and cost-effective than the standard hypercube due to its low diameter. Basic routing algorithms such as one-to-one routing and broadcasting are developed for the BHC and proven optimal. Shortcomings of the BHC, such as its asymmetry and limited applicability, are discussed. Next, the Folded Hypercube (FHC), a symmetric network with low diameter and low node degree, is introduced. This topology is shown to support highly efficient communication among the processors. For the FHC, optimal routing algorithms are developed and proven to be remarkably more efficient than those of the conventional hypercube. For both the BHC and the FHC, network parameters such as average distance, message traffic density, and communication delay are derived and comparatively analyzed. Lastly, to enhance the fault tolerance of the hypercube, a new design called the Fault Tolerant Hypercube (FTH) is proposed. The FTH is shown to exhibit a graceful degradation in performance in the presence of faults. Probabilistic models based on Markov chains are employed to characterize the fault tolerance of the FTH, and the results are verified by Monte Carlo simulation. The most attractive feature of all the new topologies is the asymptotically zero overhead associated with them. The designs are simple and implementable, and they lend themselves to many parallel processing applications requiring a high degree of performance.
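
    The dissertation's provably optimal routing algorithms for the BHC and FHC are not reproduced here. As a minimal illustration of the underlying ideas, the sketch below shows standard e-cube routing in an ordinary hypercube and a simple routing rule for a folded hypercube, which augments the hypercube with a link from every node to its bitwise complement so that at most about half of the address bits ever need to be corrected. The node labels and dimension are hypothetical.

```python
def ecube_route(src, dst, dim):
    """Standard e-cube routing in a dim-dimensional hypercube:
    flip the differing address bits in a fixed order (lowest bit first)."""
    path, cur = [src], src
    for bit in range(dim):
        mask = 1 << bit
        if (cur ^ dst) & mask:
            cur ^= mask
            path.append(cur)
    return path

def folded_route(src, dst, dim):
    """Routing sketch for a folded hypercube, where every node also has a link
    to its bitwise complement: if more than half of the bits differ, take the
    complement link first, then finish with ordinary e-cube routing."""
    hamming = bin(src ^ dst).count("1")
    if hamming > dim // 2:
        comp = src ^ ((1 << dim) - 1)          # the complementary link
        return [src] + ecube_route(comp, dst, dim)
    return ecube_route(src, dst, dim)

# 4-dimensional example: nodes 0b0000 and 0b1110 differ in 3 of 4 bits.
print(ecube_route(0b0000, 0b1110, 4))   # [0, 2, 6, 14]  (3 hops)
print(folded_route(0b0000, 0b1110, 4))  # [0, 15, 14]    (2 hops via the complement link)
```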

    Regular expression constrained sequence alignment revisited

    Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, the Regular Expression Constrained Sequence Alignment Problem was introduced, together with an O(n^2 t^4) time and O(n^2 t^2) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the input non-deterministic automaton. A faster O(n^2 t^3) time algorithm for the same problem was subsequently proposed. In this article, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst-case time complexity bound to O(n^2 t^3 / log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner tree computation. While it does not improve the worst-case bound, our simulations show that both approaches are efficient in practice, especially when the input automata are dense.
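
    The exact RECSA recurrence, which constrains a segment of the alignment and tracks pairs of automaton states, is not reproduced here. The sketch below only illustrates the shared underlying idea of threading an automaton state through an alignment-style dynamic program, on the simpler, purely hypothetical task of finding the longest common subsequence accepted by a small DFA; the DFA, strings, and scoring are assumptions.

```python
import math
from collections import defaultdict

def constrained_lcs(x, y, delta, q0, accepting):
    """Length of the longest common subsequence of x and y accepted by a DFA.
    delta maps (state, char) -> state; missing entries mean no transition.
    Illustrates carrying an automaton state through an alignment-style DP;
    this is not the RECSA recurrence from the cited papers."""
    NEG = -math.inf
    # dp[(i, j, q)] = best length using x[:i] and y[:j], with the DFA in state q.
    dp = defaultdict(lambda: NEG)
    dp[(0, 0, q0)] = 0
    states = {q0} | {q for q, _ in delta} | set(delta.values()) | set(accepting)
    for i in range(len(x) + 1):
        for j in range(len(y) + 1):
            for q in states:
                cur = dp[(i, j, q)]
                if cur == NEG:
                    continue
                if i < len(x):                       # skip x[i]
                    dp[(i + 1, j, q)] = max(dp[(i + 1, j, q)], cur)
                if j < len(y):                       # skip y[j]
                    dp[(i, j + 1, q)] = max(dp[(i, j + 1, q)], cur)
                if i < len(x) and j < len(y) and x[i] == y[j]:
                    nxt = delta.get((q, x[i]))       # take the matched symbol
                    if nxt is not None:
                        dp[(i + 1, j + 1, nxt)] = max(dp[(i + 1, j + 1, nxt)], cur + 1)
    return max((dp[(len(x), len(y), q)] for q in accepting), default=NEG)

# Hypothetical DFA over {a, b} accepting strings that end in 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
print(constrained_lcs("abab", "baba", delta, 0, {1}))   # 3 ("bab" ends in 'b')
```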

    Solutions to Facility Location–Network Design Problems

    This doctoral thesis presents new solution strategies for facility location–network design (FLND) problems. FLND is a combination of facility location and network design: the overall goal is to improve clients’ access to facilities, and the means of reaching this goal include both building facilities (as in facility location) and building travelable links (as in network design). We measure clients’ access to facilities by the sum of their travel costs, and our objective is to minimize this sum. FLND problems contain both facility location problems and network design problems, which are NP-hard, as subproblems, and are therefore themselves computationally difficult. We approach the search for optimal solutions from both above and below, contributing techniques for finding good upper bounds as well as good lower bounds on an optimal solution. On the upper bound side, we present the first heuristics in the literature for this problem. We have developed a variety of heuristics: simple greedy heuristics, a local search heuristic, metaheuristics including simulated annealing and variable neighborhood search, as well as a custom heuristic based on the problem-specific structure of FLND. Our computational results compare the performance of these heuristics and show that the basic variable neighborhood search performs best, achieving a solution within 0.6% of optimality on average for our test cases. On the lower bound side, we work with an existing IP formulation whose LP relaxation provides good lower bounds. We present a separation routine for a new class of inequalities that further improve the lower bound, in some cases even obtaining the optimal solution. Putting all this together, we develop a branch-and-cut approach that uses heuristic solutions as upper bounds and cutting planes to increase the lower bound at each node of the problem tree, thus reducing the number of nodes needed to solve to optimality. We also present an alternate IP formulation that uses fewer variables than the one established in the literature. This formulation allows some problems to be solved more quickly, although its LP relaxation is not as tight. To aid in the visualization of FLND problem instances and their solutions, we have developed a piece of software, FLND Visualizer. Using this application, one can create and modify problem instances, solve them using a variety of heuristic methods, and view the solutions. Finally, we consider a case study: improving access to health facilities in the Nouna health district of Burkina Faso. We demonstrate the solution techniques developed here on this real-world problem and show the remarkable improvements in accessibility that are possible.
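
    A minimal sketch in the spirit of the simple greedy heuristics mentioned above, under the assumption of a single budget shared by facility openings and link constructions and strictly positive costs; the thesis's exact model, costs, and data are not reproduced, and all names and the instance format are hypothetical.

```python
import heapq

def total_access_cost(nodes, edges, open_facilities, clients):
    """Sum over all clients of the travel cost to the nearest open facility,
    computed by a multi-source Dijkstra run on the undirected travel network."""
    adj = {v: [] for v in nodes}
    for u, v, cost in edges:
        adj[u].append((v, cost))
        adj[v].append((u, cost))
    dist = {v: float("inf") for v in nodes}
    heap = []
    for f in open_facilities:
        dist[f] = 0.0
        heap.append((0.0, f))
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for w, c in adj[v]:
            if d + c < dist[w]:
                dist[w] = d + c
                heapq.heappush(heap, (d + c, w))
    return sum(dist[c] for c in clients)

def greedy_flnd(nodes, base_edges, cand_edges, cand_facilities, clients, budget):
    """Greedy sketch: repeatedly take the facility opening or link construction
    with the best reduction in total client travel cost per unit of budget.
    cand_facilities maps node -> opening cost; cand_edges maps
    (u, v, travel cost) -> construction cost. Assumes strictly positive costs."""
    edges, facilities, spent = list(base_edges), set(), 0.0
    while True:
        current = total_access_cost(nodes, edges, facilities, clients)
        best = None  # (gain per unit cost, kind, item, cost)
        for f, cost in cand_facilities.items():
            if f in facilities or spent + cost > budget:
                continue
            gain = current - total_access_cost(nodes, edges, facilities | {f}, clients)
            if gain > 0 and (best is None or gain / cost > best[0]):
                best = (gain / cost, "facility", f, cost)
        for (u, v, travel), cost in cand_edges.items():
            if (u, v, travel) in edges or spent + cost > budget:
                continue
            gain = current - total_access_cost(nodes, edges + [(u, v, travel)], facilities, clients)
            if gain > 0 and (best is None or gain / cost > best[0]):
                best = (gain / cost, "edge", (u, v, travel), cost)
        if best is None:
            return facilities, edges, spent
        _, kind, item, cost = best
        spent += cost
        if kind == "facility":
            facilities.add(item)
        else:
            edges.append(item)
```

    Each candidate action is scored by re-evaluating the objective from scratch with a multi-source Dijkstra run, which keeps the sketch short at the expense of speed; the thesis's metaheuristics (local search, simulated annealing, variable neighborhood search) would instead start from such a solution and perturb it.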

    Scalable Inference for Multi-Target Tracking of Proliferating Cells

    With the continuous advancements in microscopy techniques such as improved image quality, faster acquisition, and reduced photo-toxicity, the amount of data recorded in the life sciences is rapidly growing. Clearly, the size of the data renders manual analysis intractable, calling for automated cell tracking methods. Cell tracking, in contrast to other tracking scenarios, exhibits several difficulties: low signal-to-noise ratio in the images, high cell density and sometimes cell clusters, radical morphology changes, and, most importantly, cell divisions, which are often the very focus of the experiment. These peculiarities have been targeted by tracking-by-assignment methods that first extract a set of detection hypotheses and then track those over time. Improving the general quality of these cell tracking methods is difficult, because every cell type, surrounding medium, and microscopy setting leads to recordings with specific properties and problems. This unfortunately implies that automated approaches will not become perfect any time soon, and manual proofreading by experts will remain necessary for the time being. In this thesis we focus on two different aspects: firstly, on scaling previous solvers and developing new ones to deal with longer videos and more cells, and secondly, on developing a specialized pipeline for detecting and tracking tuberculosis bacteria. The most powerful tracking-by-assignment methods are formulated as probabilistic graphical models and solved as integer linear programs. Because these integer linear programs are in general NP-hard, increasing the problem size leads to an explosion of computational cost. We begin by reformulating one of these models in terms of a constrained network flow and show that it can be solved more efficiently. Building on the successful application of network flow algorithms in the pedestrian tracking literature, we develop a heuristic to integrate constraints (here, for divisions) into such a network flow method. This allows us to obtain high-quality approximations to the tracking solution while providing a polynomial runtime guarantee. Our experiments confirm this much better scaling behavior on larger problems. However, this approach is single-threaded and does not yet utilize the available resources of multi-core machines. To parallelize the tracking problem, we present a simple yet effective way of splitting long videos into intervals that can be tracked independently, followed by a sparse global stitching step that resolves disagreements at the cuts. Going one step further, we propose a microservices-based software design for ilastik that allows all required computation for segmentation, object feature extraction, object classification, and tracking to be distributed across the nodes of a cluster or in the cloud. Finally, we discuss the use case of detecting and tracking tuberculosis bacteria in more detail, because no satisfactory automated method for this important problem existed before. One peculiarity of these elongated cells is that they build dense clusters in which it is hard to outline individuals. To cope with this, we employ a tracking-by-assignment model that allows competing detection hypotheses and selects the best set of detections while considering the temporal context during tracking. To obtain these hypotheses, we develop a novel algorithm that finds diverse M-best solutions of tree-shaped graphical models by dynamic programming. First experiments with the pipeline indicate that it can greatly reduce the amount of human intervention required for analyzing tuberculosis treatment.
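
    The thesis's tracking-by-assignment models are global (probabilistic graphical models solved as ILPs or constrained network flows) and explicitly handle divisions; as a drastically simplified illustration of the assignment idea alone, the sketch below links cell detections between two consecutive frames with the Hungarian algorithm from SciPy, discarding links longer than a gating distance. The centroids and the gating threshold are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_centroids, next_centroids, max_dist=15.0):
    """Frame-to-frame tracking-by-assignment sketch: match detections of two
    consecutive frames by minimizing total squared centroid distance with the
    Hungarian algorithm; links longer than max_dist are discarded."""
    prev_c = np.asarray(prev_centroids, dtype=float)
    next_c = np.asarray(next_centroids, dtype=float)
    # Pairwise squared distances between previous and next detections.
    cost = ((prev_c[:, None, :] - next_c[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    links = [(int(i), int(j)) for i, j in zip(rows, cols)
             if cost[i, j] <= max_dist ** 2]
    return links   # pairs of (index in previous frame, index in next frame)

# Two hypothetical frames of cell centroids (x, y); one cell leaves the field of view.
frame_t  = [(10.0, 10.0), (40.0, 12.0), (70.0, 30.0)]
frame_t1 = [(12.0, 11.0), (42.0, 15.0)]
print(link_frames(frame_t, frame_t1))   # [(0, 0), (1, 1)]
```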