12 research outputs found

    Scalable Parameterised Algorithms for two Steiner Problems

    In the Steiner Problem, we are given as input (i) a connected graph with nonnegative integer weights associated with the edges; and (ii) a subset of vertices called terminals. The task is to find a minimum-weight subgraph connecting all the terminals. In the Group Steiner Problem, we are given as input (i) a connected graph with nonnegative integer weights associated with the edges; and (ii) a collection of subsets of vertices called groups. The task is to find a minimum-weight subgraph that contains at least one vertex from each group. Even though the Steiner Problem and the Group Steiner Problem are NP-complete, they are known to admit parameterised algorithms that run in linear time in the size of the input graph, with the exponential part restricted to the number of terminals and the number of groups, respectively. In this thesis, we discuss two parameterised algorithms for solving the Steiner Problem and, by reduction, the Group Steiner Problem: (a) a dynamic programming algorithm presented by Dreyfus and Wagner in 1971; and (b) an improvement of the Dreyfus-Wagner algorithm presented by Erickson, Monma and Veinott in 1987 that runs in linear time in the size of the input graph. We develop a parallel implementation of the Erickson-Monma-Veinott algorithm and carry out extensive experiments to study the scalability of our implementation with respect to runtime, memory bandwidth, and memory usage. Our experimental results demonstrate that the implementation can scale up to a billion edges on a single modern compute node, provided that the number of terminals is small. For example, using our parallel implementation, a Steiner tree for a graph with one hundred million edges and ten terminals can be found in approximately twenty minutes. For an input graph with one hundred million edges and ten terminals, our parallel implementation is at least fifteen times faster than its serial counterpart on a Haswell compute node with two processors and twelve cores per processor. Our implementation of the Erickson-Monma-Veinott algorithm is available as open source.
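    To make the dynamic programme behind both algorithms concrete, the sketch below shows the classic Dreyfus-Wagner recurrence in Python. It illustrates the 1971 algorithm only, not the parallel Erickson-Monma-Veinott implementation described in the thesis; the function name, graph representation, and use of precomputed all-pairs shortest paths are illustrative choices of ours.

```python
import heapq

def dreyfus_wagner(n, edges, terminals):
    """Cost of a minimum-weight Steiner tree (Dreyfus-Wagner, 1971).

    n: number of vertices 0..n-1; edges: iterable of (u, v, weight) with
    nonnegative weights; terminals: non-empty list of terminal vertices.
    """
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    # All-pairs shortest paths via Dijkstra from every vertex (enough for a sketch).
    dist = []
    for s in range(n):
        d = [INF] * n
        d[s] = 0
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (du + w, v))
        dist.append(d)

    k = len(terminals)
    # f[S][v] = minimum weight of a tree spanning terminal subset S (bitmask) plus vertex v.
    f = [[INF] * n for _ in range(1 << k)]
    for i, t in enumerate(terminals):
        f[1 << i] = dist[t][:]

    for S in range(1, 1 << k):
        if S & (S - 1) == 0:
            continue  # singleton subsets were initialised above
        # Merge step: split S into two non-empty parts that meet at vertex v.
        g = [INF] * n
        for v in range(n):
            sub = (S - 1) & S
            while sub:
                cand = f[sub][v] + f[S ^ sub][v]
                if cand < g[v]:
                    g[v] = cand
                sub = (sub - 1) & S
        # Relax step: connect v to the best merge vertex u along a shortest path.
        for v in range(n):
            f[S][v] = min(g[u] + dist[u][v] for u in range(n))

    return f[(1 << k) - 1][terminals[0]]
```

    For example, on the path 0-1-2-3 with unit edge weights plus an edge (0, 3) of weight 5, dreyfus_wagner(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 5)], [0, 2, 3]) returns 3. The merge step over subset pairs is the source of the exponential dependence on the number of terminals; as the abstract notes, the Erickson-Monma-Veinott variant reorganises the relax step so that the dependence on the graph size becomes linear.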

    Control for Localization and Visibility Maintenance of an Independent Agent using Robotic Teams

    Given a non-cooperative agent, we seek to formulate a control strategy to enable a team of robots to localize and track the agent in a complex but known environment while maintaining a continuously optimized line-of-sight communication chain to a fixed base station. We focus on two aspects of the problem. First, we investigate the estimation of the agent's location by using nonlinear sensing modalities, in particular that of range-only sensing, and formulate a control strategy based on improving this estimation using one or more robots working to independently gather information. Second, we develop methods to plan and sequence robot deployments that will establish and maintain line-of-sight chains for communication between the independent agent and the fixed base station using a minimum number of robots. These methods will lead to feedback control laws that can realize this plan and ensure proper navigation and collision avoidance.
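    The range measurement named above is a nonlinear function of the agent's position, which is what makes the estimation problem non-trivial. Purely as a hedged illustration of that point, the sketch below shows a single extended-Kalman-filter style update for one range-only observation; the thesis's actual estimator and control laws are not specified in the abstract, and the function name, two-dimensional state, and noise parameter are assumptions of ours.

```python
import numpy as np

def range_only_ekf_update(x, P, z, sensor_pos, r_var):
    """One EKF measurement update for a single range-only observation.

    x: (2,) estimated agent position, P: (2, 2) estimate covariance,
    sensor_pos: (2,) position of the measuring robot (NumPy arrays);
    z: measured range (scalar), r_var: range noise variance (scalar).
    """
    diff = x - sensor_pos
    pred = np.linalg.norm(diff)                  # predicted range h(x) = ||x - sensor_pos||
    H = (diff / max(pred, 1e-9)).reshape(1, 2)   # Jacobian of h, guarded against pred == 0
    S = H @ P @ H.T + r_var                      # innovation covariance (1x1)
    K = P @ H.T / S                              # Kalman gain (2x1)
    x_new = x + (K * (z - pred)).ravel()         # state update
    P_new = (np.eye(2) - K @ H) @ P              # covariance update
    return x_new, P_new
```

    A single range fixes the agent only up to a circle around the measuring robot, which is why the abstract's strategy of using one or more robots to gather additional measurements improves the estimate.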

    Combinatorial Optimization

    This report summarizes the meeting on Combinatorial Optimization, where new and promising developments in the field were discussed.

    Algorithmic Approaches to the Steiner Problem in Networks

    The Steiner problem in networks is the problem of connecting a given set of vertices in a weighted graph at minimum cost. It is a classical NP-hard problem and a fundamental problem of network optimisation with many practical applications. We attack this problem with several kinds of tools: relaxations, which loosen the feasibility conditions in order to approximate an optimal solution; heuristics, to find good but not provably optimal solutions; and reductions, to simplify problem instances without destroying an optimal solution. In all cases we examine and improve existing methods, introduce new ones, and evaluate them experimentally. We integrate these building blocks into an exact algorithm that represents the state of the art for solving this problem to optimality. Many of the presented methods can also be useful for related problems.
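    Of the three ingredients listed above, reductions are the easiest to show in a few lines. The sketch below implements the classical degree-1 test (a non-terminal vertex of degree one can never be part of an optimal Steiner tree, so it and its incident edge can be removed); it is a textbook example rather than one of the specific reduction tests developed in the thesis, and the adjacency-dictionary layout is an assumption of ours.

```python
def degree_one_reduction(adj, terminals):
    """Repeatedly delete non-terminal leaves from the graph.

    adj: dict mapping vertex -> dict of neighbour -> edge weight (undirected);
    terminals: set of terminal vertices.  Removing a non-terminal leaf never
    destroys an optimal Steiner tree, so the reduced instance is equivalent.
    """
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in terminals and len(adj[v]) == 1:
                (u,) = adj[v]        # the leaf's single neighbour
                del adj[u][v]
                del adj[v]
                changed = True
    return adj
```

    Deleting a leaf may create new non-terminal leaves, which is why the test is applied until a fixed point is reached.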

    Flow Morphometry of Red Blood Cell Storage Quality Based on Neural Networks

    Red blood cell transfusion is routinely performed to improve tissue oxygenation in patients with decreased hemoglobin levels and oxygen-carrying capacity. Generally, blood banks process and store packed red blood cells as red cell concentrates (RCCs). During storage, RBCs undergo progressive biochemical and morphological changes which are collectively described as the storage lesion. According to regulatory guidelines, the quality of RCCs is assessed by quantifying hemolysis before transfusion. However, the hemolysis level only gives an indication of the already lysed erythrocytes; it does not indicate the degree of deterioration of aged cells, which are known to compromise post-transfusion survival. Morphological analysis, a method that has the potential to provide a simple and practical diagnosis, is suitable for indicating the degradation of RBCs and thus has considerable power to predict actual post-transfusion survival. Microfluidic systems with suspended RBCs can enable fully automated morphological diagnosis based on image analysis with large cell statistics and high sample throughput. The previous version of the flow morphometry system, which was based on a binary decision tree, was able to show in a first attempt that spherocytes are a suitable candidate for such a morphological storage lesion marker. However, due to the low classification resolution (three morphology classes), possible shear-induced morphology changes caused by the measurement system could not be evaluated. In this study, the image classification of the flow morphometry system was substantially enhanced by using a convolutional neural network to strongly improve the resolution and accuracy of the morphology classification. The resulting CNN-based classification achieved a high overall accuracy of 92%, with RBCs being classified into nine morphology classes. Through this improved classification resolution, it was possible to assess degradation-induced morphologies at high resolution simultaneously with shear-induced morphologies in RCCs. The overall goal was to provide a robust and strong marker for the storage lesion that reflects post-transfusion survival of RBCs. Therefore, it was necessary to analyze the extent to which the shear in the microfluidic system affected the morphological transitions between RBC classes. Indeed, it could be shown that shear-induced morphology changes appear to depend on the position of the focal plane height in the flow chamber. The proportion of stomatocytes is increased near the surfaces of the laminar flow chamber. This temporary shear-induced morphology transformation can occur only in flexible erythrocytes with intact membrane properties. Therefore, these cells should be considered a subset of healthy erythrocytes that can reversibly alter between stomatocyte and discocyte morphology. The nine RBC morphology classes of the improved classification resolution were further analyzed to determine whether they exhibit a particular pattern, based on their relative proportions during storage, that could be used as a storage lesion marker. All individual RBC classes, except for the spherical morphologies, undergo reversible transitions among themselves that are related to the stomatocyte-discocyte-echinocyte (SDE) sequence and result in a low signal-to-noise ratio. The proportions of the irreversible spherical morphologies, spheroechinocytes and spherocytes, were defined as the lesion index. This lesion index showed a strong correlation with hemolysis levels.
In fact, the correlation between the hemolysis level and the lesion index was so good that it persisted at the level of individual RCCs. A preliminary lesion index threshold of 11.1% could be established, equivalent to the hemolysis threshold of 0.8% established in regulatory guidelines, to assess whether an RCC is of appropriate quality for transfusion. However, the lesion index, besides predicting the hemolysis level, can also be used to generate more information about post-transfusion survival, since it consists exclusively of the RBC morphologies that are removed by the body within a very short time after transfusion in the recipient. Finally, we translated the newly established lesion index and standard biochemical parameters into a quality assessment of RCCs shipped and transported repeatedly on air rescue missions to assess a possible deterioration of the RBCs. We showed that, after repeated air rescue missions during storage, the quality of RCCs was not inferior to that of control samples. German regulations allow RCCs to be stored for 42 days in a temperature range of +2°C to +6°C. Compliance with this regulation can be secured during air rescue missions by means of suitable logistics based on a rotation system. By using efficient cooling devices, the logistics and maintenance of the thermal conditions are both safe and feasible. A well-defined rotation system for the use of RCCs during routine air rescue missions offers a resource-saving option and enables the provision of RCCs in compliance with German transfusion guidelines. This innovative concept enables life-saving prehospital transfusions directly at the incident scene. CNN-based flow morphometry and the calculated lesion index allow a reliable assessment of RCC quality. The method also decreases the demand for complex laboratory procedures. Therefore, it is highly advisable to include the lesion index as an additional marker for the storage lesion in routine clinical practice. Unlike hemolysis, the lesion index may serve as a good indicator of post-transfusion survival. Thus, both measurements together could provide increased safety and efficacy of stored RCCs.
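    As a small illustration of how the lesion index defined above is used, the sketch below computes it from per-class cell counts and checks it against the preliminary 11.1% threshold mentioned in the abstract; the function name, class labels, and example counts are illustrative assumptions, not the thesis's implementation.

```python
def lesion_index(class_counts):
    """Lesion index: the percentage of irreversibly spherical morphologies
    (spheroechinocytes + spherocytes) among all classified RBCs."""
    total = sum(class_counts.values())
    spherical = class_counts.get("spheroechinocyte", 0) + class_counts.get("spherocyte", 0)
    return 100.0 * spherical / total if total else 0.0

# Preliminary threshold from the abstract (equivalent to the 0.8% hemolysis limit).
LESION_INDEX_THRESHOLD = 11.1

counts = {"discocyte": 7200, "stomatocyte": 950, "echinocyte": 1100,
          "spheroechinocyte": 480, "spherocyte": 270}   # illustrative numbers only
li = lesion_index(counts)
print(f"lesion index = {li:.1f}%, acceptable for transfusion: {li <= LESION_INDEX_THRESHOLD}")
```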

    The Euclidean Steiner Tree Problem: Simulated Annealing and Other Heuristics

    In this thesis the Euclidean Steiner tree problem and the optimisation technique called simulated annealing are studied. In particular, there is an investigation of whether simulated annealing is a viable solution method for the problem. The Euclidean Steiner tree problem is a topological network design problem and is relevant to the design of communication, transportation and distribution networks. The problem is to find the shortest connection of a set of points in the Euclidean plane. Simulated annealing is a generally applicable method for finding solutions of combinatorial optimisation problems. The results of the investigation are very satisfactory. The quality of simulated annealing solutions compares favourably with that of the best known tailored heuristic method for the Euclidean Steiner tree problem.
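    To give a concrete flavour of how simulated annealing can be applied here, the sketch below is a minimal, hypothetical annealer: it perturbs the coordinates of a fixed number of candidate Steiner points and scores a candidate by the length of a Euclidean minimum spanning tree over the terminals plus Steiner points, accepting worse moves with probability exp(-delta / T). The move set, cooling schedule, and scoring used in the thesis are not described in the abstract, so every parameter below is an assumption.

```python
import math
import random

def mst_length(points):
    """Total edge length of a Euclidean minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    if n < 2:
        return 0.0
    in_tree = [False] * n
    best = [math.inf] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], math.dist(points[u], points[v]))
    return total

def anneal_steiner(terminals, n_steiner=2, steps=20000, t0=1.0, alpha=0.9995, sigma=0.05):
    """Hypothetical simulated-annealing sketch for the Euclidean Steiner tree problem.

    terminals: list of (x, y) tuples.  Candidate Steiner points are jiggled by
    Gaussian noise of scale sigma; a move that lengthens the tree is accepted
    with probability exp(-delta / T), and the temperature decays geometrically.
    """
    xs = [p[0] for p in terminals]
    ys = [p[1] for p in terminals]
    steiner = [(random.uniform(min(xs), max(xs)), random.uniform(min(ys), max(ys)))
               for _ in range(n_steiner)]
    cost = mst_length(terminals + steiner)
    temp = t0
    for _ in range(steps):
        i = random.randrange(n_steiner)
        old = steiner[i]
        steiner[i] = (old[0] + random.gauss(0.0, sigma), old[1] + random.gauss(0.0, sigma))
        new_cost = mst_length(terminals + steiner)
        delta = new_cost - cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cost = new_cost          # accept the move
        else:
            steiner[i] = old         # reject and restore the old point
        temp *= alpha
    return steiner, cost
```

    For the four corners of a unit square, for instance, the optimal tree uses two interior Steiner points and is shorter than the terminals' own minimum spanning tree; a simple sketch like this one can only approximate that configuration.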

    Optimization of Contemporary Telecommunications Networks: Generalized Spanning Trees and WDM Optical Networks

    We present a study of two NP-hard telecommunications network design problems: the prize-collecting generalized minimum spanning tree problem (PCGMST) and the design of optical networks with wavelength division multiplexing. The first problem, the PCGMST problem, involves the design of regional backbone networks, where a set of local area networks (LANs) needs to be connected by a minimum cost tree network using exactly one gateway site from each LAN. We present several polynomial time heuristics for the PCGMST problem and show that these algorithms, at best, provide only modest quality solutions. We also present two metaheuristics, a local search procedure and a genetic algorithm, and show that these procedures provide compelling, high-quality results on a large set of test problems. Our study of the PCGMST problem is concluded by a presentation of two exact solution procedures that can be used to find optimal solutions in networks of moderate size. The second problem studied in this dissertation is a more complex network design problem that involves optical networks with wavelength division multiplexing (WDM). These networks provide an abundance of transmission bandwidth but require the use of expensive equipment, which, in turn, mandates careful use of the resources available for their design. The novel aspect of WDM optical networks is that they require the simultaneous design of two network layers. The first layer is the virtual topology, which requires routing of logical paths over the physical layer of optical fibers. The second layer involves routing and grooming of traffic requests over the logical paths established in the virtual topology. This problem has been extensively studied in the last 10 years, but due to its notoriously hard nature, only a few exact solution procedures for relaxed versions of this problem have been developed so far. We propose one exact and two approximate branch-and-price algorithms for two versions of the WDM optical network design problem and present the results of a computational study involving two different design objectives. Finally, we propose two classes of valid inequalities for our branch-and-price algorithms and discuss the applicability of our algorithms to different versions of the WDM optical network design problem.
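    The local search procedure mentioned above is the easiest of these ideas to sketch. The snippet below illustrates only the gateway-selection core of the generalized spanning tree part of the problem (the prize-collecting terms and the exact procedures are omitted): pick one gateway per LAN, then keep swapping a cluster's chosen gateway whenever the swap reduces the cost of a minimum spanning tree over the chosen gateways. The data layout and function names are assumptions of ours, not the dissertation's algorithms.

```python
import random

def mst_cost(nodes, dist):
    """Cost of a minimum spanning tree over the chosen gateways (Prim's algorithm).

    dist is a dict of dicts: dist[u][v] is the cost of connecting gateways u and v.
    """
    nodes = list(nodes)
    if len(nodes) < 2:
        return 0.0
    best = {v: dist[nodes[0]][v] for v in nodes[1:]}   # cheapest link to the growing tree
    total = 0.0
    while best:
        v = min(best, key=best.get)    # attach the cheapest remaining gateway
        total += best.pop(v)
        for u in best:
            best[u] = min(best[u], dist[v][u])
    return total

def gmst_local_search(clusters, dist, iters=1000):
    """Hypothetical local-search sketch for gateway selection.

    clusters: list of lists, one list of candidate gateway sites per LAN.
    Exactly one gateway is kept per LAN; a random swap is accepted whenever it
    lowers the MST cost over the selected gateways.
    """
    choice = [random.choice(c) for c in clusters]
    cost = mst_cost(choice, dist)
    for _ in range(iters):
        i = random.randrange(len(clusters))
        cand = random.choice(clusters[i])
        if cand == choice[i]:
            continue
        trial = choice[:i] + [cand] + choice[i + 1:]
        trial_cost = mst_cost(trial, dist)
        if trial_cost < cost:
            choice, cost = trial, trial_cost
    return choice, cost
```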

    An extensive English language bibliography on graph theory and its applications

    Bibliography on graph theory and its applications.