
    Optimal Flood Control

    A mathematical model for optimal control of the water levels in a chain of reservoirs is studied. Some remarks are made regarding sensitivity with respect to the time horizon, the terminal cost, and the forecast of inflow.

    Routing for analog chip designs at NXP Semiconductors

    During the study week 2011 we worked on the question of how to automate certain aspects of the design of analog chips. Here we focused on the task of connecting different blocks with electrical wiring, which is particularly tedious to do by hand. For digital chips there is a wealth of research available for this, as in that situation the number of blocks makes it hopeless to do the design by hand. Hence, we set ourselves the task of finding solutions that build on this previous research while being tailored to the specific setting given by NXP. This resulted in a heuristic approach, which we presented at the end of the week in the form of a prototype tool. In this report we give a detailed account of the ideas we used and describe possibilities to extend the approach.

    Minimum d-dimensional arrangement with fixed points

    In the Minimum $d$-Dimensional Arrangement Problem (d-dimAP) we are given a graph with edge weights, and the goal is to find a one-to-one map of the vertices into $\mathbb{Z}^d$ (for some fixed dimension $d \geq 1$) minimizing the total weighted stretch of the edges. This problem arises in VLSI placement and chip design. Motivated by these applications, we consider a generalization of d-dimAP, where the positions of some of the vertices (pins) are fixed and specified as part of the input. We are asked to extend this partial map to a map of all the vertices, again minimizing the weighted stretch of edges. This generalization, which we refer to as d-dimAP+, arises naturally in these application domains (since it can capture blocked-off parts of the board, the requirement that power-carrying pins lie in certain locations, etc.). Perhaps surprisingly, very little is known about this problem from an approximation viewpoint. For dimension $d=2$, we obtain an $O(k^{1/2} \cdot \log n)$-approximation algorithm, based on a strengthening of the spreading-metric LP for 2-dimAP. The integrality gap for this LP is shown to be $\Omega(k^{1/4})$. We also show that it is NP-hard to approximate 2-dimAP+ within a factor better than $\Omega(k^{1/4-\varepsilon})$. We also consider a (conceptually harder, but practically even more interesting) variant of 2-dimAP+, where the target space is the grid $\mathbb{Z}_{\sqrt{n}} \times \mathbb{Z}_{\sqrt{n}}$ instead of the entire integer lattice $\mathbb{Z}^2$. For this problem, we obtain an $O(k \cdot \log^2 n)$-approximation using the same LP relaxation. We complement this upper bound by showing an integrality gap of $\Omega(k^{1/2})$ and an $\Omega(k^{1/2-\varepsilon})$-inapproximability result. Our results naturally extend to the case of arbitrary fixed target dimension $d \geq 1$.
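
    To make the objective concrete, here is a minimal sketch of the d-dimAP cost for a given placement. The abstract does not pin down the norm used for "stretch"; this sketch assumes the L1 (Manhattan) distance on $\mathbb{Z}^d$, and the toy instance below is illustrative.

        # Hypothetical helper: total weighted stretch of a placement, assuming
        # "stretch" means the L1 distance between endpoint positions.
        def total_weighted_stretch(edges, pos):
            """edges: iterable of (u, v, weight); pos: dict vertex -> tuple in Z^d."""
            return sum(w * sum(abs(a - b) for a, b in zip(pos[u], pos[v]))
                       for u, v, w in edges)

        # Toy 2-dimAP+ instance: vertex 'p' is a pin fixed at (0, 0); the free
        # vertices have already been assigned grid positions.
        edges = [('p', 'a', 2.0), ('a', 'b', 1.0)]
        pos = {'p': (0, 0), 'a': (1, 0), 'b': (1, 1)}
        print(total_weighted_stretch(edges, pos))  # 2*1 + 1*1 = 3.0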

    Travelling on Graphs with Small Highway Dimension

    We study the Travelling Salesperson problem (TSP) and the Steiner Tree problem (STP) in graphs of low highway dimension. This graph parameter was introduced by Abraham et al. [SODA 2010] as a model for transportation networks, on which TSP and STP naturally occur for various applications in logistics. It was previously shown [Feldmann et al. ICALP 2015] that these problems admit a quasi-polynomial time approximation scheme (QPTAS) on graphs of constant highway dimension. We demonstrate that a significant improvement is possible in the special case when the highway dimension is 1, for which we present a fully polynomial-time approximation scheme (FPTAS). We also prove that STP is weakly NP-hard for these restricted graphs. For TSP we show NP-hardness for graphs of highway dimension 6, which answers an open problem posed in [Feldmann et al. ICALP 2015].

    Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks

    We present a metaheuristic for non-convex optimization, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high-dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one that is peaked at high-performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high-dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better, with fewer function evaluations, than state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
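
    A minimal numpy sketch of the core idea only, not the authors' architecture: a Gaussian "generator" whose parameters are evolved using populations of sampled local gradients, so that the sampling distribution drifts toward high-performing optima and progressively narrows. The objective, learning rate, population size, and finite-difference gradients are all illustrative assumptions.

        import numpy as np

        def f(x):                       # illustrative objective: sphere, minimum at 0
            return np.sum(x**2, axis=-1)

        def grad_f(x, h=1e-5):          # sampled local gradients (finite differences)
            d = x.shape[-1]
            e = np.eye(d)
            return np.stack([(f(x + h*e[i]) - f(x - h*e[i])) / (2*h)
                             for i in range(d)], axis=-1)

        rng = np.random.default_rng(0)
        dim, pop, lr = 10, 64, 0.05
        mu, sigma = rng.normal(size=dim), 1.0
        for step in range(200):
            x = mu + sigma * rng.normal(size=(pop, dim))   # sample a population
            g = grad_f(x)                                  # local gradients at samples
            keep = np.argsort(f(x))[:pop // 2]             # favor high performers
            mu -= lr * g[keep].mean(axis=0)                # move the distribution
            sigma *= 0.99                                  # progressively narrow it
        print(f(mu[None])[0])  # should be close to 0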

    On the Edge-length Ratio of Outerplanar Graphs

    We show that any outerplanar graph admits a planar straight-line drawing such that the ratio of the longest to the shortest edge length is strictly less than 2. This result is tight in the sense that for any ε > 0 there are outerplanar graphs that cannot be drawn with an edge-length ratio smaller than 2 − ε. We also show that this ratio cannot be bounded if the embeddings of the outerplanar graphs are given.
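
    For reference, this small helper computes the quantity the result bounds: the ratio of the longest to the shortest edge in a straight-line drawing. The triangle below is an arbitrary outerplanar example, not a drawing produced by the paper's construction.

        from math import dist

        def edge_length_ratio(edges, pos):
            lengths = [dist(pos[u], pos[v]) for u, v in edges]
            return max(lengths) / min(lengths)

        pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 0.9)}   # a triangle (outerplanar)
        print(edge_length_ratio([(0, 1), (1, 2), (2, 0)], pos))  # close to 1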

    A PageRank-based heuristic for the minimization of open stacks problem

    The minimization of open stacks problem (MOSP) aims to determine the production sequence that best optimizes the occupation of physical space in manufacturing settings. Most current methods for solving the MOSP were not designed to work with large instances, precluding their use in certain cases of similarly modeled problems. We therefore propose a PageRank-based heuristic to solve large instances modeled as graphs. In computational experiments, both data from the literature and new datasets with inputs up to 25 times larger than current datasets, totaling 1330 instances, were analyzed to compare the proposed heuristic with state-of-the-art methods. The results showed the competitiveness of the proposed heuristic in terms of quality, as it found optimal solutions in several cases, and in terms of run time, which was shorter than that of the fastest available method. Furthermore, for specific graph densities, we found that the difference in solution value between methods was small, justifying the use of the fastest method. The proposed heuristic is scalable and is more affected by graph density than by size.
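
    A hedged sketch of how PageRank scores could drive a production sequence on the MOSP graph (vertices are patterns, edges join patterns sharing a piece type). The abstract does not specify how scores become an ordering; sorting by descending score is one plausible reading, and the adjacency matrix below is a toy assumption.

        import numpy as np

        def pagerank(adj, damping=0.85, iters=100):
            """adj: symmetric 0/1 adjacency matrix of the MOSP graph."""
            n = adj.shape[0]
            out = adj.sum(axis=1, keepdims=True)
            p = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)  # row-stochastic
            r = np.full(n, 1.0 / n)
            for _ in range(iters):
                r = (1 - damping) / n + damping * r @ p
            return r

        adj = np.array([[0, 1, 1, 0],
                        [1, 0, 1, 0],
                        [1, 1, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        order = np.argsort(-pagerank(adj))
        print(order)  # a candidate sequence in which to process the patterns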

    Statistical mechanics approaches to optimization and inference

    Nowadays, methodologies typical of statistical physics are successfully applied to a huge set of problems arising from different research fields. In this thesis I propose several statistical mechanics based models able to deal with two types of problems: optimization and inference problems. The intrinsic difficulty that characterizes both problems is that, due to their hard combinatorial nature, finding exact solutions would require impractical computations: in almost all cases, the time needed to perform these calculations scales exponentially with the relevant parameters of the system. While combinatorial optimization addresses the problem of finding a suitable configuration of variables that minimizes/maximizes an objective function, inference seeks a posteriori the best assignment of a set of variables given partial knowledge of the system. Both problems can be re-phrased in a statistical mechanics framework in which elementary components of a physical system interact according to the constraints of the original problem. The information at our disposal can be encoded in the Boltzmann distribution of the new variables which, if properly investigated, can provide the solutions to the original problems. As a consequence, the methodologies originally adopted in statistical mechanics to study and, eventually, approximate the Boltzmann distribution can be fruitfully applied to solve inference and optimization problems. The structure of the thesis follows the path covered during the three years of my Ph.D. First, I propose a set of combinatorial optimization problems on graphs, the Prize-Collecting and the Packing of Steiner Trees problems. The tools used to face these hard problems rely on the zero-temperature implementation of the Belief Propagation algorithm, called the Max-Sum algorithm. The second set of problems proposed in this thesis falls under the name of linear estimation problems. One of them, the compressed sensing problem, guides the modelling of these problems within a Bayesian framework, along with the introduction of a powerful algorithm known as Expectation Propagation, or Expectation Consistent in statistical physics. I propose a similar approach to other challenging problems: the inference of metabolic fluxes, the inverse problem of electro-encephalography, and the reconstruction of tomographic images.
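
    In symbols, the re-phrasing described above takes the standard Boltzmann form (a textbook identity, with $E$ denoting the cost or constraint-violation energy of the original problem and $\beta$ the inverse temperature):

        P_\beta(\sigma) = \frac{e^{-\beta E(\sigma)}}{Z(\beta)},
        \qquad
        Z(\beta) = \sum_{\sigma} e^{-\beta E(\sigma)}

    As $\beta \to \infty$, $P_\beta$ concentrates on the minimizers of $E$, which is why the zero-temperature (Max-Sum) limit of Belief Propagation targets optimization, while finite $\beta$ corresponds to the posterior exploration needed for inference.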

    VLSI Routing for Advanced Technology

    Routing is a major step in VLSI design, the design process of complex integrated circuits (commonly known as chips). The basic task in routing is to connect predetermined locations on a chip (pins) with wires which serve as electrical connections. One main challenge in routing for advanced chip technology is the increasing complexity of design rules which reflect manufacturing requirements. In this thesis we investigate various aspects of this challenge. First, we consider polygon decomposition problems in the context of VLSI design rules. We introduce different width notions for polygons which are important for width-dependent design rules in VLSI routing, and we present efficient algorithms for computing width-preserving decompositions of rectilinear polygons into rectangles. Such decompositions are used in routing to allow for fast design rule checking. A main contribution of this thesis is an O(n) time algorithm for computing a decomposition of a simple rectilinear polygon with n vertices into O(n) rectangles, preserving two-dimensional width. Here the two-dimensional width at a point of the polygon is defined as the edge length of a largest square that contains the point and is contained in the polygon. In order to obtain these results we establish a connection between such decompositions and Voronoi diagrams. Furthermore, we consider implications of multiple patterning and other advanced design rules for VLSI routing. The main contribution in this context is the detailed description of a routing approach which is able to manage such advanced design rules. As a main algorithmic concept we use multi-label shortest paths, where certain path properties (which model design rules) can be enforced by defining labels assigned to path vertices and allowing only certain label transitions. The described approach has been implemented in BonnRoute, a VLSI routing tool developed at the Research Institute for Discrete Mathematics, University of Bonn, in cooperation with IBM. We present experimental results confirming that a flow combining BonnRoute and an external cleanup step produces far superior results compared to an industry standard router. In particular, our proposed flow runs more than twice as fast, reduces the via count by more than 20%, the wiring length by more than 10%, and the number of remaining design rule errors by more than 60%. These results, obtained by applying our multiple patterning approach to real-world chip instances provided by IBM, are another main contribution of this thesis. We note that IBM uses our proposed combined BonnRoute flow as the default tool for signal routing.
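
    A minimal sketch of the multi-label shortest-path concept described above: Dijkstra over (vertex, label) states, where a transition table specifies which label changes are legal (labels standing in for design-rule states). The graph, labels, and transition table here are illustrative assumptions, not BonnRoute's actual model.

        import heapq

        def multilabel_dijkstra(edges, transitions, source, target, labels):
            """edges: {u: [(v, cost)]}; transitions: set of allowed (label, label) pairs."""
            dist = {(source, l): 0.0 for l in labels}   # start in any label state
            pq = [(0.0, source, l) for l in labels]
            heapq.heapify(pq)
            while pq:
                d, u, lu = heapq.heappop(pq)
                if u == target:
                    return d
                if d > dist.get((u, lu), float('inf')):
                    continue                             # stale heap entry
                for v, c in edges.get(u, []):
                    for lv in labels:
                        # only relax along legal label transitions
                        if (lu, lv) in transitions and d + c < dist.get((v, lv), float('inf')):
                            dist[(v, lv)] = d + c
                            heapq.heappush(pq, (d + c, v, lv))
            return float('inf')

        # Toy rule: a path may go from 'wide' to 'narrow' wiring but never back.
        edges = {'a': [('b', 1.0)], 'b': [('c', 1.0)]}
        labels = {'wide', 'narrow'}
        transitions = {('wide', 'wide'), ('wide', 'narrow'), ('narrow', 'narrow')}
        print(multilabel_dijkstra(edges, transitions, 'a', 'c', labels))  # 2.0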