Practical Minimum Cut Algorithms
The minimum cut problem for an undirected edge-weighted graph asks us to
divide its set of nodes into two blocks while minimizing the weight sum of the
cut edges. Here, we introduce a linear-time algorithm to compute near-minimum
cuts. Our algorithm is based on cluster contraction using label propagation and
Padberg and Rinaldi's contraction heuristics [SIAM Review, 1991]. We give both
sequential and shared-memory parallel implementations of our algorithm.
Extensive experiments on both real-world and generated instances show that our
algorithm finds the optimal cut on nearly all instances, significantly faster
than other state-of-the-art algorithms, while its error rate is lower than that
of other heuristic algorithms. In addition, our parallel algorithm shows good
scalability.
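The cluster-contraction idea can be sketched as follows. This is an illustrative toy, not the paper's implementation (which adds size constraints, the Padberg-Rinaldi heuristics, and parallelism); the function names and the plain-dict graph representation are assumptions. Label propagation groups vertices that share heavy edges, and each cluster is then contracted to a single super-node:

```python
import random
from collections import defaultdict

def label_propagation(adj, rounds=5, seed=0):
    """Each node repeatedly adopts the label with the largest total edge
    weight among its neighbours. adj maps node -> {neighbour: weight}."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            score = defaultdict(float)
            for u, w in adj[v].items():
                score[labels[u]] += w
            best = max(score, key=score.get)
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

def contract(adj, labels):
    """Merge each label class into one super-node, summing parallel edge
    weights and dropping intra-cluster edges."""
    cadj = defaultdict(lambda: defaultdict(float))
    for v, nbrs in adj.items():
        for u, w in nbrs.items():
            a, b = labels[v], labels[u]
            if a != b:
                cadj[a][b] += w
    return {v: dict(nbrs) for v, nbrs in cadj.items()}
```

Contraction preserves all cut values that do not separate a cluster, so a minimum cut of the contracted graph is a (near-)minimum cut of the original whenever the clustering is cut-respecting.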
Replication for Logic Bipartitioning
Logic replication, the duplication of logic in order to limit communication between partitions, is an effective part of a complete partitioning solution. In this paper we seek a better understanding of the important issues in logic replication. By developing new optimizations to existing algorithms we are able to significantly improve the quality of these techniques, achieving up to 12.5% better results than the best existing replication techniques. When integrated into our already state-of-the-art partitioner, we improve overall cutsizes by 37.8%, while requiring the duplication of at most 7% of the logic.
Hierarchical partitioning for field-programmable systems
This paper presents a new recursive bipartitioning algorithm targeted at a hierarchical field-programmable system. It draws new insights into relating the quality of a bipartitioning algorithm to circuit structure through the use of the partitioning tree [11]. The final algorithm proposed not only forms the basis of the partitioning solution for a 1-million-gate Field Programmable System [1] but can also be applied to general VLSI or multiple-FPGA partitioning problems. The reprogrammability of FPGAs has made possible a number of systems for rapid prototyping and emulation. These multiple-FPGA designs, primarily aimed at ASIC applications, tend to be severely pin limited.
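The recursive flow can be sketched generically: keep splitting a block in two until every block fits the target device. This is a minimal sketch under assumed interfaces (the `bisect` callback stands in for any two-way partitioner; the paper's algorithm additionally exploits the partitioning tree and circuit structure):

```python
def recursive_bipartition(nodes, capacity, bisect):
    """Recursively split a node list until every block fits `capacity`.
    `bisect` is any function mapping a node list to two halves."""
    if len(nodes) <= capacity:
        return [nodes]
    left, right = bisect(nodes)
    return (recursive_bipartition(left, capacity, bisect) +
            recursive_bipartition(right, capacity, bisect))

# Usage with a trivial even-split partitioner (a placeholder for a real
# cut-minimizing bipartitioner):
halve = lambda ns: (ns[:len(ns) // 2], ns[len(ns) // 2:])
blocks = recursive_bipartition(list(range(8)), 2, halve)
```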
Evaluating inlining techniques
Abstract: For efficiency and ease of implementation, many compilers implicitly impose an "inlining policy" to restrict the conditions under which a procedure may be inlined. An inlining technique consists of an inlining policy and a strategy for choosing a sequence of inlining operations that is consistent with the policy. The effectiveness of an inlining technique is affected by the restrictiveness of the inlining policy as well as the effectiveness of the (heuristic) inlining strategy. The focus of this paper is on the comparison of inlining policies and techniques, and the notions of power and flexibility are introduced. As a major case study, we identify and compare policies based on the version of the inlined procedure that is used.
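The policy/strategy split can be made concrete with a toy example. Both functions below are hypothetical illustrations, not from the paper (whose case-study policies concern which version of the inlined procedure is used, not callee size): the policy is a predicate admitting an inlining operation, and the strategy chooses a sequence of operations consistent with it.

```python
def size_bound_policy(callee_size, budget=40):
    """A toy policy: a call may be inlined only if the callee is small."""
    return callee_size <= budget

def greedy_strategy(call_sites, sizes, policy):
    """A toy strategy: visit call sites in order and inline every one the
    policy admits. Returns the chosen (caller, callee) operations."""
    inlined = []
    for caller, callee in call_sites:
        if policy(sizes[callee]):
            inlined.append((caller, callee))
    return inlined
```

A more restrictive policy shrinks the space of legal operation sequences (less "power"), while a weak strategy may fail to exploit the sequences a permissive policy allows.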
A Hypergraph Framework for Optimal Model-Based Decomposition of Design Problems
Decomposition of large engineering system models is desirable since increased model size reduces the reliability and speed of numerical solution algorithms. The article presents a methodology for optimal model-based decomposition (OMBD) of design problems, whether or not initially cast as optimization problems. The overall model is represented by a hypergraph and is optimally partitioned into weakly connected subgraphs that satisfy decomposition constraints. Spectral graph-partitioning methods together with iterative improvement techniques are proposed for hypergraph partitioning. A known spectral K-partitioning formulation, which accounts for partition sizes and edge weights, is extended to graphs with vertex weights as well. The OMBD formulation is robust enough to account for computational demands and resources and the strength of interdependencies between the computational modules contained in the model.
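The spectral core of such methods is bisection by the Fiedler vector. The sketch below shows plain spectral bisection only, as an assumed baseline; the paper's formulation is K-way and additionally handles vertex weights, partition sizes, and hypergraph (not just graph) structure:

```python
import numpy as np

def spectral_bisection(W):
    """Split a weighted undirected graph into two blocks using the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue).
    W is a symmetric weight matrix; returns a 0/1 label per vertex."""
    d = W.sum(axis=1)
    L = np.diag(d) - W              # graph Laplacian L = D - W
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    # split at the median so the two blocks are balanced in size
    return (fiedler > np.median(fiedler)).astype(int)
```

Intuitively, the Fiedler vector relaxes the discrete partition indicator, so vertices with similar coordinates are weakly separated by edges and end up in the same block.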
The robust design of complex systems
Robust Engineering Design has evolved as an important methodology for the integration of quality with the process of design. The methodology encompasses the disciplines of experimental design, model building and optimization. First, an experiment is conducted on a system (or a simulation of the system); second, a model is built to emulate the system; and finally, the emulation model is used to optimize the system design. Applying these methods to large problems can be difficult and time-consuming because of the complexity of most design problems. It is the goal of this thesis to introduce methods which reduce problem complexity and so make the application of Robust Engineering Design (RED) methodology easier for large design problems.
By drawing from methods used in systems theory and circuit optimization several techniques are presented with the aim of reducing the complexity of performing experiments for Robust Engineering Design. A common framework for experimentation is created by combining a commercial circuit simulator with established methods for experimental design and model building. This provides the basis for experimentation in subsequent chapters. A method of design optimization with respect to quality is presented to complete the model-based Robust Engineering Design cycle.
Three approaches to reducing problem complexity are adopted. First, a method of system decomposition is applied directly to an electronic circuit to reduce the size of experiment required for RED. Second, a method of modelling system response functions is described which integrates the action of the circuit simulator with the model-building process. Third, information about system topology is used in the design of experiments to enhance the model-building process. Conclusions are drawn about the effectiveness of the approaches described with respect to their impact on problem complexity.