10,324 research outputs found

    Visualizing three-dimensional graph drawings

    The GLuskap system for interactive three-dimensional graph drawing applies techniques of scientific visualization and interactive systems to the construction, display, and analysis of graph drawings. Important features of the system include support for large-screen stereographic 3D display with immersive head-tracking and motion-tracked interactive 3D wand control. A distributed rendering architecture contributes to the portability of the system, with user control performed on a laptop computer without specialized graphics hardware. An interface for implementing graph drawing layout and analysis algorithms in the Python programming language is also provided. This thesis comprehensively describes the author's work on the system, which includes the design and implementation of the major features described above. Further directions for continued development and research in cognitive tools for graph drawing research are also suggested.
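
    The Python scripting interface mentioned above suggests how layout algorithms plug into the system. The sketch below is a minimal, hypothetical example of a 3D force-directed layout written in plain Python; the function name, graph representation, and constants are illustrative assumptions, not the GLuskap API.

        # Hypothetical sketch: a naive 3D force-directed layout (not the GLuskap API).
        import math
        import random

        def layout_3d(vertices, edges, iterations=200, k=1.0):
            """Assign (x, y, z) positions to the vertices of an undirected graph."""
            pos = {v: [random.uniform(-1, 1) for _ in range(3)] for v in vertices}
            for _ in range(iterations):
                disp = {v: [0.0, 0.0, 0.0] for v in vertices}
                # Repulsive forces between all vertex pairs.
                for v in vertices:
                    for u in vertices:
                        if u == v:
                            continue
                        d = [pos[v][i] - pos[u][i] for i in range(3)]
                        dist = max(math.sqrt(sum(x * x for x in d)), 1e-9)
                        for i in range(3):
                            disp[v][i] += (d[i] / dist) * (k * k / dist)
                # Attractive forces along edges.
                for u, v in edges:
                    d = [pos[v][i] - pos[u][i] for i in range(3)]
                    dist = max(math.sqrt(sum(x * x for x in d)), 1e-9)
                    for i in range(3):
                        disp[v][i] -= (d[i] / dist) * (dist * dist / k)
                        disp[u][i] += (d[i] / dist) * (dist * dist / k)
                # Limit displacement per iteration (simple cooling).
                for v in vertices:
                    length = max(math.sqrt(sum(x * x for x in disp[v])), 1e-9)
                    step = min(length, 0.1)
                    for i in range(3):
                        pos[v][i] += disp[v][i] / length * step
            return pos

        # Example: layout of a 4-cycle in 3D space.
        positions = layout_3d(["a", "b", "c", "d"],
                              [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])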

    3-dimensional Channel Routing

    Consider two parallel planar grids of size w × n. The vertices of these grids are called terminals and pairwise disjoint subsets of terminals are called nets. We aim at routing all nets in a cubic grid between the two layers holding the terminals. However, to ensure solvability, it is allowed to introduce an empty row/column between every two consecutive rows/columns containing the terminals (in both grids). Hence the routing is to be realized in a cubic grid of size 2n × 2w × h. The objective is to minimize the height h. In this paper we generalize previous results of Recski and Szeszlér [10] and show that every problem instance is solvable in polynomial time with height h = O(max(n, w)). This linear bound is best possible (apart from a constant factor).
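
    As a small illustration of the problem setup only (the construction that achieves the O(max(n, w)) height bound is not reproduced here), an instance can be represented by the two grid dimensions and the nets as disjoint sets of terminals; the names and types below are assumptions made for illustration.

        # Illustrative representation of a 3-dimensional channel routing instance.
        from typing import NamedTuple

        class Terminal(NamedTuple):
            layer: int   # 0 = bottom grid, 1 = top grid
            row: int     # 0 .. w-1
            col: int     # 0 .. n-1

        # A net is a set of terminals; nets must be pairwise disjoint.
        nets = [
            {Terminal(0, 0, 0), Terminal(1, 2, 3)},                      # two-terminal net
            {Terminal(0, 1, 1), Terminal(0, 3, 4), Terminal(1, 0, 2)},   # multi-terminal net
        ]

        def routing_volume(w, n, h):
            """Size of the cubic grid after inserting an empty row/column between
            consecutive terminal rows/columns on both layers: 2n x 2w x h."""
            return (2 * n, 2 * w, h)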

    Embedded dynamic programming networks for networks-on-chip

    Relentless technology downscaling and recent technological advancements in three-dimensional integrated circuits (3D-ICs) provide a promising prospect for realizing heterogeneous system-on-chip (SoC) and homogeneous chip multiprocessor (CMP) designs based on the networks-on-chip (NoC) paradigm, with augmented scalability, modularity and performance. In many such systems, scheduling and managing communication resources, rather than the computing resources, are the major design and implementation challenges. Past research efforts focused mainly on complex design-time or simple heuristic run-time approaches to on-chip network resource management, using only local or partial information about the network. This can yield poor communication resource utilization and erode the benefits of the emerging technologies and design methods. Thus, provision for efficient run-time resource management in large-scale on-chip systems becomes critical.

    This thesis proposes a design methodology for a novel run-time resource management infrastructure that can be realized efficiently using a distributed architecture closely coupled with the distributed NoC infrastructure. The proposed infrastructure exploits global information about the status of the network to optimize and manage the on-chip communication resources at run-time. The thesis makes four major contributions. First, it presents a novel deadlock detection method that uses run-time transitive closure (TC) computation to discover the existence of deadlock-equivalence sets, which imply loops of requests in NoCs. This detection scheme, the TC-network, guarantees the discovery of all true deadlocks without false alarms, in contrast to state-of-the-art approximation and heuristic approaches. Second, it investigates the advantages of implementing future on-chip systems using three-dimensional (3D) integration and presents the design, fabrication and testing results of a TC-network implemented in a fully stacked three-layer 3D architecture using a through-silicon via (TSV) complementary metal-oxide-semiconductor (CMOS) technology. Testing results demonstrate the effectiveness of such a TC-network for deadlock detection with minimal computational delay in a large-scale network. Third, it introduces an adaptive strategy to effectively diffuse heat throughout the three-dimensional network-on-chip (3D-NoC) geometry. This strategy employs a dynamic programming technique to select and optimize the direction of data movement in the NoC, and leads to a tool, based on the HotSpot thermal model and a cycle-accurate SystemC model, for simulating the thermal behaviour of the system and evaluating the proposed approach. Fourth, it presents a new dynamic programming-based run-time thermal management (DPRTM) system, including reactive and proactive schemes, to effectively diffuse heat throughout NoC-based CMPs by routing packets through the coolest paths while the temperature does not exceed the chip's thermal limit. When the thermal limit is exceeded, throttling is employed to mitigate heat in the chip, and DPRTM changes course to avoid throttled paths and to minimize the impact of throttling on chip performance.

    This thesis opens a new avenue for exploring a novel run-time resource management infrastructure for NoCs, in which new methodologies and concepts are proposed to enhance on-chip networks for future large-scale 3D integration.
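
    The TC-network contribution rests on the observation that a deadlock corresponds to a loop of outstanding channel requests, which the transitive closure of the request graph exposes. The following is a minimal software sketch of that principle, assuming a simple wait-for-graph abstraction over channels; it is illustrative only and does not model the hardware TC-network described in the thesis.

        # Sketch: detect deadlock as a cycle in a channel wait-for graph via transitive closure.
        def transitive_closure(nodes, edges):
            """Floyd-Warshall style boolean closure: reach[u][v] is True if v is reachable from u."""
            reach = {u: {v: False for v in nodes} for u in nodes}
            for u, v in edges:          # edge u -> v: channel u waits on (requests) channel v
                reach[u][v] = True
            for k in nodes:
                for i in nodes:
                    if reach[i][k]:
                        for j in nodes:
                            if reach[k][j]:
                                reach[i][j] = True
            return reach

        def has_deadlock(nodes, requests):
            """A loop of requests (a deadlock-equivalence set) exists iff some node reaches itself."""
            reach = transitive_closure(nodes, requests)
            return any(reach[v][v] for v in nodes)

        # Example: channels c0 -> c1 -> c2 -> c0 form a request loop, i.e. a deadlock.
        channels = ["c0", "c1", "c2", "c3"]
        print(has_deadlock(channels,
                           [("c0", "c1"), ("c1", "c2"), ("c2", "c0"), ("c3", "c0")]))  # True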

    A system-on-chip vector multiprocessor for transmission line modelling acceleration

    We discuss a configurable, system-on-chip vector multiprocessor for accelerating the Transmission Line Modeling (TLM) algorithm, with an architecture capable of exploiting the two primary forms of parallelism in the code: thread-level and data-level parallelism. Theoretical results demonstrate an order-of-magnitude reduction in the dynamic instruction count for a scalar-processor/vector-coprocessor configuration at a vector length of sixteen 32-bit single-precision elements. Furthermore, a multi-vector SoC architecture consisting of ten such vector accelerators provides a near-linear theoretical performance benefit on the order of 88% in three out of four benchmark configurations, which is orthogonal to the benefit realized by vectorization alone. We discuss this architecture in detail and present implementation data for the 2-way multiprocessor VLSI macrocell.
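
    To give a rough feel for the data-level parallelism exploited here, the scatter step of a standard 2D shunt-node TLM mesh can be expressed over whole arrays of incident pulses rather than node by node. The NumPy formulation and array layout below are assumptions for illustration and do not reflect the macrocell's actual instruction set or node formulation.

        # Sketch: vectorized scatter step of a 2D shunt-node TLM mesh (array layout is assumed).
        import numpy as np

        def tlm_scatter(v_inc):
            """v_inc has shape (4, ny, nx): incident pulses on the 4 ports of every node.
            Standard shunt-node scatter: reflected pulse on port i is (sum of incidents)/2 - incident_i,
            computed here for all nodes at once (data-level parallelism)."""
            total = v_inc.sum(axis=0)      # per-node sum of the four incident pulses
            return 0.5 * total - v_inc     # broadcasts over the port axis

        v_inc = np.random.rand(4, 64, 64).astype(np.float32)   # 64x64 mesh, single precision
        v_ref = tlm_scatter(v_inc)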

    OutFlank Routing: Increasing Throughput in Toroidal Interconnection Networks

    We present a new deadlock-free routing scheme for toroidal interconnection networks, called OutFlank Routing (OFR). OFR is an adaptive strategy that exploits non-minimal links at both the source and the destination nodes. When minimal links are congested, OFR deroutes packets to carefully chosen intermediate destinations in order to obtain travel paths that are only an additive constant longer than the shortest ones. Since routing performance is very sensitive to changes in the traffic model or in the router parameters, an accurate discrete-event simulator of the toroidal network has been developed to empirically validate OFR by comparing it against other relevant routing strategies over a range of typical real-world traffic patterns. On the simulated 16×16×16 (4096-node) network, OFR exhibits improvements in maximum sustained throughput of between 14% and 114% with respect to Adaptive Bubble Routing.
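
    The central mechanism, derouting around congested minimal links by flanking through a non-minimal direction, can be sketched as follows. The congestion test, tie-breaking, and dimension ordering used here are simplified assumptions and do not reproduce the actual OFR selection rule or its deadlock-avoidance machinery.

        # Sketch of derouting on a k-ary torus: prefer the minimal direction in each dimension,
        # fall back to the non-minimal one when the minimal output link is congested.
        def minimal_step(src, dst, k):
            """Signed step (-1, 0, +1) along one dimension of a k-node ring, shortest way around."""
            d = (dst - src) % k
            if d == 0:
                return 0
            return 1 if d <= k // 2 else -1

        def next_hop(node, dest, sizes, congested):
            """node, dest: coordinate tuples; sizes: torus radix per dimension;
            congested(node, dim, step) -> bool for the output link in that direction."""
            for dim, k in enumerate(sizes):
                step = minimal_step(node[dim], dest[dim], k)
                if step == 0:
                    continue
                if congested(node, dim, step):
                    step = -step                  # flank: go the long way around this ring
                nxt = list(node)
                nxt[dim] = (node[dim] + step) % k
                return tuple(nxt)
            return node                           # already at destination

        # Example on a 16x16x16 torus with no congestion:
        print(next_hop((0, 0, 0), (3, 15, 8), (16, 16, 16), lambda n, d, s: False))  # (1, 0, 0)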

    Intelligent approaches to VLSI routing

    Very Large Scale Integration (VLSI) routing involves many large and complex problems, most of which have been shown to be NP-hard or NP-complete. As a result, conventional approaches, which have been used successfully to handle relatively small routing problems, are not suitable for tackling large routing problems because they lead to a 'combinatorial explosion' in search space. Hence, there is a need to explore more efficient routing approaches to be incorporated into today's VLSI routing systems. This thesis strives to use intelligent approaches, including symbolic intelligence and computational intelligence, to solve three VLSI routing problems: three-dimensional (3-D) shortest path connection, switchbox routing, and constrained via minimization.

    The 3-D shortest path connection is a fundamental problem in VLSI routing. It aims to connect two terminals of a net that are distributed in a 3-D routing space, subject to technological constraints and performance requirements. Aiming at increasing computation speed and decreasing storage requirements, we present a new A* algorithm for the 3-D shortest path connection problem in this thesis. This new A* algorithm uses an economical representation and adopts a novel backtrace technique. It is shown that the algorithm is guaranteed to find a path if one exists, and that the path found is the shortest one. In addition, its computation speed is fast, especially when the routed nets are sparse. The computational complexities of this A* algorithm in the best and worst cases are O(ℓ) and O(ℓ³), respectively, where ℓ is the shortest path length between the two terminals. Most importantly, this A* algorithm is superior to other shortest path connection algorithms in that it is economical in storage, requiring only 1 bit per grid cell.

    The switchbox routing problem aims to connect terminals placed at regular intervals on the four sides of a rectangular routing region. From a computational point of view, the problem is NP-hard. Furthermore, it is extremely complicated, and as a consequence no existing algorithm can guarantee to find a solution even if one exists, no matter how high the complexity of the algorithm. Previous approaches to the switchbox routing problem can be divided into algorithmic approaches and knowledge-based approaches. The algorithmic approaches are efficient in computation time, but they are unsuccessful at achieving high routing completion rates, especially for dense and complicated switchbox routing problems. Knowledge-based approaches, on the other hand, can achieve high routing completion rates but are not efficient in computation speed. In this thesis we present a hybrid approach to the switchbox routing problem. This hybrid approach is based on a new knowledge-based routing technique, namely synchronized routing, and combines several efficient algorithmic routing techniques. Experimental results show that it can achieve the high routing completion rate of the knowledge-based approaches together with the high efficiency of the algorithmic approaches.

    Constrained via minimization is an important optimization problem in VLSI routing. Its objective is to minimize the number of vias introduced during routing. From a computational perspective, constrained via minimization is NP-complete, although elegant theoretical results have been obtained for the special case in which the number of wire segments that split at a via candidate is at most three. For the general case, in which more than three wire segments may split at a via candidate, few approaches have been proposed, and those approaches are suitable only for particular routing styles and are difficult or impossible to adjust to meet practical requirements. In this thesis we propose a new graph-theoretic model, namely the switching graph model, for the constrained via minimization problem. The switching graph model can represent both grid-based and gridless routing problems and allows arbitrary wire segment splits at a via candidate. On the basis of this model, we then present the first genetic algorithm for the constrained via minimization problem. This genetic algorithm can handle various routing styles and can be configured to meet practical constraints. Experimental results show that the genetic algorithm can find optimal solutions for most cases in reasonable time.
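
    To make the 3-D shortest path connection setting concrete, the sketch below runs a generic A* search on a 3D grid with an admissible Manhattan-distance heuristic. It illustrates the problem addressed in the thesis, not the proposed algorithm's economical 1-bit-per-grid representation or its backtrace technique.

        # Generic A* shortest path on a 3D routing grid (illustration only).
        # Blocked cells model grid cells already occupied by other nets.
        import heapq

        def astar_3d(dims, blocked, start, goal):
            """dims = (X, Y, Z); blocked = set of occupied cells; returns a shortest path or None."""
            def h(c):   # admissible Manhattan-distance heuristic
                return sum(abs(a - b) for a, b in zip(c, goal))
            openq = [(h(start), 0, start)]
            g = {start: 0}
            parent = {start: None}
            while openq:
                f, cost, cell = heapq.heappop(openq)
                if cell == goal:
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = parent[cell]
                    return path[::-1]
                if cost > g[cell]:
                    continue                      # stale queue entry
                x, y, z = cell
                for nxt in ((x+1,y,z),(x-1,y,z),(x,y+1,z),(x,y-1,z),(x,y,z+1),(x,y,z-1)):
                    if any(c < 0 or c >= d for c, d in zip(nxt, dims)) or nxt in blocked:
                        continue
                    if cost + 1 < g.get(nxt, float("inf")):
                        g[nxt] = cost + 1
                        parent[nxt] = cell
                        heapq.heappush(openq, (g[nxt] + h(nxt), g[nxt], nxt))
            return None

        # Example: route between two terminals in an 8x8x2 grid with one blocked cell.
        print(astar_3d((8, 8, 2), {(1, 0, 0)}, (0, 0, 0), (3, 2, 1)))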
