5,487 research outputs found

    Standard Transistor Array (STAR). Volume 1: Placement technique

    A large scale integration (LSI) technology, the standard transistor array uses a prefabricated understructure of transistors and a comprehensive library of digital logic cells to allow efficient fabrication of semicustom digital LSI circuits. The cell placement technique for this technology involves formation of a one dimensional cell layout and "folding" of the one dimensional placement onto the chip. It was found that, by use of various folding methods, high quality chip layouts can be achieved. Methods developed to measure the "goodness" of the generated placements include efficient means for estimating channel usage requirements and for via counting. The placement and rating techniques were incorporated into a placement program (CAPSTAR). By means of repetitive use of the folding methods and simple placement improvement strategies, this program provides near optimum placements in a reasonable amount of time. The program was tested on several typical LSI circuits to provide performance comparisons both with respect to input parameters and with respect to the performance of other placement techniques. The results of this testing indicate that near optimum placements can be achieved by use of these procedures without incurring severe time penalties.
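
    The folding step described above lends itself to a small illustration. The sketch below is a hypothetical example only, not the CAPSTAR implementation: it wraps a one dimensional cell ordering onto a fixed number of rows in a serpentine pattern so that cells adjacent in the 1D placement stay close together on the chip. The function name and row-count parameter are invented for the example.

```python
# Hypothetical example of a "folding" step: a linear cell ordering is
# wrapped onto a fixed number of rows in a serpentine pattern, so that
# cells adjacent in the 1D layout remain close on the 2D chip.
# Function name and parameters are illustrative, not from CAPSTAR.

def serpentine_fold(linear_order, num_rows):
    """Fold a 1D cell ordering into num_rows rows, reversing every other row."""
    row_length = -(-len(linear_order) // num_rows)  # ceiling division
    rows = [linear_order[i:i + row_length]
            for i in range(0, len(linear_order), row_length)]
    # Reverse alternate rows so the layout snakes back and forth across the chip.
    return [row[::-1] if r % 2 else row for r, row in enumerate(rows)]


if __name__ == "__main__":
    cells = [f"c{i}" for i in range(10)]
    for row in serpentine_fold(cells, num_rows=3):
        print(row)
```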

    Limitations and opportunities for wire length prediction in gigascale integration

    Wires have become a major bottleneck in current VLSI designs, and wire length prediction is therefore essential to overcome these bottlenecks. Wire length prediction is broadly classified into two types: macroscopic prediction, which is the prediction of the wire length distribution, and microscopic prediction, which is the prediction of individual wire lengths. The objective of this thesis is to develop a clear understanding of the limitations of both macroscopic and microscopic a priori, post-placement, pre-routing wire length prediction, and thereby develop better wire length prediction models. Investigations carried out to understand the limitations of macroscopic prediction reveal that, in a given design, (i) the variability of the wire length distribution increases with length and (ii) the use of Rent's rule with a constant Rent's exponent p to calculate the terminal count of a given block size limits the accuracy of the results from a macroscopic model. Therefore, a new model for the parameter p is developed to more accurately reflect the terminal count of a given block size in placement, and using this, a new, more accurate macroscopic model is developed. In addition, a model to predict the variability is also incorporated into the macroscopic model. Studies to understand the limitations of microscopic prediction reveal that (i) only a fraction of the wires in a given design are predictable, and these are mostly from shorter nets with smaller degrees, and (ii) current microscopic prediction models are built on the assumption that a single metric can accurately predict the individual lengths of all the wires in a design. In this thesis, an alternative microscopic model is developed for predicting the shorter wires, based on the hypothesis that multiple metrics influence the length of the wires. Three different metrics are developed and fitted into a heuristic classification tree framework to provide a unified and more accurate microscopic model. Ph.D. Committee Chair: Dr. Jeff Davis; Committee Member: Dr. James D. Meindl; Committee Member: Dr. Paul Kohl; Committee Member: Dr. Scott Wills; Committee Member: Dr. Sung Kyu Li
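
    Rent's rule, which the thesis identifies as a limiting factor when used with a constant exponent, estimates the terminal count T of a block of N cells as T = t * N^p. The sketch below is illustrative only; the size-dependent exponent shown is a placeholder to show why p might vary with block size, not the new model for p developed in the thesis.

```python
# Illustrative only: Rent's rule estimates the terminal count T of a block
# of N cells as T = t * N**p. The constant-exponent form is shown next to
# a placeholder size-dependent exponent; the latter is NOT the model for p
# developed in the thesis, just an example of why p may vary with block size.

def terminal_count(n_cells, t=3.5, p=0.65):
    """Classical Rent's rule with a constant exponent p."""
    return t * n_cells ** p

def terminal_count_variable_p(n_cells, t=3.5, p_small=0.75, p_large=0.55, n_ref=1000.0):
    """Rent's rule with a size-dependent exponent (placeholder form)."""
    # Blend between a small-block and a large-block exponent as the block grows.
    w = 1.0 / (1.0 + n_cells / n_ref)
    p = w * p_small + (1.0 - w) * p_large
    return t * n_cells ** p

if __name__ == "__main__":
    for n in (10, 100, 1000, 10000):
        print(n, round(terminal_count(n), 1), round(terminal_count_variable_p(n), 1))
```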

    Capture and reconstruction of the topology of undirected graphs from partial coordinates: a matrix completion based approach

    2017 Spring. Includes bibliographical references. With the advancement in science and technology, new types of complex networks have become commonplace across varied domains such as computer networks, the Internet, biotechnological studies, sociology, and condensed matter physics. The surge of research interest in graphs and topology can be attributed to important applications such as graph representation of words in computational linguistics, identification of terrorists for national security, studying complicated atomic structures, and modeling connectivity in condensed matter physics. Well-known social networks, Facebook and Twitter, have millions of users, while the science citation index is a repository of millions of records and citations. These examples indicate the importance of efficient techniques for measuring, characterizing, and mining large and complex networks. Analysis of graph attributes to understand the topology and embedded properties of these complex graphs often becomes difficult due to the need to process huge data volumes, the lack of compressed representation forms, and the lack of complete information. Due to improper or inadequate acquisition processes, inaccessibility, etc., we often end up with partial graph representation data. Thus there is immense significance in being able to extract this missing information from the available data. Therefore, obtaining the topology of a graph, such as a communication network or a social network, from incomplete information is our research focus. Specifically, this research addresses the problem of capturing and reconstructing the topology of a network from a small set of path length measurements. An accurate solution for this problem also provides a means of describing graphs with a compressed representation. A technique to obtain the topology from only a partial set of information about network paths is presented. Specifically, we demonstrate the capture of the network topology from a small set of measurements corresponding to (a) shortest hop distances of nodes with respect to a small set of nodes called anchors, or (b) a set of pairwise hop distances between random node pairs. These two measurement sets can be related to the distance matrix D, a common representation of the topology, where an entry contains the shortest hop distance between two nodes. In an anchor based method, the shortest hop distances of nodes to a set of M anchors constitute what is known as a Virtual Coordinate (VC) matrix. This is a submatrix of columns of D corresponding to the anchor nodes. Random pairwise measurements correspond to a random subset of elements of D. The proposed technique depends on a low rank matrix completion method based on extended Robust Principal Component Analysis to extract the unknown elements. The application of the principles of matrix completion relies on the conjecture that many natural data sets are inherently low dimensional and thus the corresponding matrix has relatively low rank. We demonstrate that this is applicable to the D of many large-scale networks as well. Thus we are able to use results from the theory of matrix completion for capturing the topology. Two important types of graphs have been used for evaluation of the proposed technique, namely, Wireless Sensor Network (WSN) graphs and social network graphs.
    For WSN examples, we use the Topology Preserving Map (TPM), which is a homeomorphic representation of the original layout, to evaluate the effectiveness of the technique from partial sets of entries of the VC matrix. A double centering based approach is used to evaluate the TPMs from VCs, in comparison with the existing non-centered approach. Results are presented for both random anchors and nodes that are farthest apart on the boundaries. The idea of obtaining topology is extended towards social network link prediction. The significance of this result lies in the fact that, with increasing privacy concerns, obtaining the data in the form of a VC matrix or a hop distance matrix becomes difficult. Predicting the unknown entries of a matrix thus provides a novel approach to social network link prediction, supported by the fact that the distance matrices of most real world networks are naturally low ranked. The accuracy of the proposed techniques is evaluated using 4 different WSNs and 3 different social networks. Two 2D and two 3D networks have been used for WSNs, with the number of nodes ranging from 500 to 1600. We are able to obtain accurate TPMs for both random anchors and extreme anchors with only 20% to 40% of the VC matrix entries. The mean error quantifies the error introduced in TPMs due to unknown entries. The results indicate that even with 80% of entries missing, the mean error is around 35% to 45%. The Facebook, Collaboration, and Enron Email sub networks, with 744, 4158, and 3892 nodes respectively, have been used for social network capture. The results obtained are very promising: with 80% of the information missing in the hop-distance matrix, a maximum error of only around 6% is incurred, and the error in prediction of hop distance is less than 0.5 hops. This has also opened up the idea of compressed representation of networks by their VC matrices.
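
    As a rough illustration of the two ingredients described above, the sketch below completes a partially observed hop-distance matrix with plain singular value thresholding and then applies the classical double-centering step to obtain 2D coordinates. It is a simplified baseline under assumed parameters, not the extended Robust Principal Component Analysis formulation used in the dissertation.

```python
import numpy as np

# Illustrative sketch only: the dissertation uses an extended Robust PCA
# formulation, whereas this baseline completes the partially observed
# hop-distance matrix D with simple rank truncation (SVT-style iteration)
# and then recovers 2D coordinates by the classical double-centering step.
# All parameter values are assumptions for the example.

def complete_distance_matrix(D_obs, mask, rank=10, iters=200):
    """Fill the unobserved entries of a hop-distance matrix (mask==True where known)."""
    X = np.where(mask, D_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-truncated estimate
        X = np.where(mask, D_obs, X_low)               # keep known entries fixed
    return 0.5 * (X + X.T)                             # hop distances are symmetric

def double_center_embedding(D, dim=2):
    """Classical MDS: double-center the squared distances and take the top eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```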

    Information visualization for DNA microarray data analysis: A critical review

    Graphical representation may provide an effective means of making sense of the complexity and sheer volume of data produced by DNA microarray experiments that monitor the expression patterns of thousands of genes simultaneously. The ability to use "abstract" graphical representations to draw attention to areas of interest, and more in-depth visualizations to answer focused questions, would enable biologists to move from a large amount of data to the particular records they are interested in, and therefore gain deeper insight into the microarray experiment results. This paper starts by providing some background knowledge of microarray experiments, then explains how graphical representation can be applied in general to this problem domain, followed by exploring the role of visualization in gene expression data analysis. Having set the problem scene, the paper then examines various multivariate data visualization techniques that have been applied to microarray data analysis. These techniques are critically reviewed so that the strengths and weaknesses of each technique can be tabulated. Finally, several key problem areas as well as possible solutions to them are discussed as a source for future work.

    Towards Collaborative Intelligence: Routability Estimation based on Decentralized Private Data

    Applying machine learning (ML) in the design flow is a popular trend in EDA, with applications ranging from design quality prediction to optimization. Despite its promise, which has been demonstrated in both academic research and industrial tools, its effectiveness largely hinges on the availability of a large amount of high-quality training data. In reality, EDA developers have very limited access to the latest design data, which is owned by design companies and mostly confidential. Although one can commission ML model training to a design company, the data of a single company might still be inadequate or biased, especially for small companies. This data availability problem is becoming the limiting constraint on the future growth of ML for chip design. In this work, we propose a Federated Learning based approach for well-studied ML applications in EDA. Our approach allows an ML model to be collaboratively trained with data from multiple clients but without explicit access to the data, respecting their data privacy. To further strengthen the results, we co-design a customized ML model, FLNet, and its personalization under the decentralized training scenario. Experiments on a comprehensive dataset show that collaborative training improves accuracy by 11% compared with individual local models, and our customized model FLNet significantly outperforms the best of previous routability estimators in this collaborative training flow. Comment: 6 pages, 2 figures, 5 tables, accepted by DAC'2
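
    A minimal sketch of the federated-averaging style of training described above is given below, assuming a PyTorch model and per-client data loaders. FLNet itself and its personalization scheme are not reproduced; all names, the loss choice, and the hyperparameters are placeholders.

```python
import copy
import torch

# Hedged sketch of a federated-averaging training round: each client trains
# a local copy of a routability model on its own private data, and only the
# weights are sent back and averaged on the server. The model and the data
# loaders are placeholders; this is not the FLNet implementation.

def federated_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        loss_fn = torch.nn.BCEWithLogitsLoss()      # e.g. hotspot / no-hotspot label
        local.train()
        for _ in range(local_epochs):
            for features, labels in loader:         # raw layout data never leaves the client
                opt.zero_grad()
                loss = loss_fn(local(features), labels)
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
    # Server side: equal-weight average of client weights, parameter by parameter
    # (FedAvg typically weights clients by dataset size).
    avg_state = {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
                 for k in client_states[0]}
    global_model.load_state_dict(avg_state)
    return global_model
```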

    Design methodology and productivity improvement in high speed VLSI circuits

    2017 Spring. Includes bibliographical references. To view the abstract, please see the full text of the document.

    Machine Learning Techniques to Evaluate the Approximation of Utilization Power in Circuits

    The need for products that are more streamlined, more useful, and have longer battery lives is rising in today's culture. To meet it, more components are being integrated onto smaller, more complex chips. The outcome is higher total power consumption as a result of increased power dissipation brought on by dynamic and static currents in integrated circuits (ICs). For effective power planning and the precise application of power pads and strips by floorplan engineers, estimating power dissipation at an early stage is essential, and power estimation accuracy increases with more information about the design attributes. Radial basis function neural networks (RBFNNs), the machine learning models examined in this study, offer a coherent framework for a variety of applications, including function approximation, regularization, noisy interpolation, classification, and density estimation. RBFNN training is also quicker than training multi-layer perceptron networks. RBFNN learning typically comprises an unsupervised phase for determining the centers and widths of the Gaussian basis functions, followed by a linear supervised phase for computing the weights. This study investigates several learning techniques for estimating the synaptic weights, widths, and centers of RBFNNs. RBF networks, a traditional family of supervised learning algorithms, are examined, and two popular regularization techniques are considered: one using centers found by k-means clustering and one penalizing the squared norm of the network coefficients. It is demonstrated that each of these RBF techniques can be rewritten as a data-dependent kernel. Due to their adaptability and quicker training time compared with multi-layer perceptron networks, RBFNNs present a compelling alternative to conventional neural network models. Along with experimental data, the research offers a theoretical analysis of these techniques, indicating competitive performance and a few advantages over traditional kernel techniques in terms of adaptability (the ability to take unlabeled data into account) and computational complexity. The research also discusses recent achievements in using soft k-means features for image identification and other tasks.
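
    The two-phase training procedure described above can be sketched as follows, assuming a simple regression setting: an unsupervised phase chooses Gaussian centers with k-means and a shared width heuristic, and a supervised linear phase fits the output weights with a ridge penalty on the squared norm of the coefficients. The width heuristic and all hyperparameters are assumptions for illustration, not the exact methods evaluated in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of two-phase RBFNN training: an unsupervised phase picks
# Gaussian centers (k-means) and a common width, then a supervised linear
# phase fits the output weights with ridge regularization on the squared
# norm of the coefficients. Shapes and hyperparameters are assumptions.

def train_rbf_network(X, y, n_centers=20, ridge=1e-3):
    # Phase 1 (unsupervised): centers from k-means, width from mean center spacing.
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    width = d[d > 0].mean() / np.sqrt(2 * n_centers)   # heuristic width choice

    # Phase 2 (supervised, linear): design matrix of Gaussian activations,
    # then regularized least squares for the output weights.
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * width ** 2))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)

    def predict(X_new):
        Phi_new = np.exp(-np.linalg.norm(X_new[:, None] - centers[None, :], axis=-1) ** 2
                         / (2 * width ** 2))
        return Phi_new @ w
    return predict
```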