12 research outputs found

    Pruned Continuous Haar Transform of 2D Polygonal Patterns with Application to VLSI Layouts

    Full text link
    We introduce an algorithm for the efficient computation of the continuous Haar transform of 2D patterns that can be described by polygons. These patterns are ubiquitous in VLSI processes, where they are used to describe design and mask layouts. There, speed is of paramount importance due to the magnitude of the problems to be solved, and hence very fast algorithms are needed. We show that, using techniques borrowed from computational geometry, we can not only compute the continuous Haar transform directly but also do so quickly. This is achieved by massively pruning the transform tree and thus dramatically decreasing the computational load when the number of vertices is small, as is the case for VLSI layouts. We call this new algorithm the pruned continuous Haar transform. We implement this algorithm and show that, for patterns found in VLSI layouts, the proposed algorithm was in the worst case as fast as its discrete counterpart and up to 12 times faster. Comment: 4 pages, 5 figures, 1 algorithm.
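
    The abstract describes pruning the transform tree wherever the pattern is locally constant. The paper's algorithm operates directly on polygon vertices in the continuous domain; the sketch below is only a minimal discrete analogue, run on a small binary raster standing in for a layout pattern, that illustrates the pruning idea: subtrees over constant regions are never refined because all of their detail coefficients are zero. The function name and the test pattern are invented for illustration.

```python
import numpy as np

def pruned_haar(block, coeffs, x=0, y=0):
    """Unnormalized 2D Haar decomposition that prunes constant blocks.

    A block whose values are all equal has zero detail coefficients at
    every finer scale, so the recursion stops there. This mirrors the
    pruning idea: only regions crossed by pattern edges are refined.
    Returns the sum of the block's values.
    """
    n = block.shape[0]
    if n == 1 or block.min() == block.max():
        return float(block.sum())            # constant (or single-pixel) block: prune
    h = n // 2
    # sums of the four quadrants (computed recursively, with further pruning)
    tl = pruned_haar(block[:h, :h], coeffs, x,     y)
    tr = pruned_haar(block[:h, h:], coeffs, x + h, y)
    bl = pruned_haar(block[h:, :h], coeffs, x,     y + h)
    br = pruned_haar(block[h:, h:], coeffs, x + h, y + h)
    # horizontal, vertical and diagonal detail coefficients of this node
    coeffs[(x, y, n)] = (tl - tr + bl - br, tl + tr - bl - br, tl - tr - bl + br)
    return tl + tr + bl + br

# usage: a tiny binary raster standing in for a rectilinear layout pattern
pattern = np.zeros((8, 8))
pattern[2:6, 1:5] = 1.0
coeffs = {}
total = pruned_haar(pattern, coeffs)
print(f"visited {len(coeffs)} tree nodes, pattern sum = {total}")
```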

    EDA Solutions for Double Patterning Lithography

    Get PDF
    Extending optical lithography to the 32-nm node and beyond is impossible with existing single-exposure systems. As such, double patterning lithography (DPL), in which the target layout is printed with two separate imaging processes, is the most promising option to achieve the required resolution. Among DPL techniques, litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are the most popular; they apply two complete exposure-lithography steps and a single exposure followed by a chemical imaging process, respectively. To realize double patterning, patterns located within a sub-resolution distance must be assigned to different imaging sub-processes, a step known as layout decomposition. To achieve optimal design yield, the layout decomposition problem should be solved with respect to the characteristics and limitations of the applied DPL method. For example, although patterns can be split between the two sub-masks in the LELE method to generate conflict-free masks, such splits are unfavorable because of their sensitivity to lithography imperfections such as overlay error. In the SADP method, on the other hand, pattern splitting is forbidden because it results in non-resolvable gap failures in the final image. In addition to functional yield, layout decomposition affects the parametric yield of designs printed by double patterning. To deal with both the functional and the parametric challenges of DPL in large, dense layouts, this thesis addresses EDA solutions for DPL. To this end, we propose a statistical method to determine interconnect width and spacing for the LELE method under random overlay error. In addition to maximizing yield and achieving a near-optimal trade-off between different parametric requirements, the proposed method provides valuable insight into the trend of parametric and functional yields in future technology nodes. Next, we focus on self-aligned double patterning and propose layout design and decomposition methods that provide SADP-compatible layouts and litho-friendly decompositions. Specifically, we propose a grid-based ILP formulation of SADP decomposition that avoids decomposition conflicts and improves the overall printability of layout patterns. To overcome the limitation that this ILP-based method applies only to fully decomposable layouts, we also propose a partitioning-based method, which is additionally faster than the grid-based ILP decomposition. Moreover, we propose an A*-based, SADP-aware detailed routing method that performs detailed routing and layout decomposition simultaneously to avoid litho-limited layout configurations. The proposed router preserves the uniformity of pattern density between the two sub-masks of the SADP process. Finally, we extend our double patterning decomposition method to triple patterning and formulate self-aligned triple patterning (SATP) decomposition as an integer linear program. In addition to conventional minimum width and spacing constraints, the proposed decomposition minimizes mandrel-trim co-defined edges and maximizes the layout features printed by structural spacers to achieve minimum pattern distortion. This thesis is among the earliest research efforts to investigate the concept of litho-friendliness in SADP-aware layout design and decomposition. As shown by experimental results, the proposed methods advance prior state-of-the-art algorithms in several respects. Specifically, the proposed SADP decomposition methods improve the total length of sensitive trim edges, the total edge placement error (EPE), and the overall printability of the attempted designs. Additionally, our SADP-aware detailed routing method produces SADP-decomposable layouts in which trim patterns are highly robust to lithography imperfections. The experimental results for SATP decomposition show that the total length of overlay-sensitive layout patterns, the total EPE, and the overall printability of the attempted designs are also improved considerably by the proposed method. Finally, the methods in this thesis reveal several insights for upcoming technology nodes that can help improve their manufacturability.
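
    The thesis itself relies on ILP formulations, partitioning, and SADP-aware routing, none of which is reproduced here. As a hedged illustration of the basic layout-decomposition notion described above (features closer than the sub-resolution distance must land on different masks), the following sketch builds a conflict graph over hypothetical feature centroids and 2-colors it, reporting failure when an odd conflict cycle makes a split-free LELE-style decomposition impossible. Feature names, coordinates, and the 40 nm spacing are invented.

```python
from collections import deque
from itertools import combinations

def decompose_lele(features, min_same_mask_space):
    """Assign features to two masks via 2-coloring of the conflict graph.

    Two features closer than the minimum same-mask spacing must go to
    different masks. A conflict-free assignment exists iff the conflict
    graph is bipartite; BFS coloring finds one or reports a conflict.
    `features` maps a name to an (x, y) centroid; Euclidean distance is
    used here purely for illustration.
    """
    def too_close(a, b):
        (xa, ya), (xb, yb) = features[a], features[b]
        return (xa - xb) ** 2 + (ya - yb) ** 2 < min_same_mask_space ** 2

    adj = {f: [] for f in features}
    for a, b in combinations(features, 2):
        if too_close(a, b):
            adj[a].append(b)
            adj[b].append(a)

    mask = {}
    for start in features:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in mask:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None  # odd conflict cycle: not decomposable without splitting
    return mask

# usage with hypothetical feature centroids and a 40 nm same-mask spacing
layout = {"A": (0, 0), "B": (30, 0), "C": (60, 0), "D": (200, 0)}
print(decompose_lele(layout, 40))  # {'A': 0, 'B': 1, 'C': 0, 'D': 0}
```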

    Robustness Analysis of Controllable-Polarity Silicon Nanowire Devices and Circuits

    Get PDF
    Substantial downscaling of feature size in current CMOS technology has confronted digital designers with serious challenges, including the short-channel effect and high leakage power. To address these problems, the research community is introducing emerging nano-devices such as the Silicon NanoWire FET (SiNWFET). These devices continue the pursuit of Moore's Law by improving channel electrostatic controllability and thereby reducing the off-state leakage current. In addition, recent developments have introduced devices with enhanced capabilities, such as Controllable-Polarity (CP) SiNWFETs, which are very attractive for compact logic cells and arithmetic circuits. At advanced technology nodes, physical controls during the fabrication of nanometer devices cannot be determined precisely because of technology fluctuations. Consequently, the structural parameters of fabricated circuits can differ significantly from their nominal values. Moreover, drawing a priori conclusions about the variability of advanced technologies for emerging nanoscale devices is difficult, and novel estimation methodologies are required. Such methodologies are necessary to guarantee the performance and reliability of future integrated circuits. Statistical analysis of process variation requires a large amount of numerical data for nanoscale devices. This introduces a serious challenge for variability analysis of emerging technologies due to the lack of fast simulation models. On the one hand, the development of accurate compact models entails numerous tests and costly measurements on fabricated devices. On the other hand, Technology Computer-Aided Design (TCAD) simulations, which provide precise information about device behavior, are too slow to generate a sufficiently large data set in time. In this research, a fast methodology for generating data sets for variability analysis is introduced. It combines TCAD simulations with a learning algorithm to alleviate the time complexity of data-set generation. Another formidable challenge for variability analysis of large circuits is the growing number of process variation sources. Utilizing parameterized models is becoming a necessity for chip design and verification; however, the high dimensionality of the parameter space poses a serious problem. Unfortunately, available dimensionality-reduction techniques cannot be employed, for three main reasons: lack of accuracy, dependence on the distribution of the data points, and incompatibility with device and circuit simulators. We propose a novel parameter-selection technique for modeling process and performance variation that efficiently addresses these problems. Appropriate testing to capture manufacturing defects plays an important role in the quality of integrated circuits. Compared to conventional CMOS, emerging nano-devices such as CP-SiNWFETs involve different fabrication process steps, so current fault models must be extended for defect detection. In this research, we extract the possible fabrication defects and then propose a fault model for this technology. We also provide several test methods for detecting manufacturing defects in various types of CP-SiNWFET logic gates. Finally, we use the obtained fault model to build fault-tolerant arithmetic circuits with several properties superior to those of their competitors.
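
    The thesis's specific learning algorithm and device parameters are not given in the abstract, so the following sketch only illustrates the general surrogate-modeling idea it describes: fit a cheap model to a handful of (here, faked) TCAD runs and then drive a large Monte Carlo variability analysis with the surrogate. The device parameters, the toy response function, and the quadratic response surface are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- stand-in for a handful of expensive TCAD runs -------------------------
# Hypothetical device parameters: nanowire diameter and gate work function,
# with the "TCAD" response (e.g. on-current) replaced by a toy function.
def fake_tcad(diameter_nm, workfunc_ev):
    return 1e-6 * diameter_nm ** 1.5 * np.exp(-(workfunc_ev - 4.5) ** 2)

n_tcad = 30                                   # few samples, because TCAD is slow
X_train = rng.uniform([8.0, 4.3], [12.0, 4.7], size=(n_tcad, 2))
y_train = np.array([fake_tcad(d, w) for d, w in X_train])

# --- cheap surrogate: quadratic response surface fit by least squares ------
def design(X):
    d, w = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(d), d, w, d * w, d ** 2, w ** 2])

beta, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)

# --- Monte Carlo variability analysis on the surrogate ---------------------
n_mc = 100_000                                # far more runs than TCAD could afford
X_mc = rng.normal([10.0, 4.5], [0.5, 0.05], size=(n_mc, 2))
y_mc = design(X_mc) @ beta
print(f"on-current: mean {y_mc.mean():.3e}, std {y_mc.std():.3e}")
```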

    Transistor-Level Layout of Integrated Circuits

    Get PDF
    In this dissertation, we present the toolchain BonnCell and its underlying algorithms. It has been developed in close cooperation with the IBM Corporation and automatically generates the geometry for functional groups of 2 to approximately 50 transistors. Its input consists of a set of transistors, including properties like their sizes and their types, a specification of their connectivity, and parameters to flexibly control the technological framework as well as the algorithms' behavior. Using this data, the tool computes a detailed geometric realization of the circuit as polygonal shapes on 16 layers. To this end, a placement routine configures the transistors and arranges them in the plane, which is the main subject of this thesis. Subsequently, a routing engine determines wires connecting the transistors to ensure the circuit's desired functionality. We propose and analyze a family of algorithms that arranges sets of transistors in the plane such that a multi-criteria target function is optimized. The primary goal is to obtain solutions that are as compact as possible, because chip area is a valuable resource in modern technologies. In addition to the core algorithms, we formulate variants that handle particularly structured instances in a suitable way. We show that for 90% of the instances in a representative test bed provided by IBM, BonnCell succeeds in generating fully functional layouts, including the placement of the transistors and a routing of their interconnections. Moreover, BonnCell is in wide use within IBM's groups concerned with transistor-level layout, a task that had been performed manually before our automation became available. Beyond the processing of isolated test cases, two large-scale industrial applications of the tool are presented: on the one hand, the initial design phase of a large SRAM unit required only half of the expected 3-month period; on the other hand, BonnCell provided valuable input aiding central decisions in the early concept phase of the new 14 nm technology generation.
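
    BonnCell's actual placement and routing algorithms are not spelled out in this abstract, so the sketch below is only a toy illustration of what a multi-criteria placement objective can look like: transistors placed in a single row, scored by a weighted sum of an area proxy and half-perimeter wirelength, and minimized by brute force. Transistor sizes, nets, and weights are invented; in this 1D toy the area term is constant, so the ordering is driven by wirelength, whereas a real placer trades the two criteria off in two dimensions.

```python
from itertools import permutations

# Hypothetical transistors given as name -> width; nets list the transistors
# they connect. A "placement" here is simply a left-to-right ordering in one
# row, which is enough to illustrate a multi-criteria placement objective.
transistors = {"M1": 3, "M2": 2, "M3": 2, "M4": 4}
nets = [("M1", "M2"), ("M2", "M3", "M4"), ("M1", "M4")]

def cost(order, area_weight=1.0, wire_weight=0.5):
    # x-coordinate of each transistor's center in the row
    xs, x = {}, 0
    for name in order:
        xs[name] = x + transistors[name] / 2
        x += transistors[name]
    area = x                                   # row length as a crude area proxy
    # half-perimeter wirelength of each net; in 1D this is just max - min
    wirelength = sum(max(xs[t] for t in net) - min(xs[t] for t in net)
                     for net in nets)
    return area_weight * area + wire_weight * wirelength

# brute force over all orderings (fine for 4 transistors, not for 50)
best = min(permutations(transistors), key=cost)
print(best, cost(best))
```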

    Rake, Peel, Sketch: The Signal Processing Pipeline Revisited

    Get PDF
    The prototypical signal processing pipeline can be divided into four blocks: representation of the signal in a basis suitable for processing; enhancement of the meaningful part of the signal and reduction of the noise; estimation of important statistical properties of the signal; and adaptive processing to track and adapt to changes in the signal statistics. This thesis revisits each of these blocks and proposes new algorithms, borrowing ideas from information theory, theoretical computer science, and communications. First, we revisit the Walsh-Hadamard transform (WHT) for the case of a signal sparse in the transformed domain, namely one that has only K ≪ N non-zero coefficients. We show that an efficient algorithm exists that can compute these coefficients in O(K log2(K) log2(N/K)) operations using only O(K log2(N/K)) samples. This algorithm relies on a fast hashing procedure that computes small linear combinations of transformed-domain coefficients. A bipartite graph is formed with linear combinations on one side and non-zero coefficients on the other. A peeling decoder is then used to recover the non-zero coefficients one by one. A detailed analysis of the algorithm, based on error-correcting codes over the binary erasure channel, is given. The second chapter is about beamforming. Inspired by the rake receiver from wireless communications, we recognize that echoes in a room are an important source of extra signal diversity. We extend several classic beamforming algorithms to take advantage of echoes and also propose new optimal formulations. We explore formulations both in the time and frequency domains. We show theoretically and in numerical simulations that the signal-to-interference-and-noise ratio increases proportionally to the number of echoes used. Finally, beyond objective measures, we show that echoes also directly improve speech intelligibility as measured by the perceptual evaluation of speech quality (PESQ) metric. Next, we attack the problem of direction-of-arrival estimation of acoustic sources, to which we apply a robust finite-rate-of-innovation reconstruction framework. FRIDA, the resulting algorithm, exploits wideband information coherently, works at very low signal-to-noise ratio, and can resolve very close sources. The algorithm can use either the raw microphone signals or their cross-correlations. While the former lets us work with correlated sources, the latter creates a quadratic number of measurements that makes it possible to locate many sources with few microphones. Thorough experiments on simulated and recorded data show that FRIDA compares favorably with the state of the art. We continue by revisiting the classic recursive least squares (RLS) adaptive filter with ideas borrowed from recent results on sketching least squares problems. The exact update of RLS is replaced by a few steps of the conjugate gradient method. We then propose two different preconditioners, obtained by sketching the data, to accelerate its convergence. Experiments on artificial as well as natural signals show that the proposed algorithm performs very close to RLS at a lower computational burden. The fifth and final chapter is dedicated to the software and hardware tools developed for this thesis. We describe the pyroomacoustics Python package, which contains routines for the evaluation of audio processing algorithms and reference implementations of popular algorithms. We then give an overview of the microphone arrays developed.
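
    Of the several algorithms summarized above, the easiest to hint at in a few lines is the sketched RLS idea: replace the exact recursive inverse update with a few warm-started conjugate-gradient steps on the normal equations. The sketch below omits the thesis's sketching-based preconditioners and uses an invented system-identification example; it is a plain, unpreconditioned illustration of the replace-the-exact-solve idea, not the proposed algorithm.

```python
import numpy as np

def cg_steps(R, p, w, n_iter=3):
    """A few conjugate-gradient iterations toward the solution of R w = p,
    warm-started from the previous filter w (no preconditioner here)."""
    r = p - R @ w
    d = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        if rs < 1e-12:                 # already converged
            break
        Rd = R @ d
        alpha = rs / (d @ Rd)
        w = w + alpha * d
        r = r - alpha * Rd
        rs_new = r @ r
        d = r + (rs_new / rs) * d
        rs = rs_new
    return w

def adaptive_filter(x, desired, order=8, lam=0.999, n_iter=3):
    """RLS-like adaptive filter in which the exact inverse update is
    replaced by a few CG steps on the normal equations R w = p."""
    R = 1e-3 * np.eye(order)           # regularized autocorrelation estimate
    p = np.zeros(order)
    w = np.zeros(order)
    out = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most recent samples first
        R = lam * R + np.outer(u, u)
        p = lam * p + desired[n] * u
        w = cg_steps(R, p, w, n_iter)
        out[n] = w @ u
    return w, out

# usage: identify a short FIR system from noisy observations
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = adaptive_filter(x, d)
print(np.round(w, 2))                  # should be close to h
```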

    Clemson Catalog, 1993-1994, Volume 68

    Get PDF
    https://tigerprints.clemson.edu/clemson_catalog/1146/thumbnail.jp

    Clemson Catalog, 1994-1995, Volume 69

    Get PDF
    https://tigerprints.clemson.edu/clemson_catalog/1147/thumbnail.jp