
    Multilevel Modeling, Formal Analysis, and Characterization of Single Event Transients Propagation in Digital Systems

    The exponential growth in the number of transistors per chip has brought tremendous progress in the performance and functionality of semiconductor devices, together with smaller physical dimensions and higher speed. Electronic devices used in a wide range of applications such as personal entertainment systems, the automotive industry, medical electronic systems, and the financial sector have changed the way we live.
However, recent studies reveal that further downscaling of transistor size into nano-scale technology raises major challenges. Reliability (i.e., the ability to provide the intended functionality) is one of them: a system designed in a nano-scale node is expected to experience more failures in its lifetime than if it were designed in a larger technology node. Such failures can lead to serious consequences ranging from financial losses to loss of human life. Soft errors induced by radiation, which were initially considered a rather exotic failure mechanism causing anomalies in satellites, have become one of the most challenging issues affecting the reliability of modern microelectronic systems, including devices at terrestrial altitudes. For instance, in the medical industry, soft errors have been responsible for the failure and recall of many implantable cardiac pacemakers. Depending on the affected transistor in the design, a particle strike can manifest as a bit flip in a state element (i.e., a Single Event Upset (SEU)) or as a temporary change in the output of a combinational gate (i.e., a Single Event Transient (SET)). SEUs have been widely studied over the last three decades, as they were considered the main source of soft errors. However, recent experiments show that with further technology downscaling the contribution of SETs to the overall soft error rate is significant, and in high-frequency systems it might exceed that of SEUs [1], [2]. To minimize the impact of soft errors, the effect of SETs needs to be modeled, predicted, and mitigated. Yet, despite considerable progress towards efficient methodologies for the functional verification of digital designs, advances in non-functional verification (e.g., soft error analysis) have been lagging. This is because the modeling and analysis of non-functional properties related to SETs is very challenging, owing to the random nature of these faults and the difficulty of modeling the variation in their characteristics as they propagate. Moreover, many details about the design structure and the SET characteristics may not be available at high abstraction levels. Thus, high-level analyses usually make many assumptions about SET behavior, which affects the accuracy of the generated results. Consequently, the low-cost detection of soft errors due to SETs is very challenging and requires more sophisticated techniques.
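
    The thesis itself develops multilevel modeling and formal analysis of SET propagation; as a purely illustrative companion, the sketch below shows the simpler Monte Carlo view of the same question: inject a transient at a random gate of a small combinational netlist and count how often it reaches the primary output (logical masking). The netlist, the gate functions, and the helper names (GATES, evaluate, set_propagation_probability) are invented for this example and are not taken from the thesis.

import random

# A tiny combinational netlist (invented for this example):
# each entry is (gate name, function, input names); "xor1" is the primary output.
GATES = [
    ("and1", "AND", ("a", "b")),
    ("or1",  "OR",  ("and1", "c")),
    ("xor1", "XOR", ("or1", "d")),
]

def evaluate(inputs, struck_gate=None):
    """Evaluate the netlist; optionally invert one gate's output to model an SET pulse."""
    values = dict(inputs)
    for name, func, (i0, i1) in GATES:
        x, y = values[i0], values[i1]
        out = {"AND": x & y, "OR": x | y, "XOR": x ^ y}[func]
        if name == struck_gate:
            out ^= 1  # the transient inverts this gate's output for this evaluation
        values[name] = out
    return values["xor1"]

def set_propagation_probability(trials=10_000):
    """Fraction of random (input vector, struck gate) pairs in which the SET is not
    logically masked, i.e. it changes the primary output. Electrical attenuation and
    latching-window masking are deliberately ignored in this sketch."""
    errors = 0
    for _ in range(trials):
        vec = {name: random.randint(0, 1) for name in "abcd"}
        struck = random.choice(GATES)[0]
        if evaluate(vec) != evaluate(vec, struck_gate=struck):
            errors += 1
    return errors / trials

if __name__ == "__main__":
    print(f"Estimated SET propagation probability: {set_propagation_probability():.3f}")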

    A Circuit Synthesis Flow for Controllable-Polarity Transistors

    Double-Gate (DG) controllable-polarity Field-Effect Transistors (FETs) are devices whose n- or p-type polarity is configurable on-line by adjusting the voltage on a second gate. Such emerging transistors have been fabricated in silicon nanowire, carbon nanotube, and graphene technologies. Thanks to their enhanced functionality, DG controllable-polarity FETs implement arithmetic functions, such as XOR and MAJ, with limited physical resources, enabling compact and high-performance datapaths. In order to design digital circuits with this technology, automated design techniques are of paramount importance. In this paper, we describe a design automation framework for DG controllable-polarity transistors. First, we present a novel dedicated logic representation form capable of exploiting the polarity control during logic synthesis. Then, we tackle challenges at the physical level, presenting a regular layout technique that alleviates the interconnection issues arising from the routing of the second gate. We use logic and physical synthesis tools to form a complete design automation flow. Experimental results show that the proposed flow reduces the area and delay of digital circuits, based on 22-nm DG controllable-polarity SiNWFETs, by 22% and 42%, respectively, as compared to a commercial synthesis tool. With respect to a 22-nm FinFET technology, the proposed flow produces circuits, based on 22-nm DG controllable-polarity SiNWFETs, with a 2.9x smaller area-delay product.
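
    As a rough illustration of why controllable-polarity devices yield compact XOR implementations, the following sketch models a DG FET at the logic level: the polarity gate selects n- or p-type behavior, and two such devices driving a common output already realize XOR (given the complement of one input). The functions cp_fet_conducts and xor_gate and the two-device topology are assumptions made for this example, not the paper's representation form or synthesis flow.

def cp_fet_conducts(cg: int, pg: int) -> bool:
    """Logic-level model of a DG controllable-polarity FET (invented for this example):
    with polarity gate PG=1 it behaves like an nFET (on when CG=1),
    with PG=0 it behaves like a pFET (on when CG=0)."""
    return cg == 1 if pg == 1 else cg == 0

def xor_gate(a: int, b: int) -> int:
    """Two-device XOR sketch: the pull-up device (PG = NOT b) conducts exactly when
    a != b, and the pull-down device (PG = b) conducts exactly when a == b, so exactly
    one branch drives the output node. The complement of b is assumed to be available."""
    pull_up = cp_fet_conducts(cg=a, pg=1 - b)    # connects the output to VDD
    pull_down = cp_fet_conducts(cg=a, pg=b)      # connects the output to GND
    assert pull_up != pull_down                  # never both on, never floating
    return 1 if pull_up else 0

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            assert xor_gate(a, b) == (a ^ b)
    print("two-device XOR model matches a ^ b for all inputs")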

    Transistor-Level Layout of Integrated Circuits

    In this dissertation, we present the toolchain BonnCell and its underlying algorithms. It has been developed in close cooperation with the IBM Corporation and automatically generates the geometry for functional groups of 2 to approximately 50 transistors. Its input consists of a set of transistors, including properties such as their sizes and types, a specification of their connectivity, and parameters to flexibly control the technological framework as well as the algorithms' behavior. Using this data, the tool computes a detailed geometric realization of the circuit as polygonal shapes on 16 layers. To this end, a placement routine configures the transistors and arranges them in the plane, which is the main subject of this thesis. Subsequently, a routing engine determines wires connecting the transistors to ensure the circuit's desired functionality. We propose and analyze a family of algorithms that arranges sets of transistors in the plane such that a multi-criteria target function is optimized. The primary goal is to obtain solutions that are as compact as possible, because chip area is a valuable resource in modern technologies. In addition to the core algorithms, we formulate variants that handle particularly structured instances in a suitable way. We show that for 90% of the instances in a representative test bed provided by IBM, BonnCell succeeds in generating fully functional layouts, including the placement of the transistors and a routing of their interconnections. Moreover, BonnCell is in wide use within IBM's groups concerned with transistor-level layout - a task that had been performed manually before our automation was available. Beyond the processing of isolated test cases, two large-scale industrial applications of the tool are presented: the initial design phase of a large SRAM unit required only half of the expected three-month period, and BonnCell provided valuable input for central decisions in the early concept phase of the new 14 nm technology generation.
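
    To make the notion of a multi-criteria target function concrete, here is a small sketch of a placement cost that trades off bounding-box area against an estimated wiring cost (half-perimeter wirelength). The data model, the weights, and the function names (bounding_box_area, hpwl, placement_cost) are assumptions for illustration only and do not reproduce BonnCell's actual objective.

from typing import Dict, List, Tuple

# name -> (x, y, width, height) of a placed transistor; all numbers are arbitrary units.
Placement = Dict[str, Tuple[float, float, float, float]]

def bounding_box_area(placement: Placement) -> float:
    """Area of the smallest axis-aligned rectangle enclosing all placed transistors."""
    xs = [x for x, _, w, _ in placement.values()] + [x + w for x, _, w, _ in placement.values()]
    ys = [y for _, y, _, h in placement.values()] + [y + h for _, y, _, h in placement.values()]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def hpwl(placement: Placement, nets: List[List[str]]) -> float:
    """Half-perimeter wirelength over all nets, using transistor centers as pin locations."""
    total = 0.0
    for net in nets:
        cx = [placement[t][0] + placement[t][2] / 2 for t in net]
        cy = [placement[t][1] + placement[t][3] / 2 for t in net]
        total += (max(cx) - min(cx)) + (max(cy) - min(cy))
    return total

def placement_cost(placement: Placement, nets: List[List[str]],
                   area_weight: float = 1.0, wire_weight: float = 0.2) -> float:
    """Weighted sum of area and estimated wiring cost; the weights are arbitrary."""
    return area_weight * bounding_box_area(placement) + wire_weight * hpwl(placement, nets)

if __name__ == "__main__":
    cells = {"t1": (0, 0, 2, 4), "t2": (2, 0, 2, 4), "t3": (4, 0, 1, 4)}
    nets = [["t1", "t3"], ["t2", "t3"]]
    print(placement_cost(cells, nets))   # 20.0 area + 0.2 * 5.0 wirelength = 21.0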

    VLSI Design

    This book presents some recent advances in the design of nanometer VLSI chips. The selected topics cover open problems and challenges in areas ranging from design tools, new post-silicon devices, GPU-based parallel computing, and emerging 3D integration to antenna design. The book consists of two parts, with chapters such as: VLSI design for multi-sensor smart systems on a chip, Three-dimensional integrated circuits design for thousand-core processors, Parallel symbolic analysis of large analog circuits on GPU platforms, Algorithms for CAD tools VLSI design, A multilevel memetic algorithm for large SAT-encoded problems, etc.

    Algorithms for Cell Layout

    Cell layout is a critical step in the design process of computer chips. A cell is a logic function or storage element implemented in CMOS technology by transistors connected with wires. As each cell is used many times on a chip, improvements to a single cell layout can have a large effect on the overall chip performance. In the past years, the increasing difficulty of manufacturing small feature sizes has led to a growing complexity of design rules. Producing cell layouts that comply with the design rules and at the same time are optimized w.r.t. layout size has become a difficult task for human experts. In this thesis we present BonnCell, a cell layout generator that fully automatically produces design rule compliant layouts. It is able to guarantee area minimality of its layouts for small and medium sized cells. For large cells it uses a heuristic which produces layouts with a significant area reduction compared to those created manually. The routing problem is based on the Vertex Disjoint Steiner Tree Packing Problem with a large number of additional design rules. In Chapter 4 we present the routing algorithm, which is based on a mixed integer programming (MIP) formulation that guarantees compliance with all design rules. The algorithm can also handle instances in which only some of the transistors are placed, to check whether this partial placement can be extended to a routable placement of all transistors. Chapter 5 contains the transistor placement algorithm. Based on a branch and bound approach, it places transistors in turn and achieves efficiency by pruning parts of the search tree which do not contain optimum solutions. One major contribution of this thesis is that BonnCell only outputs routable placements. Simply checking the routability of each full placement in the search tree is too slow in practice, so several speedup strategies are applied. Some cells are too large to be solved by a single call of the placement algorithm. In Chapter 7 we describe how these cells are split into smaller subcells, which are placed and routed individually and subsequently merged into a placement and routing of the original cell. Two approaches for dividing the original cell into subcells are presented, one based on estimating the subcell area and the other based on solving the Min Cut Linear Arrangement Problem. BonnCell has enabled our cooperation partner IBM to drastically improve their cell design and layout process. In particular, a team of human experts needed several weeks to find a layout for their largest cell, consisting of 128 transistors. BonnCell processed this cell without manual intervention in 3 days, and its layout uses 15% less area than the layout found by the human experts.
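
    The following toy sketch illustrates the branch-and-bound pattern described above: transistors are placed one at a time in a row, subtrees whose partial cost already exceeds the best known solution are pruned, and partial placements are additionally filtered through a routability check (here only a stub, where BonnCell would invoke its MIP-based router). The widths, nets, cost model, and function names are invented for the example and are not BonnCell's.

# Transistor widths and shared nets are invented for this example.
WIDTHS = {"t1": 2, "t2": 3, "t3": 1, "t4": 2}
NETS = [["t1", "t3"], ["t2", "t4"], ["t1", "t4"]]

def partial_cost(order):
    """Row width of the placed prefix plus the span of every net with at least two
    placed transistors. The value can only grow as more transistors are appended,
    so it is a valid lower bound for pruning."""
    pos, x = {}, 0
    for t in order:
        pos[t] = x + WIDTHS[t] / 2
        x += WIDTHS[t]
    span = 0.0
    for net in NETS:
        placed = [pos[t] for t in net if t in pos]
        if len(placed) >= 2:
            span += max(placed) - min(placed)
    return x + span

def routable(order):
    """Stub routability check; the real flow would invoke the MIP-based router here."""
    return True

def branch_and_bound(transistors):
    """Enumerate row orderings, pruning subtrees that cannot beat the best solution."""
    best_cost, best_order = float("inf"), None

    def recurse(order, remaining):
        nonlocal best_cost, best_order
        if partial_cost(order) >= best_cost or not routable(order):
            return                          # prune: cannot improve, or not routable
        if not remaining:
            best_cost, best_order = partial_cost(order), order
            return
        for t in remaining:
            recurse(order + [t], [r for r in remaining if r != t])

    recurse([], list(transistors))
    return best_order, best_cost

if __name__ == "__main__":
    print(branch_and_bound(WIDTHS))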

    Hardware Acceleration of Electronic Design Automation Algorithms

    With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single core microprocessors, now faces the tough challenge of taking advantage of this parallelism, made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field programmable gate arrays (FPGAs), and graphics processors. This dissertation concentrates on a subset of EDA algorithms which are heavily used in the VLSI design flow and also have varying degrees of inherent parallelism. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation, and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single core microprocessor) are studied. In addition, this dissertation also presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom designed ICs, FPGAs, and graphics processors, obtaining speedups of up to 800X. The approaches in this dissertation collectively aim to enable the computer aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
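
    Among the accelerated algorithms, Monte Carlo based statistical static timing analysis is the most naturally data-parallel: every sample is an independent evaluation of the circuit with randomly drawn gate delays. The sketch below shows that sequential core in plain Python for a toy four-gate circuit; the circuit, the delay parameters, and the function names are assumptions, and the GPU mapping studied in the dissertation is not reproduced here.

import random
import statistics

# gate -> (mean delay, sigma, predecessor gates); listed in topological order.
# The circuit and delay parameters are invented for this example.
CIRCUIT = {
    "g1": (10.0, 1.0, []),
    "g2": (12.0, 1.5, []),
    "g3": (8.0, 0.8, ["g1", "g2"]),
    "g4": (9.0, 1.2, ["g3"]),        # g4 drives the primary output
}

def sample_output_delay():
    """One Monte Carlo sample: draw every gate delay and propagate arrival times."""
    arrival = {}
    for gate, (mu, sigma, preds) in CIRCUIT.items():
        start = max((arrival[p] for p in preds), default=0.0)
        arrival[gate] = start + random.gauss(mu, sigma)
    return arrival["g4"]

def monte_carlo_ssta(samples=20_000):
    """Estimate the mean and 99th-percentile circuit delay from independent samples."""
    delays = sorted(sample_output_delay() for _ in range(samples))
    return statistics.fmean(delays), delays[int(0.99 * samples)]

if __name__ == "__main__":
    mean, p99 = monte_carlo_ssta()
    print(f"mean delay = {mean:.2f}, 99th percentile = {p99:.2f}")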

    DESIGN AND SYNTHESIS OF HIGH DENSITY INTEGRATED CIRCUITS

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an integrated circuit would double every two years. Moore's prediction held for decades, which is why it is also called "Moore's law". The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, interconnection scaling is not optimal: at every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and consequently the dynamic power consumption. Moreover, due to the non-optimal scaling of the interconnections, signal routing becomes more and more challenging with every technology node advancement. Highly scaled technologies make it possible to reach a great transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two different ways: using ad-hoc design techniques in today's CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. These two approaches correspond to the two parts of this thesis. The first part presents the design of an Associative Memory (AM), focusing on design and logic synthesis techniques that reduce power consumption. The field of applicability of AMs is real-time pattern-recognition tasks; possible uses range from scientific calculations to image processing for intelligent autonomous devices and image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but only the most interesting tracks must be selected and saved. Since the data are compared in parallel, AMs are synchronous ICs with a very peaked power consumption, so minimizing the power consumption is essential. This AM is designed within the IMPART and HTT projects in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a "kill signal" that, if one of the bits in a word does not match, inhibits the switching of the following cells. Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven to be functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the future, nanocrossbars are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve synthesis, algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOP), Dimension-reducible (D-red) functions, and autosymmetric functions. The decomposed functions are implemented in lattices using internal and external decomposition methods.
Experimental results show that these approaches reduce the complexity of the individual synthesis problems and lead, on average, to a reduction of lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defect rate; to cope with this limitation, some techniques to reduce sensitivity to defects have been studied.
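
    The power-saving intuition behind the kill signal can be shown with a small logic-level model: a word comparison proceeds cell by cell, and the first mismatching bit inhibits switching in all following cells, so most non-matching words keep nearly the whole row idle. The function match_with_kill, the activity counter used as a stand-in for dynamic power, and the example data are assumptions for illustration, not the 28 nm AM design.

def match_with_kill(stored, incoming):
    """Compare two equal-length bit lists cell by cell. Return (match, active_cells),
    where active_cells counts how many cells compared (and could switch) before the
    kill signal froze the rest of the row."""
    active_cells = 0
    for s, b in zip(stored, incoming):
        active_cells += 1                    # this cell toggles its compare logic
        if s != b:
            return False, active_cells       # kill signal raised: remaining cells idle
    return True, active_cells

if __name__ == "__main__":
    stored = [1, 0, 1, 1, 0, 0, 1, 0]
    incoming = [1, 0, 0, 1, 0, 0, 1, 0]      # mismatch at bit index 2
    match, activity = match_with_kill(stored, incoming)
    print(f"match={match}, active cells={activity} of {len(stored)}")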
