166 research outputs found

    Enhancing Logic Synthesis of Switching Lattices by Generalized Shannon Decomposition Methods

    In this paper we propose a novel approach to the synthesis of minimal-sized lattices, based on the decomposition of logic functions. Since decomposition yields circuits with a smaller area, our idea is to decompose a Boolean function according to generalizations of the classical Shannon decomposition, generate a lattice for each component function, and finally implement the original function as a single composed lattice obtained by appropriately gluing together the lattices of the component functions. In particular, we study the two decomposition schemes that define the bounded-level logic networks called P-circuits and EXOR-Projected Sums of Products (EP-SOPs). Experimental results show that about 34% of our benchmarks achieve a smaller area when implemented using the P-circuit decomposition for switching lattices, with an average gain of at least 25%, and about 27% achieve a smaller area when implemented using the EP-SOP decomposition, with an average gain of at least 22%.
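
    For reference, the classical Shannon decomposition that the proposed schemes generalize splits a function f around a chosen variable x into its two cofactors, f = x'·f|x=0 + x·f|x=1, and each component can then be synthesized separately. The Python sketch below only illustrates this textbook decomposition step (the function representation and names are ours, not the paper's tool):

```python
from itertools import product

def shannon_cofactors(f, i):
    """Split a Boolean function f (a callable on positional 0/1 arguments)
    around variable i, returning the negative and positive cofactors
    f0 = f|x_i=0 and f1 = f|x_i=1 as callables on the remaining variables,
    so that f(x) = (not x_i and f0(...)) or (x_i and f1(...))."""
    def restrict(value):
        def cofactor(*rest):
            args = list(rest[:i]) + [value] + list(rest[i:])
            return f(*args)
        return cofactor
    return restrict(0), restrict(1)

# Example: f(a, b, c) = a*b + c, decomposed around a (variable index 0).
f = lambda a, b, c: (a and b) or c
f0, f1 = shannon_cofactors(f, 0)          # f0 = c, f1 = b + c

# Check the recomposition f = a'*f0 + a*f1 on all input combinations.
for a, b, c in product((0, 1), repeat=3):
    recomposed = ((not a) and f0(b, c)) or (a and f1(b, c))
    assert bool(f(a, b, c)) == bool(recomposed)
```

    In the composed-lattice flow described above, each component function would be mapped to its own lattice and the results glued together; the generalized schemes (P-circuits and EP-SOPs) refine how the component functions are chosen.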

    DESIGN AND SYNTHESIS OF HIGH DENSITY INTEGRATED CIRCUITS

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an integrated circuit would double every two years. Moore's prediction held for decades, which is why it is also called "Moore's law". The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, interconnection scaling is not optimal: at every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and consequently the dynamic power consumption. Moreover, because of the non-optimal scaling of the interconnections, signal routing is becoming more and more challenging at every technology node. Highly scaled technologies make it possible to reach a very high transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two different ways: using ad-hoc design techniques on today's CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. The two approaches correspond to the two parts of this thesis. The first part presents the design of an Associative Memory (AM), focusing on design and logic synthesis techniques that reduce power consumption. AMs are used for real-time pattern-recognition tasks, ranging from scientific calculations to image processing for intelligent autonomous devices and image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but it is necessary to select and save only the most interesting tracks. Since the data are compared in parallel, AMs are synchronous ICs with a very peaked power consumption, which therefore must be minimized. This AM is designed within the IMPART and HTT projects in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a "kill signal" that, if one of the bits in a word does not match, inhibits the switching of the following cells. Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven to be functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the future, nanocrossbars are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve synthesis, several algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOPs), Dimension-reducible (D-red) functions and autosymmetric functions. The decomposed functions are implemented into lattices using internal and external decomposition methods. Experimental results show that these approaches reduce the complexity of each single synthesis problem and lead, on average, to a reduction of lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defectivity ratio. To cope with this limitation, some techniques to reduce sensitivity to defects have been studied.
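
    As a rough software analogy of the kill-signal mechanism mentioned above (purely illustrative; the actual design is a full-custom 28 nm circuit, and the names below are ours), the comparison of a stored word against the incoming word can stop at the first mismatching bit, so the cells after that point never switch and therefore dissipate no dynamic power:

```python
def matches_with_kill(stored_bits, input_bits):
    """Compare a stored word with an incoming word, modelling the kill-signal
    idea: as soon as one bit mismatches, the kill signal stops the comparison,
    so the following cells never switch (which is what limits dynamic power)."""
    switched_cells = 0                      # cells that actually toggled
    for stored, incoming in zip(stored_bits, input_bits):
        switched_cells += 1
        if stored != incoming:              # mismatch raises the kill signal
            return False, switched_cells
    return True, switched_cells

# A word that mismatches on the second bit activates only two cells.
print(matches_with_kill([1, 0, 1, 1], [1, 1, 1, 1]))    # (False, 2)
```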

    ROBDD based path delay fault testable combinational circuit synthesis


    Routing-Driven Placement for ATMEL 6000 Architecture FPGAs

    Based on the concept of the Cell Binary Tree (CBT), a new technique for mapping combinational circuits into ATMEL 6000 Architecture FPGAs is presented in this thesis. A Cell Binary Tree is a netlist representation of a combinational circuit. Each node of the CBT has a distinguished variable associated with it; the node itself represents a certain logic function, selected according to the target FPGA architecture. The proposed CBT placement algorithms preserve local connectivity and allow better mapping into the ATMEL FPGA. Experiments reveal that the new mapping technique reduces the number of buses used for routing compared with the previously proposed Modified Squashed Binary Tree (MSBT) approach, and possibly reduces area as well. In general, the new technique is realized through the following four major steps:
    1. Grouping and generating the CBT: the blif-format file produced by logic synthesis is read into a CBT data structure through a grouping algorithm, which gathers logic functions into nodes for mapping based on the target FPGA architecture. The main objective of creating the CBT is to generate a minimum number of nodes (cells) to be mapped.
    2. CBT placement: once the minimum number of CBT nodes has been obtained, those nodes are mapped onto cells in the FPGA. The key feature of the placement method in this thesis is to line up the cells with the same variable in the same row of the FPGA.
    3. Bus assignment: variables are assigned to local buses, which run in two possible directions, horizontal and vertical. The ATMEL 6000 has two horizontal buses and two vertical buses for each cell. The assignment is based on the number of times a variable appears in a row or column.
    4. Routing: the last stage of the process connects cells that share the same input variable. One of the important steps in routing is choosing connection bridge cells with minimum impact on area.
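
    As a toy illustration of steps 1 and 2 (a sketch under our own simplified assumptions, not the thesis code; the class and function names are hypothetical), a CBT node can carry its distinguished variable and mapped logic function, and a placement pass can assign every node sharing the same variable to the same row:

```python
from collections import defaultdict

class CBTNode:
    """One node of a Cell Binary Tree (CBT): a distinguished variable plus the
    logic function chosen for the target cell, and up to two children."""
    def __init__(self, variable, function, left=None, right=None):
        self.variable = variable
        self.function = function
        self.left = left
        self.right = right

def place_by_variable(root):
    """Toy placement pass: every node sharing a distinguished variable is put
    in the same row, mirroring the row-alignment objective described above;
    columns are filled in visit order. A real placer would also try to
    preserve local connectivity and minimize bus usage."""
    rows = {}                      # variable -> row index
    next_col = defaultdict(int)    # row index -> next free column
    placement = {}                 # node -> (row, column)
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        row = rows.setdefault(node.variable, len(rows))
        placement[node] = (row, next_col[row])
        next_col[row] += 1
        stack.extend([node.left, node.right])
    return placement

# Two nodes with distinguished variable 'a' end up in the same row.
tree = CBTNode('a', 'AND', CBTNode('b', 'OR'), CBTNode('a', 'XOR'))
print(sorted(place_by_variable(tree).values()))
```

    Grouping cells by variable in this way is consistent with the bus-assignment step above, which counts how often a variable appears in each row or column before choosing its local bus.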

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other featured presentations. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data-system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Deriving a normal country: Italian capitalism and the political economy of financial derivatives

    The financialisation literature is an invaluable resource to explore the expansion of finance in modern capitalism. However, the debate focuses extensively on the US and the UK, whilst being too general with regard to other contexts. This inattention hinders a proper understanding of financialisation in its differential nature across societies. To rectify this limitation, this thesis advances a theoretically controlled and historically informed study of a striking instance of financial excess outside the Anglo-American scenario: derivatives in Italy. The work argues that scholars are inattentive to the heterogeneous nature of financialisation because they conceptualise the power of finance as entrenched in socio-economic structures. As a result, they underplay the actors who adopt financialised practices differentially. Premised on this critique, the thesis advances an agency-centred approach that analyses power from the perspective of agents. In so doing, it examines the diverse traits of financialisation in relation to the specific power struggles in which actors are involved. Drawing on this method, the work shows that financialisation studies fail to appreciate how key social forces deployed derivatives for political-strategic purposes in the Italian context. During the 1990s, a neoliberal-reformist alliance of pro-market technocrats and centre-left politicians came to power and pushed for Italy to join EMU. This project functioned as an external limit on the domestic political-economic establishment, which relied on high public debt, the vast state-owned enterprise sector and the opaque corporate-governance regime. In brief, citing a slogan widely used in those days, the neoliberal-reformist coalition attempted to make Italy a ‘normal country’ in Europe. Derivatives were crucial in this regard because they helped the Italian government comply with the EMU admission criteria. First, reformists encouraged hedge funds to arbitrage the interest-rate convergence between Italian and German bonds via OTC derivatives markets. Second, they arranged a currency swap that window-dressed the 1997 deficit. The thesis concludes by examining how other actors adopted derivatives to deal with the neoliberal-driven modernisation of Italy. It studies how the Agnelli family used equity swaps to secure ownership over FIAT and how municipalities manipulated budget restrictions through interest rate swaps.

    Doctor of Philosophy

    Recent breakthroughs in silicon photonics technology are enabling the integration of optical devices into silicon-based semiconductor processes. Photonics technology enables high-speed, high-bandwidth, and high-fidelity communications on the chip scale, an important development in an increasingly communications-oriented semiconductor world. Significant developments in silicon photonic manufacturing and integration are also enabling investigations into applications beyond traditional telecom: sensing, filtering, signal processing, quantum technology, and even optical computing. In effect, we are now seeing a convergence of communications and computation, where the traditional roles of optics and microelectronics are becoming blurred. As the applications for opto-electronic integrated circuits (OEICs) are developed and manufacturing capabilities expand, design support is necessary to fully exploit the potential of this optics technology. Such design support for moving beyond custom design to automated synthesis and optimization is not well developed. Scalability requires abstractions, which in turn enable and require the use of optimization algorithms and design methodology flows. Design automation represents an opportunity to take OEIC design to a larger scale, facilitating design-space exploration and laying the foundation for current and future optical applications, thus fully realizing the potential of this technology. This dissertation proposes design automation for integrated optic system design. Using a building-block model for optical devices, we provide an EDA-inspired design flow and methodologies for optical design automation. Underlying these flows and methodologies are new supporting techniques in behavioral and physical synthesis, as well as device-resynthesis techniques for thermal-aware system integration. We also provide modeling for optical devices and determine optimization and constraint parameters that guide the automation techniques. Our techniques and methodologies are then applied to the design and optimization of optical circuits and devices. Experimental results are analyzed to evaluate their efficacy. We conclude with discussions on the contributions and limitations of the approaches in the context of optical design automation, and describe the tremendous opportunities for future research in design automation for integrated optics.

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, and revolutionary reversible logic gates and circuits, but they can only be realized and have lasting effect if firm conceptual and theoretical foundations are established first.
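
    A minimal concrete example of a reversible logic gate of the kind mentioned above is the Toffoli (controlled-controlled-NOT) gate: it is a bijection on three bits and is its own inverse, so a circuit built only from such gates can be run backwards step by step. The sketch below is a generic textbook illustration, not material from the surveyed book:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flip the target bit c iff both control bits are 1.
    The mapping is a bijection on 3-bit states, hence reversible."""
    return a, b, c ^ (a & b)

# Applying the gate twice restores every input: running the circuit
# "backwards" is as easy as running it forwards.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits
```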

    The implementation and applications of multiple-valued logic

    Multiple-Valued Logic (MVL) takes two major forms. Multiple-valued circuits can implement the logic directly by using multiple-valued signals, or the logic can be implemented indirectly with binary circuits, by using more than one binary signal to represent a single multiple-valued signal. Techniques such as carry-save addition can be viewed as indirectly implemented MVL. Both direct and indirect techniques have been shown in the past to provide advantages over conventional arithmetic and logic techniques in algorithms required widely in computing, for applications such as image and signal processing. It is possible to implement basic MVL building blocks at the transistor level. However, these circuits are difficult to design due to their non-binary nature; in the design stage they are more like analogue circuits than binary circuits. Current integrated circuit technologies are biased towards binary circuitry. In spite of this, there is potential for power and area savings from MVL circuits, especially in technologies such as BiCMOS. This thesis shows that the use of voltage-mode MVL will, in general, not provide bandwidth increases on circuit buses, because the buses become slower as the number of signal levels increases. Current-mode MVL circuits, however, do have the potential to reduce the power and area requirements of arithmetic circuitry. The design of transistor-level circuits is investigated in terms of a modern production technology. A novel methodology for the design of current-mode MVL circuits is developed, based on the novel concept of non-linear current encoding of signals, which provides the opportunity for the efficient design of many previously unimplemented circuits in current-mode MVL. This methodology is used to design a useful set of basic MVL building blocks, and fabrication results are reported. The creation of libraries of MVL circuits is also discussed. The CORDIC algorithm for two-dimensional vector rotation is examined in detail as an example for indirect MVL implementation. The algorithm is extended to a set of three-dimensional vector rotators using conventional arithmetic, redundant radix-four arithmetic, and Taylor series expansions. These algorithms can be used for two-dimensional vector rotations in which no scale factor corrections are needed. The new algorithms are compared in terms of basic VLSI criteria against previously reported algorithms. A pipelined version of the redundant arithmetic algorithm is floorplanned and partially laid out to give indications of wiring overheads and layout densities. An indirectly implemented MVL algorithm such as the CORDIC algorithm described in this thesis would clearly benefit from direct implementation in MVL.
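
    As a point of reference for the rotation algorithms discussed above, the classical radix-2 CORDIC iteration rotates a vector using only shifts, adds and a final constant scaling; the redundant radix-four and MVL variants developed in the thesis start from this scheme. The sketch below is the standard textbook formulation in plain floating-point Python, not the thesis implementation:

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Classical CORDIC rotation: rotate (x, y) by `angle` (radians) using only
    shift-and-add style micro-rotations of +/- atan(2^-i). The accumulated
    magnitude gain of the micro-rotations is removed by the constant k."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # inverse of the CORDIC gain
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                    # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k

# Rotate the unit vector (1, 0) by 30 degrees: expect roughly (0.8660, 0.5000).
print(cordic_rotate(1.0, 0.0, math.radians(30)))
```

    The redundant radix-four version mentioned in the abstract replaces the conventional additions with redundant arithmetic, which limits carry propagation in each iteration; the floating-point form here is only for clarity.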

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring the properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy. Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
