
    TSV placement optimization for liquid cooled 3D-ICs with emerging NVMs

    Three-dimensional integrated circuits (3D-ICs) are a promising solution to the performance bottleneck in planar integrated circuits. One of their salient features is the ability to integrate heterogeneous technologies, such as emerging non-volatile memories (NVMs), in a single chip. However, thermal management in 3D-ICs is a significant challenge owing to their high heat flux (~250 W/cm²). Prior work has focused on either run-time or design-time mechanisms to reduce the heat flux, and has not considered 3D-ICs with heterogeneous stacks. The goal of this work is to achieve a balanced thermal gradient in 3D-ICs while reducing peak temperatures. This research proposes placement algorithms for design-time optimization and appropriate cooling mechanisms for run-time temperature modulation. Specifically, for design time, an architectural framework is proposed that introduces a weight-based simulated annealing (WSA) algorithm for thermal-aware placement of through-silicon vias (TSVs) with inter-tier liquid cooling. For run time, a simulation framework is developed to analyze the thermal and performance impact of integrating a dedicated stack of emerging NVMs, such as RRAM, PCRAM, and STT-RAM, in 3D-MPSoCs with inter-tier liquid cooling. Experimental results for the WSA algorithm on the MCNC91 and GSRC benchmarks demonstrate up to an 11 K reduction in average temperature across the 3D-IC chip; in addition, the power-density arrangement in WSA improved temperature uniformity by 5%. Furthermore, simulations of the PARSEC benchmarks with an NVM L2 cache demonstrate a 12.5 K temperature reduction for RRAM compared to SRAM in 3D-ICs. In particular, RRAM proved to be a thermally efficient replacement for SRAM, with a 34% lower energy-delay product (EDP) and a 9.7 K reduction in average temperature.
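    To make the design-time step concrete, below is a minimal sketch of a weight-based simulated-annealing loop for TSV placement. The grid model, the per-TSV cooling effect, and the weighted cost combining peak temperature with thermal gradient are illustrative assumptions standing in for the paper's thermal model of inter-tier liquid cooling, not its actual implementation.

    import math
    import random

    # Toy thermal proxy: each TSV cools its own grid cell and its neighbours a
    # little. The paper uses a full 3D-IC thermal model with inter-tier liquid
    # cooling; this stand-in exists only to make the annealing loop runnable.
    GRID = 16  # hypothetical 16x16 floorplan grid

    def temperature_map(power, tsvs):
        temp = [row[:] for row in power]
        for (x, y) in tsvs:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < GRID and 0 <= ny < GRID:
                        temp[nx][ny] -= 5.0  # assumed per-TSV cooling effect
        return temp

    def cost(power, tsvs, w_peak=0.7, w_grad=0.3):
        # Weighted cost: peak temperature plus spatial gradient (max - min),
        # mirroring the twin goals of low peaks and a balanced gradient.
        flat = [t for row in temperature_map(power, tsvs) for t in row]
        return w_peak * max(flat) + w_grad * (max(flat) - min(flat))

    def wsa_place(power, n_tsvs=20, steps=5000, t0=50.0, alpha=0.999):
        tsvs = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(n_tsvs)]
        best, best_cost, t = list(tsvs), cost(power, tsvs), t0
        for _ in range(steps):
            cand = list(tsvs)
            cand[random.randrange(n_tsvs)] = (random.randrange(GRID),
                                              random.randrange(GRID))
            delta = cost(power, cand) - cost(power, tsvs)
            if delta < 0 or random.random() < math.exp(-delta / t):
                tsvs = cand  # accept downhill moves, and uphill ones with decaying odds
                if cost(power, tsvs) < best_cost:
                    best, best_cost = list(tsvs), cost(power, tsvs)
            t *= alpha  # geometric cooling schedule
        return best, best_cost

    random.seed(1)
    power = [[random.uniform(60.0, 110.0) for _ in range(GRID)] for _ in range(GRID)]
    placement, final_cost = wsa_place(power)
    print(f"final weighted cost: {final_cost:.1f}")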

    Novel dual-Vth independent-gate FinFET circuits

    This paper describes gate work-function and oxide-thickness tuning to realize novel circuits using dual-Vth independent-gate FinFETs. Dual-Vth FinFETs with independent gates enable series and parallel merge transformations in logic gates, realizing compact low-power alternatives. They also enable the design of a new class of compact logic gates with higher expressive power and flexibility than conventional forms, e.g., implementing 12 unique Boolean functions using only four transistors. The gates are designed using the University of Florida double-gate model and calibrated into a technology library. Synthesis results for 14 benchmark circuits from the ISCAS and OpenSPARC suites indicate that, on average, the enhanced library reduces delay, power, and area by 9%, 21%, and 27%, respectively, over a conventional library designed using FinFETs in 32 nm technology.
    NSF CAREER Award CCF-074685

    Dual-Vth Independent-Gate FinFETs for Low Power Logic Circuits

    This paper describes the electrode work-function, oxide-thickness, gate-source/drain-underlap, and silicon-thickness optimization required to realize dual-Vth independent-gate FinFETs. Optimum values for these FinFET design parameters are derived using the physics-based University of Florida SPICE model for double-gate devices, and the optimized FinFETs are simulated and validated using Sentaurus TCAD simulations. Dual-Vth FinFETs with independent gates enable series and parallel merge transformations in logic gates, realizing compact low-power alternative gates with competitive performance and reduced input capacitance in comparison to conventional FinFET gates. They also enable the design of a new class of compact logic gates with higher expressive power and flexibility than conventional CMOS gates, e.g., implementing 12 unique Boolean functions using only four transistors. Circuit designs that balance and improve the performance of the novel gates are described. The gates are designed and calibrated, using the University of Florida double-gate model, into conventional and enhanced technology libraries. Synthesis results for 16 benchmark circuits from the ISCAS and OpenSPARC suites indicate that, on average at 2 GHz, the enhanced library reduces total power and the number of fins by 36% and 37%, respectively, over a conventional library designed using shorted-gate FinFETs in 32 nm technology.
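    A switch-level sketch of the series/parallel merge idea follows. The conduction rules used here (a high-Vth independent-gate device conducts only when both gates are driven, a low-Vth device when either is) are a simplified behavioral assumption standing in for the calibrated University of Florida SPICE model; the merged NAND2 shows two devices doing the work of four.

    from itertools import product

    # Switch-level conduction rules (a simplifying assumption for illustration,
    # not the calibrated device model):
    #   n-type, high-Vth, independent gates: conducts only when BOTH gates are 1,
    #   so one device can replace two series nFETs (series merge).
    #   n-type, low-Vth: conducts when EITHER gate is 1, so one device can
    #   replace two parallel nFETs (parallel merge).
    # p-type devices are the duals, driven by 0s instead of 1s.

    def n_fet(vth, g1, g2):
        return (g1 and g2) if vth == "high" else (g1 or g2)

    def p_fet(vth, g1, g2):
        return (not g1 and not g2) if vth == "high" else (not g1 or not g2)

    def merged_nand2(a, b):
        pull_down = n_fet("high", a, b)  # one high-Vth nFET replaces the series pair
        pull_up = p_fet("low", a, b)     # one low-Vth pFET replaces the parallel pair
        assert bool(pull_down) != bool(pull_up)  # exactly one network conducts
        return 0 if pull_down else 1

    for a, b in product((0, 1), repeat=2):
        print(a, b, "->", merged_nand2(a, b))  # NAND truth table: 1, 1, 1, 0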

    Why Did so Many Poor-Performing Firms Come to Market in the Late 1990s?: Nasdaq Listing Standards and the Bubble

    This paper examines the impact of Nasdaq listing standards on the composition of new listings in the late 1990s. The Nasdaq has two types of listing standards: one based on profitability and a second based, explicitly or implicitly, on market capitalization. Specifically, unprofitable firms are allowed to list if either their pro-forma net tangible assets, which include the anticipated proceeds from their IPO, exceed $18 million or their market capitalization exceeds $75 million. We show that as the market bubble accelerated in the late 1990s, the vast majority of firms entered under a market-capitalization-based standard, and these firms became a substantial portion of the Nasdaq. Subsequently, these firms performed the poorest in terms of financial performance, stock-return performance, and involuntary delistings, while firms that listed under the profitability standard performed much better. In addition, firms that entered under market-capitalization standards also exhibited the greatest return volatility. These results illustrate the importance of a profitability standard and the danger of a market-capitalization-based standard (explicit or implicit) in a market that is in what, ex post, turns out to be a bubble.
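    For concreteness, the quoted thresholds can be read as a simple decision rule. The sketch below encodes it; the function name, arguments, and the order in which the standards are checked are hypothetical simplifications.

    # A sketch of the listing rule quoted above (values in $ millions).
    def nasdaq_listing_path(profitable, pro_forma_nta, market_cap):
        if profitable:
            return "profitability standard"
        if pro_forma_nta > 18:    # pro-forma net tangible assets exceed $18M
            return "net tangible assets standard"
        if market_cap > 75:       # market capitalization exceeds $75M
            return "market capitalization standard"
        return "ineligible"

    print(nasdaq_listing_path(False, 10, 120))  # -> market capitalization standard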

    Stock Option Expense, Forward-Looking Information, and Implied Volatilities of Traded Options

    Prior research generally finds that firms underreport option expense by managing the assumptions underlying option valuation (e.g., they shorten the expected option lives), but it fails to document management of a key assumption, the one concerning expected stock-price volatility. Using a new methodology, we address two questions: (1) To what extent do companies follow the guidance in FAS 123 and use forward-looking information, in addition to readily available historical volatility, in estimating expected volatility? (2) What determines the cross-sectional variation in the reliance on forward-looking information? We find that firms use both historical and forward-looking information in deriving expected volatility. We also find, however, that the reliance on forward-looking information is limited to situations where it results in reduced expected volatility and thus a smaller option expense. We interpret this finding as evidence that managers opportunistically use the discretion in estimating expected volatility afforded by FAS 123. In support of this interpretation, we also find that managerial incentives play a key role in this opportunism.
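    The first question can be framed as a regression of disclosed expected volatility on historical and implied volatility, where the weight on implied volatility proxies reliance on forward-looking information. The sketch below illustrates that framing on synthetic data; the variable names and data-generating process are hypothetical, and the paper's actual methodology is richer than this.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    hist_vol = rng.uniform(0.2, 0.8, n)          # realized historical volatility
    impl_vol = hist_vol + rng.normal(0, 0.1, n)  # implied volatility from traded options
    disclosed = 0.6 * hist_vol + 0.3 * impl_vol + rng.normal(0, 0.02, n)

    # OLS: disclosed = b0 + b1*historical + b2*implied. The weight b2 proxies
    # reliance on forward-looking information, b1 reliance on history.
    X = np.column_stack([np.ones(n), hist_vol, impl_vol])
    b, *_ = np.linalg.lstsq(X, disclosed, rcond=None)
    print(f"historical weight: {b[1]:.2f}, forward-looking weight: {b[2]:.2f}")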

    Accident analysis of software architecture in high-reliability systems: Space Based Infrared System software problems

    This research analyzes the architectural failure of the Space Based Infrared System (SBIRS) program, drawing on information gathered over 15 years to understand the cause of the accident. The program suffered a series of failures, and workarounds were developed incrementally over the years to solve these incidental problems, culminating in a major failure during thermal-vacuum testing. The architecture was then reassessed, and the new architecture adopted was the wrong one; this wrong architectural decision is the accident analyzed here. A systems analysis of the environment was conducted to understand the circumstances of the accident, and an accident analysis was conducted to understand how systemic failures influenced the wrong architectural decision. A comparative study of accident-analysis methodologies was undertaken to identify the method best suited to the case; STAMP, a systemic method that analyzes accidents caused by the influence of the environment, was judged the best fit and adopted to study the accident in detail. The analysis was based on reports gathered from the GAO, the DoD, and other sources, and was confirmed for completeness and accuracy by the GAO. The STPA process was applied in three stages: identifying the control structures, the changes in those control structures, and the dynamic process model.

    The STAMP analysis was then extended by adding context as an additional causal factor. Accidents in which context was the cause were examined to identify possible solutions, confirming the importance of context as an accident cause and the need to enhance the accident-analysis model; it is suggested that context be treated as part of the process that must be transferred to ensure successful completion. An organizational model that has successfully assessed context-driven accidents in a different domain was studied and is recommended as a preventive accident-analysis model. Finally, the claim that the wrong architectural decision constitutes an accident is contested and defended, since such decisions are not currently treated as accidents in industry.

    This research identifies the cause of the accident as the context in which the organizations were operating. The suggested solution is to stabilize the context in one organization and then replicate that stabilized context across the organizations involved in the program, using contextual-enhancement techniques from health and safety management to build a positive organizational culture. In sum, this research contributes an analysis of the architectural failure in the SBIRS program: it identifies the accident-analysis method best suited to the case study, applies that method to understand the cause of the accident, recommends an enhancement to the factors considered in accident analysis, and proposes an accident-prevention technique together with a process for adopting it. Two directions for future work follow: an architectural technique that would create a framework of components to prevent architectural accidents such as this one, and a process for successfully passing on context to prevent accidents caused by organizational context. The research is structured to understand the problem; analyze it using an accident-analysis methodology suited to the domain; compare different domains with similar accident causes; and recommend an accident-prevention technique that has succeeded in organizations.
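    As a rough illustration of the first STPA stage described above (identifying control structures), the sketch below represents controller/process pairs and flags control actions that lack a feedback channel, one kind of systemic flaw STAMP looks for. The entities and links are hypothetical, loosely modeled on a program-office/contractor hierarchy, not the actual SBIRS control structure.

    # Controller -> process edges labelled with the control actions issued.
    control_actions = {
        ("program_office", "contractor"): "requirements and schedule directives",
        ("contractor", "software_team"): "architecture and workaround approvals",
        ("software_team", "flight_software"): "incremental patches",
    }
    # Process -> controller edges carrying feedback. Note the deliberate gap:
    # no feedback from flight_software (e.g. thermal-vacuum test results)
    # reaches the software_team in this toy model.
    feedback_channels = {
        ("contractor", "program_office"): "status reports",
        ("software_team", "contractor"): "defect and test reports",
    }

    def incomplete_loops(actions, feedback):
        """Flag control loops where a controller issues actions but receives
        no feedback from the controlled process."""
        return [(c, p) for (c, p) in actions if (p, c) not in feedback]

    for controller, process in incomplete_loops(control_actions, feedback_channels):
        print(f"incomplete loop: {controller} -> {process} (no feedback path)")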