TSV placement optimization for liquid cooled 3D-ICs with emerging NVMs
Three-dimensional integrated circuits (3D-ICs) are a promising solution to the performance bottleneck in planar integrated circuits. One of the salient features of 3D-ICs is their ability to integrate heterogeneous technologies, such as emerging non-volatile memories (NVMs), in a single chip. However, thermal management in 3D-ICs is a significant challenge owing to their high heat flux (~250 W/cm²). Several research groups have focused on either run-time or design-time mechanisms to reduce the heat flux, but did not consider 3D-ICs with heterogeneous stacks. The goal of this work is to achieve a balanced thermal gradient in 3D-ICs while reducing peak temperatures. In this research, placement algorithms for design-time optimization and the choice of appropriate cooling mechanisms for run-time modulation of temperature are proposed. Specifically, an architectural framework that introduces a weight-based simulated annealing (WSA) algorithm for thermal-aware placement of through-silicon vias (TSVs) with inter-tier liquid cooling is proposed for design time. In addition, integrating a dedicated stack of emerging NVMs such as RRAM, PCRAM, and STTRAM, a run-time simulation framework is developed to analyze the thermal and performance impact of these NVMs in 3D-MPSoCs with inter-tier liquid cooling. Experimental results of the WSA algorithm implemented on the MCNC91 and GSRC benchmarks demonstrate up to an 11 K reduction in the average temperature across the 3D-IC chip. In addition, the power density arrangement in WSA improved thermal uniformity by 5%. Furthermore, simulation results of the PARSEC benchmarks with an NVM L2 cache demonstrate a temperature reduction of 12.5 K (RRAM) compared to SRAM in 3D-ICs. In particular, RRAM proved to be a thermally efficient replacement for SRAM, with a 34% lower energy-delay product (EDP) and a 9.7 K average temperature reduction.
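The thermal-aware simulated annealing idea described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the 1-D tile array, the fixed per-TSV thermal "relief" constant, the max-minus-min cost as a proxy for thermal-gradient balance, and the cooling schedule are all hypothetical and are not the paper's actual WSA formulation.

```python
import math
import random

def simulated_annealing_tsv(power, n_tsvs, relief=0.3, steps=5000,
                            t0=1.0, alpha=0.995, seed=1):
    """Hypothetical sketch: place n_tsvs on a 1-D array of tiles so
    the effective heat profile is as flat as possible.  All constants
    (relief per TSV, cooling schedule) are illustrative only."""
    rng = random.Random(seed)
    n = len(power)

    def cost(placement):
        # Effective heat per tile: raw power density minus the relief
        # contributed by a TSV sitting on that tile.  The cost is the
        # spread (max - min), a simple proxy for gradient balance.
        eff = [p - (relief if i in placement else 0.0)
               for i, p in enumerate(power)]
        return max(eff) - min(eff)

    placement = set(rng.sample(range(n), n_tsvs))
    best, best_cost = set(placement), cost(placement)
    t = t0
    for _ in range(steps):
        # Perturb: move one TSV to a random empty tile.
        src = rng.choice(sorted(placement))
        dst = rng.choice([i for i in range(n) if i not in placement])
        cand = (placement - {src}) | {dst}
        delta = cost(cand) - cost(placement)
        # Accept improving moves always, worsening moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            placement = cand
            if cost(placement) < best_cost:
                best, best_cost = set(placement), cost(placement)
        t *= alpha
    return sorted(best), best_cost
```

On a toy power-density profile the annealer settles the TSVs on the hottest tiles, flattening the effective profile; the paper's WSA additionally weights candidate positions and models inter-tier liquid cooling, which this sketch does not attempt.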
Novel dual-Vth independent-gate FinFET circuits
This paper describes gate work function and oxide thickness tuning to realize novel circuits using dual-Vth independent-gate FinFETs. Dual-Vth FinFETs with independent gates enable series and parallel merge transformations in logic gates, realizing compact low-power alternatives. Furthermore, they also enable the design of a new class of compact logic gates with higher expressive power and flexibility than conventional forms, e.g., implementing 12 unique Boolean functions using only four transistors. The gates are designed and calibrated into a technology library using the University of Florida double-gate model. Synthesis results for 14 benchmark circuits from the ISCAS and OpenSPARC suites indicate that, on average, the enhanced library reduces delay, power, and area by 9%, 21%, and 27%, respectively, over a conventional library designed using FinFETs in 32 nm technology.
NSF CAREER Award CCF-074685
Dual-Vth Independent-Gate FinFETs for Low Power Logic Circuits
This paper describes the electrode work-function, oxide thickness, gate-source/drain underlap, and silicon thickness optimization required to realize dual-Vth independent-gate FinFETs. Optimum values for these FinFET design parameters are derived using the physics-based University of Florida SPICE model for double-gate devices, and the optimized FinFETs are simulated and validated using Sentaurus TCAD simulations. Dual-Vth FinFETs with independent gates enable series and parallel merge transformations in logic gates, realizing compact low-power alternative gates with competitive performance and reduced input capacitance in comparison to conventional FinFET gates. Furthermore, they also enable the design of a new class of compact logic gates with higher expressive power and flexibility than conventional CMOS gates, e.g., implementing 12 unique Boolean functions using only four transistors. Circuit designs that balance and improve the performance of the novel gates are described. The gates are designed and calibrated into conventional and enhanced technology libraries using the University of Florida double-gate model. Synthesis results for 16 benchmark circuits from the ISCAS and OpenSPARC suites indicate that, on average at 2 GHz, the enhanced library reduces total power and the number of fins by 36% and 37%, respectively, over a conventional library designed using shorted-gate FinFETs in 32 nm technology.
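The series and parallel merge transformations mentioned in both abstracts can be illustrated at the switch level. The model below is a deliberate simplification assumed for illustration: an idealized low-Vth independent-gate nFET is taken to conduct when either gate is driven, and a high-Vth one only when both gates are driven, which is the behavior that lets a single independent-gate device replace a parallel or series pair of conventional shorted-gate transistors.

```python
from itertools import product

# Switch-level sketch of independent-gate (IG) FinFET merges.
# Simplified assumption: one driven gate suffices to turn on a
# low-Vth IG device, while a high-Vth IG device needs both gates.
def ig_low_vth(a, b):      # candidate replacement for two parallel nFETs
    return a or b

def ig_high_vth(a, b):     # candidate replacement for two series nFETs
    return a and b

def parallel_pair(a, b):   # two conventional shorted-gate nFETs in parallel
    return a or b

def series_pair(a, b):     # two conventional shorted-gate nFETs in series
    return a and b

# Verify the merges are logically equivalent over all input patterns.
for a, b in product([False, True], repeat=2):
    assert ig_low_vth(a, b) == parallel_pair(a, b)
    assert ig_high_vth(a, b) == series_pair(a, b)
```

For example, the two parallel pull-up transistors of a 2-input NAND collapse into one merged IG device under this model, which is where the fin-count and input-capacitance savings reported above come from.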
Why Did So Many Poor-Performing Firms Come to Market in the Late 1990s?: Nasdaq Listing Standards and the Bubble
This paper examines the impact of Nasdaq listing standards on the composition of new listings in the late 1990s. Nasdaq has two types of listing standards: one based on profitability and the second based, explicitly or implicitly, on market capitalization. Specifically, unprofitable firms are allowed to list if, among other criteria, their pro-forma net tangible assets, which include the anticipated proceeds from their IPO, exceed $75 million. We show that as the market bubble accelerated in the late 1990s, a vast majority of firms entered under a market-capitalization-based standard, and these firms became a substantial portion of the Nasdaq. Subsequently, these firms performed the poorest in terms of financial performance, stock returns, and involuntary delistings, while firms that listed under the profitability standard performed much better. In addition, firms that entered under market capitalization standards also exhibited the greatest return volatility. These results illustrate the importance of a profitability standard and the danger of a market-capitalization-based standard (explicit or implicit) in a market that is in what, ex post, turns out to be a bubble.
Stock Option Expense, Forward-Looking Information, and Implied Volatilities of Traded Options
Prior research generally finds that firms underreport option expense by managing assumptions underlying option valuation (e.g., they shorten the expected option lives), but it fails to document management of a key assumption, the one concerning expected stock-price volatility. Using a new methodology, we address two questions: (1) To what extent do companies follow the guidance in FAS 123 and use forward-looking information, in addition to the readily available historical volatility, in estimating expected volatility? (2) What determines the cross-sectional variation in the reliance on forward-looking information? We find that firms use both historical and forward-looking information in deriving expected volatility. We also find, however, that the reliance on forward-looking information is limited to situations where this reliance results in reduced expected volatility and thus smaller option expense. We interpret this finding as evidence that managers opportunistically use the discretion in estimating expected volatility afforded by FAS 123. In support of this interpretation, we also find that managerial incentives play a key role in this opportunism.
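The two questions above can be made concrete with a small sketch. The construction below is hypothetical and much simpler than the paper's methodology: it computes an annualized historical volatility from monthly returns in the standard way, then expresses a disclosed expected volatility as a position between the historical and the implied (forward-looking) estimates.

```python
import math

def annualized_hist_vol(monthly_returns):
    """Sample standard deviation of monthly returns, annualized by
    sqrt(12).  A standard construction; the inputs are hypothetical."""
    n = len(monthly_returns)
    mean = sum(monthly_returns) / n
    var = sum((r - mean) ** 2 for r in monthly_returns) / (n - 1)
    return math.sqrt(var) * math.sqrt(12)

def implied_weight(disclosed, hist, implied):
    """Where the disclosed expected volatility sits between the
    historical and implied estimates (assumes implied != hist):
    0 = purely historical, 1 = purely forward-looking.  A weight
    near 1 only when implied < hist would match the opportunism
    pattern described in the abstract."""
    return (disclosed - hist) / (implied - hist)
```

For instance, a firm disclosing 35% expected volatility when historical volatility is 40% and implied volatility is 30% sits halfway toward the forward-looking estimate, and that reliance happens to lower its option expense.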
Accident analysis of software architecture in high-reliability systems: Space Based Infrared System software problems
This research analyzes the Space Based Infrared System (SBIRS) program as an accident, drawing on information gathered over 15 years to understand its cause. The program suffered a series of failures, and workarounds were developed incrementally over the years to address each problem as it arose. This culminated in a major failure during thermal vacuum testing. The architecture was then reassessed, and the new architecture adopted was the wrong one; this wrong architectural decision is the accident analyzed here. Its cause is examined thoroughly to understand the circumstances in which such an architecture was adopted.
A systems analysis of the program's environment was conducted to understand the circumstances of the accident, and an accident analysis was conducted to understand how systemic failures influenced the wrong architectural decision. A comparative study of accident analysis methodologies was undertaken to identify the method best suited to this case. STAMP, a systemic accident analysis method that accounts for the influence of the environment, was found to be the best fit.
The STAMP accident analysis method was then applied to understand the accident in detail. The analysis was based on reports gathered from the GAO, the DoD, and other sources, and its completeness and accuracy were confirmed with the GAO. The STPA process was used to conduct the analysis in three stages: identifying the control structures, the changes in those control structures, and the dynamic process model. The STAMP analysis was extended by adding context as an additional causal factor.
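The first STPA stage, identifying the control structure, can be sketched as a toy model. The controller names, control actions, and feedback channels below are hypothetical and are not taken from the SBIRS reports; the sketch only shows the kind of mechanical check a STAMP-style analysis performs, flagging control loops that issue commands without feedback closing the loop.

```python
# Hypothetical control structure: who commands whom, and with what.
control_actions = {
    ("Program Office", "Prime Contractor"): "architecture directives",
    ("Prime Contractor", "Software Team"): "design changes",
}

# Feedback channels flowing back up the hierarchy.  Note there is
# deliberately no feedback from Software Team to Prime Contractor.
feedback = {
    ("Prime Contractor", "Program Office"): "status reports",
}

def missing_feedback(actions, fb):
    """Return (controller, controlled process) pairs that issue
    control actions without a corresponding feedback channel --
    candidate inadequate-control conditions in STAMP terms."""
    return sorted((ctrl, proc) for (ctrl, proc) in actions
                  if (proc, ctrl) not in fb)
```

Running `missing_feedback(control_actions, feedback)` on this toy model flags the loop between the contractor and the software team, illustrating how a systemic method locates weaknesses in the control structure rather than in individual component failures.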
Accidents in which context was the cause were analyzed to identify possible solutions. This highlighted the importance of context as an accident cause and the need to enhance the accident analysis model accordingly; treating context as part of the process that must be transferred to ensure successful completion was suggested. An organizational model that has been successful in assessing context-driven accidents in a different domain was studied and proposed for adoption as a preventive accident analysis model. Finally, the classification of the wrong architectural decision as an accident is contested and argued for, since such decisions are not currently considered accidents in the industry.
This research identifies the cause of the accident as the context in which the organizations were operating. The suggested solution is to stabilize the context in one organization and then replicate the stabilized context across the organizations involved in the program. The solution draws on contextual enhancement techniques used in health and safety management to build a positive organizational culture.
This research has thus contributed to the analysis of the architectural failure in the SBIRS program by identifying the accident analysis method best suited to the case study and applying it to understand the cause of the accident. It recommends enhancing the factors considered in accident analysis, proposes an accident prevention technique, and suggests a process for adopting it.
This research leads to two recommendations for future work: an architectural technique that creates a framework of components to prevent future architectural accidents such as the one in this case study, and a process for successfully passing on context in order to prevent accidents caused by organizational context.
The research is structured to understand the problem; analyze it using an accident analysis methodology suited to the domain, detailing the accident; compare different domains with similar accident causes; and finally recommend an accident prevention technique that has been successful in organizations.
Design of Energy Efficient Snubber Circuits for Protection of Switching Devices in High Power Applications
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Semiconductor devices are subjected to elevated levels of dv/dt and di/dt when used in high-voltage, high-current, and elevated-temperature applications. To reduce the stress on semiconductor switches, turn-on snubber circuits are used during turn-on and turn-off snubber circuits during turn-off. In low power applications, where the switching losses are not significant, these can be ignored. Over the last few years, Voltage Controlled Voltage Source (VCVS) applications in High Voltage Direct Current (HVDC) transmission have increased, particularly with the use of Multilevel Converters (MLCs). Switching losses in such high power applications now need to be considered, as they are no longer insignificant. According to the literature review, energy efficient snubber circuits (EESCs) were previously available only for low power applications. This research dealt with the design of EESCs for high power cascaded H-bridge MLCs. The main contributions were: (1) a critical review of existing snubber circuits; (2) the design of energy efficient snubber circuits; (3) the design of the Safe Operating Area (SOA) of the power switch used in MLCs, made possible by COMSOL thermal simulation; (4) a reduction in switching power loss from 1782 MWh before the EESC to 1379 MWh after (22.6%), an annual reduction of 403 MWh, with a corresponding impact on the reduction in global warming; (5) significant annual cost benefits, with the cost of wasted switching energy reduced from £125,000 to £68,612 (to about 55% of the original); (6) the additional benefit of connecting inductors in the EESCs, which reduced harmonic levels from 6% at V3 down to 1.5% at V7. Optimisation methods such as Particle Swarm Optimisation (PSO) and the Generalised Reduced Gradient (GRG) method were used to evaluate the individual components of the proposed EESCs.
The use of COMSOL thermal simulation software was critical in the design of the power IGBT SOA. A case study of a 250 kW station, a reduced-scale version of a typical 2000 MW HVDC station (for example, the Sellindge HVDC station), based on a 7-level MLC using Insulated Gate Bipolar Transistors (IGBTs), was used to evaluate the annual reduction in power losses and cost. If an upward trajectory is computed based on the number of UK HVDC converter stations, enormous economic and energy recovery can result, with a significant impact towards a decrease in global warming. The results obtained validated the research goals and identified a high potential for the application of EESCs in HVDC.
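For context on what a conventional dissipative snubber costs in the first place, the textbook sizing of an RCD turn-off snubber can be sketched. This is a generic illustration under standard-textbook assumptions, not the thesis's EESC design; all example values are hypothetical.

```python
def rcd_turnoff_snubber(v_dc, i_off, t_fall, f_sw, t_on_min):
    """Textbook RCD turn-off snubber sizing (generic illustration,
    not the thesis's energy-efficient design).  Inputs in SI units:
    DC-link voltage, current at turn-off, current fall time,
    switching frequency, minimum on-time."""
    # Capacitor sized so the device voltage ramps to V_dc just as
    # the device current reaches zero at turn-off.
    c_s = i_off * t_fall / (2.0 * v_dc)
    # Resistor small enough to discharge C_s within the minimum
    # on-time (about 5 RC time constants).
    r_s = t_on_min / (5.0 * c_s)
    # In a dissipative snubber, the stored energy 0.5*C*V^2 is
    # burned in R_s every switching cycle.
    p_loss = 0.5 * c_s * v_dc ** 2 * f_sw
    return c_s, r_s, p_loss
```

An energy-efficient snubber replaces the discharge resistor with a recovery path so that most of the 0.5·C·V² per cycle is returned to the supply rather than dissipated, which is the source of the loss reductions reported above.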