Statistical Power Supply Dynamic Noise Prediction in Hierarchical Power Grid and Package Networks
One of the most crucial design challenges for high-performance systems-on-chip is coping with power supply noise, which grows with higher frequencies, the huge number of functional blocks, and technology scaling. In contrast to traditional post-physical-design static voltage drop analysis, this work focuses on a priori dynamic voltage drop evaluation. It takes transient currents and on-chip and package RLC parasitics into account while exploring the power grid design solution space: design countermeasures can thus be defined early, and long post-physical-design verification cycles can be shortened. As shown by an extensive set of results, a carefully extracted and modular grid library ensures a realistic evaluation of the impact of parasitics on noise and facilitates power network construction; furthermore, statistical analysis guarantees a correct current envelope evaluation, and SPICE simulations confirm the reliability of the results.
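A hedged sketch of the statistical idea in the abstract above: estimating a realistic current envelope from block activity statistics rather than summing worst-case currents. The block count, per-block current range, and grid resistance below are illustrative assumptions, not values from the paper.

```python
import random

# Monte Carlo estimate of a power-grid current envelope (sketch).
# All parameters are illustrative assumptions, not from the paper.
random.seed(0)

N_BLOCKS = 50        # functional blocks on the chip (assumed)
N_SAMPLES = 10_000   # Monte Carlo samples of simultaneous activity
R_GRID = 0.02        # effective grid resistance in ohms (assumed)

def sample_total_current():
    # Each block draws a random current in [0, 5] mA depending on activity.
    return sum(random.uniform(0.0, 0.005) for _ in range(N_BLOCKS))

samples = sorted(sample_total_current() for _ in range(N_SAMPLES))
# A 99.9th-percentile envelope replaces the pessimistic worst case in which
# every block draws its maximum current at once.
i_env = samples[int(0.999 * N_SAMPLES)]
i_worst = N_BLOCKS * 0.005

drop_env = i_env * R_GRID
drop_worst = i_worst * R_GRID
print(f"envelope drop: {drop_env*1000:.2f} mV, worst case: {drop_worst*1000:.2f} mV")
```

The gap between the two numbers is exactly the design margin a statistical envelope recovers compared with static worst-case analysis.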
Trends and challenges in VLSI technology scaling towards 100 nm
Summary form only given. Moore's Law drives VLSI technology to continuous increases in transistor densities and higher clock frequencies. This tutorial will review the trends in VLSI technology scaling in the last few years and discuss the challenges facing process and circuit engineers in the 100 nm generation and beyond. The first focus area is process technology, including transistor scaling trends and research activities for the 100 nm technology node and beyond. Transistor leakage and interconnect RC delays will continue to increase. The tutorial will review new circuit design techniques for emerging process technologies, including dual-Vt transistors and silicon-on-insulator. It will also cover circuit and layout techniques to reduce clock distribution skew and jitter, to model and reduce transistor leakage, and to improve the electrical performance of flip-chip packages. Finally, the tutorial will review the test challenges for the 100 nm technology node due to increased clock frequency and power consumption (both active and passive) and present several potential solutions.
On Mitigation of Side-Channel Attacks in 3D ICs: Decorrelating Thermal Patterns from Power and Activity
Various side-channel attacks (SCAs) on ICs have been successfully
demonstrated and also mitigated to some degree. In the context of 3D ICs,
however, prior art has mainly focused on efficient implementations of classical
SCA countermeasures. That is, SCAs tailored for up-and-coming 3D ICs have been
overlooked so far. In this paper, we conduct such a novel study and focus on
one of the most accessible and critical side channels: thermal leakage of
activity and power patterns. We address the thermal leakage in 3D ICs early on
during floorplanning, along with tailored extensions for power and thermal
management. Our key idea is to carefully exploit the specifics of material and
structural properties in 3D ICs, thereby decorrelating the thermal behaviour
from underlying power and activity patterns. Most importantly, we discuss
powerful SCAs and demonstrate how our open-source tool helps to mitigate them.
Comment: Published in Proc. Design Automation Conference, 201
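As a hedged illustration of the thermal side channel itself (not of the paper's floorplanning tool), the sketch below models on-chip temperature as a low-pass-filtered version of a hypothetical power trace and measures how strongly the two correlate; an attacker exploits exactly this residual correlation, and the countermeasures discussed above aim to reduce it. The trace, filter constant, and trace length are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
power = rng.uniform(0.0, 1.0, size=500)   # hypothetical block power/activity trace

# Crude thermal model (assumption): temperature is a first-order low-pass
# filter of dissipated power, reflecting the chip's thermal inertia.
temp = np.zeros_like(power)
for t in range(1, len(power)):
    temp[t] = 0.95 * temp[t - 1] + 0.05 * power[t]

def pearson(a, b):
    """Pearson correlation coefficient between two traces."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Nonzero correlation = exploitable thermal leakage of the activity pattern.
leakage = pearson(power, temp)
print(f"power-temperature correlation: {leakage:.2f}")
```

Decorrelation-oriented floorplanning, in this picture, tries to drive that coefficient toward zero without changing the workload itself.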
A 90 nm CMOS 16 Gb/s Transceiver for Optical Interconnects
Interconnect architectures which leverage high-bandwidth optical channels offer a promising solution to address the increasing chip-to-chip I/O bandwidth demands. This paper describes a dense, high-speed, and low-power CMOS optical interconnect transceiver architecture. Vertical-cavity surface-emitting laser (VCSEL) data rate is extended for a given average current and corresponding reliability level with a four-tap current summing FIR transmitter. A low-voltage integrating and double-sampling optical receiver front-end provides adequate sensitivity in a power efficient manner by avoiding linear high-gain elements common in conventional transimpedance-amplifier (TIA) receivers. Clock recovery is performed with a dual-loop architecture which employs baud-rate phase detection and feedback interpolation to achieve reduced power consumption, while high-precision phase spacing is ensured at both the transmitter and receiver through adjustable delay clock buffers. A prototype chip fabricated in 1 V 90 nm CMOS achieves 16 Gb/s operation while consuming 129 mW and occupying 0.105 mm^2.
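The four-tap current-summing FIR transmitter mentioned above can be illustrated with a small sketch. The tap weights and bit pattern below are illustrative assumptions, not the chip's actual coefficients: a positive main cursor plus negative post-cursors emphasizes transitions and de-emphasizes runs, pre-compensating the low-pass behaviour of the VCSEL and channel.

```python
# Sketch of 4-tap FIR pre-emphasis; taps and bits are illustrative assumptions.
def fir_preemphasis(symbols, taps):
    """Convolve the symbol stream with the tap weights (direct form)."""
    out = []
    for n in range(len(symbols)):
        acc = 0.0
        for k, w in enumerate(taps):
            if n - k >= 0:
                acc += w * symbols[n - k]
        out.append(acc)
    return out

taps = [0.7, -0.15, -0.10, -0.05]      # main cursor + three post-cursors (assumed)
bits = [0, 0, 1, 1, 1, 1, 0, 1]
symbols = [1.0 if b else -1.0 for b in bits]
driven = fir_preemphasis(symbols, taps)
# The first symbol after a transition gets the largest drive; symbols deep in a
# run settle to the smaller steady-state level sum(taps) = 0.4.
print(driven)
```

In the actual transmitter the summation is done in the current domain, so the average VCSEL current, and hence device reliability, stays fixed while the effective data rate is extended.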
Qualitative Assessment of Gene Expression in Affymetrix Genechip Arrays
Affymetrix Genechip microarrays are used widely to determine the simultaneous
expression of genes in a given biological paradigm. Probes on the Genechip
array are atomic entities which, by design, are randomly distributed across
the array and in turn govern the gene expression. In the present study, we make
several interesting observations. We show that there is considerable
correlation between the probe intensities across the array which defy the
independence assumption. While the mechanism behind such correlations is
unclear, we show that scaling behavior and the profiles of perfect match (PM)
as well as mismatch (MM) probes are similar and immune to background
subtraction. We believe that the observed correlations are possibly an outcome
of inherent non-stationarities or patchiness in the array devoid of biological
significance. This is demonstrated by inspecting their scaling behavior and
profiles of the PM and MM probe intensities obtained from publicly available
Genechip arrays from three eukaryotic genomes, namely Drosophila melanogaster,
Homo sapiens, and Mus musculus, across distinct biological paradigms and across
laboratories, with and without background subtraction. The fluctuation
functions were estimated using detrended fluctuation analysis (DFA) with fourth
order polynomial detrending. The results presented in this study provide new
insights into correlation signatures of PM and MM probe intensities and
suggest the choice of DFA as a tool for qualitative assessment of Affymetrix
Genechip microarrays prior to their analysis. A more detailed investigation is
necessary in order to understand the source of these correlations.
Comment: 22 pages, 7 figures, 1 Table
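A minimal sketch of DFA with fourth-order polynomial detrending, the tool named in the abstract. The white-noise input, series length, and scale choices are assumptions for illustration; real probe-intensity series would be substituted for `x`.

```python
import numpy as np

def dfa(x, scales, order=4):
    """Detrended fluctuation analysis with polynomial detrending (sketch)."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        sq = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))       # fluctuation at window size n
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                    # white noise: expect exponent near 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
# Scaling exponent = slope of log F(n) versus log n.
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```

Uncorrelated probe intensities would give an exponent near 0.5; persistently larger values are the kind of correlation signature the study reports.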
Energy challenges for ICT
The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will impact heavily on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation systems, advanced driver assistance systems, and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.
A Survey of Prediction and Classification Techniques in Multicore Processor Systems
In multicore processor systems, being able to accurately predict the future provides new optimization opportunities which otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling modes that would guarantee minimum levels of desired performance while saving energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has transformed from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques for prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
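The DVFS example above can be sketched with one of the simplest predictors the survey's spectrum begins with: an exponentially weighted moving average of observed load driving a mode decision. The smoothing factor and mode thresholds are illustrative assumptions; real governors are considerably more elaborate.

```python
# Sketch: EWMA load predictor steering a hypothetical DVFS mode choice.
class EwmaPredictor:
    """Predicts the next load as a smoothed history of observed loads."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha       # weight of the most recent observation
        self.estimate = None

    def predict(self):
        return self.estimate

    def update(self, observed):
        if self.estimate is None:
            self.estimate = observed
        else:
            self.estimate = self.alpha * observed + (1 - self.alpha) * self.estimate

def choose_dvfs_mode(predicted_load):
    # Thresholds are illustrative assumptions; default to "high" when unknown
    # so performance guarantees are never violated by a missing prediction.
    if predicted_load is None or predicted_load > 0.7:
        return "high"
    if predicted_load > 0.3:
        return "medium"
    return "low"

p = EwmaPredictor()
for load in [0.9, 0.8, 0.2, 0.1]:   # load drops off over time
    p.update(load)
print(choose_dvfs_mode(p.predict()))  # prints "medium"
```

The machine-learning-based predictors the survey covers replace the EWMA with learned models, but the proactive control loop (observe, predict, act) is the same.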
An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys
We use an empirical approach to characterize the effect of charge-transfer
efficiency (CTE) losses in images taken with the Wide-Field Channel of the
Advanced Camera for Surveys. The study is based on profiles of warm pixels in
168 dark exposures taken between September and October 2009. The dark exposures
allow us to explore charge traps that affect electrons when the background is
extremely low. We develop a model for the readout process that reproduces the
observed trails out to 70 pixels. We then invert the model to convert the
observed pixel values in an image into an estimate of the original pixel
values. We find that when we apply the image-restoration process to science
images with a variety of stars on a variety of background levels, it restores
flux, position, and shape. This means that the observed trails contain
essentially all of the flux lost to inefficient CTE. The Space Telescope
Science Institute is currently evaluating this algorithm with the aim of
optimizing it and eventually providing enhanced data products. The empirical
procedure presented here should also work for other epochs (e.g., pre-SM4),
though the parameters may have to be recomputed for the time when ACS was
operated at a higher temperature than the current -81 °C. Finally, this
empirical approach may also hold promise for other instruments, such as WFPC2,
STIS, the ACS's HRC, and even WFC3/UVIS.
Comment: 86 pages, 25 figures (6 in low resolution). PASP accepted on July 21, 201
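The model-inversion idea described above, forward-modeling the readout trails and then inverting to recover the original pixel values, can be sketched as follows. The exponential trail kernel, loss fraction, and fixed-point inversion are simplifying assumptions for illustration, not the paper's calibrated algorithm.

```python
import numpy as np

def add_cte_trail(column, loss=0.1, scale=3.0, length=20):
    """Forward readout model (sketch): each pixel loses a fraction of its
    charge into an exponential trail over the following pixels."""
    kernel = np.exp(-np.arange(1, length + 1) / scale)
    kernel = loss * kernel / kernel.sum()      # trail carries the lost charge
    out = column * (1.0 - loss)
    for i, c in enumerate(column):
        stop = min(len(column), i + 1 + length)
        out[i + 1:stop] += c * kernel[:stop - i - 1]
    return out

def correct_cte(observed, n_iter=25, **kwargs):
    """Invert the readout model by fixed-point iteration (sketch):
    adjust the estimate until forward-modeling it reproduces the data."""
    estimate = observed.copy()
    for _ in range(n_iter):
        estimate = estimate + (observed - add_cte_trail(estimate, **kwargs))
    return estimate

col = np.zeros(100)
col[40] = 1000.0                 # a single warm pixel, as in the dark exposures
obs = add_cte_trail(col)         # readout smears flux into a trail
rec = correct_cte(obs)           # inversion restores it to the source pixel
```

Because the trail conserves the lost flux, inverting the forward model pushes essentially all of it back into the source pixel, which is the premise behind the restoration of flux, position, and shape reported above.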
R&D Paths of Pixel Detectors for Vertex Tracking and Radiation Imaging
This report reviews current trends in the R&D of semiconductor pixellated
sensors for vertex tracking and radiation imaging. It identifies requirements
of future HEP experiments at colliders, needed technological breakthroughs and
highlights the relation to radiation detection and imaging applications in
other fields of science.
Comment: 17 pages, 2 figures, submitted to the European Strategy Preparatory Group
Composite CDMA - A statistical mechanics analysis
Code Division Multiple Access (CDMA) in which the spreading code assignment
to users contains a random element has recently become a cornerstone of CDMA
research. The random element in the construction is particularly attractive as it
provides robustness and flexibility in utilising multi-access channels, whilst
not making significant sacrifices in terms of transmission power. Random codes
are generated from some ensemble; here we consider the possibility of combining
two standard paradigms, sparsely and densely spread codes, in a single
composite code ensemble. The composite code analysis includes a replica
symmetric calculation of performance in the large system limit, and
investigation of finite systems through a composite belief propagation
algorithm. A variety of codes are examined with a focus on the high
multi-access interference regime. In both the large size limit and finite
systems we demonstrate scenarios in which the composite code has typical
performance exceeding that of sparse and dense codes at equivalent signal-to-noise
ratio.
Comment: 23 pages, 11 figures, Sigma Phi 2008 conference submission - submitted to J. Stat. Mech.
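A minimal sketch of the densely spread random-code setting with plain matched-filter detection (not the paper's composite ensemble or belief-propagation algorithm). User count, chip count, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 64                     # users and chips per symbol (assumed)

# Dense random spreading: every chip of every user's code is a random +/-1,
# normalized so each code has unit energy.
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)

bits = rng.choice([-1.0, 1.0], size=K)   # one BPSK symbol per user
noise = 0.05 * rng.normal(size=N)
y = S @ bits + noise             # superposition on the multi-access channel

# Matched-filter detection: project onto each user's code and take the sign.
# Residual cross-correlations between random codes are the multi-access
# interference that BP-style detectors handle far better.
decisions = np.sign(S.T @ y)
```

A sparsely spread ensemble would make most entries of `S` zero, which is what allows belief propagation to run efficiently on the resulting sparse factor graph; the composite ensemble in the paper mixes the two regimes.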