Modeling the Impact of Process Variation on Resistive Bridge Defects
Recent research has shown that tests generated without taking process variation into account may lead to a loss of test quality. At present there is no efficient device-level modeling technique that models the effect of process variation on resistive bridges. This paper presents a fast and accurate technique to model the effect of process variation on resistive bridge defects. The proposed model is implemented in two stages: firstly, it employs an accurate transistor model (BSIM4) to calculate the critical resistance of a bridge; secondly, the effect of process variation is incorporated into this model through three transistor parameters: gate length (L), threshold voltage (Vth) and effective mobility (μeff), each of which follows a Gaussian distribution. Experiments are conducted on a 65-nm gate library (for illustration purposes), and results show that on average the proposed modeling technique is more than 7 times faster, and that the worst-case error in bridge critical resistance is 0.8% when compared with HSPICE.
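The variation model described above can be sketched with a small Monte Carlo experiment. The sketch below is a simplification under stated assumptions, not the paper's BSIM4-based model: it assumes a hypothetical first-order sensitivity of the critical resistance to Gaussian deviations in L, Vth and μeff, with illustrative placeholder coefficients.

```python
import random
import statistics

# Hypothetical first-order sensitivity model: the bridge critical
# resistance Rc shifts linearly with deviations of gate length (L),
# threshold voltage (Vth) and effective mobility (mu_eff) from their
# nominal values. The nominal Rc and the sensitivity coefficients
# below are illustrative placeholders, not values from the paper.
R_NOM = 5_000.0  # nominal critical resistance (ohms), assumed
SENS = {"L": 40.0, "Vth": -25.0, "mu_eff": 15.0}  # ohms per sigma, assumed

def sample_critical_resistance(rng):
    """Draw one Rc sample with each parameter ~ N(0, 1) in sigma units."""
    rc = R_NOM
    for _name, coeff in SENS.items():
        rc += coeff * rng.gauss(0.0, 1.0)
    return rc

rng = random.Random(42)
samples = [sample_critical_resistance(rng) for _ in range(10_000)]
mean_rc = statistics.fmean(samples)
std_rc = statistics.stdev(samples)
print(f"mean Rc = {mean_rc:.1f} ohm, sigma = {std_rc:.1f} ohm")
```

The resulting spread in Rc is what a variation-aware test generator would have to cover; the paper's contribution is obtaining it accurately without per-sample SPICE runs.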
Online Tensor Methods for Learning Latent Variable Models
We introduce an online tensor decomposition based approach for two latent
variable modeling problems namely, (1) community detection, in which we learn
the latent communities that the social actors in social networks belong to, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
conduct optimization of multilinear operations in SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
optimized algorithms on two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to state-of-the-art algorithms such as the
variational method, and report gains in accuracy and several orders of
magnitude in execution time.
Comment: JMLR 201
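The key idea of contracting moment tensors against samples, rather than materializing them, can be sketched with a stochastic tensor power update. This is a simplified illustration under an assumed symmetric, effectively rank-1 third-moment structure, not the authors' full decomposition algorithm: each step estimates the multilinear contraction T(I, v, v) from samples alone.

```python
import math
import random

# Sketch: the third-moment tensor T = E[x (x) x (x) x] is never formed;
# each power-iteration step uses only per-sample multilinear products,
# since T(I, v, v) ~ mean over samples of x * (x . v)^2.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [vi / n for vi in v]

def stochastic_power_step(samples, v):
    """One tensor power step estimated from samples, no tensor in memory."""
    d = len(v)
    acc = [0.0] * d
    for x in samples:
        w = dot(x, v) ** 2  # contraction weight T(., v, v) per sample
        for i in range(d):
            acc[i] += w * x[i]
    return normalize(acc)

# Toy data: samples concentrated along the first coordinate plus noise.
rng = random.Random(0)
d = 5
data = [[1.0 + 0.1 * rng.gauss(0, 1)] + [0.1 * rng.gauss(0, 1) for _ in range(d - 1)]
        for _ in range(2000)]

v = normalize([rng.gauss(0, 1) for _ in range(d)])
for _ in range(10):
    v = stochastic_power_step(data, v)
print(f"|v[0]| after convergence = {abs(v[0]):.3f}")
```

The per-sample inner loop is exactly the kind of multilinear operation that maps well onto SIMD/GPU parallelism or sparse CPU kernels, which is what the two platform-specific implementations optimize.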
Resilience in Numerical Methods: A Position on Fault Models and Methodologies
Future extreme-scale computer systems may expose silent data corruption (SDC)
to applications, in order to save energy or increase performance. However,
resilience research struggles to come up with useful abstract programming
models for reasoning about SDC. Existing work randomly flips bits in running
applications, but this only shows average-case behavior for a low-level,
artificial hardware model. Algorithm developers need to understand worst-case
behavior with the higher-level data types they actually use, in order to make
their algorithms more resilient. Also, we know so little about how SDC may
manifest in future hardware, that it seems premature to draw conclusions about
the average case. We argue instead that numerical algorithms can benefit from a
numerical unreliability fault model, where faults manifest as unbounded
perturbations to floating-point data. Algorithms can use inexpensive "sanity"
checks that bound or exclude error in the results of computations. Given a
selective reliability programming model that requires reliability only when and
where needed, such checks can make algorithms reliable despite unbounded
faults. Sanity checks, and in general a healthy skepticism about the
correctness of subroutines, are wise even if hardware is perfectly reliable.
Comment: Position Paper
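A sanity check of the kind argued for above can be sketched on a simple iterative solver. This is an illustrative sketch, not a scheme from the position paper: it runs a Richardson iteration for A x = b, injects one unbounded silent fault, and uses a cheap residual-norm check to reject and reliably recompute any corrupted step (the matrix, fault model, and tolerances are assumptions).

```python
import math

# Richardson iteration with a "sanity" check: a step is accepted only
# if its residual norm is finite and no worse than the previous one;
# otherwise the step is recomputed under assumed-reliable execution.

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def residual_norm(A, b, x):
    r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
    return math.sqrt(sum(ri * ri for ri in r))

def solve_with_checks(A, b, omega=0.2, iters=200, fault_at=50):
    x = [0.0] * len(b)
    prev = residual_norm(A, b, x)
    for k in range(iters):
        r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
        x_new = [xi + omega * ri for xi, ri in zip(x, r)]
        if k == fault_at:                # inject one silent data corruption
            x_new[0] += 1e12
        cur = residual_norm(A, b, x_new)
        if math.isfinite(cur) and cur <= prev:   # sanity check passes
            x, prev = x_new, cur
        else:                            # reject; recompute step reliably
            x = [xi + omega * ri for xi, ri in zip(x, r)]
            prev = residual_norm(A, b, x)
    return x, prev

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x, res = solve_with_checks(A, b)
print(f"final residual = {res:.2e}")
```

The check costs one extra matrix-vector product per step, yet it converts an unbounded fault into a bounded, recoverable event, which is the essence of the selective-reliability argument.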
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed forward to sophisticated software algorithms, leaving only
those delicate finely-tuned tasks for the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications from the past several years.
The prime focus is bridging theory and practice, that is to pinpoint the
potential of sub-Nyquist strategies to emerge from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. Our hope is that this presentation style will attract the
interest of both researchers and engineers, promote the sub-Nyquist premise
into practical applications, and encourage further research into this
exciting frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
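The basic obstacle that union-of-subspaces models are designed to overcome can be shown in a few lines. This minimal sketch is not taken from the survey: it demonstrates the classic aliasing ambiguity, where a 7 Hz tone sampled at fs = 10 Hz (below its 14 Hz Nyquist rate) produces exactly the same samples as a 3 Hz tone, so sub-Nyquist recovery must rely on additional signal structure.

```python
import math

# Aliasing demo: cos(2*pi*7*t) and cos(2*pi*3*t) coincide at t = k/10,
# since cos(2*pi*k - x) = cos(x) with 2*pi*0.7*k = 2*pi*k - 2*pi*0.3*k.
fs = 10.0                       # sampling rate in Hz, below Nyquist for 7 Hz
n = 20
t = [k / fs for k in range(n)]
tone7 = [math.cos(2 * math.pi * 7 * tk) for tk in t]
tone3 = [math.cos(2 * math.pi * 3 * tk) for tk in t]
max_diff = max(abs(a - b) for a, b in zip(tone7, tone3))
print(f"max sample difference = {max_diff:.2e}")
```

The two tones are indistinguishable from the samples alone; prior knowledge of the signal model (e.g. which subspace of a union it lies in) is what restores uniqueness at sub-Nyquist rates.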
A View from the Bridge: agreement between the SF-6D utility algorithm and the Health Utilities Index
BACKGROUND: The SF-6D is a new health state classification and utility scoring system based on 6 dimensions (‘6D’)
of the Short Form 36, and permits a "bridging" transformation between SF-36 responses and utilities. The Health
Utilities Index, mark 3 (HUI3) is a valid and reliable multi-attribute health utility scale that is widely used. We
assessed within-subject agreement between SF-6D utilities and those from HUI3.
METHODS: Patients at increased risk of sudden cardiac death and participating in a randomized trial of implantable
defibrillator therapy completed both instruments at baseline. Score distributions were inspected by scatterplot and
histogram and mean score differences compared by paired t-test. Pearson correlation was computed between
instrument scores and also between dimension scores within instruments. Between-instrument agreement was by
intra-class correlation coefficient (ICC).
RESULTS: SF-6D and HUI3 forms were available from 246 patients. Mean scores for HUI3 and SF-6D were 0.61
(95% CI 0.60–0.63) and 0.58 (95% CI 0.54–0.62) respectively; a difference of 0.03 (p = 0.03). Score intervals for
HUI3 and SF-6D were (-0.21 to 1.0) and (0.30 to 0.95). Correlation between the instrument scores was 0.58 (95% CI
0.48–0.68) and agreement by ICC was 0.42 (95% CI 0.31–0.52). Correlations between dimensions of SF-6D were
higher than for HUI3.
CONCLUSIONS: Our study casts doubt on whether utilities and QALYs estimated via SF-6D are comparable with
those from HUI3. Utility differences may be due to differences in underlying concepts of health being measured, or
different measurement approaches, or both. No gold standard exists for utility measurement and the SF-6D is a
valuable addition that permits SF-36 data to be transformed into utilities to estimate QALYs. The challenge is
developing a better understanding as to why these classification-based utility instruments differ so markedly in their
distributions and point estimates of derived utilities.
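The gap the study reports between correlation and agreement can be reproduced in miniature. The sketch below uses synthetic paired utility scores (the real SF-6D/HUI3 data are not reproduced, and the offset is exaggerated for illustration): Pearson r measures association only, while the one-way intraclass correlation ICC(1,1) also penalizes systematic offsets between instruments, which is why it can be markedly lower.

```python
import math
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def icc_oneway(x, y):
    """ICC(1,1) for two measurements per subject (one-way random model)."""
    n = len(x)
    grand = statistics.fmean(x + y)
    subj_means = [(a + b) / 2 for a, b in zip(x, y)]
    msb = 2 * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for a, b, m in zip(x, y, subj_means)) / n
    return (msb - msw) / (msb + msw)

# Synthetic cohort of 246 subjects: both instruments track the same
# latent health, but one carries a systematic offset (assumed value,
# deliberately larger than the 0.03 reported, to make the effect clear).
rng = random.Random(1)
true_health = [rng.uniform(0.2, 0.9) for _ in range(246)]
hui3 = [h + rng.gauss(0, 0.08) for h in true_health]
sf6d = [0.15 + h + rng.gauss(0, 0.08) for h in true_health]

r = pearson(hui3, sf6d)
icc = icc_oneway(hui3, sf6d)
print(f"Pearson r = {r:.2f}, ICC = {icc:.2f}")
```

Two instruments can thus correlate strongly yet agree poorly, which is consistent with the reported r of 0.58 against an ICC of 0.42.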