19,451 research outputs found
An Efficient Implementation of Built in Self Diagnosis for Low Power Test Pattern Generator
A new architecture for Built-In Self-Diagnosis is presented in this project. The logic Built-In Self-Test (BIST) architecture uses extreme response compaction, which for the first time enables an autonomous on-chip evaluation of test responses with negligible hardware overhead. A further advantage is that all data relevant for a subsequent diagnosis is gathered during a single test session. For several reasons, BIST is applied less often to random logic than to embedded memories: the hardware overhead of generating deterministic test patterns can become prohibitively high, and the diagnostic resolution of compacted test responses is in many cases poor, while the overhead required for an acceptable resolution may become too high. The Linear Feedback Shift Register is modified to generate test patterns with security for BIST applications at a reduced power requirement. The modified BIST circuit incorporates a fault-syndrome compression scheme and improves circuit speed while reducing test time.
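Pseudo-random test patterns in BIST schemes of this kind are conventionally produced by a maximal-length LFSR. The paper's modified, low-power variant is not detailed in the abstract, so the following is a minimal sketch of a plain 4-bit Fibonacci LFSR; the feedback polynomial x^4 + x^3 + 1 and the seed are illustrative assumptions:

```python
def lfsr_patterns(seed=0b1001, width=4):
    """Fibonacci LFSR with taps x^4 + x^3 + 1 (maximal length for width 4).

    Yields each nonzero state once, then stops when the cycle closes.
    """
    state = seed
    seen = set()
    while state not in seen:
        seen.add(state)
        yield state
        feedback = ((state >> 3) ^ (state >> 2)) & 1   # XOR of tap bits 3 and 2
        state = ((state << 1) | feedback) & ((1 << width) - 1)
```

A maximal-length register cycles through all 2^width - 1 nonzero states, so every nonzero pattern is applied exactly once per period.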
Natural Compression for Distributed Deep Learning
Modern deep learning models are often trained in parallel over a collection
of distributed machines to reduce training time. In such settings,
communication of model updates among machines becomes a significant performance
bottleneck and various lossy update compression techniques have been proposed
to alleviate this problem. In this work, we introduce a new, simple yet
theoretically and practically effective compression technique: natural
compression (NC). Our technique is applied individually to all entries of the
to-be-compressed update vector and works by randomized rounding to the nearest
(negative or positive) power of two, which can be computed in a "natural" way
by ignoring the mantissa. We show that compared to no compression, NC increases
the second moment of the compressed vector by no more than the tiny factor
9/8, which means that the effect of NC on the convergence speed of popular
training algorithms, such as distributed SGD, is negligible. However, the
communication savings enabled by NC are substantial, leading to an improvement
in overall theoretical running time. For applications requiring more aggressive
compression, we generalize NC to natural dithering, which we prove is
exponentially better than the common random dithering technique. Our
compression operators can be used on their own or in combination with existing
operators for a more aggressive combined effect, and offer a new state of the
art both in theory and practice.
Comment: 8 pages, 20 pages of appendix, 6 tables, 14 figures
A survey of inlet/engine distortion compatibility
The history of distortion analysis is traced back to its origin in parallel compressor theory, which was initially proposed in the late fifties. The development of this theory is reviewed up to its inclusion in the complex computer codes of today. It is found to be a very useful tool to guide development, but not quantitative enough to predict compatibility. Dynamic or instantaneous distortion methodology is also reviewed from its origins in the sixties to its current application in the eighties. Many of the requirements for interpreting instantaneous distortion are considered and illustrated. Statistical methods for predicting the peak distortion are described, and their limitations and advantages discussed. Finally, some Reynolds number and scaling considerations for inlet testing are considered. It is concluded that the deterministic instantaneous distortion methodology, combined with distortion testing of engines with screens, will remain the primary method of predicting compatibility for the near future. However, parallel compressor analysis and statistical peak distortion prediction will be important tools employed during the development of inlet/engine compatibility.
Data-adaptive harmonic spectra and multilayer Stuart-Landau models
Harmonic decompositions of multivariate time series are considered for which
we adopt an integral operator approach with periodic semigroup kernels.
Spectral decomposition theorems are derived that cover the important cases of
two-time statistics drawn from a mixing invariant measure.
The corresponding eigenvalues can be grouped per Fourier frequency, and are
actually given, at each frequency, as the singular values of a cross-spectral
matrix depending on the data. These eigenvalues obey furthermore a variational
principle that allows us to define naturally a multidimensional power spectrum.
The eigenmodes, for their part, exhibit a data-adaptive character
manifested in their phase, which in turn allows us to define a multidimensional
phase spectrum.
The resulting data-adaptive harmonic (DAH) modes allow for reducing the
data-driven modeling effort to elemental models stacked per frequency, only
coupled at different frequencies by the same noise realization. In particular,
the DAH decomposition extracts time-dependent coefficients stacked by Fourier
frequency which can be efficiently modeled---provided the decay of temporal
correlations is sufficiently well-resolved---within a class of multilayer
stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators.
Applications to the Lorenz 96 model and to a stochastic heat equation driven
by a space-time white noise are considered. In both cases, the DAH
decomposition allows for an extraction of spatio-temporal modes revealing key
features of the dynamics in the embedded phase space. The multilayer
Stuart-Landau models (MSLMs) are shown to successfully model the typical
patterns of the corresponding time-evolving fields, as well as their statistics
of occurrence.
Comment: 26 pages, double columns; 15 figures
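The per-frequency construction above, with eigenvalues obtained as singular values of a data-dependent cross-spectral matrix, can be illustrated with a simple estimator. The segment-averaged (Welch-style) cross-spectral estimate below is our own illustrative choice, not the paper's exact procedure:

```python
import numpy as np

def dah_spectrum(X: np.ndarray, nseg: int) -> np.ndarray:
    """Singular values of a cross-spectral matrix, per Fourier frequency.

    X: (channels, time) multivariate series, split into nseg segments.
    Returns an array (freqs, channels) of singular values; their maxima
    over channels give a multidimensional power spectrum.
    """
    d, n = X.shape
    seg = n // nseg
    # FFT of each segment: reshape to (channels, nseg, seg), transform last axis.
    F = np.fft.rfft(X[:, :seg * nseg].reshape(d, nseg, seg), axis=-1)
    F = F.transpose(1, 0, 2)                 # (nseg, channels, freqs)
    nfreq = F.shape[-1]
    sv = np.empty((nfreq, d))
    for k in range(nfreq):
        # Cross-spectral matrix at frequency k, averaged over segments.
        S = np.einsum('si,sj->ij', F[:, :, k], F[:, :, k].conj()) / nseg
        sv[k] = np.linalg.svd(S, compute_uv=False)
    return sv
```

Each frequency thus contributes d singular values, which is the "grouping per Fourier frequency" the abstract describes.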
Optimization of Planck/LFI on--board data handling
To assess stability against 1/f noise, the Low Frequency Instrument (LFI)
onboard the Planck mission will acquire data at a rate much higher than the
data rate allowed by its telemetry bandwidth of 35.5 kbps. The data are
processed by an onboard pipeline, followed on ground by a reversing step. This
paper illustrates the LFI scientific onboard processing to fit the allowed
data rate. This is a lossy process tuned by a set of five parameters (Naver,
r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level
of distortion introduced by the onboard processing, EpsilonQ, as a function of
these parameters. It describes the method of optimizing the onboard processing
chain. The tuning procedure is based on an optimization algorithm applied to
unprocessed and uncompressed raw data provided either by simulations, prelaunch
tests or data taken from LFI operating in diagnostic mode. All the needed
optimization steps are performed by an automated tool, OCA2, which ends with
optimized parameters and produces a set of statistical indicators, among them
the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr =
2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up
the process, an analytical model is developed that is able to extract most of
the relevant information on EpsilonQ and Cr as a function of the signal
statistics and the processing parameters. This model will be of interest for
the instrument data analysis. The method was applied during ground tests when
the instrument was operating in conditions representative of flight. Optimized
parameters were obtained and the performance was verified: the required data
rate of 35.5 kbps was achieved while keeping EpsilonQ at a level of 3.8% of
the white-noise rms, well within the requirements.
Comment: 51 pages, 13 figures, 3 tables, pdflatex, needs JINST.csl, graphicx,
txfonts, rotating; Issue 1.0, 10 Nov 2009; Sub. to JINST 23 Jun 09, Accepted
10 Nov 09, Pub. 29 Dec 09; this is a preprint, not the final version.
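The role of the quantization step q in the distortion EpsilonQ can be illustrated with a toy model. The round-to-nearest quantizer and the distortion metric below are a reconstruction from the abstract, not the actual OCA2 processing chain:

```python
import numpy as np

def quantize(x: np.ndarray, q: float, offset: float = 0.0) -> np.ndarray:
    """Round-to-nearest uniform quantization with step q and offset."""
    return np.round((x - offset) / q) * q + offset

def epsilon_q(x: np.ndarray, q: float, sigma_wn: float) -> float:
    """Quantization distortion as a fraction of the white-noise rms."""
    err = quantize(x, q) - x
    return float(np.sqrt(np.mean(err ** 2)) / sigma_wn)
```

For a step much smaller than the noise rms, the error is approximately uniform on [-q/2, q/2], so EpsilonQ is roughly q / (sigma * sqrt(12)); a step of q = 0.1 sigma keeps the distortion near 3% of the white-noise rms, the same order as the 3.8% reported.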
Compressive Imaging Using RIP-Compliant CMOS Imager Architecture and Landweber Reconstruction
In this paper, we present a new image sensor architecture for fast and accurate compressive sensing (CS) of natural images. Measurement matrices usually employed in CS CMOS image sensors are recursive pseudo-random binary matrices. We have proved that the restricted isometry property of these matrices is limited by a low sparsity constant. The quality of these matrices is also affected by the non-idealities of pseudo-random number generators (PRNGs). To overcome these limitations, we propose a hardware-friendly pseudo-random ternary measurement matrix generated on-chip by means of class III elementary cellular automata (ECA). These ECA exhibit a chaotic behavior that emulates random CS measurement matrices better than other PRNGs. We have combined this new architecture with a block-based CS smoothed-projected Landweber reconstruction algorithm. By means of singular value decomposition, we have adapted this algorithm to perform fast and precise reconstruction while operating with binary and ternary matrices. Simulations are provided to qualify the approach.
Funding: Ministerio de Economía y Competitividad TEC2015-66878-C3-1-R; Junta de Andalucía TIC 2338-2013; Office of Naval Research (USA) N000141410355; European Union H2020 76586
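On-chip generation from a class III rule can be sketched in software. Rule 30 is a standard class III example; the mapping of two consecutive ECA rows to a support bit and a sign bit per matrix entry is an illustrative assumption, not the paper's circuit:

```python
import numpy as np

def eca_rows(rule: int, width: int, steps: int) -> np.ndarray:
    """Iterate an elementary cellular automaton with periodic boundaries."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.zeros(width, dtype=np.uint8)
    state[width // 2] = 1                     # single-cell seed
    rows = np.empty((steps, width), dtype=np.uint8)
    for t in range(steps):
        rows[t] = state
        # Encode each (left, center, right) neighborhood as a 3-bit index.
        neigh = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
        state = table[neigh]
    return rows

def ternary_matrix(m: int, n: int, rule: int = 30) -> np.ndarray:
    """Pseudo-random ternary {-1, 0, +1} measurement matrix from ECA bits.

    Even rows supply the support bit, odd rows the sign bit (an
    illustrative mapping).
    """
    rows = eca_rows(rule, n, 2 * m)
    support = rows[0::2].astype(np.int8)
    sign = rows[1::2].astype(np.int8)
    return support * (2 * sign - 1)
```

The chaotic evolution of a class III rule is what makes such rows usable as a stand-in for a random measurement matrix, while remaining cheap to generate in hardware.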