Analysing and bounding numerical error in spiking neural network simulations
This study explores how numerical error occurs in simulations of spiking neural network models, and how this error propagates through the simulation, changing its observed behaviour. The issue of non-reproducibility in parallel spiking neural network simulations is illustrated, and a method to bound all possible trajectories is discussed. The base method used in this study is mixed interval and affine arithmetic (mixed IA/AA), with additional modifications made to improve the tightness of the error bounds.
I introduce Arpra, my new software: an arbitrary-precision range analysis library based on the GNU MPFR library. It improves on other implementations by enabling computation in custom floating-point precisions, and reduces the overhead rounding error of mixed IA/AA by computing in extended precision internally. It also implements a new error trimming technique, which reduces the error term whilst preserving correct boundaries, and deviation term condensing functions, which can significantly reduce the number of floating-point operations per function. Arpra is tested by simulating the Hénon map dynamical system, and is found to produce tighter ranges than those of INTLAB, an alternative mixed IA/AA implementation.
Arpra is used to bound the trajectories of fan-in spiking neural network simulations. Despite performing better than interval arithmetic, the mixed IA/AA method used by Arpra is shown to be inadequate for bounding the simulation trajectories, due to the highly nonlinear nature of spiking neural networks. A stability analysis of the neural network model is performed, and it is found that error boundaries are moderately tight in non-spiking regions of state space, where linear dynamics dominate, but error boundaries explode in spiking regions of state space, where nonlinear dynamics dominate.
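As background, the mixed IA/AA idea can be sketched in a few lines. The following Python toy is illustrative only: it is not Arpra's C/MPFR interface, and it ignores the floating-point rounding error that Arpra tracks internally. Each quantity is a centre plus a sum of linear deviation terms over shared noise symbols; linear operations are exact, while each nonlinear multiplication introduces a fresh, conservatively bounded deviation term. We use it to enclose a few iterations of the Hénon map, as in the thesis's benchmark.

```python
# Minimal affine-arithmetic sketch (illustrative; not Arpra's API,
# and no outward rounding of floating-point error).
class Affine:
    _syms = 0  # global counter for noise symbols

    def __init__(self, center, devs=None):
        self.c = float(center)
        self.devs = dict(devs or {})   # noise symbol -> partial deviation

    @classmethod
    def from_interval(cls, lo, hi):
        cls._syms += 1
        return cls((lo + hi) / 2, {cls._syms: (hi - lo) / 2})

    def rad(self):
        return sum(abs(d) for d in self.devs.values())

    def interval(self):
        return (self.c - self.rad(), self.c + self.rad())

    def __add__(self, o):
        devs = dict(self.devs)
        for k, d in o.devs.items():    # shared symbols combine (and may cancel)
            devs[k] = devs.get(k, 0.0) + d
        return Affine(self.c + o.c, devs)

    def __rmul__(self, s):             # scalar * affine is exact
        return Affine(s * self.c, {k: s * d for k, d in self.devs.items()})

    def __mul__(self, o):              # affine * affine: fresh noise term
        devs = {k: o.c * d for k, d in self.devs.items()}
        for k, d in o.devs.items():
            devs[k] = devs.get(k, 0.0) + self.c * d
        Affine._syms += 1
        devs[Affine._syms] = self.rad() * o.rad()  # conservative quadratic bound
        return Affine(self.c * o.c, devs)

# Enclose 5 iterations of the Henon map x' = 1 - a*x^2 + y, y' = b*x,
# starting from a small box around the origin.
a, b = 1.4, 0.3
x = Affine.from_interval(-1e-6, 1e-6)
y = Affine.from_interval(-1e-6, 1e-6)
for _ in range(5):
    x, y = Affine(1.0) + (-a) * (x * x) + y, b * x
lo, hi = x.interval()   # enclosure of x after 5 steps
```

Because shared noise symbols track first-order correlations between quantities, these enclosures grow far more slowly than plain interval arithmetic on the same map; the fresh term added per multiplication is the mechanism that deviation condensing, as described above, keeps in check.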
Fault testing quantum switching circuits
Test pattern generation is an electronic design automation tool that attempts
to find an input (or test) sequence that, when applied to a digital circuit,
enables one to distinguish between the correct circuit behavior and the faulty
behavior caused by particular faults. The effectiveness of this classical
method is measured by the fault coverage achieved for the fault model and the
number of generated vectors, which should be directly proportional to test
application time. This work addresses the quantum process validation problem by
considering the quantum mechanical adaptation of test pattern generation
methods used to test classical circuits. We found that quantum mechanics allows
one to execute multiple test vectors concurrently, making each gate realized in
the process act on a complete set of characteristic states in space/time
complexity that breaks classical testability lower bounds.
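The classical baseline being adapted here can be shown in miniature. The sketch below is a hypothetical two-gate circuit, not one from the paper: it enumerates all input vectors and keeps those that distinguish the good circuit from one with an internal net stuck at logic 0. Per the abstract, the quantum adaptation would apply such characteristic vectors concurrently rather than one at a time.

```python
# Hypothetical 3-input circuit: out = (a AND b) OR c, with internal
# net n1 = a AND b tested for a stuck-at-0 fault.
def circuit(a, b, c, fault=None):
    n1 = a & b
    if fault == "n1/0":
        n1 = 0                       # inject: net n1 stuck at logic 0
    return n1 | c

# A vector detects the fault iff good and faulty outputs differ.
tests = [(a, b, c)
         for a in (0, 1) for b in (0, 1) for c in (0, 1)
         if circuit(a, b, c) != circuit(a, b, c, fault="n1/0")]
# Only (1, 1, 0) exposes the fault: n1 must be driven to 1 (activation)
# while c = 0 lets the difference propagate to the output.
```

Fault coverage is then the fraction of modelled faults detected by the generated vector set, which is the metric the abstract describes.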
A Static Analyzer for Large Safety-Critical Software
We show that abstract interpretation-based static program analysis can be
made efficient and precise enough to formally verify a class of properties for
a family of large programs with few or no false alarms. This is achieved by
refinement of a general purpose static analyzer and later adaptation to
particular programs of the family by the end-user through parametrization. This
is applied to the proof of soundness of data manipulation operations at the
machine level for periodic synchronous safety critical embedded software. The
main novelties are the design principle of static analyzers by refinement and
adaptation through parametrization, the symbolic manipulation of expressions to
improve the precision of abstract transfer functions, the octagon, ellipsoid,
and decision tree abstract domains, all with sound handling of rounding errors
in floating point computations, widening strategies (with thresholds, delayed)
and the automatic determination of the parameters (parametrized packing).
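The widening-with-thresholds idea mentioned above can be illustrated in miniature. The following Python toy is an assumption-laden sketch, not the analyzer the paper describes: it runs an interval-domain fixpoint over the loop `i = 0; while i < 50: i += 1`, jumping unstable bounds to the next threshold instead of straight to infinity, then refines the result with one narrowing iteration.

```python
import math

THRESHOLDS = [10.0, 100.0, 1000.0]   # hypothetical widening thresholds

def join(a, b):   # interval least upper bound
    return (min(a[0], b[0]), max(a[1], b[1]))

def meet(a, b):   # interval intersection
    return (max(a[0], b[0]), min(a[1], b[1]))

def widen(old, new):
    # Unstable bounds jump to the next threshold, not straight to infinity.
    lo, hi = old
    if new[0] < lo:
        lo = next((-t for t in THRESHOLDS if -t <= new[0]), -math.inf)
    if new[1] > hi:
        hi = next((t for t in THRESHOLDS if t >= new[1]), math.inf)
    return (lo, hi)

# Analyse `i = 0; while i < 50: i += 1` in the interval domain.
init = (0.0, 0.0)
guard = (-math.inf, 49.0)            # values for which the loop body runs

def loop_step(iv):
    body = meet(iv, guard)           # restrict to the loop guard
    return join(init, (body[0] + 1.0, body[1] + 1.0))

iv = init
while (nxt := widen(iv, loop_step(iv))) != iv:
    iv = nxt                         # widening steps: (0,10), then (0,100)
iv = loop_step(iv)                   # one narrowing pass refines to (0, 50)
exit_iv = meet(iv, (50.0, math.inf)) # interval on loop exit
```

Without thresholds the unstable upper bound would jump directly to infinity and the narrowing pass would have to recover everything; with them, the fixpoint lands near the real bound after a few cheap iterations, which is the trade-off the paper's parametrization exposes to the end-user.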
Design of a reusable distributed arithmetic filter and its application to the affine projection algorithm
Digital signal processing (DSP) is widely used in many applications, spanning the spectrum from audio processing to image and video processing to radar and sonar processing. At the core of digital signal processing applications is the digital filter, which is implemented in one of two ways: as a finite impulse response (FIR) filter or as an infinite impulse response (IIR) filter. The primary difference between FIR and IIR is that for FIR filters the output depends only on the inputs, while for IIR filters the output depends on both the inputs and the previous outputs. FIR filters also do not suffer from the stability issues, stemming from the feedback of the output to the input, that affect IIR filters.
In this thesis, an architecture for FIR filtering based on distributed arithmetic is presented. The proposed architecture can implement large FIR filters using minimal hardware while completing the FIR filtering operation in a minimal amount of time and with minimal delay compared to typical FIR filter implementations. The proposed architecture is then used to implement the fast affine projection adaptive algorithm, an algorithm typically used with large filter sizes. The fast affine projection algorithm has a high computational burden that limits its throughput, which in turn restricts the number of its applications; using the proposed FIR filtering architecture, the limitations on throughput are removed. The implementation of the fast affine projection adaptive algorithm using distributed arithmetic is unique to this thesis. The constructed adaptive filter shares all the benefits of the proposed FIR filter: low hardware requirements, high speed, and minimal delay.
Ph.D. Committee Chair: Anderson, Dr. David V.; Committee Member: Hasler, Dr. Paul E.; Committee Member: Mooney, Dr. Vincent J.; Committee Member: Taylor, Dr. David G.; Committee Member: Vuduc, Dr. Richar
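The bit-serial distributed-arithmetic idea behind such architectures can be modelled in software. The sketch below is an illustrative Python model, not the thesis's hardware design, and its taps and word length are hypothetical: it precomputes a lookup table of tap sums and replaces every multiply-accumulate with one LUT access and one shift-add per input bit.

```python
# Illustrative bit-serial distributed-arithmetic (DA) FIR dot product
# y = sum(w[k] * x[k]) for B-bit unsigned inputs (hypothetical taps).
B = 8                        # input word length in bits
w = [3, -1, 4, 2]            # filter taps
N = len(w)

# Precompute the 2^N-entry LUT: entry `addr` holds the sum of the taps
# whose corresponding bit is set in `addr`.
lut = [sum(w[k] for k in range(N) if (addr >> k) & 1)
       for addr in range(1 << N)]

def da_fir(x):
    """One LUT access plus one shift-add per input bit; no multipliers."""
    acc = 0
    for bit in range(B):                           # LSB first
        addr = sum(((x[k] >> bit) & 1) << k for k in range(N))
        acc += lut[addr] << bit                    # shift-accumulate
    return acc

y = da_fir([5, 7, 200, 33])   # equals 3*5 - 1*7 + 4*200 + 2*33 = 874
```

The trade is LUT storage (2^N entries) for multiplier-free hardware and a fixed B-cycle latency per output, which is why DA suits large, resource-constrained FIR filters like the ones targeted here.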