Verification of Magnitude and Phase Responses in Fixed-Point Digital Filters
In the digital signal processing (DSP) area, one of the most important tasks
is digital filter design. Currently, this procedure is performed with the aid
of computational tools, which generally assume filter coefficients represented
with floating-point arithmetic. Nonetheless, during the implementation phase,
which is often done in digital signal processors or field programmable gate
arrays, the representation of the obtained coefficients can be carried out
through integer or fixed-point arithmetic, which often results in unexpected
behavior or even unstable filters. The present work addresses this issue and
proposes a verification methodology based on the digital-system verifier
(DSVerifier), with the goal of checking fixed-point digital filters w.r.t.
implementation aspects. In particular, DSVerifier checks whether the number of
bits used in coefficient representation will result in a filter with the same
features specified during the design phase. Experimental results show that
errors regarding frequency response and overflow are likely to be identified
with the proposed methodology, thus improving overall system reliability.
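A minimal sketch of the effect being checked, assuming a hypothetical design and word length (this is not DSVerifier's procedure): quantize a floating-point filter design to a fixed-point grid, then compare the magnitude response and pole locations against the original.
```python
# Hedged sketch: quantize a floating-point IIR design to a fixed-point grid and
# compare magnitude responses and stability; filter order, cutoff frequency and
# bit width are arbitrary choices for illustration.
import numpy as np
from scipy import signal

def quantize(coeffs, frac_bits=8):
    """Round coefficients to a fixed-point grid with 2**-frac_bits resolution."""
    step = 2.0 ** -frac_bits
    return np.round(np.asarray(coeffs) / step) * step

b, a = signal.butter(4, 0.3)          # floating-point design (4th-order low-pass)
bq, aq = quantize(b), quantize(a)     # coefficients after fixed-point rounding

w, h = signal.freqz(b, a)
_, hq = signal.freqz(bq, aq)

# Worst-case deviation of the magnitude response caused by coefficient quantization.
dev_db = np.max(np.abs(20 * np.log10(np.abs(h) + 1e-12) -
                       20 * np.log10(np.abs(hq) + 1e-12)))
print(f"max magnitude-response deviation: {dev_db:.3f} dB")

# Crude stability check: all poles of the quantized filter inside the unit circle.
print("quantized filter stable:", bool(np.all(np.abs(np.roots(aq)) < 1.0)))
```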
Optimal Controller and Filter Realisations using Finite-precision, Floating-point Arithmetic.
The problem of reducing the fragility of digital controllers and filters
implemented using finite-precision, floating-point arithmetic is considered.
Floating-point arithmetic parameter uncertainty is multiplicative, unlike
parameter uncertainty resulting from fixed-point arithmetic. Based on first-
order eigenvalue sensitivity analysis, an upper bound on the eigenvalue
perturbations is derived. Consequently, open-loop and closed-loop eigenvalue
sensitivity measures are proposed. These measures are dependent upon the filter/
controller realization. Problems of obtaining the optimal realization with
respect to both the open-loop and the closed-loop eigenvalue sensitivity
measures are posed. The problem for the open-loop case is completely solved.
Solutions for the closed-loop case are obtained using non-linear programming.
The problems are illustrated with a numerical example.
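For intuition only, the sketch below evaluates a first-order eigenvalue perturbation bound under multiplicative coefficient errors a_ij -> a_ij(1 + delta_ij) with |delta_ij| <= eps, using the standard sensitivity formula dlambda/da_ij = conj(y_i) x_j / (y^H x); the example matrix, the value of eps, and the exact form of the bound are illustrative assumptions, not the measures proposed in the paper.
```python
# Hedged sketch of a first-order eigenvalue perturbation bound for
# multiplicative (floating-point-style) coefficient errors.
import numpy as np

def eig_perturbation_bounds(A, eps):
    lam, V = np.linalg.eig(A)      # right eigenvectors in the columns of V
    Vinv = np.linalg.inv(V)        # row k of Vinv is y^H for eigenvalue k (y^H x = 1)
    bounds = []
    for k in range(len(lam)):
        x, yH = V[:, k], Vinv[k, :]
        # Sensitivity of lambda_k to entry a_ij: S[i, j] = conj(y_i) x_j / (y^H x).
        S = np.outer(yH, x) / (yH @ x)
        # First-order bound: |delta lambda_k| <= eps * sum_ij |a_ij| * |S[i, j]|.
        bounds.append(eps * np.sum(np.abs(S * A)))
    return lam, np.array(bounds)

A = np.array([[0.0, 1.0], [-0.5, 1.2]])            # arbitrary 2x2 state matrix
lam, b = eig_perturbation_bounds(A, eps=2.0**-23)  # single-precision unit roundoff
for l, bk in zip(lam, b):
    print(f"lambda = {l:.4f}, first-order |delta lambda| bound ~ {bk:.2e}")
```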
Efficient digital-to-analog encoding
An important issue in analog circuit design is the problem of digital-to-analog conversion, i.e., the encoding of Boolean variables into a single analog value which contains enough information to reconstruct the values of the Boolean variables. A natural question is: what is the complexity of implementing the digital-to-analog encoding function? That question was answered by Wegener (see Inform. Processing Lett., vol.60, no.1, p.49-52, 1995), who proved matching lower and upper bounds on the size of the circuit for the encoding function. In particular, it was proven that [(3n-1)/2] 2-input arithmetic gates are necessary and sufficient for implementing the encoding function of n Boolean variables. However, the proof of the upper bound is not constructive. In this paper, we present an explicit construction of a digital-to-analog encoder that is optimal in the number of 2-input arithmetic gates. In addition, we present an efficient analog-to-digital decoding algorithm. Namely, given the encoded analog value, our decoding algorithm reconstructs the original Boolean values. Our construction is suboptimal in that it uses constants of maximum size n log n bits; the nonconstructive proof uses constants of maximum size 2n+[log n] bits.
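For orientation only, the toy sketch below shows the naive binary-weighted encoding of n Boolean values into one number and the matching exact decoding; it illustrates the encode/decode idea, not the gate-optimal construction of the paper.
```python
# Hedged sketch of the underlying idea only (not the paper's construction):
# a naive binary-weighted digital-to-analog encoding and its exact decoding.
def encode(bits):
    """Map a list of Boolean values (LSB first) to a single analog value."""
    return sum(b * 2 ** i for i, b in enumerate(bits))

def decode(value, n):
    """Recover the n Boolean values by thresholding the weighted contributions
    from the most significant position down."""
    bits = [0] * n
    for i in reversed(range(n)):
        if value >= 2 ** i:
            bits[i] = 1
            value -= 2 ** i
    return bits

original = [1, 0, 1, 1, 0]
assert decode(encode(original), len(original)) == original
```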
Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations
Although double-precision floating-point arithmetic currently dominates
high-performance computing, there is increasing interest in smaller and simpler
arithmetic types. The main reasons are potential improvements in energy
efficiency and memory footprint and bandwidth. However, simply switching to
lower-precision types typically results in increased numerical errors. We
investigate approaches to improving the accuracy of reduced-precision
fixed-point arithmetic types, using examples in an important domain for
numerical computation in neuroscience: the solution of Ordinary Differential
Equations (ODEs). The Izhikevich neuron model is used to demonstrate that
rounding has an important role in producing accurate spike timings from
explicit ODE solution algorithms. In particular, fixed-point arithmetic with
stochastic rounding consistently results in smaller errors compared to single
precision floating-point and fixed-point arithmetic with round-to-nearest
across a range of neuron behaviours and ODE solvers. A computationally much
cheaper alternative is also investigated, inspired by the concept of dither,
a widely understood mechanism for providing resolution below the least
significant bit (LSB) in digital signal processing. These results will have
implications for the solution of ODEs in other subject areas, and should also
be directly relevant to the huge range of practical problems that are
represented by Partial Differential Equations (PDEs).
Comment: Submitted to Philosophical Transactions of the Royal Society
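As a generic illustration of stochastic rounding (not the paper's solver, hardware, or Izhikevich benchmark), the sketch below rounds values onto a fixed-point grid either to nearest or stochastically; the word length and the test data are arbitrary assumptions.
```python
# Hedged sketch: round-to-nearest vs. stochastic rounding onto a fixed-point
# grid with `frac_bits` fractional bits.
import numpy as np

rng = np.random.default_rng(0)

def round_nearest(x, frac_bits):
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def round_stochastic(x, frac_bits):
    step = 2.0 ** -frac_bits
    scaled = np.asarray(x) / step
    lo = np.floor(scaled)
    # Round up with probability equal to the discarded fraction, so the
    # rounding error is zero in expectation (unlike round-to-nearest).
    return (lo + (rng.random(scaled.shape) < scaled - lo)) * step

x = np.full(10_000, 0.1)   # 0.1 is not representable on a coarse binary grid
print("exact sum              :", x.sum())                       # 1000.0
print("round-to-nearest, sum  :", round_nearest(x, 4).sum())     # biased upward
print("stochastic rounding sum:", round_stochastic(x, 4).sum())  # close to 1000
```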
Efficient Unified Arithmetic for Hardware Cryptography
The basic arithmetic operations (i.e. addition, multiplication, and inversion) in finite fields GF(q), where q = p^k and p is a prime integer, have several applications in cryptography, such as the RSA algorithm, the Diffie-Hellman key exchange algorithm [1], the US federal Digital Signature Standard [2], elliptic curve cryptography [3, 4], and, more recently, identity-based cryptography [5, 6]. The most popular finite fields, heavily used in cryptographic applications due to elliptic-curve-based schemes, are the prime fields GF(p) and the binary extension fields GF(2^n). Recently, identity-based cryptography based on pairing operations defined over elliptic curve points has stimulated a significant level of interest in the arithmetic of ternary extension fields, GF(3^n).
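As a small, self-contained illustration of binary extension field arithmetic (one of the field types mentioned above, not the paper's unified multiplier), the sketch below multiplies in GF(2^8) with the AES reduction polynomial.
```python
# Hedged sketch of multiplication in the binary extension field GF(2^8):
# carry-less (XOR-based) shift-and-add followed by reduction modulo an
# irreducible polynomial, here x^8 + x^4 + x^3 + x + 1 (0x11B, the AES choice).
def gf2n_mul(a, b, n=8, modulus=0x11B):
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^n) is XOR (no carries)
        b >>= 1
        a <<= 1
        if a >> n:               # degree reached n: reduce modulo the polynomial
            a ^= modulus
    return result

# Worked example from FIPS-197: {57} * {83} = {C1} in GF(2^8).
assert gf2n_mul(0x57, 0x83) == 0xC1
```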
When Chaos Meets Computers
This paper focuses on an interesting phenomenon when chaos meets computers.
It is found that digital computers are absolutely incapable of showing true
long-time dynamics of some chaotic systems, including the tent map, the
Bernoulli shift map and their analogues, even in high-precision
floating-point arithmetic. Although the results cannot be directly generalized to
most chaotic systems, the risk of using digital computers to numerically study
continuous dynamical systems is shown clearly. As a result, we reach the old
saying that "it is impossible to do everything with computers only".
Comment: 7 pages, 5 figures
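A minimal sketch of the phenomenon for the Bernoulli shift map x -> 2x mod 1 in IEEE double precision (the starting value is arbitrary): every doubling-and-wrap step is exact but discards one leading bit of x's finite binary expansion, so the computed orbit collapses to the fixed point 0 instead of remaining chaotic.
```python
# Hedged sketch: iterating the Bernoulli shift map x -> 2x mod 1 in double
# precision; the orbit reaches 0 after a few dozen steps (roughly the number
# of significant bits), although true orbits of almost every initial condition
# never do.
x = 0.123456789
for i in range(1, 80):
    x = (2.0 * x) % 1.0
    if x == 0.0:
        print(f"orbit collapsed to the fixed point 0 after {i} iterations")
        break
```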
