
    Gain-Scheduled Fault Detection Filter For Discrete-time LPV Systems

    The present work investigates a fault detection problem using a gain-scheduled filter for discrete-time linear parameter-varying (LPV) systems. We assume that the scheduling parameter cannot be measured directly but is instead estimated. On the one hand, this assumption imposes the challenge that the fault detection filter must perform properly even when driven by an inexact parameter; on the other, it avoids the burden of designing a complex estimation process for this parameter. We propose three design approaches: the H2, H∞, and mixed H2/H∞ gain-scheduled fault detection filters, designed via linear matrix inequalities (LMIs). We also provide numerical simulations to illustrate the applicability and performance of the proposed methods.
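    The residual-generation idea behind such a filter can be sketched with a toy discrete-time LPV observer. Everything below (the parameter-dependent matrix A(ρ), the fixed gain L, the noise level on the estimated parameter) is invented for illustration and is not the paper's LMI-designed filter; the sketch only shows how a residual driven by an inexact scheduling estimate can still separate faulty from fault-free operation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter-dependent dynamics (not the paper's example).
def A(rho):
    return np.array([[0.5 + 0.2 * rho, 0.1],
                     [0.0,             0.4 - 0.1 * rho]])

B = np.array([[1.0], [0.5]])
E = np.array([[0.0], [1.0]])      # fault entry direction
C = np.array([[1.0, 1.0]])
L = np.array([[0.3], [0.3]])      # fixed gain, a placeholder for an LMI-designed L(rho)

x = np.zeros((2, 1))              # plant state
xh = np.zeros((2, 1))             # filter state
residuals = []
for k in range(200):
    rho = 0.5 * np.sin(0.05 * k)                   # true scheduling parameter
    rho_hat = rho + 0.02 * rng.standard_normal()   # inexact estimate of rho
    u = np.array([[np.sin(0.1 * k)]])
    f = np.array([[1.0 if k >= 100 else 0.0]])     # additive fault after step 100
    y = C @ x
    r = y - C @ xh                                 # residual signal
    residuals.append(abs(r[0, 0]))
    x = A(rho) @ x + B @ u + E @ f
    xh = A(rho_hat) @ xh + B @ u + L @ r           # filter uses rho_hat only

fault_free = max(residuals[20:100])
faulty = max(residuals[100:])
```

    Despite the parameter mismatch, the residual stays near zero before the fault and grows well above that level afterwards; the LMI designs in the paper choose L(ρ) to optimize exactly this separation in an H2/H∞ sense.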

    Techniques for the formal verification of analog and mixed-signal designs

    Embedded systems are becoming a core technology in a growing range of electronic devices. Cornerstones of embedded systems are analog and mixed-signal (AMS) designs: integrated circuits required at the interfaces with the real-world environment. The verification of AMS designs is concerned with assuring correct functionality and with checking whether an AMS design is robust with respect to different types of inaccuracies, such as parameter tolerances and nonlinearities. The verification framework described in this thesis is composed of two proposed methodologies, each concerned with a class of AMS designs: continuous-time AMS designs and discrete-time AMS designs. The common idea behind both methodologies is built on top of Bounded Model Checking (BMC) algorithms. In BMC, we search for a counterexample to a property verified against the design model for a bounded number of verification steps. If a concrete counterexample is found, the verification is complete and reports a failure; otherwise, we increment the number of steps until the property is validated. In general, the verification is incomplete because of limits on the time and memory available for it. To alleviate this problem, we observed that, under certain conditions and for some classes of specification properties, the verification can be made complete by complementing BMC with other methods, such as abstraction and constraint-based verification. To test and validate the proposed approaches, we developed a prototype implementation in Mathematica and targeted analog and mixed-signal systems, such as oscillator circuits, switched-capacitor designs, and Delta-Sigma modulators, in our initial tests of this approach.
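    The BMC loop described above can be sketched on a toy explicit-state transition system. The two-successor counter, the property, and the bound are all invented for illustration; real BMC tools encode the unrolling as a SAT/SMT query rather than enumerating paths:

```python
from itertools import product

# Toy explicit-state BMC. The transition system (a counter with two possible
# steps) and the safety property are invented for illustration.
INIT = 0

def successors(x):
    return [x + 1, x + 3]          # nondeterministic transition relation

def bad(x):
    return x == 7                  # violation of "the counter never reaches 7"

def bmc(bound):
    """Search for a counterexample path of length <= bound, shortest first."""
    for k in range(bound + 1):
        for choices in product(range(2), repeat=k):
            x, path = INIT, [INIT]
            for c in choices:
                x = successors(x)[c]
                path.append(x)
            if bad(x):
                return path        # concrete counterexample found
    return None                    # property holds up to the bound (only)

cex = bmc(5)
```

    The `return None` case is exactly the incompleteness the abstract mentions: the property is only known to hold up to the bound, which is why the thesis complements BMC with abstraction and constraint-based methods.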

    Precision analysis for hardware acceleration of numerical algorithms

    The precision used in an algorithm affects the error and performance of individual computations, the memory usage, and the potential parallelism for a fixed hardware budget. However, when migrating an algorithm onto hardware, the potential improvements obtainable by tuning the precision throughout the algorithm to meet a range or error specification are often overlooked; the major reason is that it is hard to choose a number system that can guarantee any such specification is met. Instead, the problem is mitigated by opting for IEEE-standard double-precision arithmetic so as to be ‘no worse’ than a software implementation. However, flexibility in the number representation is one of the key factors that can be exploited on reconfigurable hardware such as FPGAs, so ignoring this potential significantly limits the achievable performance. To optimise the performance of hardware reliably, we require a method that can tractably calculate tight bounds on the error or range of any variable within an algorithm; currently only a handful of methods to calculate such bounds exist, and they sacrifice either tightness or tractability, whilst simulation-based methods cannot guarantee their error estimates. This thesis presents a new method to calculate these bounds, taking into account both input ranges and finite-precision effects, which we show to be, in general, tighter than existing methods; this in turn can be used to tune the hardware to the algorithm's specifications. We demonstrate the use of this software to optimise hardware for various algorithms that accelerate the solution of a system of linear equations, which forms the basis of many problems in engineering and science, and show that significant performance gains can be obtained by using this new approach in conjunction with more traditional hardware optimisations.
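    Interval arithmetic is the simplest of the bound-calculation methods alluded to above, and it illustrates why such methods can sacrifice tightness. A minimal sketch (the expression and input ranges are made up):

```python
# Minimal interval arithmetic for range analysis of a dataflow expression.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

x = Interval(1.0, 2.0)
y = Interval(-1.0, 1.0)
r = x * y + x          # sound range bound for x*y + x
```

    The bound [-1, 4] is sound but not tight: rewriting the expression as x*(y+1) shows the true range is [0, 4]. Interval analysis loses this because it treats the two occurrences of x as independent (the classic dependency problem), which is the kind of over-approximation the thesis's method aims to reduce.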

    Mathematical Modeling with Differential Equations in Physics, Chemistry, Biology, and Economics

    This volume was conceived as a Special Issue of the MDPI journal Mathematics to illustrate relevant applications of differential equations in different fields, in line with the latest trends in applied-mathematics research. All the articles submitted for publication are valuable, interesting, and original. Readers will certainly appreciate the heterogeneity of the 10 papers included in this book and will discover how helpful all kinds of differential equations are across a wide range of disciplines. We are confident that this book will also be inspirational for young scholars.

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and on a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
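    The core NEF recipe (nonlinear encoding into tuning curves, linear least-squares decoding) can be sketched with rate neurons. Rectified-linear curves stand in here for LIF rate curves, and the population size and parameter distributions below are illustrative choices, not Nengo defaults:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                        # number of rate neurons (illustrative)
encoders = rng.choice([-1.0, 1.0], size=n)     # 1-D encoding vectors
gains = rng.uniform(0.5, 2.0, size=n)
biases = rng.uniform(-1.0, 1.0, size=n)

def rates(x):
    """Rectified-linear tuning curves standing in for LIF rate curves."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# NEF-style decoding: least-squares decoders over samples of the represented range.
xs = np.linspace(-1.0, 1.0, 200)
activities = np.array([rates(x) for x in xs])  # 200 x n activity matrix
decoders, *_ = np.linalg.lstsq(activities, xs, rcond=None)

xhat = activities @ decoders                   # decoded estimate of x
err = np.max(np.abs(xhat - xs))
```

    The same decoder-solving step, applied to a transformed target instead of the identity, is what lets the NEF compile arbitrary functions and differential equations into connection weights between populations.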

    Rotationally-invariant mapping of scalar and orientational metrics of neuronal microstructure with diffusion MRI

    We develop a general analytical and numerical framework for estimating intra- and extra-neurite water fractions and diffusion coefficients, as well as neurite orientational dispersion, in each imaging voxel. By employing a set of rotational invariants and their expansion in the powers of diffusion weighting, we analytically uncover the nontrivial topology of the parameter estimation landscape, showing that multiple branches of parameters describe the measurement almost equally well, with only one of them corresponding to the biophysical reality. A comprehensive acquisition shows that the branch choice varies across the brain. Our framework reveals hidden degeneracies in MRI parameter estimation for neuronal tissue, provides microstructural and orientational maps in the whole brain without constraints or priors, and connects modern biophysical modeling with clinical MRI.
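    The simplest rotational invariant in such frameworks is the l = 0 invariant, the spherical mean of the signal, which is independent of the fiber orientation. A toy numerical check on a "stick + ball" signal (all compartment parameters below are invented for illustration, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.standard_normal((20000, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)   # ~uniform gradient directions

def signal(n, b=2.0, f=0.6, Da=2.0, De=1.0):
    """Toy two-compartment 'stick + ball' signal for fiber direction n.

    b, f, Da, De are illustrative values (b-value, stick fraction,
    intra-/extra-neurite diffusivities), not real tissue parameters.
    """
    stick = np.exp(-b * Da * (g @ n) ** 2)      # anisotropic compartment
    ball = np.exp(-b * De)                      # isotropic compartment
    return f * stick + (1 - f) * ball

n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
m1 = signal(n1).mean()      # l = 0 rotational invariant (spherical mean)
m2 = signal(n2).mean()      # same invariant for a rotated fiber
```

    The two means agree to within sampling error, so the spherical mean carries only the scalar compartment parameters; the paper's higher-order invariants likewise factor out orientation, which is what exposes the parameter-branch degeneracy.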

    Towards a Compiler for Reals

    Numerical software, common in scientific computing and embedded systems, inevitably uses a finite-precision approximation of the real arithmetic in which most algorithms are designed. In many applications, the roundoff errors introduced by finite-precision arithmetic are not the only source of inaccuracy, and measurement and other input errors further increase the uncertainty of the computed results. Adequate tools are needed to help users select suitable data types and evaluate the provided accuracy, especially for safety-critical applications. We present a source-to-source compiler called Rosa that takes as input a real-valued program with error specifications and synthesizes code over an appropriate floating-point or fixed-point data type. The main challenge of such a compiler is a fully automated, sound, and yet accurate-enough numerical error estimation. We introduce a unified technique for bounding roundoff errors from floating-point and fixed-point arithmetic of various precisions. The technique can handle nonlinear arithmetic, determine closed-form symbolic invariants for unbounded loops, and quantify the effects of discontinuities on numerical errors. We evaluate Rosa on a number of benchmarks from scientific computing and embedded systems and, comparing it to the state of the art in automated error estimation, show that it presents an interesting tradeoff between accuracy and performance.
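    A first-order version of the roundoff-bound propagation such a compiler performs can be sketched by pairing each variable's range with an accumulated error bound. This is an illustrative sketch, not Rosa's actual machinery (which uses affine arithmetic and SMT); the second-order err·err cross term in multiplication is dropped for brevity:

```python
import sys

U = sys.float_info.epsilon / 2     # unit roundoff u for IEEE-754 double precision

class Val:
    """A range [lo, hi] paired with a first-order bound err on accumulated roundoff."""
    def __init__(self, lo, hi, err=0.0):
        self.lo, self.hi, self.err = lo, hi, err

    def _rounded(self, lo, hi, propagated):
        # one rounding per operation: |fl(z) - z| <= U * |z| <= U * max |range|
        return Val(lo, hi, propagated + U * max(abs(lo), abs(hi)))

    def __add__(self, o):
        return self._rounded(self.lo + o.lo, self.hi + o.hi, self.err + o.err)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        # first-order propagation; the err * o.err cross term is dropped here
        propagated = (self.err * max(abs(o.lo), abs(o.hi))
                      + o.err * max(abs(self.lo), abs(self.hi)))
        return self._rounded(min(ps), max(ps), propagated)

x = Val(1.0, 2.0)
r = x * x + x        # bound range and roundoff error of x*x + x for x in [1, 2]
```

    Swapping U for the quantization step of a fixed-point format gives the corresponding fixed-point bound, which is how a single technique can cover both kinds of arithmetic.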

    Processing random signals in neuroscience, electrical engineering and operations research

    The topic of this dissertation is the study of noise in electrical engineering, neuroscience, biomedical engineering, and operations research through mathematical models that describe, explain, predict, and control dynamic phenomena. Noise is modeled through Brownian motion, and the research problems are mathematically addressed by different versions of a generalized Langevin equation. Our mathematical models utilize stochastic differential equations (SDEs) and stochastic optimal control, both of which were born in the soil of electrical engineering. Central to this dissertation is a brain-physics-based model of cerebrospinal fluid (CSF) dynamics, whose structure is fundamentally determined by an electrical circuit analogy. Our general Langevin framework encompasses many of the existing equations used in electrical engineering, neuroscience, biomedical engineering, and operations research. The generalized SDE for CSF dynamics extends a fundamental model in the field to discover new clinical insights and tools, provides the basis for a nonlinear controller, and suggests a new way to resolve an ongoing controversy regarding CSF dynamics in neuroscience. The natural generalization of the SDE for CSF dynamics is an SDE with polynomial drift. We develop a new analytical algorithm to solve SDEs with polynomial drift, thereby contributing to the electrical engineering literature on signal processing models, many of which are special cases of SDEs with polynomial drift. We make new contributions to the operations research literature on marketing communication models by unifying different types of dynamically optimal spending trajectories in the framework of a classic model of market response, in which these different temporal patterns arise as a consequence of different boundary conditions.
The methodologies developed in this dissertation provide an analytical foundation for the solution of fundamental problems in gas discharge lamp dynamics in power engineering, degradation dynamics of ultra-thin metal oxides in MOS capacitors, and molecular motors in nanotechnology, thereby establishing a rich agenda for future research.
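    The workhorse numerical method for Langevin-type SDEs of this kind is the Euler–Maruyama scheme. A minimal sketch for the linear case (the Ornstein–Uhlenbeck process, a special case of the polynomial-drift SDEs discussed above, with invented coefficients), checked against its known stationary variance σ²/(2θ):

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of the linear Langevin (Ornstein-Uhlenbeck) SDE
#   dX_t = -theta * X_t dt + sigma dW_t
# theta and sigma are illustrative values, not fitted model parameters.
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 0.01, 5000, 2000

x = np.zeros(n_paths)
for _ in range(n_steps):
    # one explicit step per path: drift term plus scaled Gaussian increment
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

stationary_var = sigma**2 / (2 * theta)   # closed-form stationary variance
empirical_var = x.var()
```

    Replacing the linear drift with a polynomial in x gives the polynomial-drift SDEs the dissertation treats analytically; the same scheme then serves as a numerical cross-check.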