431 research outputs found

    Multichannel sampling of finite rate of innovation signals

    No full text
    Recently there has been a surge of interest in sampling theory in the signal processing community. New, efficient sampling techniques have been developed that allow sampling and perfectly reconstructing certain classes of non-bandlimited signals at sub-Nyquist rates. Depending on the setup used and the reconstruction method involved, these schemes go under different names, such as compressed sensing (CS), compressive sampling, or sampling signals with finite rate of innovation (FRI). In this thesis we focus on the theory of sampling non-bandlimited signals with parametric structure, specifically signals with finite rate of innovation. Most of the theory on sampling FRI signals is based on a single acquisition device and one-dimensional (1-D) signals. In this thesis, we extend these results to the case of 2-D signals and multichannel acquisition systems. The essential issue in multichannel systems is that while each channel receives the input signal, it may introduce different unknown delays, gains, or affine transformations, which need to be estimated from the samples together with the signal itself. We pose both the calibration of the channels and the signal reconstruction stage as a parametric estimation problem and demonstrate that simultaneous exact synchronization of the channels and reconstruction of the FRI signal is possible. Furthermore, because in practice perfect noise-free channels do not exist, we consider the case of noisy measurements and show, using Cramér-Rao bounds as well as numerical simulations, that multichannel systems are more resilient to noise than single-channel ones. Finally, we consider the problem of system identification based on multichannel and finite rate of innovation sampling techniques. First, by employing our multichannel sampling setup, we propose a novel algorithm for the system identification problem with a known input signal, that is, for the case when both the input signal and the samples are known. Then we consider the problem of blind system identification and propose a novel iterative algorithm for simultaneously estimating the input FRI signal and the unknown system.
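
    The thesis builds on FRI sampling of parametric signals such as streams of Diracs. For orientation, the following is a minimal single-channel sketch of the classic annihilating-filter (Prony) step that underlies this kind of reconstruction, assuming noiseless Fourier-series coefficients and a known number K of Diracs; the function name and interface are illustrative and not taken from the thesis.

        import numpy as np

        def fri_dirac_recovery(X, K):
            """Recover K Dirac locations and amplitudes on [0, 1) from 2K+1
            consecutive Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k),
            m = 0..2K, using the annihilating-filter (Prony) method (noiseless case)."""
            # The annihilating filter h (length K+1) satisfies sum_l h[l] X[m-l] = 0.
            A = np.array([[X[m - l] for l in range(K + 1)] for m in range(K, 2 * K + 1)])
            _, _, Vh = np.linalg.svd(A)
            h = Vh[-1].conj()                        # null vector of A
            u = np.roots(h)                          # u_k = exp(-2j*pi*t_k)
            t = np.mod(-np.angle(u) / (2 * np.pi), 1.0)
            # Amplitudes from the Vandermonde system X[m] = sum_k a_k u_k^m.
            V = np.vander(u, N=2 * K + 1, increasing=True).T
            a, *_ = np.linalg.lstsq(V, X[: 2 * K + 1], rcond=None)
            order = np.argsort(t)
            return t[order], a[order]

    In the multichannel setting studied in the thesis, each channel contributes its own set of such measurements together with unknown delays and gains, and both are folded into one joint parametric estimation problem.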

    Feature Extraction for image super-resolution using finite rate of innovation principles

    No full text
    To understand a real-world scene from several multiview pictures, it is necessary to find the disparities existing between each pair of images so that they are correctly related to one another. This process, called image registration, requires the extraction of some specific information about the scene. This is achieved by taking features out of the acquired images. Thus, the quality of the registration depends largely on the accuracy of the extracted features. Feature extraction can be formulated as a sampling problem for which perfect reconstruction of the desired features is wanted. The recent sampling theory for signals with finite rate of innovation (FRI) and the B-spline theory offer an appropriate new framework for the extraction of features in real images. This thesis first focuses on extending the sampling theory for FRI signals to a multichannel case and then presents exact sampling results for two different types of image features used for registration: moments and edges. In the first part, it is shown that the geometric moments of an observed scene can be retrieved exactly from sampled images and used as global features for registration. The second part describes how edges can also be retrieved perfectly from sampled images for registration purposes. The proposed feature extraction schemes therefore allow, in theory, the exact registration of images. Indeed, various simulations show that the proposed extraction/registration methods outperform traditional ones, especially at low resolution. These characteristics make such feature extraction techniques very appropriate for applications like image super-resolution, for which a very precise registration is needed. The quality of the super-resolved images obtained using the proposed feature extraction methods is improved in comparison with other approaches. Finally, the notion of polyphase components is used to adapt the image acquisition model to the characteristics of real digital cameras in order to run super-resolution experiments on real images.
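
    As a simple point of reference for how low-order geometric moments act as global registration features, here is a plain discrete-moment sketch, assuming a pair of images related by a pure translation; the thesis's actual contribution is the exact, continuous-domain retrieval of such moments from samples, which is not reproduced here.

        import numpy as np

        def geometric_moment(img, p, q):
            """Discrete geometric moment m_pq = sum_x sum_y x^p * y^q * img[y, x]."""
            x = np.arange(img.shape[1], dtype=float)
            y = np.arange(img.shape[0], dtype=float)
            return float((y[:, None] ** q * x[None, :] ** p * img).sum())

        def centroid_shift(img_a, img_b):
            """Estimate the translation between two images from their centroids,
            i.e. first-order moments normalized by the total mass m_00."""
            def centroid(img):
                m00 = geometric_moment(img, 0, 0)
                return (geometric_moment(img, 1, 0) / m00,   # x centroid
                        geometric_moment(img, 0, 1) / m00)   # y centroid
            (ax, ay), (bx, by) = centroid(img_a), centroid(img_b)
            return bx - ax, by - ay                          # estimated (dx, dy)

    Higher-order moments can be combined in a similar way to estimate more general (e.g. affine) transformations between views.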

    Dynamical Systems in Spiking Neuromorphic Hardware

    Get PDF
    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe to formulate models of such systems as coupled sets of nonlinear differential equations and compile them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
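
    The abstract does not spell out the Delay Network's state space; in the related published work on Legendre Memory Units, it is realized as the Padé/Legendre state-space approximation of a pure delay of length theta. The sketch below reproduces that standard construction plus a zero-order-hold discretization; it is offered as background under that assumption, not as an excerpt from the thesis, and assumes NumPy and SciPy are available.

        import numpy as np
        from scipy.linalg import expm

        def delay_network_lti(order, theta):
            """Continuous-time (A, B) for the Legendre/Pade approximation of a
            pure delay of length theta (the usual Delay Network / LMU state space)."""
            idx = np.arange(order)
            A = np.empty((order, order))
            for i in idx:
                for j in idx:
                    A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
            B = (2 * idx + 1) * (-1.0) ** idx
            return A / theta, B[:, None] / theta

        def discretize_zoh(A, B, dt):
            """Zero-order-hold discretization: x[k+1] = Ad @ x[k] + Bd * u[k]."""
            n = A.shape[0]
            M = np.zeros((n + 1, n + 1))
            M[:n, :n] = A * dt
            M[:n, n:] = B * dt
            Md = expm(M)
            return Md[:n, :n], Md[:n, n:]

    The spiking implementations on Braindrop and Loihi discussed in the thesis map this kind of linear system onto neuron populations via the NEF, which involves additional steps not shown here.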

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One

    Get PDF

    Neuromorphic Engineering Editors' Pick 2021

    Get PDF
    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.

    A Survey of Spiking Neural Network Accelerator on FPGA

    Full text link
    Due to their ability to implement customized topologies, FPGAs are increasingly used to deploy SNNs in both embedded and high-performance applications. In this paper, we survey state-of-the-art SNN implementations and their applications on FPGA. We collect the recent, widely used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementations. Compared with previous surveys, this manuscript enumerates the application instances that applied the above-mentioned technical schemes in recent research. On this basis, we discuss the actual acceleration potential of implementing SNNs on FPGA. Finally, we discuss upcoming trends and give a guideline for further advancement in related subjects.
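
    To make the survey's scope concrete, the most common neuron model in FPGA SNN accelerators is some fixed-point variant of the leaky integrate-and-fire (LIF) update; the floating-point sketch below illustrates the per-timestep arithmetic such pipelines implement. Parameter names and values are illustrative only.

        import numpy as np

        def lif_step(v, i_in, dt=1e-3, tau_m=20e-3, v_rest=0.0, v_reset=0.0, v_th=1.0):
            """One Euler step of a leaky integrate-and-fire layer.
            v    : membrane potentials, shape (n_neurons,)
            i_in : input currents for this step, same shape
            Returns the updated potentials and a boolean spike vector."""
            v = v + dt / tau_m * (v_rest - v + i_in)   # leaky integration
            spikes = v >= v_th                         # threshold crossing
            v = np.where(spikes, v_reset, v)           # reset neurons that spiked
            return v, spikes

    Hardware implementations typically replace the floating-point state with fixed-point registers and choose the time constants so that the 1/tau_m scaling reduces to a bit shift.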

    Algorithms for Verification of Analog and Mixed-Signal Integrated Circuits

    Get PDF
    Over the past few decades, the tremendous growth in the complexity of analog and mixed-signal (AMS) systems has posed great challenges to AMS verification, resulting in a rapidly growing verification gap. Existing formal methods provide appealing completeness and reliability, yet they suffer from limited efficiency and scalability. Data-oriented, machine-learning-based methods offer efficient and scalable solutions but do not guarantee completeness or full coverage. Additionally, the trend towards shorter time to market for AMS chips urges the development of efficient verification algorithms to accelerate the joint design and testing phases. This dissertation envisions a hierarchical and hybrid AMS verification framework that consolidates assorted algorithms to embrace efficiency, scalability, and completeness in a statistical sense. Leveraging the diverse advantages of various verification techniques, this dissertation develops algorithms in different categories. In the context of formal methods, this dissertation proposes a generic and comprehensive model abstraction paradigm to model AMS content with a unifying analog representation. Moreover, an algorithm is proposed to parallelize reachability analysis by decomposing AMS systems into subsystems of lower complexity and dividing the exploration of the circuit's reachable state space, formulated as a satisfiability problem, into subproblems with a reduced number of constraints. The proposed modeling method and the hierarchical parallelization enhance the efficiency and scalability of reachability analysis for AMS verification. On the subject of learning-based methods, the dissertation proposes to convert the verification problem into a binary classification problem solved using support vector machine (SVM) based learning algorithms. To reduce the number of simulations needed for training sample collection, an active learning strategy based on probabilistic version space reduction is proposed to perform adaptive sampling. An extension of the active learning strategy for conservative prediction is leveraged to minimize the occurrence of false negatives. Moreover, another learning-based method is proposed to characterize AMS systems with a sparse Bayesian learning regression model. An implicit feature weighting mechanism based on the kernel method is embedded in the Bayesian learning model for concurrent quantification of the influence of circuit parameters on the targeted specification, which can be solved efficiently with an iterative method similar to the expectation maximization (EM) algorithm. In addition, the resulting sparse parameter weighting offers favorable assistance to design analysis and test optimization.
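
    As a rough illustration of the learning-based direction (not the dissertation's probabilistic version-space algorithm), the sketch below trains an SVM pass/fail classifier on simulated parameter settings and adaptively queries the points closest to the current decision boundary. The simulate callback, candidates array, and all parameters are placeholders, and the initial sample is assumed to contain both passing and failing points.

        import numpy as np
        from sklearn.svm import SVC

        def active_svm_verification(simulate, candidates, n_init=20, n_rounds=10, batch=5, seed=0):
            """Toy active-learning loop for pass/fail verification.
            simulate(x) -> 0 or 1 stands in for a costly circuit simulation;
            candidates is an (N, d) array of parameter settings to choose from."""
            rng = np.random.default_rng(seed)
            labeled = np.zeros(len(candidates), dtype=bool)
            labeled[rng.choice(len(candidates), size=n_init, replace=False)] = True
            X = candidates[labeled]
            y = np.array([simulate(x) for x in X])
            clf = SVC(kernel="rbf", C=10.0, gamma="scale")
            for _ in range(n_rounds):
                clf.fit(X, y)
                margin = np.abs(clf.decision_function(candidates))
                margin[labeled] = np.inf              # never re-query labeled points
                query = np.argsort(margin)[:batch]    # most uncertain candidates
                labeled[query] = True
                X = np.vstack([X, candidates[query]])
                y = np.concatenate([y, [simulate(x) for x in candidates[query]]])
            clf.fit(X, y)
            return clf

    A conservative variant, in the spirit of the false-negative minimization described above, would additionally bias the decision threshold toward predicting failure in regions where the model is uncertain.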