
    Learning-based reconstruction of FRI signals

    Finite Rate of Innovation (FRI) sampling theory enables the reconstruction of classes of continuous non-bandlimited signals that have a small number of free parameters from their low-rate discrete samples. This task is often translated into a spectral estimation problem that is solved using methods which estimate signal subspaces, and these tend to break down at a certain peak signal-to-noise ratio (PSNR). To avoid this breakdown, we consider alternative approaches that make use of information from labelled data. We propose two model-based learning methods: deep unfolding of the denoising process in spectral estimation, and an encoder-decoder deep neural network that models the acquisition process. Simulation results for both learning algorithms indicate significant improvements in the breakdown PSNR over classical subspace-based methods. While the deep unfolded network achieves performance similar to classical FRI techniques and outperforms the encoder-decoder network in the low-noise regime, the latter allows the FRI signal to be reconstructed even when the sampling kernel is unknown. We also achieve competitive results in detecting pulses from in vivo calcium imaging data in terms of true positive and false positive rates, while providing more precise estimates.
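
    As a rough illustration of the classical pipeline that these learning methods aim to improve upon, the sketch below reconstructs a stream of K Diracs from 2K+1 noiseless Fourier coefficients using an annihilating (Prony) filter; all names and parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

# Minimal noiseless sketch: recover K Dirac locations t_k and amplitudes a_k
# from the 2K+1 Fourier coefficients s[m] = sum_k a_k exp(-j 2 pi m t_k / T)
# using the annihilating-filter (Prony) approach that subspace methods refine.
K, T = 3, 1.0
rng = np.random.default_rng(0)
t_true = np.sort(rng.uniform(0.0, T, K))
a_true = rng.uniform(0.5, 1.5, K)

m = np.arange(-K, K + 1)                                    # 2K+1 coefficients
s = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / T)).sum(axis=1)

# Build the Toeplitz system S h = 0; the annihilating filter h spans its null space.
S = np.array([[s[K + i - j] for j in range(K + 1)] for i in range(K)])
h = np.linalg.svd(S)[2][-1].conj()

# Roots of h are u_k = exp(-j 2 pi t_k / T), which give the locations.
u = np.roots(h)
t_est = np.sort((-np.angle(u) * T / (2.0 * np.pi)) % T)

# Amplitudes follow from a Vandermonde least-squares fit.
V = np.exp(-2j * np.pi * np.outer(m, t_est) / T)
a_est = np.linalg.lstsq(V, s, rcond=None)[0].real

print("location error:", np.abs(t_est - t_true).max())
print("amplitude error:", np.abs(a_est - a_true).max())
```

    In the noisy setting the Toeplitz matrix above is only approximately rank-deficient, and it is the resulting subspace estimate that breaks down at low PSNR, which is what the learning methods described above are designed to mitigate.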

    Exact Feature Extraction Using Finite Rate of Innovation Principles With an Application to Image Super-Resolution

    The accurate registration of multiview images is of central importance in many advanced image processing applications. Image super-resolution, for example, is a typical application where the quality of the super-resolved image degrades as registration errors increase. Popular registration methods are often based on features extracted from the acquired images. The accuracy of the registration is in this case directly related to the number of extracted features and to the precision with which the features are located: images are best registered when many features are found with good precision. However, in low-resolution images, only a few features can be extracted, and often with poor precision. By taking a sampling perspective, we propose in this paper new methods for extracting features in low-resolution images in order to develop efficient registration techniques. We consider, in particular, the sampling theory of signals with finite rate of innovation and show that some features of interest for registration can be retrieved perfectly in this framework, thus allowing an exact registration. We also demonstrate through simulations that the sampling model which enables the use of finite rate of innovation principles is well suited for modeling the acquisition of images by a camera. Simulations of image registration and image super-resolution of artificially sampled images are first presented, analyzed and compared to traditional techniques. We finally present favorable experimental results of super-resolution of real images acquired by a digital camera available on the market.
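
    The registration step itself can be illustrated with a minimal sketch: once features have been extracted from two views (the FRI extraction itself is not shown here), a global translation is estimated from the matched feature points by least squares. The setup below is a hypothetical toy, not the paper's experiment.

```python
import numpy as np

# Hedged sketch of the registration step only: given feature points extracted
# from a reference image and from a shifted view, estimate the global
# translation by least squares.  All values below are made up for illustration.
rng = np.random.default_rng(1)
ref_pts = rng.uniform(0, 64, size=(20, 2))          # features in the reference image
true_shift = np.array([3.27, -1.84])                # sub-pixel ground-truth shift
obs_pts = ref_pts + true_shift + 0.01 * rng.standard_normal(ref_pts.shape)

# With matched features and i.i.d. Gaussian localisation noise, the least-squares
# estimate of a pure translation is the mean displacement; accuracy improves with
# more, better-located features, which is the point made in the abstract above.
est_shift = (obs_pts - ref_pts).mean(axis=0)
print("estimated shift:", est_shift, " error:", np.linalg.norm(est_shift - true_shift))
```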

    Learning to process with spikes and to localise pulses

    In the last few decades, deep learning with artificial neural networks (ANNs) has emerged as one of the most widely used techniques in tasks such as classification and regression, achieving competitive results and in some cases even surpassing human-level performance. Nonetheless, as ANN architectures are optimised towards empirical results and depart from their biological precursors, how exactly human brains process information using the short electrical pulses called spikes remains a mystery. Hence, in this thesis, we explore the problem of learning to process with spikes and to localise pulses. We first consider spiking neural networks (SNNs), a type of ANN that more closely mimics biological neural networks in that neurons communicate with one another using spikes. This unique architecture allows us to look into the role of heterogeneity in learning. Since it is conjectured that information is encoded by the timing of spikes, we are particularly interested in the heterogeneity of the time constants of neurons. We train SNNs for classification tasks on a range of visual and auditory neuromorphic datasets, which contain streams of events (spike times) instead of conventional frame-based data, and show that the overall performance is improved by allowing the neurons to have different time constants, especially on tasks with richer temporal structure. We also find that the learned time constants are distributed similarly to those experimentally observed in some mammalian cells. In addition, we demonstrate that learning with heterogeneity improves robustness against hyperparameter mistuning. These results suggest that heterogeneity may be more than the byproduct of noisy processes and perhaps serves a key role in learning in changing environments, yet it has been overlooked in basic artificial models.
    While neuromorphic datasets, which are often captured by neuromorphic devices that closely model the corresponding biological systems, have enabled us to explore the more biologically plausible SNNs, there still exists a gap in understanding how spike times encode information in actual biological neural networks like human brains, as such data are difficult to acquire due to the trade-off between timing precision and the number of cells that can be recorded electrically at the same time. Instead, what we usually obtain are low-rate discrete samples of trains of filtered spikes. Hence, in the second part of the thesis, we focus on a different type of problem involving pulses: retrieving the precise pulse locations from these low-rate samples. We make use of the finite rate of innovation (FRI) sampling theory, which states that perfect reconstruction is possible for classes of continuous non-bandlimited signals that have a small number of free parameters. However, existing FRI methods break down under very noisy conditions due to the so-called subspace swap event. Thus, we present two novel model-based learning architectures: Deep Unfolded Projected Wirtinger Gradient Descent (Deep Unfolded PWGD) and the FRI Encoder-Decoder Network (FRIED-Net). The former is based on the existing iterative denoising algorithm for subspace-based methods, while the latter directly models the relationship between the samples and the locations of the pulses using an autoencoder-like network. Using a stream of K Diracs as an example, we show that both algorithms are able to overcome the breakdown inherent in the existing subspace-based methods. Moreover, we extend our FRIED-Net framework beyond conventional FRI methods by considering the case where the pulse shape is unknown. We show that the pulse shape can be learned using backpropagation. This coincides with the application of spike detection from real-world calcium imaging data, where we achieve competitive results. Finally, we explore beyond canonical FRI signals and demonstrate that FRIED-Net is able to reconstruct streams of pulses with different shapes.
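
    A toy sketch of the heterogeneity idea from the first part of the thesis is given below: a layer of leaky integrate-and-fire neurons in which each neuron carries its own membrane time constant. The function and parameter values are hypothetical and are not taken from the thesis code.

```python
import numpy as np

# Toy sketch (not the thesis code): a layer of leaky integrate-and-fire neurons
# in which each neuron has its own membrane time constant tau_i.  Allowing the
# tau_i to differ (and be learned) is the heterogeneity explored above.
def lif_layer(spikes_in, weights, tau, dt=1e-3, v_th=1.0):
    """spikes_in: (T, n_in) binary spike trains; weights: (n_in, n_out);
    tau: (n_out,) per-neuron membrane time constants in seconds."""
    n_steps = spikes_in.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)
    decay = np.exp(-dt / tau)                       # heterogeneous decay factors
    spikes_out = np.zeros((n_steps, n_out))
    for t in range(n_steps):
        v = decay * v + spikes_in[t] @ weights      # leak, then integrate input
        fired = v >= v_th
        spikes_out[t] = fired
        v[fired] = 0.0                              # reset after a spike
    return spikes_out

rng = np.random.default_rng(0)
inp = (rng.random((100, 8)) < 0.1).astype(float)    # Poisson-like input spikes
w = rng.normal(0.0, 0.5, (8, 4))
tau = rng.uniform(5e-3, 50e-3, 4)                   # one time constant per neuron
out = lif_layer(inp, w, tau)
print("output spike counts per neuron:", out.sum(axis=0))
```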

    Decimated generalized Prony systems

    We continue studying the robustness of solving algebraic systems of Prony type (also known as exponential fitting systems), which appear prominently in many areas of mathematics, in particular in modern "sub-Nyquist" sampling theories. We show that by considering these systems at arithmetic progressions (or "decimating" them), one can achieve better performance in the presence of noise. We also show that the corresponding lower bounds are closely related to well-known estimates obtained for similar problems in different contexts.
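
    The effect of decimation can be illustrated with a small numerical sketch (parameter values are arbitrary): the same annihilating-filter (Prony) estimator is applied once to consecutive samples and once to samples taken along an arithmetic progression, for two closely spaced frequencies. The stride is chosen so that the decimated nodes do not wrap around the unit circle.

```python
import numpy as np

# Toy comparison: Prony / annihilating-filter frequency estimation from
# consecutive samples versus samples taken along an arithmetic progression
# ("decimation" with stride D).  The stride keeps exp(2j pi D f_k) unambiguous.
def prony_freqs(samples):
    """Estimate frequencies from 2K+1 equispaced samples of sum_k a_k exp(2j pi f_k m)."""
    K = (len(samples) - 1) // 2
    S = np.array([[samples[K + i - j] for j in range(K + 1)] for i in range(K)])
    h = np.linalg.svd(S)[2][-1].conj()
    return np.sort(np.angle(np.roots(h)) / (2 * np.pi) % 1.0)

K, D, sigma = 2, 10, 1e-3
f_true = np.array([0.010, 0.013])                   # two closely spaced frequencies
a_true = np.array([1.0, 1.0])
rng = np.random.default_rng(0)

n = np.arange(2 * K * D + 1)                        # enough samples for both schemes
s = (a_true * np.exp(2j * np.pi * np.outer(n, f_true))).sum(axis=1)
s = s + sigma * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))

f_direct = prony_freqs(s[: 2 * K + 1])              # consecutive samples
f_decimated = prony_freqs(s[::D]) / D               # arithmetic progression, stride D
print("direct error:   ", np.abs(f_direct - f_true).max())
print("decimated error:", np.abs(f_decimated - f_true).max())
```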

    Multichannel Sampling of Signals With Finite Rate of Innovation

    In this letter, we present a possible extension of the theory of sampling signals with finite rate of innovation (FRI) to the case of multichannel acquisition systems. The essential issue in a multichannel system is that each channel introduces different unknown delays and gains that need to be estimated to calibrate the channels. We pose both the synchronization stage and the signal reconstruction stage as a parametric estimation problem and demonstrate that simultaneous exact synchronization of the channels and reconstruction of the FRI signal is possible. We also consider the case of noisy measurements and evaluate the Cramér-Rao bounds (CRB) of the proposed system. Numerical results as well as the CRB clearly show that multichannel systems are more resilient to noise than single-channel ones.
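
    The role of the unknown gains and delays can be sketched as follows (an illustrative synchronization idea, not necessarily the exact algorithm of the letter): a delay appears as a linear phase and a gain as a scale factor on each channel's Fourier coefficients, so relative gains and delays can be read off the coefficient ratios between channels.

```python
import numpy as np

# Toy sketch: each channel observes the same T-periodic FRI signal with an
# unknown gain g_i and delay d_i, so its Fourier coefficients satisfy
# X_i[m] = g_i * exp(-j 2 pi m d_i / T) * X[m].  Relative gains and delays
# are then visible in the coefficient ratios between channels.
rng = np.random.default_rng(0)
K, T, n_ch = 3, 1.0, 4
t_k, a_k = np.sort(rng.uniform(0, T, K)), rng.uniform(0.5, 1.5, K)
m = np.arange(-K, K + 1)
X = (a_k * np.exp(-2j * np.pi * np.outer(m, t_k) / T)).sum(axis=1)

gains = rng.uniform(0.8, 1.2, n_ch)
delays = rng.uniform(-0.05, 0.05, n_ch)
X_ch = gains[:, None] * np.exp(-2j * np.pi * np.outer(delays, m) / T) * X

# Relative gain and delay of each channel w.r.t. channel 0, from coefficient ratios.
ratio = X_ch / X_ch[0]
rel_gain = np.abs(ratio).mean(axis=1)
# The unwrapped phase of the ratio is linear in m with slope -2 pi (d_i - d_0) / T.
slope = np.array([np.polyfit(m, np.unwrap(np.angle(r)), 1)[0] for r in ratio])
rel_delay = -slope * T / (2 * np.pi)
print("gain errors: ", np.abs(rel_gain - gains / gains[0]).max())
print("delay errors:", np.abs(rel_delay - (delays - delays[0])).max())
```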

    Multiphoton minimal inertia scanning for fast acquisition of neural activity signals

    Objective: Multi-photon laser scanning microscopy provides a powerful tool for monitoring the spatiotemporal dynamics of neural circuit activity. It is, however, intrinsically a point-scanning technique. Standard raster scanning enables imaging at subcellular resolution, but acquisition rates are limited by the size of the field of view to be scanned. Scanning strategies such as Travelling Salesman Scanning (TSS) have recently been developed to maximize the cellular sampling rate by scanning only select regions in the field of view corresponding to locations of interest such as somata. However, such strategies are not optimized for the mechanical properties of galvanometric scanners. We thus aimed to develop a new scanning algorithm that produces minimal inertia trajectories, and to compare its performance with existing scanning algorithms. Approach: We describe here the Adaptive Spiral Scanning (SSA) algorithm, which fits a set of near-circular trajectories to the cellular distribution to avoid inertial drifts of the galvanometer position. We compare its performance to raster scanning and TSS in terms of cellular sampling frequency and signal-to-noise ratio (SNR). Main Results: Using surrogate neuron spatial position data, we show that SSA acquisition rates are an order of magnitude higher than those for raster scanning and generally exceed those achieved by TSS for neural densities comparable with those found in the cortex. We show that this result also holds for in vitro mouse hippocampal brain slices bath-loaded with the synthetic calcium dye Cal-520 AM. The ability of TSS to "park" the laser on each neuron along the scanning trajectory, however, enables higher SNR than SSA when all targets are precisely scanned. Raster scanning has the highest SNR, but at a substantial cost in the number of cells scanned. To understand the impact of sampling rate and SNR on functional calcium imaging, we used the Cramér-Rao bound on evoked calcium traces, recorded simultaneously with electrophysiology traces, to calculate the lower bound on the accuracy of spike timing estimation. Significance: The results show that TSS and SSA achieve spike-time estimation accuracy comparable to raster scanning, despite their lower SNR. SSA is an easily implementable way for standard multi-photon laser scanning systems to gain temporal precision in the detection of action potentials while scanning hundreds of active cells.
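
    A toy sketch of the underlying idea of fitting near-circular, low-inertia trajectories to a cell distribution is given below: cells are grouped into radial rings and each ring is scanned with a smooth circular path. It is purely illustrative and is not the published SSA algorithm.

```python
import numpy as np

# Illustrative toy only (not the published SSA algorithm): group cell centroids
# into radial rings around the field-of-view centre and scan each ring with a
# smooth circular path, so the galvanometers avoid sharp, high-inertia turns.
rng = np.random.default_rng(0)
cells = rng.uniform(-100.0, 100.0, size=(50, 2))            # cell centroids (um)
centre = cells.mean(axis=0)
radii = np.linalg.norm(cells - centre, axis=1)

n_rings = 5
edges = np.quantile(radii, np.linspace(0, 1, n_rings + 1))
theta = np.linspace(0, 2 * np.pi, 200)
trajectories = []
for lo, hi in zip(edges[:-1], edges[1:]):
    in_ring = (radii >= lo) & (radii <= hi)
    if not in_ring.any():
        continue
    r = radii[in_ring].mean()                                # ring radius through the cells
    trajectories.append(centre + r * np.column_stack([np.cos(theta), np.sin(theta)]))

print(f"{len(trajectories)} circular scan paths generated for {len(cells)} cells")
```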

    On the accuracy of solving confluent Prony systems

    In this paper we consider several nonlinear systems of algebraic equations which can be called "Prony-type". These systems arise in various reconstruction problems in several branches of theoretical and applied mathematics, such as frequency estimation and nonlinear Fourier inversion. Consequently, the question of the stability of the solution with respect to errors in the right-hand side becomes critical for the success of any particular application. We investigate the question of the "maximal possible accuracy" of solving Prony-type systems, putting stress on the "local" behavior, which approximates situations with low absolute measurement error. The accuracy estimates are formulated in very simple geometric terms, shedding some light on the structure of the problem. Numerical tests suggest that "global" solution techniques such as Prony's algorithm and the ESPRIT method are suboptimal when compared to this theoretical "best local" behavior.
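
    The notion of local sensitivity can be made concrete with a small sketch (the assumptions and parameter values below are mine, not the paper's): the Jacobian of the map from the parameters (amplitudes and frequencies) to the measurements is formed, and its smallest singular value is used as a proxy for the worst-case local error amplification as the nodes cluster.

```python
import numpy as np

# Illustrative sketch: for a Prony-type system s[m] = sum_k a_k exp(2j pi f_k m),
# the local sensitivity to right-hand-side errors is governed by the Jacobian of
# the parameters-to-measurements map.  Its smallest singular value shrinks as the
# nodes cluster, so the worst-case local amplification 1 / sigma_min grows.
def jacobian(freqs, amps, n_samples):
    m = np.arange(n_samples)[:, None]
    e = np.exp(2j * np.pi * m * freqs)              # derivatives w.r.t. the amplitudes
    d = 2j * np.pi * m * amps * e                   # derivatives w.r.t. the frequencies
    return np.hstack([e, d])

amps = np.array([1.0, 1.0])
for sep in (0.10, 0.01, 0.001):                     # decreasing node separation
    J = jacobian(np.array([0.2, 0.2 + sep]), amps, n_samples=16)
    sigma_min = np.linalg.svd(J, compute_uv=False)[-1]
    print(f"separation {sep:6.3f}: worst-case local amplification ~ {1.0 / sigma_min:.2e}")
```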

    Source localization via time difference of arrival

    Accurate localization of a signal source, based on the signals collected by a number of receiving sensors deployed in the area surrounding the source, is a problem of interest in various fields. This dissertation aims at exploring different techniques to improve the localization accuracy of non-cooperative sources, i.e., sources for which the specific transmitted symbols and the time of the transmitted signal are unknown to the receiving sensors. For the localization of non-cooperative sources, the time difference of arrival (TDOA) of the signals received at pairs of sensors is typically employed. A two-stage localization method in multipath environments is proposed. During the first stage, the TDOA of the signals received at pairs of sensors is estimated. In the second stage, the actual location is computed from the TDOA estimates. This latter stage is referred to as hyperbolic localization and it generally involves a non-convex optimization. For the first stage, a TDOA estimation method that exploits the sparsity of multipath channels is proposed. This is formulated as an ℓ1-regularization problem, where the ℓ1-norm is used as a channel sparsity constraint. For the second stage, three methods are proposed that offer high accuracy at different computational costs. The first method takes a semi-definite relaxation (SDR) approach to relax the hyperbolic localization to a convex optimization. The second method follows a linearized formulation of the problem and seeks a biased estimate of improved accuracy. A third method is proposed to exploit source sparsity. Here, the hyperbolic localization is formulated as an ℓ1-regularization problem, where the ℓ1-norm is used as a source sparsity constraint. The proposed methods compare favorably to other existing methods, each of them having its own advantages. The SDR method has the advantage of simplicity and low computational cost. The second method may perform better than the SDR approach in some situations, but at the price of higher computational cost. The ℓ1-regularization method may outperform the first two, but is sensitive to the choice of a regularization parameter. The proposed two-stage localization approach is shown to deliver higher accuracy and robustness to noise than existing TDOA localization methods.
    A single-stage source localization method is also explored. The approach is coherent in the sense that, in addition to the TDOA information, it utilizes the relative carrier phases of the received signals among pairs of sensors. A location estimator is constructed based on a maximum likelihood metric. The potential accuracy improvement of the coherent approach is shown through the Cramér-Rao lower bound (CRB). However, the technique has to contend with high peak sidelobes in the localization metric, especially at low signal-to-noise ratio (SNR). Employing a small antenna array at each sensor is shown to lower the sidelobe level in the localization metric.
    Finally, the performance of time delay and amplitude estimation from samples of the received signal taken at rates lower than the conventional Nyquist rate is evaluated. To this end, a CRB is developed and its variation with system parameters is analyzed. It is shown that while noiseless low-rate sampling incurs no estimation accuracy loss compared to Nyquist sampling, in the presence of additive noise the performance degrades significantly. However, increasing the low sampling rate by a small factor leads to significant performance improvement, especially for time delay estimation.
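
    As a baseline illustration of the hyperbolic localization stage only, the sketch below solves the range-difference least-squares problem with plain Gauss-Newton iterations; it is a hypothetical example and is not the SDR, linearized, or ℓ1-regularized formulations proposed in the dissertation.

```python
import numpy as np

# Plain Gauss-Newton least-squares sketch for the hyperbolic (TDOA) localization
# stage: given range differences r_i - r_0 with respect to a reference sensor,
# iterate on the source position.  Geometry and noise level below are made up.
def tdoa_localize(sensors, rdiff, x0, n_iter=20):
    """sensors: (N, 2) positions; rdiff: (N-1,) measured range differences r_i - r_0."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        diff = x - sensors
        r = np.linalg.norm(diff, axis=1)
        resid = rdiff - (r[1:] - r[0])
        grad = diff / r[:, None]                    # d r_i / d x
        J = grad[1:] - grad[0]                      # Jacobian of the range differences
        dx = np.linalg.lstsq(J, resid, rcond=None)[0]
        x = x + dx
    return x

c = 343.0                                           # assumed propagation speed (m/s)
sensors = np.array([[0.0, 0.0], [60.0, 0.0], [0.0, 60.0], [60.0, 60.0], [30.0, 80.0]])
source = np.array([22.0, 37.0])
ranges = np.linalg.norm(sensors - source, axis=1)
rng = np.random.default_rng(0)
tdoa = (ranges[1:] - ranges[0]) / c + 1e-5 * rng.standard_normal(len(sensors) - 1)
x_hat = tdoa_localize(sensors, tdoa * c, x0=sensors.mean(axis=0))
print("estimated source position:", x_hat)
```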