
    VPNet: Variable Projection Networks

    In this paper, we introduce VPNet, a novel model-driven neural network architecture based on variable projections (VP). The application of VP operators in neural networks yields learnable features, interpretable parameters, and compact network structures. This paper discusses the motivation and mathematical background of VPNet as well as experiments. The concept was evaluated in the context of signal processing: we performed classification tasks on a synthetic dataset and on real electrocardiogram (ECG) signals. Compared to fully connected and 1D convolutional networks, VPNet offers fast learning and good accuracy at low computational cost in both training and inference. Based on these promising results and the advantages mentioned above, we expect a broader impact in signal processing, including classification, regression, and even clustering problems.
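    To make the variable projection idea concrete, the following is a minimal sketch of a VP forward pass, assuming a Gaussian-bump basis whose centers and widths play the role of the learnable nonlinear parameters; the basis choice and the names `gaussian_basis` and `vp_forward` are illustrative assumptions, not the paper's architecture.

```python
# A minimal sketch (not the paper's implementation) of a variable projection
# forward pass; the Gaussian-bump basis and the names are illustrative.
import numpy as np

def gaussian_basis(t, centers, widths):
    """Build the basis matrix Phi(theta), one Gaussian bump per column."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * widths[None, :] ** 2))

def vp_forward(x, t, centers, widths):
    """Project x onto span(Phi) and return the linear coefficients
    (the learnable features) together with the residual norm."""
    Phi = gaussian_basis(t, centers, widths)          # n_samples x n_basis
    coeffs, *_ = np.linalg.lstsq(Phi, x, rcond=None)  # least-squares projection
    residual = float(np.linalg.norm(x - Phi @ coeffs))
    return coeffs, residual

# Toy usage: a noisy bump summarized by its projection coefficients.
t = np.linspace(0.0, 1.0, 256)
x = np.exp(-((t - 0.3) ** 2) / 0.005) + 0.05 * np.random.randn(t.size)
coeffs, res = vp_forward(x, t, np.linspace(0.1, 0.9, 8), np.full(8, 0.05))
print(coeffs.round(3), round(res, 3))
```

    In the VP setting, the nonlinear basis parameters (here the centers and widths) would be trained, while the linear coefficients are always obtained by projection, which is what keeps the parameters interpretable and the network compact.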

    Toward sparse and geometry adapted video approximations

    Video signals are sequences of natural images, and images are often modeled as piecewise-smooth signals. Hence, video can be seen as a 3D piecewise-smooth signal made of piecewise-smooth regions that move through time. Based on the piecewise-smooth model and on related theoretical work on the rate-distortion performance of wavelet and oracle-based coding schemes, one can better analyze the coding strategies that adaptive video codecs need to implement in order to be efficient. Efficient video representations for coding require adaptive signal decompositions able to capture the structure and redundancy present in video signals; the adaptivity must allow proper modeling of the signal so that it can be represented at the lowest possible coding cost. Video is a highly structured signal with rich geometric content, including temporal geometry (normally represented by motion information) as well as spatial geometry. Most past and present strategies for representing video signals do not properly exploit this spatial geometry. As in the case of images, a very promising approach is to decompose video using large over-complete libraries of basis functions able to represent the salient geometric features of the signal. For video, these features should model 2D geometric components as well as their temporal evolution, forming spatio-temporal 3D geometric primitives. This PhD dissertation studies different aspects of the use of adaptivity in video representation, aiming to exploit both aspects of video: its piecewise nature and its geometry. The first part of this work studies localized temporal adaptivity in subband video coding, considering two transformation schemes used for video coding: 3D wavelet representations and motion-compensated temporal filtering. A theoretical R-D analysis as well as empirical results demonstrate how temporal adaptivity improves the coding performance of moving edges in 3D-transform video coding (without motion compensation), while at the same time allowing redundancy in non-moving video areas to be exploited equally well. The analogy between motion-compensated video and 1D piecewise-smooth signals is studied as well. This motivates the introduction of local length adaptivity within frame-adaptive motion-compensated lifted wavelet decompositions, which yields optimal rate-distortion performance when video motion trajectories are shorter than the transformation "Group Of Pictures", or when efficient motion compensation cannot be ensured. After studying temporal adaptivity, the second part of this thesis is dedicated to understanding how temporal and spatial geometry can be jointly exploited. This work builds on previous results that considered the representation of spatial geometry in video (but not temporal geometry, i.e., without motion). Obtaining flexible and efficient (sparse) signal representations with redundant dictionaries requires highly non-linear decomposition algorithms, such as Matching Pursuit. General signal representation using these techniques is still largely unexplored. For this reason, prior to the study of video representation, some aspects of non-linear decomposition algorithms and the efficient decomposition of images using Matching Pursuit and a geometric dictionary are investigated.
    Part of this investigation concerns the influence of a priori models within non-linear approximation algorithms. Dictionaries with high internal coherence have difficulty yielding optimally sparse signal representations when used with Matching Pursuit. It is shown, theoretically and empirically, that incorporating a priori models into the algorithm improves its capacity to obtain sparse signal approximations, particularly when coherent dictionaries are used. Another point discussed in this preliminary study of Matching Pursuit concerns the approach used in this work for the decomposition of video frames and images. The technique proposed in this thesis improves on previous work in which the authors had to resort to sub-optimal Matching Pursuit strategies (using genetic algorithms) given the size of the function library. In this work, full-search strategies become feasible, while approximation efficiency is significantly improved and computational complexity is reduced. Finally, a-priori-based Matching Pursuit geometric decompositions are investigated for geometric video representations, with regularity constraints taken into account to recover the temporal evolution of spatial geometric signal components. The results obtained for coding and for multi-modal (audio-visual) signal analysis clarify many unknowns and are promising, encouraging further research on the subject.
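    As a point of reference for the non-linear decompositions discussed above, here is a minimal sketch of plain Matching Pursuit over a generic redundant dictionary; the random unit-norm dictionary and the function name `matching_pursuit` are illustrative assumptions, not the thesis' geometric, a-priori-guided variant.

```python
# A minimal sketch of plain Matching Pursuit: greedily pick the atom most
# correlated with the residual at each step (atoms are unit-norm columns of D).
import numpy as np

def matching_pursuit(x, D, n_iter=20):
    """Greedy decomposition of x over the redundant dictionary D."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual
        k = int(np.argmax(np.abs(correlations)))      # best-matching atom
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * D[:, k]
    return coeffs, residual

# Toy usage: a sparse synthetic signal built from two atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 10] - 1.5 * D[:, 99]
coeffs, res = matching_pursuit(x, D, n_iter=10)
print(np.nonzero(np.abs(coeffs) > 1e-6)[0], np.linalg.norm(res).round(4))
```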

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying the sources of apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience, yet the nature of such trial-to-trial dynamics remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electroencephalography, but gaining fundamental insight into latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation), which plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in an empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets that have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of rat frontal cortex ensemble dynamics during a decision-making task. The results suggest that, in line with recent hypotheses, it is the deterministic trend rather than internal noise that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed in this study potentially allows us to infer the source of variability in in vivo neural recordings.
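    To illustrate the setup described above, the following sketch simulates a forced Duffing oscillator and delay-embeds one observable as a stand-in for the reconstructed space of system states; the parameter values and the simple dispersion statistic at the end are illustrative simplifications, not the authors' trajectory-coherence measure.

```python
# A minimal sketch: forced Duffing oscillator plus Takens-style delay embedding.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, state, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    """x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)."""
    x, v = state
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

sol = solve_ivp(duffing, (0.0, 200.0), [0.1, 0.0],
                t_eval=np.linspace(0.0, 200.0, 4000))
x = sol.y[0]

def delay_embed(series, dim=3, lag=10):
    """Delay embedding of a scalar time series into a dim-dimensional space."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

traj = delay_embed(x)
# Illustrative dispersion of the trajectory around its mean point (a crude
# stand-in for a coherence-type statistic).
dispersion = float(np.mean(np.linalg.norm(traj - traj.mean(axis=0), axis=1)))
print(traj.shape, round(dispersion, 3))
```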

    Planck 2013 results. XXIV. Constraints on primordial non-Gaussianity

    The Planck nominal mission cosmic microwave background (CMB) maps yield unprecedented constraints on primordial non-Gaussianity (NG). Using three optimal bispectrum estimators, separable template-fitting (KSW), binned, and modal, we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result fNL^local = 2.7 ± 5.8, fNL^equil = -42 ± 75, and fNL^orth = -25 ± 39 (68% CL statistical). Non-Gaussianity is detected in the data: using skew-Cℓ statistics we find a nonzero bispectrum from residual point sources, and the integrated Sachs-Wolfe (ISW) lensing bispectrum at a level expected in the ΛCDM scenario. The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component-separation techniques, pass an extensive suite of tests, and are confirmed by skew-Cℓ, wavelet bispectrum, and Minkowski functional estimators. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and thus derive constraints on early-Universe scenarios that generate primordial NG, including general single-field models of inflation, excited initial states (non-Bunch-Davies vacua), and directionally dependent vector models. We provide an initial survey of scale-dependent feature and resonance models. These results bound both general single-field and multi-field model parameter ranges, such as the speed of sound, c_s ≥ 0.02 (95% CL), in an effective field theory parametrization, and the curvaton decay fraction, r_D ≥ 0.15 (95% CL). The Planck data significantly limit the viable parameter space of the ekpyrotic/cyclic scenarios. The amplitude of the four-point function in the local model is constrained to τ_NL < 2800 (95% CL). Taken together, these constraints represent the highest-precision tests to date of physical mechanisms for the origin of cosmic structure. The development of Planck has been supported by: ESA; CNES and CNRS/INSU-IN2P3-INP (France); ASI, CNR, and INAF (Italy); NASA and DoE (USA); STFC and UKSA (UK); CSIC, MICINN and JA (Spain); Tekes, AoF and CSC (Finland); DLR and MPG (Germany); CSA (Canada); DTU Space (Denmark); SER/SSO (Switzerland); RCN (Norway); SFI (Ireland); FCT/MCTES (Portugal); and PRACE (EU). Peer reviewed.
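    For context on the amplitudes quoted above, the following is a minimal sketch of the standard local-type bispectrum template, B(k1,k2,k3) = 2 fNL [P(k1)P(k2) + P(k2)P(k3) + P(k3)P(k1)]; the power-law spectrum parameters are illustrative placeholders, and this is not an implementation of the Planck KSW, binned, or modal estimators.

```python
# A minimal sketch of the local-type primordial bispectrum template.
import numpy as np

def power_spectrum(k, amplitude=2.1e-9, n_s=0.96, k_pivot=0.05):
    """Power-law spectrum, proportional to k**(n_s - 4) up to normalization."""
    return amplitude * (k / k_pivot) ** (n_s - 1.0) / k**3

def local_bispectrum(k1, k2, k3, f_nl=2.7):
    """B(k1,k2,k3) = 2*f_NL*[P(k1)P(k2) + P(k2)P(k3) + P(k3)P(k1)]."""
    p1, p2, p3 = power_spectrum(k1), power_spectrum(k2), power_spectrum(k3)
    return 2.0 * f_nl * (p1 * p2 + p2 * p3 + p3 * p1)

# Toy usage: a squeezed triangle, where the local shape peaks.
print(f"{local_bispectrum(0.001, 0.05, 0.05):.3e}")
```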

    X-ray CT Image Reconstruction on Highly-Parallel Architectures.

    Model-based image reconstruction (MBIR) methods for X-ray CT use accurate models of the CT acquisition process, the statistics of the noisy measurements, and noise-reducing regularization to produce potentially higher-quality images than conventional methods, even at reduced X-ray doses. They do this by minimizing a statistically motivated high-dimensional cost function; the high computational cost of numerically minimizing this function has prevented MBIR methods from reaching ubiquity in the clinic. Modern highly parallel hardware like graphics processing units (GPUs) may offer the computational resources to solve these reconstruction problems quickly, but simply "translating" existing algorithms designed for conventional processors to the GPU may not fully exploit the hardware's capabilities. This thesis proposes GPU-specialized image denoising and image reconstruction algorithms. The proposed image denoising algorithm uses group coordinate descent with carefully structured groups. The algorithm converges very rapidly: in one experiment, it denoises a 65-megapixel image in about 1.5 seconds, while the popular Chambolle-Pock primal-dual algorithm running on the same hardware takes over a minute to reach the same level of accuracy. For X-ray CT reconstruction, this thesis uses duality and group coordinate ascent to propose an alternative to the popular ordered subsets (OS) method. Like OS, the proposed method can use a subset of the data to update the image; unlike OS, the proposed method is convergent. In one helical CT reconstruction experiment, an implementation of the proposed algorithm using one GPU converges more quickly than a state-of-the-art algorithm using four GPUs. Using four GPUs, the proposed algorithm reaches near convergence on a wide-cone axial reconstruction problem with over 220 million voxels in only 11 minutes.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113551/1/mcgaffin_1.pd
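    To illustrate the structured-groups idea mentioned above, here is a minimal sketch of group coordinate descent for denoising with a simple quadratic 4-neighbor penalty, using red/black (checkerboard) groups so that every pixel in a group has an exact, independent update; the penalty, the grouping, and the function names are illustrative assumptions, not the thesis' GPU algorithm or regularizer.

```python
# A minimal sketch of group coordinate descent denoising with checkerboard
# groups; neighbors of a "red" pixel are all "black", so each group's pixels
# can be updated simultaneously and exactly.
import numpy as np

def neighbor_stats(x):
    """Return the sum and count of 4-connected neighbors for every pixel."""
    s = np.zeros_like(x)
    c = np.zeros_like(x)
    s[1:, :] += x[:-1, :]; c[1:, :] += 1.0   # up
    s[:-1, :] += x[1:, :]; c[:-1, :] += 1.0  # down
    s[:, 1:] += x[:, :-1]; c[:, 1:] += 1.0   # left
    s[:, :-1] += x[:, 1:]; c[:, :-1] += 1.0  # right
    return s, c

def denoise_gcd(y, beta=1.0, n_iter=50):
    """Minimize 0.5*||x - y||^2 + 0.5*beta*sum over 4-neighbors of (x_i - x_j)^2."""
    y = np.asarray(y, dtype=float)
    x = y.copy()
    rows, cols = np.indices(y.shape)
    for _ in range(n_iter):
        for color in (0, 1):                           # the two checkerboard groups
            mask = (rows + cols) % 2 == color
            s, c = neighbor_stats(x)
            # Exact coordinate-wise minimizer for every pixel in this group.
            x[mask] = (y[mask] + beta * s[mask]) / (1.0 + beta * c[mask])
    return x

# Toy usage: denoise a noisy constant patch.
rng = np.random.default_rng(0)
noisy = 1.0 + 0.3 * rng.standard_normal((64, 64))
print(round(float(denoise_gcd(noisy).std()), 4))
```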