
    Bayesian Inference for Brain Activity from Functional Magnetic Resonance Imaging Collected at Two Spatial Resolutions

    Neuroradiologists and neurosurgeons increasingly opt to use functional magnetic resonance imaging (fMRI) to map functionally relevant brain regions for noninvasive presurgical planning and intraoperative neuronavigation. This application requires a high degree of spatial accuracy, but the fMRI signal-to-noise ratio (SNR) decreases as spatial resolution increases. In practice, fMRI scans can be collected at multiple spatial resolutions, and it is of interest to make more accurate inference on brain activity by combining data with different resolutions. To this end, we develop a new Bayesian model that leverages both the better anatomical precision of high-resolution fMRI and the higher SNR of standard-resolution fMRI. We assign a Gaussian process prior to the mean intensity function and develop an efficient, scalable posterior computation algorithm to integrate both sources of data. We draw posterior samples using an algorithm analogous to Riemann manifold Hamiltonian Monte Carlo in an expanded parameter space. We illustrate our method in an analysis of presurgical fMRI data, and show in simulation that it infers the mean intensity more accurately than alternatives that use either the high- or standard-resolution fMRI data alone.
    Comment: 37 pages, 12 figures
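
    As a rough, hypothetical sketch of the fusion idea (not the paper's sampler, which is analogous to Riemann manifold HMC), the following uses the closed-form conjugate Gaussian posterior on a toy 1-D grid; the squared-exponential kernel, the 4:1 block-averaging map between resolutions, and the noise levels are all invented for illustration.

```python
# Toy sketch: fuse noisy high-resolution data with cleaner block-averaged
# standard-resolution data under a Gaussian process prior on the mean
# intensity. All sizes, kernels, and noise levels are assumptions.
import numpy as np

n_high, block = 64, 4                      # assumed grid size and 4:1 ratio
x = np.linspace(0.0, 1.0, n_high)

def se_kernel(a, b, ell=0.1, var=1.0):
    """Squared-exponential covariance between point sets a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

K = se_kernel(x, x)                        # GP prior covariance

# Standard resolution observes block averages of the high-resolution grid.
A = np.kron(np.eye(n_high // block), np.full((1, block), 1.0 / block))

rng = np.random.default_rng(0)
mu_true = np.sin(6 * np.pi * x) * np.exp(-((x - 0.5) ** 2) / 0.05)
sigma_high, sigma_std = 0.8, 0.2           # high resolution has lower SNR
y_high = mu_true + sigma_high * rng.standard_normal(n_high)
y_std = A @ mu_true + sigma_std * rng.standard_normal(n_high // block)

# Stack both observation models: y = H mu + noise with noise covariance R.
H = np.vstack([np.eye(n_high), A])
R = np.diag(np.concatenate([np.full(n_high, sigma_high ** 2),
                            np.full(n_high // block, sigma_std ** 2)]))
y = np.concatenate([y_high, y_std])

# Conjugate posterior mean of mu: K H^T (H K H^T + R)^{-1} y.
post_mean = K @ H.T @ np.linalg.solve(H @ K @ H.T + R, y)
print("RMSE of fused estimate:", np.sqrt(np.mean((post_mean - mu_true) ** 2)))
```

    Because both channels enter one stacked likelihood, the posterior mean borrows anatomical detail from the high-resolution channel and stability from the standard-resolution channel.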

    Compressed Optical Imaging

    We address the resolution of inverse problems in which visual data must be recovered from incomplete information optically acquired in the spatial domain. The optical acquisition models involved share a common mathematical structure consisting of a linear operator followed by optional pointwise nonlinearities. The linear operator generally includes lowpass filtering effects and, in some cases, downsampling; both tend to make the problems ill-posed. Our general strategy is to rely on variational principles, which allow tight control over the objective or perceptual quality of the reconstructed data. The three related problems that we investigate and propose to solve are:
    1. The reconstruction of images from sparse samples. Following a non-ideal acquisition framework, the measurements take the form of spatial-domain samples whose locations are specified a priori. The reconstruction algorithm that we propose is linked to PDE flows with tensor-valued diffusivities. We demonstrate through several experiments that our approach preserves finer visual features than standard interpolation techniques do, especially at very low sampling rates.
    2. The reconstruction of images from binary measurements. The acquisition model that we consider relies on optical principles and fits in a compressed-sensing framework. We develop a reconstruction algorithm that recovers grayscale images from the available binary data. It substantially improves upon the state of the art in terms of quality and computational performance. Our overall approach is physically relevant; moreover, it can handle large amounts of data efficiently.
    3. The reconstruction of phase and amplitude profiles from single digital holographic acquisitions. Unlike conventional approaches based on demodulation, our iterative reconstruction method accurately recovers the original object from a single downsampled intensity hologram, as shown in simulated and real measurement settings. It also consistently outperforms the state of the art in terms of signal-to-noise ratio and with respect to the size of the field of view.
    The common goal of the proposed reconstruction methods is to yield an accurate estimate of the original data from all available measurements. In accordance with the forward model, they are typically capable of handling samples that are sparse in the spatial domain and/or distorted by pointwise nonlinear effects, as demonstrated in our experiments.
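
    To make the variational idea of problem 1 concrete, here is a minimal sketch that recovers a 1-D signal from roughly 15% of its samples by minimizing a quadratic fidelity-plus-smoothness objective. The thesis relies on PDE flows with tensor-valued (anisotropic) diffusivities; the isotropic finite-difference regularizer below is only a simplified stand-in, and every parameter is an assumption.

```python
# Simplified variational reconstruction from sparse spatial samples:
#   minimize ||M x - y||^2 + lam * ||D x||^2
# where M masks the observed samples and D is a first-difference operator.
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(4 * np.pi * t) + 0.3 * np.sign(np.sin(14 * np.pi * t))

keep = rng.random(n) < 0.15                  # ~15% of sample locations kept
M = np.diag(keep.astype(float))
y = M @ x_true                               # observed (masked) samples

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # (n-1) x n finite differences
lam = 0.05                                   # assumed regularization weight

# Normal equations of the quadratic objective: (M^T M + lam D^T D) x = M^T y.
x_hat = np.linalg.solve(M.T @ M + lam * D.T @ D, M.T @ y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```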

    A System Centric View of Modern Structured and Sparse Inference Tasks

    University of Minnesota Ph.D. dissertation. June 2017. Major: Electrical/Computer Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); xii, 140 pages.
    We are living in an era of data deluge, collecting unprecedented amounts of data from a variety of sources. Modern inference tasks center on exploiting structure and sparsity in the data to extract relevant information. This thesis takes an end-to-end, system-centric view of these tasks, which consist of two main sub-parts: (i) data acquisition and (ii) data processing. In the data acquisition part of the system, we address issues pertaining to noise, clutter (the unwanted extraneous signals that accompany the desired signal), quantization, and missing observations. In the data processing part, we investigate problems that arise in resource-constrained scenarios such as limited computational power and limited battery life.
    The first part of this thesis centers on computationally efficient approximations of a given linear dimensionality reduction (LDR) operator. In particular, we explore approximations based on partial circulant matrices (matrices whose rows are related by circular shifts), as they allow for computationally efficient implementations. We present several theoretical results that provide insight into the existence of such approximations. We also propose a data-driven approach to obtain such approximations numerically, and demonstrate its utility on real-life data; a toy instance of the structure is sketched below.
    The second part focuses on the issues of noise, missing observations, and quantization arising in matrix and tensor data. In particular, we propose a sparsity-regularized maximum likelihood approach to the completion of matrices following sparse factor models (matrices expressible as a product of two matrices, one of which is sparse). We provide general theoretical error bounds for the proposed approach, which can be instantiated for a variety of noise distributions. We also consider the problem of tensor completion and extend the matrix completion results to the tensor setting. The problem of matrix completion from quantized and noisy observations is investigated in as general terms as possible: we propose a constrained maximum likelihood approach to quantized matrix completion, provide probabilistic error bounds for this approach, and develop numerical algorithms that supply empirical evidence for the proposed bounds.
    The final part addresses issues related to clutter and limited battery life in signal acquisition. Specifically, we investigate the problem of compressive measurement design under a given sensing energy budget for estimating structured signals in structured clutter. We propose a novel approach that leverages prior information about the signal and clutter to judiciously allocate sensing energy to the compressive measurements. We also investigate the problem of processing electrodermal activity (EDA) signals recorded as the conductance over a user's skin. EDA signals carry information about the user's neuron firing and psychological state, with the desired information-bearing signal superimposed on unwanted components that may be considered clutter. We propose a novel compressed sensing based approach with provable error guarantees for extracting the relevant information from EDA signals, and demonstrate its efficacy compared to existing techniques via numerical experiments.
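
    As a hypothetical illustration of the partial-circulant idea from the first part, the sketch below fits the generator of a circulant matrix to a given LDR operator in the least-squares sense and applies it in O(n log n) time via the FFT. It assumes the m rows are the circular shifts 0..m-1, in which case the optimal generator simply averages the target matrix along its circulant diagonals; this is not the thesis's data-driven algorithm, only a minimal instance of the structure.

```python
# Least-squares partial-circulant approximation of an m x n LDR matrix Phi,
# assuming rows are circular shifts 0..m-1: entry (i, j) equals c[(j-i) mod n].
import numpy as np

rng = np.random.default_rng(2)
m, n = 32, 128
Phi = rng.standard_normal((m, n)) / np.sqrt(n)   # a generic LDR operator

# Optimal generator: c[k] = mean over i of Phi[i, (i + k) mod n].
idx = (np.arange(m)[:, None] + np.arange(n)[None, :]) % n
c = Phi[np.arange(m)[:, None], idx].mean(axis=0)

def apply_partial_circulant(c, x, m):
    """Compute y[i] = sum_j c[(j - i) mod n] x[j] for i < m via the FFT."""
    y = np.fft.ifft(np.conj(np.fft.fft(c)) * np.fft.fft(x)).real
    return y[:m]

# Sanity checks: the FFT route matches the explicit matrix, and we can
# measure how well the circulant structure approximates Phi.
x = rng.standard_normal(n)
C = c[(np.arange(n)[None, :] - np.arange(m)[:, None]) % n]
assert np.allclose(C @ x, apply_partial_circulant(c, x, m))
print("relative approximation error:",
      np.linalg.norm(C - Phi) / np.linalg.norm(Phi))
```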

    Correlation Filters for Unmanned Aerial Vehicle-Based Aerial Tracking: A Review and Experimental Evaluation

    Aerial tracking is one of the most active applications in the remote sensing field. In particular, unmanned aerial vehicle (UAV)-based remote sensing systems equipped with visual tracking have been widely used in aviation, navigation, agriculture, transportation, and public security. The UAV-based aerial tracking platform has gradually developed from the research stage to practical application, becoming one of the main aerial remote sensing technologies for the future. However, due to onerous real-world conditions, e.g., harsh external challenges, vibration of the UAV mechanical structure (especially under strong wind), maneuvering flight in complex environments, and limited onboard computational resources, accuracy, robustness, and high efficiency are all crucial for onboard tracking methods. Recently, discriminative correlation filter (DCF)-based trackers have stood out for their high computational efficiency and appealing robustness on a single CPU, and have flourished in the UAV visual tracking community. In this work, the basic framework of DCF-based trackers is first generalized; building on it, 23 state-of-the-art DCF-based trackers are summarized according to their innovations for solving various issues. Exhaustive quantitative experiments are then conducted on prevailing UAV tracking benchmarks, i.e., UAV123, UAV123@10fps, UAV20L, UAVDT, DTB70, and VisDrone2019-SOT, which contain 371,903 frames in total. The experiments show the performance, verify the feasibility, and demonstrate the current challenges of DCF-based trackers in onboard UAV tracking.
    Comment: 28 pages, 10 figures, submitted to GRS
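
    To ground the DCF framework, the following is a minimal single-channel, MOSSE-style correlation filter: trained in closed form in the Fourier domain on one synthetic patch, then used to localize the target in a circularly shifted copy. Real DCF trackers add multi-channel features, cosine windows, scale estimation, and online updates, all of which are omitted in this sketch.

```python
# Minimal MOSSE-style correlation filter on a synthetic 64x64 patch.
import numpy as np

rng = np.random.default_rng(3)
n = 64
patch = 0.1 * rng.standard_normal((n, n))
patch[28:36, 28:36] += 2.0                      # a bright synthetic "target"

# Desired response: a sharp Gaussian peak centered on the target.
yy, xx = np.mgrid[0:n, 0:n]
g = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))

F, G, lam = np.fft.fft2(patch), np.fft.fft2(g), 1e-2
H_conj = (G * np.conj(F)) / (F * np.conj(F) + lam)   # closed-form ridge filter

# "Next frame": the same scene circularly shifted by (5, -3) pixels.
z = np.roll(patch, shift=(5, -3), axis=(0, 1))
resp = np.real(np.fft.ifft2(np.fft.fft2(z) * H_conj))
peak = np.unravel_index(np.argmax(resp), resp.shape)
print("response peak:", peak, "(expected (37, 29))")
```

    The closed-form Fourier-domain solution is what gives DCF trackers their single-CPU efficiency: training and detection reduce to a handful of FFTs and elementwise operations.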

    Convolutional Neural Network Architectures for Signals Supported on Graphs

    Two architectures that generalize convolutional neural networks (CNNs) to the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time-invariant filters with linear shift-invariant graph filters to generate convolutional features, and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information into a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure, to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals on a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and for text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.
    Comment: Submitted to IEEE Transactions on Signal Processing
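
    As a minimal sketch of the convolutional building block, the code below implements a linear shift-invariant graph filter, y = sum_k h[k] S^k x, for a graph shift operator S, and checks the reduction property mentioned above: on a directed cycle (a circulant graph), the filter coincides with ordinary circular convolution. The graph, taps, and sizes are invented.

```python
# Linear shift-invariant graph filter: y = sum_k h[k] S^k x.
import numpy as np

def graph_filter(S, x, h):
    """Apply the polynomial-in-S filter with taps h to graph signal x."""
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)
    for hk in h:
        y += hk * z
        z = S @ z                      # one more diffusion/shift step
    return y

n = 8
h = np.array([0.5, 0.3, 0.2])          # assumed filter taps (order K = 2)
x = np.arange(n, dtype=float)

# Directed cycle: S delays the signal by one sample, so the graph filter
# must equal a circular convolution with the zero-padded taps.
S = np.roll(np.eye(n), 1, axis=0)
y_graph = graph_filter(S, x, h)
y_conv = np.real(np.fft.ifft(np.fft.fft(x) *
                             np.fft.fft(np.pad(h, (0, n - h.size)))))
print(np.allclose(y_graph, y_conv))    # True
```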

    Multi-dimensional data analytics and deep learning via tensor networks

    With the boom in big data and multi-sensor technology, multi-dimensional data, known as tensors, have demonstrated promising capability in capturing multi-dimensional correlations by efficiently extracting latent structures, and have drawn considerable attention in disciplines such as image processing, recommender systems, and data analytics. In addition to the multi-dimensional nature of real data, artificially designed tensors, referred to as layers in deep neural networks, have also been intensively investigated and have achieved state-of-the-art performance in image processing, speech processing, and natural language understanding. However, algorithms for multi-dimensional data are unfortunately expensive in computation and storage, limiting their application when computational resources are scarce. Although tensor factorization has been proposed to reduce dimensionality and alleviate computational cost, the trade-off among computation, storage, and performance has not been well studied. To this end, we first investigate an efficient dimensionality reduction method using a novel Tensor Train (TT) factorization. In particular, we propose Tensor Train Principal Component Analysis (TT-PCA) and Tensor Train Neighborhood Preserving Embedding (TT-NPE) to project data onto a Tensor Train Subspace (TTS) and effectively extract discriminative features from the data. Mathematical analysis and simulation demonstrate that TT-PCA and TT-NPE achieve a better trade-off among computation, storage, and performance than benchmark tensor-based dimensionality reduction approaches. We then extend the TT factorization to the more general Tensor Ring (TR) factorization and propose a tensor ring completion algorithm, which can use 10% randomly observed pixels to recover a gunshot video at an error rate of only 6.25%. Inspired by this trade-off between model complexity and data representation, we introduce Tensor Ring Nets (TRN) to compress deep neural networks significantly. Using the benchmark 28-layer WideResNet architecture, TRN compresses the network by 243× with only 2.3% degradation in CIFAR-10 classification accuracy.
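
    As a concrete, hypothetical instance of the TT machinery, the sketch below runs the standard TT-SVD procedure on a small 3-way tensor: two sequential truncated SVDs produce three cores, and the reconstruction is verified. TT-PCA/TT-NPE build subspace methods on top of such factorizations, which this sketch does not attempt; the tensor is constructed with exact TT ranks so the truncation is lossless.

```python
# TT-SVD on a 3-way tensor: A (n1 x n2 x n3) -> cores G1, G2, G3.
import numpy as np

rng = np.random.default_rng(4)
n1, n2, n3, r = 6, 7, 8, 3

# Build a tensor with exact TT ranks (r, r) so rank-r truncation is exact.
A = np.einsum('ia,ajb,bk->ijk',
              rng.standard_normal((n1, r)),
              rng.standard_normal((r, n2, r)),
              rng.standard_normal((r, n3)))

# First core from the mode-1 unfolding.
U, s, Vt = np.linalg.svd(A.reshape(n1, n2 * n3), full_matrices=False)
G1 = U[:, :r]                                    # (n1, r)
rest = (np.diag(s[:r]) @ Vt[:r]).reshape(r * n2, n3)

# Second and third cores from the remaining factor.
U, s, Vt = np.linalg.svd(rest, full_matrices=False)
G2 = U[:, :r].reshape(r, n2, r)                  # (r, n2, r)
G3 = np.diag(s[:r]) @ Vt[:r]                     # (r, n3)

A_hat = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)
print("relative reconstruction error:",
      np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```

    The storage argument is visible even at this scale: the three cores hold far fewer entries than the dense tensor, which is the same trade-off TRN exploits to compress network layers.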

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing/processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry, and randomness; complexity/accuracy trade-offs in numerical methods; sparsity, what's next?; and sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts. iTWIST'14 website: http://sites.google.com/site/itwist1