
    Can Compact Currents be Uniquely Determined?

    Introduction: The EEG/MEG inverse problem is ill-posed, and its solutions have two independent sources of non-uniqueness. The problem is ill-posed by the nature of the physics, because infinitely many different current configurations can give rise to the same electromagnetic fields. It is also mathematically underdetermined, because the number of mathematically independent data points available is smaller than the dimension of the solution space. Since electromagnetic inverse solutions are non-unique, some criterion must be chosen by which to select a particular solution. Here we assume that the neuronal generators of EEG/MEG data, and hence our desired solutions, are compact. Such an assumption, in one form or another (ranging from dipolar sources to activations of small cortical areas of arbitrary shape and location), has been used in most neuromagnetic inverse solutions; here we take the latter case as our target solutions. Intracortical recordings have shown a much greater range of activity in the cortex. We maintain, however, that our assumption is a likely one, since EEG/MEG data reflect only synchronous activations of at least on the order of 10,000-100,000 neurons. The spatial extent of individual synchronous activations, at least under normal functional conditions, is fairly small, as suggested by physiological data, by functional images from PET and fMRI, and by EEG/MEG field waveforms. The degree of compactness of the currents, however, is not entirely known. Although numerous algorithms have been developed to find localized solutions, the solutions from these algorithms do not necessarily agree. This suggests that the assumption of compactness by itself does not resolve the non-uniqueness of the electromagnetic inverse problem.
Since neural currents are not known to satisfy any particular optimization criterion, such as the minimum l1 norm, we do not know which of the competing compact solutions is correct as long as such non-uniqueness prevails. This paper establishes the conditions under which the electromagnetic inverse problem becomes unique. Owing to space limitations, we provide only a brief summary of the results of our analysis. The full details of the analysis, and the treatment of the equally important issue of methodologies for the unambiguous identification of neural currents suggested by the analysis, are presented in a separate publication. The uniqueness of electric field generators has also been studied in earlier work.
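The non-uniqueness described above can be made concrete with a small numerical sketch (illustrative only; the lead field and all dimensions are invented): any null-space component of the lead field can be added to a compact current distribution without changing the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lead field mapping 30 candidate cortical sources to 8 sensors
L = rng.standard_normal((8, 30))

# A compact "true" current: only two active source locations
j_true = np.zeros(30)
j_true[[4, 17]] = [1.0, -0.5]
b = L @ j_true                      # the measured EEG/MEG data

# Any null-space component of L can be added without changing the data
_, _, Vt = np.linalg.svd(L)
null_vec = Vt[-1]                   # a basis vector of the null space of L
j_alt = j_true + 3.0 * null_vec     # a different, non-compact current

assert np.allclose(L @ j_alt, b)    # identical fields from different currents
```

With 8 sensors and 30 unknowns the null space is 22-dimensional, so infinitely many currents reproduce the data exactly; a selection criterion such as compactness is needed to single one out.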

    Accelerated Diffusion Spectrum Imaging with Compressed Sensing Using Adaptive Dictionaries

    Diffusion Spectrum Imaging (DSI) offers detailed information on complex distributions of intravoxel fiber orientations at the expense of extremely long imaging times (~1 hour). It is possible to accelerate DSI by sub-Nyquist sampling of q-space followed by nonlinear reconstruction to estimate the diffusion probability density functions (pdfs). Recent work by Menzel et al. imposed sparsity constraints on the pdfs under wavelet and Total Variation (TV) transforms. As the performance of Compressed Sensing (CS) reconstruction depends strongly on the level of sparsity in the selected transform space, a dictionary specifically tailored for sparse representation of diffusion pdfs can yield higher-fidelity results. To our knowledge, this work is the first application of adaptive dictionaries in DSI, whereby we reduce the scan time of a whole-brain DSI acquisition from 50 to 17 min while retaining high image quality. In vivo experiments were conducted with the novel 3T Connectome MRI, whose strong gradients are particularly suited for DSI. The RMSE of the proposed reconstruction is up to 2 times lower than that of Menzel et al.'s method, and is comparable to that of the fully sampled 50-minute scan. Further, we demonstrate that a dictionary trained using pdfs from a single slice of a particular subject generalizes well to other slices from the same subject, as well as to slices from another subject.
    Funding: National Institutes of Health (NIH R01 EB007942); National Institute of Biomedical Imaging and Bioengineering (NIBIB K99EB012107, R01EB006847, K99/R00 EB008129); National Center for Research Resources (NCRR P41RR14075); NIH Blueprint for Neuroscience Research (U01MH093765); NIH (The Human Connectome Project); Siemens AG (Siemens-MIT Alliance); Center for Integration of Medicine and Innovative Technology (MIT-CIMIT Medical Engineering Fellowship)
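The idea of adapting a dictionary to the data can be sketched as follows. This is not the paper's K-SVD pipeline; it is a simplified MOD-style learner on synthetic k-sparse training signals (all dimensions invented), showing only that a dictionary adapted to the training set represents it better than a random one.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_atoms, n_sig, k = 20, 40, 300, 3

# Synthetic training set: each signal is a k-sparse combination of a
# hidden ground-truth dictionary (a stand-in for diffusion pdfs)
D_true = rng.standard_normal((n_feat, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((n_atoms, n_sig))
for j in range(n_sig):
    idx = rng.choice(n_atoms, size=k, replace=False)
    codes[idx, j] = rng.standard_normal(k)
X = D_true @ codes

def sparse_code(D, X, k):
    """Crude sparse coding: keep the k largest correlations per signal."""
    C = D.T @ X
    thresh = np.sort(np.abs(C), axis=0)[-k]
    C[np.abs(C) < thresh] = 0.0
    return C

# MOD-style adaptive dictionary learning (a simplified stand-in for K-SVD)
D = rng.standard_normal((n_feat, n_atoms))
D /= np.linalg.norm(D, axis=0)
err_init = np.linalg.norm(X - D @ sparse_code(D, X, k)) / np.linalg.norm(X)
for _ in range(30):
    C = sparse_code(D, X, k)
    D = X @ np.linalg.pinv(C)           # least-squares dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12
err_final = np.linalg.norm(X - D @ sparse_code(D, X, k)) / np.linalg.norm(X)

assert err_final < err_init             # the adapted dictionary fits better
```

The same logic motivates the paper's result: the sparser the representation in the chosen transform, the better the CS reconstruction from undersampled q-space data.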

    Multi-contrast reconstruction with Bayesian compressed sensing

    Clinical imaging with structural MRI routinely relies on multiple acquisitions of the same region of interest under several different contrast preparations. This work presents a reconstruction algorithm based on Bayesian compressed sensing that jointly reconstructs a set of images from undersampled k-space data with higher fidelity than when the images are reconstructed either individually or jointly by a previously proposed algorithm, M-FOCUSS. The joint inference problem is formulated in a hierarchical Bayesian setting, wherein solving each of the inverse problems corresponds to finding the parameters (here, image gradient coefficients) associated with each of the images. A single hyperparameter models the variance of the image gradients across contrasts at each volumetric spatial position. All of the images from the same anatomical region, but with different contrast properties, contribute to the estimation of the hyperparameters; once these are found, the k-space data belonging to each image are used independently to infer that image's gradients. Thus, commonality of spatial structure across contrasts is exploited without the problematic assumption of correlation across contrasts. Examples demonstrate improved reconstruction quality (up to a factor of 4 in root-mean-square error) compared with previous compressed sensing algorithms and show the benefit of joint inversion under a hierarchical Bayesian model.
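The benefit of pooling evidence across contrasts can be illustrated with a greedy stand-in for the hierarchical Bayesian inference: simultaneous OMP scores each atom by its joint correlation with all residuals, exploiting common support without assuming correlated coefficient values. All dimensions and operators below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 20, 40, 3

# Undersampled measurement operator (shared by both "contrasts")
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Two images with common sparse support (shared spatial structure)
# but independent coefficient values (no correlation across contrasts)
support = sorted(rng.choice(n, size=k, replace=False).tolist())
X = np.zeros((n, 2))
X[support] = rng.standard_normal((k, 2))
Y = A @ X

# Simultaneous OMP: pool evidence across both contrasts when choosing atoms
residual, chosen = Y.copy(), []
for _ in range(k):
    scores = np.linalg.norm(A.T @ residual, axis=1)   # joint atom scores
    if chosen:
        scores[chosen] = -np.inf
    chosen.append(int(np.argmax(scores)))
    sub = A[:, chosen]
    coef, *_ = np.linalg.lstsq(sub, Y, rcond=None)
    residual = Y - sub @ coef

assert sorted(chosen) == support        # shared support recovered jointly
```

Once the common support (the analogue of the shared hyperparameters) is identified, each image's coefficients are fit from its own data alone, mirroring the paper's two-stage structure.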

    Joint Channel Estimation Algorithm via Weighted Homotopy for Massive MIMO OFDM System

    Massive (or large-scale) multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems are widely acknowledged as a key technology for future communications. One main challenge in implementing such a system in practice is high-dimensional channel estimation, where the large number of channel matrix entries requires prohibitively high computational complexity. To solve this problem efficiently, a channel estimation approach that uses a small number of pilots is necessary. In this paper, we propose a weighted-homotopy-based channel estimation approach that exploits the sparse nature of MIMO channels to achieve good channel estimation performance with much lower pilot overhead. Moreover, inspired by the observation that MIMO channels have approximately common support within a neighborhood, an information exchange strategy based on the proposed approach is developed to further improve estimation accuracy and reduce the required number of pilots through joint channel estimation. Compared with traditional sparse channel estimation methods, the proposed approach achieves more than 2 dB of gain in mean square error (MSE) with the same number of pilots, or the same performance with far fewer pilots.
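A minimal sketch of sparse pilot-based channel estimation, assuming a hypothetical random pilot matrix and a real-valued channel for simplicity; ISTA is used as a simple stand-in for the paper's weighted-homotopy solver (which traces the full LASSO regularization path instead of solving at a fixed penalty).

```python
import numpy as np

rng = np.random.default_rng(3)
n_taps, n_pilots, n_active = 64, 24, 4

# Hypothetical pilot measurement matrix (far fewer pilots than taps)
A = rng.standard_normal((n_pilots, n_taps)) / np.sqrt(n_pilots)

# Sparse multipath channel: only a few delay taps are active
h = np.zeros(n_taps)
taps = rng.choice(n_taps, size=n_active, replace=False)
h[taps] = rng.choice([-1.0, 1.0], n_active) * rng.uniform(1.0, 2.0, n_active)
y = A @ h                               # pilot observations

def soft(x, t):
    """Soft-thresholding operator for l1 regularization."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ISTA iterations for min ||y - A h||^2 / 2 + lam * ||h||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
h_hat = np.zeros(n_taps)
for _ in range(3000):
    h_hat = soft(h_hat + step * (A.T @ (y - A @ h_hat)), lam * step)

# The active taps dominate the estimate despite 24 pilots for 64 unknowns
est_taps = np.argsort(np.abs(h_hat))[-n_active:]
assert set(est_taps.tolist()) == set(taps.tolist())
```

The pilot-saving claim in the abstract corresponds to this regime: 24 pilot observations suffice for a 64-tap channel because only a handful of taps are nonzero.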

    Deconvolution of Serum Cortisol Levels by Using Compressed Sensing

    The pulsatile release of cortisol from the adrenal glands is controlled by a hierarchical system that involves corticotropin-releasing hormone (CRH) from the hypothalamus, adrenocorticotropic hormone (ACTH) from the pituitary, and cortisol from the adrenal glands. Determining the number, timing, and amplitude of the cortisol secretory events, and recovering the infusion and clearance rates from serial measurements of serum cortisol levels, is a challenging problem. Despite many years of work on this problem, a completely satisfactory solution has been elusive. We formulate this question as a non-convex optimization problem and solve it using a coordinate descent algorithm with a principled combination of (i) compressed sensing for recovering the amplitude and timing of the secretory events, and (ii) generalized cross-validation for choosing the regularization parameter. Using only the observed serum cortisol levels, we model cortisol secretion from the adrenal glands with a second-order linear differential equation with pulsatile inputs that represent cortisol pulses released in response to pulses of ACTH. Using our algorithm and the assumption that the number of pulses is between 15 and 22 over 24 hours, we successfully deconvolve both simulated datasets and actual 24-hr serum cortisol datasets sampled every 10 minutes from 10 healthy women. Assuming a one-minute resolution for the secretory events, we obtain physiologically plausible timings and amplitudes of each cortisol secretory event with R² above 0.92. Identification of the amplitude and timing of pulsatile hormone release allows (i) quantifying normal and abnormal secretion patterns toward the goal of understanding pathological neuroendocrine states, and (ii) potentially designing optimal approaches for treating hormonal disorders.
    Funding: National Science Foundation Graduate Research Fellowship Program; National Institutes of Health (NIH DP1 OD003646); National Science Foundation (0836720); National Science Foundation, Office of Emerging Frontiers in Research and Innovation (EFRI-0735956)
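The deconvolution setup can be sketched with a toy noiseless example: a second-order (difference-of-exponentials) impulse response with invented rate constants, and nonnegative least squares standing in for the paper's compressed-sensing coordinate-descent scheme. With exact data and a sparse nonnegative pulse train, NNLS recovers the events.

```python
import numpy as np
from scipy.optimize import nnls

T = 200                               # 200 one-minute time bins (illustrative)
t = np.arange(T)

# Second-order (infusion/clearance) impulse response with hypothetical rates
a_inf, a_clr = 0.15, 0.02
g = np.exp(-a_clr * t) - np.exp(-a_inf * t)

# Convolution matrix: column j is the response to a unit pulse at minute j
G = np.zeros((T, T))
for j in range(T):
    G[j:, j] = g[: T - j]

# Sparse nonnegative pulse train of secretory events
u = np.zeros(T)
events = [20, 90, 150]
u[events] = [1.5, 1.0, 2.0]
y = G @ u                             # noiseless "serum cortisol" series

# Nonnegative least squares recovers the sparse pulse train in this toy case
u_hat, _ = nnls(G, y)

assert np.allclose(u_hat[events], [1.5, 1.0, 2.0], atol=1e-6)
```

In the paper's setting the data are noisy and sampled every 10 minutes rather than every minute, which is why a sparsity-promoting formulation with a cross-validated regularization parameter is needed instead of plain NNLS.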

    Sparse motion bases selection for human motion denoising

    Human motion denoising is an indispensable data-preprocessing step for many applications based on motion data. In this paper, we propose a data-driven human motion denoising method that sparsely selects the most correlated subset of motion bases for clean motion reconstruction, while taking the statistical properties of two common noise types, Gaussian noise and outliers, into account in deriving the objective functions. In particular, our method first divides each human pose into five partitions, termed poselets, to obtain a finer-grained pose representation. These poselets are then reorganized into multiple overlapping poselet groups using a lagged window moving across the entire motion sequence, so as to preserve the embedded spatio-temporal motion patterns. Afterwards, five compact and representative motion dictionaries are constructed in parallel by means of fast K-SVD in the training phase; they are used to remove the noise and outliers from noisy motion sequences in the testing phase by solving l1-minimization problems. Extensive experiments show that our method outperforms its competitors. More importantly, compared with other data-driven methods, our method does not require specially chosen training data, so it can be applied more easily to real-world applications.
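The sparse-coding denoising step can be illustrated with a fixed orthonormal DCT dictionary in place of the learned K-SVD dictionaries (the signal, noise level, and threshold are all invented): coefficients below a threshold are treated as noise and discarded, and the remaining sparse code reconstructs a cleaner trajectory.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64

# Orthonormal DCT-II basis: a fixed stand-in for learned motion dictionaries
k_idx = np.arange(N)[:, None]
n_idx = np.arange(N)[None, :]
Dct = np.sqrt(2.0 / N) * np.cos(np.pi * (n_idx + 0.5) * k_idx / N)
Dct[0] /= np.sqrt(2.0)

# A "clean motion trajectory": sparse in the dictionary domain
c_clean = np.zeros(N)
c_clean[[1, 3, 7, 12]] = [5.0, -4.0, 3.0, 2.5]
clean = Dct.T @ c_clean

# Add Gaussian noise, then denoise by keeping large dictionary coefficients
noisy = clean + 0.5 * rng.standard_normal(N)
c = Dct @ noisy
c[np.abs(c) < 1.5] = 0.0              # hard threshold at 3 sigma
denoised = Dct.T @ c

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
assert err_denoised < err_noisy       # sparse coding reduces the error
```

A learned dictionary plays the same role as the DCT here but is adapted to real poselet data, which is what makes the l1-based denoising in the paper effective on motion with outliers.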

    The Cosparse Analysis Model and Algorithms

    After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today remains shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis model. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments. Comment: Submitted (2011)
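The synthesis/analysis contrast can be made concrete with a piecewise-constant signal: the signal itself is fully dense (no zero samples), yet its analysis coefficients under a finite-difference operator are almost all zero, which is exactly the cosparsity the analysis model exploits.

```python
import numpy as np

# Piecewise-constant signal: dense in itself, cosparse under finite differences
x = np.concatenate([np.full(30, 2.0), np.full(40, -1.0), np.full(30, 0.5)])
n = x.size

# Analysis operator Omega: first-order finite differences (n-1 rows)
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
z = Omega @ x                          # analysis coefficients

# Cosparsity = number of ZERO analysis coefficients (the analysis-model
# counterpart of sparsity, which counts nonzeros in a synthesis code)
cosparsity = int(np.sum(np.abs(z) < 1e-12))

assert np.count_nonzero(x) == n        # the signal itself is not sparse
assert cosparsity == (n - 1) - 2       # only 2 jumps, so 97 of 99 diffs vanish
```

In the synthesis model one would instead seek a sparse code z with x = Dz; the analysis model characterizes x by the rows of Omega it is orthogonal to, which is the distinction the paper develops.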

    A novel underdetermined source recovery algorithm based on k-sparse component analysis

    Sparse component analysis (SCA) is a popular method for addressing underdetermined blind source separation in array signal processing applications. We are motivated by applications in which the sources are densely sparse, i.e. the number of active sources is high and very close to the number of sensors. The separation performance of current underdetermined source recovery (USR) solutions, including the relaxation and greedy families, degrades as the mixing-system dimension decreases and the sparsity level (k) increases. In this paper, we present a k-SCA-based algorithm that is suitable for USR in low-dimensional mixing systems. Assuming the sources are at most (m−1)-sparse, where m is the number of mixtures, the proposed method is capable of recovering the sources from the mixtures, given the mixing matrix, using a subspace detection framework. Simulation results show that the proposed algorithm achieves better separation performance under k-SCA conditions than state-of-the-art USR algorithms such as basis pursuit, l1-norm minimization, smoothed l0, the focal underdetermined system solver (FOCUSS), and orthogonal matching pursuit.
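The subspace-detection idea can be sketched directly (toy dimensions, known mixing matrix, noiseless): when a source vector is at most (m−1)-sparse, each mixture sample lies in the span of the corresponding (m−1) mixing-matrix columns, so enumerating column subsets and testing the residual identifies the active set.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
m, n_src = 3, 5                       # 3 sensors, 5 sources (underdetermined)
A = rng.standard_normal((m, n_src))   # known mixing matrix

# An (m-1)-sparse source vector: 2 of 5 sources active (k = m - 1)
s = np.zeros(n_src)
active = (1, 3)
s[list(active)] = [1.2, -0.7]
x = A @ s                             # observed mixture sample

# Subspace detection: find the (m-1)-column subset whose span contains x
best, best_res, best_coef = None, np.inf, None
for combo in combinations(range(n_src), m - 1):
    sub = A[:, combo]
    coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
    res = np.linalg.norm(x - sub @ coef)
    if res < best_res:
        best, best_res, best_coef = combo, res, coef

assert best == active                 # the active subset has zero residual
s_hat = np.zeros(n_src)
s_hat[list(best)] = best_coef
assert np.allclose(s_hat, s)          # sources recovered exactly
```

With noise, the minimum-residual subset is chosen per sample (or per cluster of samples) instead of an exact zero-residual test, which is the regime the paper's algorithm targets.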