
    Matched subspace detection with hypothesis dependent noise power

    We consider the problem of detecting a subspace signal in white Gaussian noise when the noise power may differ between the null hypothesis, where it is assumed known, and the alternative hypothesis. This situation occurs when the presence of the signal of interest (SOI) triggers an increase in the noise power. It may also be relevant in the case of a mismatch between the actual SOI subspace and its presumed value, which results in a modelling error. We derive the generalized likelihood ratio test (GLRT) for the problem at hand and contrast it with the GLRT that assumes known and equal noise power under the two hypotheses. A performance analysis is carried out and the distributions of the two test statistics are derived. From this analysis, we discuss the differences between the two detectors and explain the improved performance of the new detector. Numerical simulations attest to the validity of the analysis.
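
    As a minimal sketch of the setup the abstract describes, in notation of my own choosing (the paper's parameterization may differ): let x be the N-dimensional observation and H a known basis of the SOI subspace, so that

        H_0: x = n,            n \sim N(0, \sigma_0^2 I_N)   (\sigma_0^2 known),
        H_1: x = H\theta + n,  n \sim N(0, \sigma_1^2 I_N)   (\sigma_1^2 unknown).

    Maximizing the H_1 likelihood yields \hat{\theta} = (H^T H)^{-1} H^T x and \hat{\sigma}_1^2 = \| P_H^\perp x \|^2 / N, with P_H^\perp = I - H (H^T H)^{-1} H^T, so that

        2 \ln \Lambda(x) = \|x\|^2 / \sigma_0^2 - N \ln( \hat{\sigma}_1^2 / \sigma_0^2 ) - N,

    whereas the GLRT assuming known and equal noise power under both hypotheses reduces to comparing \| (I - P_H^\perp) x \|^2 / \sigma_0^2 with a threshold.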

    Diffusion LMS for clustered multitask networks

    Recent research on distributed adaptive networks has intensively studied the case where the nodes collaboratively estimate a common parameter vector. However, many applications are multitask-oriented in the sense that multiple parameter vectors need to be inferred simultaneously. In this paper, we employ diffusion strategies to develop distributed algorithms that address clustered multitask problems by minimizing an appropriate mean-square error criterion with ℓ2-regularization. Some results on the mean-square stability and convergence of the algorithm are also provided. Simulations are conducted to illustrate the theoretical findings.
    Comment: 5 pages, 6 figures, submitted to ICASSP 201
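
    As an illustration only, here is a toy adapt-then-combine diffusion LMS with an ℓ2 coregularizer across clusters; the network, step sizes and all names below are my own construction, not the paper's exact algorithm.

        import numpy as np

        # Toy network: 6 nodes on a line, two clusters of 3; the cluster tasks
        # are similar but not identical (the multitask setting).
        rng = np.random.default_rng(0)
        M = 5
        clusters = [0, 0, 0, 1, 1, 1]
        neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3],
                     3: [2, 3, 4], 4: [3, 4, 5], 5: [4, 5]}
        w_star = [rng.standard_normal(M) for _ in range(2)]
        w_star[1] = w_star[0] + 0.1 * rng.standard_normal(M)

        mu, eta = 0.01, 1.0          # step size, l2 coregularization strength
        w = np.zeros((6, M))
        for _ in range(5000):
            psi = np.empty_like(w)
            for k in range(6):
                u = rng.standard_normal(M)                     # regressor
                d = u @ w_star[clusters[k]] + 0.1 * rng.standard_normal()
                grad = u * (d - u @ w[k])                      # LMS update direction
                # l2 coregularizer: pull w_k toward neighbors in other clusters
                reg = sum(w[l] - w[k]
                          for l in neighbors[k] if clusters[l] != clusters[k])
                psi[k] = w[k] + mu * grad + mu * eta * reg     # adapt step
            for k in range(6):                                 # combine step
                same = [l for l in neighbors[k] if clusters[l] == clusters[k]]
                w[k] = psi[same].mean(axis=0)
        print([float(np.linalg.norm(w[k] - w_star[clusters[k]])) for k in range(6)])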

    Multitask Diffusion Adaptation over Networks

    Adaptive networks are suitable for decentralized inference tasks, e.g., to monitor complex natural phenomena. Recent research has intensively studied distributed optimization problems in the case where the nodes have to estimate a single optimum parameter vector collaboratively. However, there are many important applications that are multitask-oriented in the sense that there are multiple optimum parameter vectors to be inferred simultaneously, in a collaborative manner, over the area covered by the network. In this paper, we employ diffusion strategies to develop distributed algorithms that address multitask problems by minimizing an appropriate mean-square error criterion with ℓ2-regularization. The stability and convergence of the algorithm in the mean and in the mean-square sense are analyzed. Simulations are conducted to verify the theoretical findings, and to illustrate how the distributed strategy can be used in several useful applications related to spectral sensing, target localization, and hyperspectral data unmixing.
    Comment: 29 pages, 11 figures, submitted for publication
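
    To make the criterion concrete, a common way to write such an ℓ2-coregularized mean-square cost at node k (my notation, not necessarily the paper's; C(k) is the cluster of node k, N_k its neighborhood, and \rho_{k\ell} \ge 0 are coupling weights) is

        J_k(w_k) = E |d_k - u_k^T w_k|^2 + \eta \sum_{\ell \in N_k \setminus C(k)} \rho_{k\ell} \| w_k - w_\ell \|^2,

    so that each node fits its own data while the regularizer promotes similarity between the models of neighboring clusters.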

    Nonlinear unmixing of hyperspectral images using a semiparametric model and spatial regularization

    Incorporating spatial information into hyperspectral unmixing procedures has been shown to have positive effects, due to the inherent spatial-spectral duality in hyperspectral scenes. Current work that considers spatial information focuses mainly on the linear mixing model. In this paper, we investigate a variational approach to incorporating spatial correlation into a nonlinear unmixing procedure. A nonlinear algorithm operating in reproducing kernel Hilbert spaces, associated with an ℓ1 local variation norm as the spatial regularizer, is derived. Experimental results, with both synthetic and real data, illustrate the effectiveness of the proposed scheme.
    Comment: 5 pages, 1 figure, submitted to ICASSP 201
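
    A rough sketch of such a variational formulation (my notation; the paper's semiparametric model may differ in its details): with r_n the spectrum of pixel n, M the endmember matrix, a_n the abundances, and \psi_n a nonlinear term living in an RKHS H, one minimizes

        \sum_n \| r_n - M a_n - \psi_n \|^2 + \lambda \sum_n \| \psi_n \|_H^2 + \nu \sum_n \sum_{m \in N(n)} \| a_n - a_m \|_1
        subject to a_n \ge 0 and 1^T a_n = 1,

    where the last term is the ℓ1 local variation penalty coupling the abundances of neighboring pixels.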

    Distributed image reconstruction for very large arrays in radio astronomy

    Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large number of receptors (up to millions) theoretically allows unprecedented imaging resolution. At the same time, the massive amount of samples makes the data transfer and computational loads (correlation and calibration) orders of magnitude too high for any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies which select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.
    Comment: Sensor Array and Multichannel Signal Processing Workshop (SAM), 2014 IEEE 8th, Jun 2014, Coruna, Spain. 201
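
    As a toy numerical illustration of the select-and-distribute idea (entirely my construction, not the paper's algorithm), each "node" below keeps only a random fraction of the rows of a linear measurement model y = A x + n, solves its own least-squares problem, and the local estimates are averaged; the MSE loss relative to the centralized solution can then be measured.

        import numpy as np

        rng = np.random.default_rng(1)
        N, M, P, f = 2000, 64, 8, 0.25    # samples, unknowns, nodes, kept fraction
        x = rng.standard_normal(M)        # ground truth "image"
        A = rng.standard_normal((N, M))   # stand-in for the measurement operator
        y = A @ x + 0.1 * rng.standard_normal(N)

        # Centralized reconstruction: all data, one least-squares solve.
        x_central = np.linalg.lstsq(A, y, rcond=None)[0]

        # Distributed reconstruction: each node selects and processes only a
        # fraction f of the data, then the local estimates are averaged.
        local = []
        for _ in range(P):
            idx = rng.choice(N, size=int(f * N), replace=False)
            local.append(np.linalg.lstsq(A[idx], y[idx], rcond=None)[0])
        x_dist = np.mean(local, axis=0)

        print("centralized MSE:", np.mean((x_central - x) ** 2))
        print("distributed MSE:", np.mean((x_dist - x) ** 2))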

    Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization

    In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a significant number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by non-differentiable terms that promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced. It is then specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of ℓ1-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
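
    To illustrate the forward-backward mechanics on a single node (a hedged sketch under my own simplified cost, not the paper's multi-node algorithm): for the sparsity-promoting term λ‖w − w_ref‖₁, where w_ref stands in for a neighboring cluster's model, the proximal operator has the closed form of soft-thresholding around w_ref.

        import numpy as np

        def soft_threshold(v, t):
            # prox of t * ||.||_1 : elementwise soft-thresholding
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        rng = np.random.default_rng(2)
        M, mu, lam = 10, 0.05, 0.02
        w_ref = rng.standard_normal(M)        # hypothetical neighbor model
        w_true = w_ref.copy()
        w_true[:3] += 1.0                     # tasks differ in a few entries only

        w = np.zeros(M)
        for _ in range(20000):
            u = rng.standard_normal(M)
            d = u @ w_true + 0.1 * rng.standard_normal()
            w = w + mu * u * (d - u @ w)                      # forward (gradient) step
            w = w_ref + soft_threshold(w - w_ref, mu * lam)   # backward (proximal) step

        print(np.round(w - w_ref, 2))  # approximately sparse: mostly first 3 entries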

    Distributed Deblurring of Large Images of Wide Field-Of-View

    Image deblurring is an economical way to reduce certain degradations (blur and noise) in acquired images. It has thus become an essential tool in high-resolution imaging for many applications, e.g., astronomy, microscopy or computational photography. In applications such as astronomy and satellite imaging, acquired images can be extremely large (up to gigapixels), covering a wide field-of-view and suffering from shift-variant blur. Most existing image deblurring techniques are designed and implemented to work efficiently on a centralized computing system with multiple processors and a shared memory. The largest image that can be handled is thus limited by the amount of physical memory available on the system. In this paper, we propose a distributed nonblind image deblurring algorithm in which several connected processing nodes (with reasonable computational resources) simultaneously process different portions of a large image while maintaining a certain coherency among them, to finally obtain a single crisp image. Unlike existing centralized techniques, image deblurring in a distributed fashion raises several issues. To tackle them, we consider approximations that trade off the quality of the deblurred image against the computational resources required to achieve it. The experimental results show that our algorithm produces images of quality similar to existing centralized techniques while allowing distribution, and is thus cost-effective for extremely large images.
    Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
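
    The tile-with-halo idea can be sketched as follows (my own toy construction: a shift-invariant Gaussian blur and a Wiener filter stand in for the paper's shift-variant setting; each tile plays the role of one node's portion of the image, and the overlapping halo is the coherency-preserving approximation).

        import numpy as np

        def gaussian_psf(shape, sigma):
            y, x = np.indices(shape)
            cy, cx = shape[0] // 2, shape[1] // 2
            p = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
            return p / p.sum()

        def wiener_deblur(tile, sigma, snr=500.0):
            # Frequency-domain Wiener filter on one tile; the periodic
            # boundary assumption is tolerable thanks to the halo.
            H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(tile.shape, sigma)))
            G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.real(np.fft.ifft2(np.fft.fft2(tile) * G))

        rng = np.random.default_rng(3)
        N, T, halo, sigma = 256, 64, 16, 2.0   # image, tile, halo sizes
        img = rng.random((N, N))               # stand-in for a sharp image
        otf = np.fft.fft2(np.fft.ifftshift(gaussian_psf((N, N), sigma)))
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

        padded = np.pad(blurred, halo, mode="reflect")
        out = np.zeros_like(blurred)
        for i in range(0, N, T):               # each tile = one node's job
            for j in range(0, N, T):
                block = padded[i:i + T + 2 * halo, j:j + T + 2 * halo]
                out[i:i + T, j:j + T] = wiener_deblur(block, sigma)[halo:halo + T,
                                                                    halo:halo + T]

        print("blurred   MSE:", np.mean((blurred - img) ** 2))
        print("deblurred MSE:", np.mean((out - img) ** 2))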