
    Single- and Multiple-Shell Uniform Sampling Schemes for Diffusion MRI Using Spherical Codes

    In diffusion MRI (dMRI), a good sampling scheme is important for efficient acquisition and robust reconstruction. The diffusion-weighted signal is normally acquired on single or multiple shells in q-space. Signal samples are typically distributed uniformly on the different shells so that they are invariant to the orientation of structures within tissue, or to the laboratory coordinate frame. The Electrostatic Energy Minimization (EEM) method, originally proposed for single-shell sampling schemes in dMRI, was recently generalized to multi-shell schemes, called Generalized EEM (GEEM). GEEM has been used successfully in the Human Connectome Project (HCP). However, EEM does not directly address the goal of optimal sampling, i.e., achieving a large angular separation between sampling points. In this paper, we propose a more natural formulation, called Spherical Code (SC), to directly maximize the minimal angle between different samples in single or multiple shells. We consider not only continuous problems, to design single- or multiple-shell sampling schemes, but also discrete problems, to uniformly extract sub-sampled schemes from an existing single- or multiple-shell scheme and to order samples in an existing scheme. We propose five algorithms to solve the above problems: an incremental SC (ISC) method, a sophisticated greedy algorithm called Iterative Maximum Overlap Construction (IMOC), a 1-Opt greedy method, a Mixed Integer Linear Programming (MILP) method, and a Constrained Non-Linear Optimization (CNLO) method. To our knowledge, this is the first work to use the SC formulation for single- or multiple-shell sampling schemes in dMRI. Experimental results indicate that the SC methods obtain larger angular separation and better rotational invariance than the state-of-the-art EEM and GEEM. The related codes and a tutorial have been released in DMRITool.
    Comment: Accepted by IEEE Transactions on Medical Imaging. Codes have been released in DMRITool: https://diffusionmritool.github.io/tutorial_qspacesampling.htm
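    The paper's actual implementations (ISC, IMOC, 1-Opt, MILP, CNLO) are released in DMRITool; purely to illustrate the SC objective of maximizing the minimal angle between sampling directions, here is a minimal sketch of an incremental greedy construction in the spirit of ISC. The candidate-pool size, the random initialization, and the use of |cos| to encode the antipodal symmetry of diffusion directions are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def incremental_sc(n_samples, n_candidates=20000, seed=0):
    """Greedy incremental spherical code on a single shell: repeatedly add
    the candidate direction whose minimal angle to the already-chosen
    directions is largest. |cos| treats q and -q as the same direction."""
    rng = np.random.default_rng(seed)
    cand = rng.normal(size=(n_candidates, 3))
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)
    chosen = [cand[0]]
    for _ in range(n_samples - 1):
        P = np.stack(chosen)
        # minimal angle from every candidate to the chosen set
        nearest = np.arccos(np.clip(np.abs(cand @ P.T), 0.0, 1.0)).min(axis=1)
        chosen.append(cand[np.argmax(nearest)])
    return np.stack(chosen)

dirs = incremental_sc(30)                     # 30 single-shell directions
G = np.abs(dirs @ dirs.T)
np.fill_diagonal(G, 0.0)
print("minimal angular separation (deg):", np.degrees(np.arccos(G.max())))
```

    A multi-shell variant would additionally score the combined scheme across shells; the paper handles that directly in its SC formulation.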

    Distributed Unmixing of Hyperspectral Data With Sparsity Constraint

    Spectral unmixing (SU) is a data-processing problem in hyperspectral remote sensing. The significant challenge in SU is how to identify the endmembers and their weights accurately. To estimate the signature and fractional abundance matrices in this blind problem, nonnegative matrix factorization (NMF) and its extensions are widely used. One of the constraints added to NMF is a sparsity constraint, regularized by the $L_{1/2}$ norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, and each pixel in the hyperspectral image is considered a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results, based on defined performance metrics, illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, over distributed unmixing without the sparsity constraint at SNR = 25 dB.
    Comment: 6 pages, conference paper
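    The abstract does not reproduce the update equations. For context, the sketch below shows the centralized $L_{1/2}$-sparsity NMF baseline that such distributed schemes build on, using the well-known multiplicative updates for $\min_{A,S \ge 0} \|Y - AS\|_F^2 + \lambda \|S\|_{1/2}$; the paper itself derives diffusion-LMS updates over a network in which each pixel is a node, which this sketch does not attempt to reproduce. All parameter names and values are illustrative.

```python
import numpy as np

def l_half_nmf(Y, p, lam=0.1, n_iter=300, eps=1e-9, seed=0):
    """Centralized L_{1/2}-sparsity NMF baseline for unmixing Y ~ A @ S.
    Y: (bands, pixels) data, A: (bands, p) endmember signatures,
    S: (p, pixels) fractional abundances."""
    rng = np.random.default_rng(seed)
    A = rng.random((Y.shape[0], p)) + eps
    S = rng.random((p, Y.shape[1])) + eps
    for _ in range(n_iter):
        A *= (Y @ S.T) / (A @ (S @ S.T) + eps)        # signature update
        S = np.maximum(S, eps)                        # keep S**-0.5 finite
        # abundance update; the extra term comes from the lam*||S||_{1/2} penalty
        S *= (A.T @ Y) / (A.T @ A @ S + 0.5 * lam * S ** -0.5 + eps)
        S /= S.sum(axis=0, keepdims=True) + eps       # sum-to-one abundances
    return A, S
```

    The sum-to-one projection after each multiplicative step is a common heuristic for the abundance constraint; it is an assumption here rather than the paper's choice.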

    Distributed Learning for Stochastic Generalized Nash Equilibrium Problems

    This work examines a stochastic formulation of the generalized Nash equilibrium problem (GNEP) where agents are subject to randomness in the environment of unknown statistical distribution. We focus on fully-distributed online learning by agents and employ penalized individual cost functions to deal with coupled constraints. Three stochastic gradient strategies are developed with constant step-sizes. We allow the agents to use heterogeneous step-sizes and show that the penalty solution is able to approach the Nash equilibrium in a stable manner within $O(\mu_\text{max})$, for a small maximum step-size $\mu_\text{max}$ and sufficiently large penalty parameters. The operation of the algorithm is illustrated by considering the network Cournot competition problem.
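    Since the abstract points to the network Cournot competition problem, a toy single-market Cournot game makes the penalized stochastic-gradient idea concrete: each firm takes constant-step-size stochastic gradient steps on its own cost plus a quadratic penalty for the shared capacity constraint, with heterogeneous step-sizes $\mu_k$. The single shared market, all constants, and the quadratic penalty form are assumptions for illustration; the paper's three strategies and its network setting are more general.

```python
import numpy as np

# Toy penalized stochastic-gradient play for a Cournot game (illustrative).
rng = np.random.default_rng(0)
K = 4                                      # number of firms (agents)
c = np.array([1.0, 1.2, 0.8, 1.5])         # marginal costs
a, b = 10.0, 1.0                           # random demand: price = a_t - b*sum(x)
cap, rho = 6.0, 5.0                        # coupled capacity constraint, penalty
mu = np.array([0.01, 0.02, 0.015, 0.01])   # heterogeneous constant step-sizes
x = np.zeros(K)                            # production quantities

for t in range(20000):
    a_t = a + rng.normal(scale=0.5)        # stochastic demand realization
    total = x.sum()
    # gradient of agent k's penalized cost w.r.t. its own quantity x_k, where
    # J_k = -(a_t - b*total)*x_k + c_k*x_k and penalty = rho*max(total-cap, 0)^2
    grad = -(a_t - b * total) + b * x + c
    grad += 2.0 * rho * max(total - cap, 0.0)
    x = np.maximum(x - mu * grad, 0.0)     # projected (x >= 0) gradient step

print("approximate penalized Nash quantities:", x.round(3))
```

    With these constants the unconstrained Nash total output exceeds the capacity, so the penalty term is active and the iterates settle near the penalized equilibrium rather than the unconstrained one.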

    Distributed Coupled Multi-Agent Stochastic Optimization

    This work develops effective distributed strategies for the solution of constrained multi-agent stochastic optimization problems with coupled parameters across the agents. In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally. Problems of this type arise in several applications, most notably in disease propagation models, minimum-cost flow problems, distributed control formulations, and distributed power system monitoring. This work focuses on stochastic settings, where a stochastic risk function is associated with each agent and the objective is to seek the minimizer of the aggregate sum of all risks subject to a set of constraints. Agents are not aware of the statistical distribution of the data and, therefore, can only rely on stochastic approximations in their learning strategies. We derive an effective distributed learning strategy that is able to track drifts in the underlying parameter model. A detailed performance and stability analysis is carried out, showing that the resulting coupled diffusion strategy converges at a linear rate to an $O(\mu)$-neighborhood of the true penalized optimizer.
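    As a loose illustration of the coupled setting only (unconstrained, quadratic risks), the sketch below runs an adapt-then-combine style diffusion recursion on overlapping blocks of a shared parameter vector: each agent takes LMS-type stochastic-gradient steps on the entries that influence it, and entries shared by several agents are then averaged across those agents. The index sets, quadratic risks, and uniform averaging weights are assumptions, not the paper's exact coupled diffusion strategy or its constraint handling.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                                        # global model dimension
w_true = rng.normal(size=M)                  # unknown global model
blocks = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3])]  # agents' entries
w = [np.zeros(len(I)) for I in blocks]       # each agent's local sub-vector
mu = 0.05                                    # constant step-size

for t in range(5000):
    # adapt: LMS-type stochastic-gradient step on each agent's local risk
    for k, I in enumerate(blocks):
        h = rng.normal(size=len(I))                  # random regressor
        d = h @ w_true[I] + 0.1 * rng.normal()       # noisy local measurement
        w[k] += mu * (d - h @ w[k]) * h
    # combine: average every global entry over the agents that share it
    s, cnt = np.zeros(M), np.zeros(M)
    for k, I in enumerate(blocks):
        s[I] += w[k]
        cnt[I] += 1
    for k, I in enumerate(blocks):
        w[k] = (s / cnt)[I]

print("true:", w_true.round(3))
print("est.:", (s / cnt).round(3))
```

    The constant step-size is what lets such recursions track drifts in $w_{\text{true}}$ at the cost of the residual $O(\mu)$-neighborhood mentioned in the abstract.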