    Single- and Multiple-Shell Uniform Sampling Schemes for Diffusion MRI Using Spherical Codes

    In diffusion MRI (dMRI), a good sampling scheme is important for efficient acquisition and robust reconstruction. The diffusion weighted signal is normally acquired on single or multiple shells in q-space. Signal samples are typically distributed uniformly on the different shells to make them invariant to the orientation of structures within tissue, or to the laboratory coordinate frame. The Electrostatic Energy Minimization (EEM) method, originally proposed for single shell sampling schemes in dMRI, was recently generalized to multi-shell schemes, called Generalized EEM (GEEM). GEEM has been successfully used in the Human Connectome Project (HCP). However, EEM does not directly address the goal of optimal sampling, i.e., achieving large angular separation between sampling points. In this paper, we propose a more natural formulation, called Spherical Code (SC), to directly maximize the minimal angle between different samples in single or multiple shells. We consider not only continuous problems to design single or multiple shell sampling schemes, but also discrete problems to uniformly extract sub-sampled schemes from an existing single or multiple shell scheme, and to order samples in an existing scheme. We propose five algorithms to solve the above problems: an incremental SC (ISC), a sophisticated greedy algorithm called Iterative Maximum Overlap Construction (IMOC), a 1-Opt greedy method, a Mixed Integer Linear Programming (MILP) method, and a Constrained Non-Linear Optimization (CNLO) method. To our knowledge, this is the first work to use the SC formulation for single or multiple shell sampling schemes in dMRI. Experimental results indicate that the SC methods obtain larger angular separation and better rotational invariance than the state-of-the-art EEM and GEEM. The related codes and a tutorial have been released in DMRITool.
    Comment: Accepted by IEEE Transactions on Medical Imaging. Codes have been released in DMRITool: https://diffusionmritool.github.io/tutorial_qspacesampling.htm
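    The core SC objective, maximizing the minimal angle between samples, can be illustrated with a short greedy sketch. The Python snippet below is a minimal, hypothetical illustration of the incremental idea on a single shell, with candidate directions drawn from a random pool; it is not the paper's ISC, IMOC, MILP, or CNLO implementation (those are released in DMRITool).

    ```python
    import numpy as np

    def incremental_scheme(n_samples, n_candidates=10000, seed=0):
        """Greedy incremental construction of a single-shell scheme: at each
        step, add the candidate direction whose minimal angular separation
        from the already-chosen directions is largest. Antipodal directions
        are treated as identical (via |cos|), since q and -q carry the same
        diffusion information. Illustrative sketch only, not the paper's ISC."""
        rng = np.random.default_rng(seed)
        cand = rng.normal(size=(n_candidates, 3))
        cand /= np.linalg.norm(cand, axis=1, keepdims=True)
        chosen = [cand[0]]
        for _ in range(n_samples - 1):
            P = np.asarray(chosen)                      # (k, 3) chosen directions
            worst = np.abs(cand @ P.T).max(axis=1)      # closest neighbor, as |cos|
            chosen.append(cand[int(np.argmin(worst))])  # smallest |cos| = largest angle
        return np.asarray(chosen)

    scheme = incremental_scheme(30)
    ```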

    Maximum block improvement and polynomial optimization

    Two Algorithms for Orthogonal Nonnegative Matrix Factorization with Application to Clustering

    Approximate matrix factorization techniques with both nonnegativity and orthogonality constraints, referred to as orthogonal nonnegative matrix factorization (ONMF), have been recently introduced and shown to work remarkably well for clustering tasks such as document classification. In this paper, we introduce two new methods to solve ONMF. First, we show a mathematical equivalence between ONMF and a weighted variant of spherical k-means, from which we derive our first method, a simple EM-like algorithm. This also allows us to determine when ONMF should be preferred to k-means and spherical k-means. Our second method is based on an augmented Lagrangian approach. Standard ONMF algorithms typically enforce nonnegativity for their iterates while trying to achieve orthogonality at the limit (e.g., using a proper penalization term or a suitably chosen search direction). Our method works the opposite way: orthogonality is strictly imposed at each step while nonnegativity is asymptotically obtained, using a quadratic penalty. Finally, we show that the two proposed approaches compare favorably with standard ONMF algorithms on synthetic, text, and image data sets.
    Comment: 17 pages, 8 figures. New numerical experiments (document and synthetic data sets)
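    The equivalence with spherical k-means suggests a simple EM-like loop: assign each data column to its closest unit-norm centroid by cosine similarity, then recompute the centroids. The sketch below is an illustrative, unweighted variant under that reading, not the authors' exact algorithm; the single nonzero per column of H is what makes the rows of H orthogonal.

    ```python
    import numpy as np

    def onmf_spherical_kmeans(X, r, n_iter=50, seed=0):
        """Sketch of the ONMF <-> spherical k-means connection: factor the
        nonnegative data matrix X (m x n, columns = samples) as X ~ W H,
        where the columns of W are unit-norm centroids and each column of H
        has a single nonzero entry (hard cluster assignment). Illustrative
        unweighted EM-like loop, not the paper's exact algorithm."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, r)) + 1e-9
        W /= np.linalg.norm(W, axis=0)
        for _ in range(n_iter):
            # E-step: assign each column to the closest centroid (cosine similarity)
            labels = np.argmax(W.T @ X, axis=0)
            # M-step: recompute each centroid as the normalized sum of its members
            for k in range(r):
                members = X[:, labels == k]
                if members.size:
                    c = members.sum(axis=1)
                    W[:, k] = c / (np.linalg.norm(c) + 1e-12)
        # Build H: one nonzero per column, scaled by the projection onto the centroid
        H = np.zeros((r, n))
        H[labels, np.arange(n)] = np.einsum('ij,ij->j', W[:, labels], X)
        return W, H

    X = np.abs(np.random.default_rng(1).normal(size=(100, 40)))
    W, H = onmf_spherical_kmeans(X, r=4)
    ```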

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    The hypervolume indicator is an increasingly popular set measure to compare the quality of two Pareto sets. The basic ingredient of most hypervolume indicator based optimization algorithms is the calculation of the hypervolume contribution of single solutions regarding a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most (1 + ε) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that, for arbitrarily given ε, ÎŽ > 0, it calculates a solution with contribution at most (1 + ε) times the minimal contribution with probability at least 1 − ÎŽ. Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions) within a few seconds.
    Comment: 22 pages, to appear in Theoretical Computer Science
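    The approximation side of the paper rests on the fact that the contribution of a single point can be estimated by Monte Carlo sampling inside the box it dominates. The sketch below shows that basic estimator for a minimization problem; the paper's algorithm additionally adapts the number of samples per point to obtain the stated (1 + ε, ÎŽ) guarantee, which this plain estimator does not provide on its own.

    ```python
    import numpy as np

    def mc_contribution(point, others, ref, n_samples=100000, seed=0):
        """Monte Carlo estimate of the hypervolume contribution of `point`
        (minimization convention; the reference point `ref` is dominated by
        all solutions). Samples uniformly inside the box [point, ref] and
        counts the fraction not covered by any other solution. Illustrative
        sketch only, not the paper's adaptive algorithm."""
        rng = np.random.default_rng(seed)
        point, others, ref = map(np.asarray, (point, others, ref))
        box_volume = np.prod(ref - point)
        samples = rng.uniform(point, ref, size=(n_samples, len(point)))
        # A sample is covered by another solution q iff q <= sample componentwise
        covered = np.zeros(n_samples, dtype=bool)
        for q in others:
            covered |= np.all(samples >= q, axis=1)
        return box_volume * np.count_nonzero(~covered) / n_samples

    pts = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
    c = mc_contribution(pts[1], np.delete(pts, 1, axis=0), ref=[5.0, 5.0])  # ~4.0
    ```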

    A trivariate interpolation algorithm using a cube-partition searching procedure

    In this paper, we propose a fast algorithm for trivariate interpolation based on the partition of unity method, which constructs a global interpolant by blending local radial basis function (RBF) interpolants with locally supported weight functions. The partition of unity algorithm is efficiently implemented and optimized by coupling the method with an effective cube-partition searching procedure. More precisely, we construct a cube structure that partitions the domain and strictly depends on the size of its subdomains, so that the new searching procedure and, accordingly, the resulting algorithm can efficiently deal with a large number of nodes. Complexity analysis and numerical experiments show the high efficiency and accuracy of the proposed interpolation algorithm.
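    The cube-partition searching idea can be sketched as a spatial hash: nodes are binned into cubes whose side matches the subdomain radius, so gathering the nodes needed by a local RBF interpolant only requires scanning a cube and its 26 neighbors instead of the whole node set. The Python sketch below uses hypothetical helper names and is not the paper's implementation; the partition of unity blend itself is omitted.

    ```python
    import numpy as np
    from collections import defaultdict

    def build_cube_index(nodes, cube_size):
        """Hash each node into an integer cube of side `cube_size`, so that
        all nodes near a query point can be found by scanning only the
        neighboring cubes. Illustrative sketch of the cube-partition idea."""
        index = defaultdict(list)
        for i, p in enumerate(nodes):
            index[tuple((p // cube_size).astype(int))].append(i)
        return index

    def nodes_in_ball(query, radius, nodes, index, cube_size):
        """Return indices of nodes within `radius` of `query`, assuming
        radius <= cube_size so one ring of neighboring cubes suffices."""
        c = (query // cube_size).astype(int)
        hits = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for i in index.get((c[0] + dx, c[1] + dy, c[2] + dz), []):
                        if np.linalg.norm(nodes[i] - query) <= radius:
                            hits.append(i)
        return hits

    rng = np.random.default_rng(1)
    pts = rng.random((5000, 3))
    idx = build_cube_index(pts, cube_size=0.05)
    near = nodes_in_ball(np.array([0.5, 0.5, 0.5]), 0.05, pts, idx, 0.05)
    ```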