
    Computing Optimal Experimental Designs via Interior Point Method

    In this paper, we study optimal experimental design problems with a broad class of smooth convex optimality criteria, including the classical A-, D- and pth mean criteria. In particular, we propose an interior point (IP) method for solving them and establish its global convergence. Furthermore, by exploiting the structure of the Hessian matrix of the aforementioned optimality criteria, we derive an explicit formula for computing its rank. Using this result, we then show that the Newton direction arising in the IP method can be computed efficiently via the Sherman-Morrison-Woodbury formula when the size of the moment matrix is small relative to the sample size. Finally, we compare our IP method with the widely used multiplicative algorithm introduced by Silvey et al. [29]. The computational results show that the IP method generally outperforms the multiplicative algorithm in both speed and solution quality.
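The Sherman-Morrison-Woodbury trick mentioned in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's actual Newton system: it assumes the matrix has the form D + U Uᵀ with D diagonal (n × n) and U tall-and-thin (n × k, k ≪ n), in which case the solve reduces to a small k × k system.

```python
import numpy as np

def smw_solve(d, U, b):
    """Solve (diag(d) + U @ U.T) x = b via Sherman-Morrison-Woodbury:
    (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1}
    """
    Dinv_b = b / d                      # D^{-1} b, O(n)
    Dinv_U = U / d[:, None]             # D^{-1} U, O(nk)
    k = U.shape[1]
    capacitance = np.eye(k) + U.T @ Dinv_U   # small k x k matrix
    return Dinv_b - Dinv_U @ np.linalg.solve(capacitance, U.T @ Dinv_b)

rng = np.random.default_rng(0)
n, k = 500, 5
d = rng.uniform(1.0, 2.0, n)            # positive diagonal
U = rng.standard_normal((n, k))
b = rng.standard_normal(n)

x = smw_solve(d, U, b)
x_direct = np.linalg.solve(np.diag(d) + U @ U.T, b)
assert np.allclose(x, x_direct)
```

The point is the cost: the direct solve is O(n³), while the SMW route costs O(nk² + k³), which is the source of the speedup when the low-rank dimension k is small relative to n.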

    Separability criteria via sets of mutually unbiased measurements

    Mutually unbiased measurements (MUMs) are generalized from the concept of mutually unbiased bases (MUBs) and include the complete set of MUBs as a special case, but they are superior to MUBs in that they need not be rank-one projectors. We investigate entanglement detection using sets of MUMs and derive separability criteria for d-dimensional multipartite systems, as well as for arbitrary high-dimensional bipartite and multipartite systems. These criteria can be implemented experimentally to detect entanglement of unknown quantum states. Comment: 10 pages; published in Scientific Reports, 2015, online. arXiv admin note: text overlap with arXiv:1407.0314 by other authors.
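For context, the defining property of the mutually unbiased bases that MUMs generalize can be stated compactly. This is the standard textbook definition, not a formula taken from the paper above:

```latex
% Two orthonormal bases \{|e_i\rangle\} and \{|f_j\rangle\} of \mathbb{C}^d
% are mutually unbiased when every cross-overlap has the same magnitude:
\left| \langle e_i | f_j \rangle \right|^2 \;=\; \frac{1}{d},
\qquad i, j = 1, \dots, d .
```

A measurement in one basis thus reveals nothing about the outcome statistics of a measurement in the other, which is what makes sets of such bases (and their MUM generalizations) useful probes for entanglement detection.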

    Penalty methods for a class of non-Lipschitz optimization problems

    We consider a class of constrained optimization problems with a possibly nonconvex non-Lipschitz objective and a convex feasible set that is the intersection of a polyhedron and a possibly degenerate ellipsoid. Such problems have a wide range of applications in data science, where the objective is used to induce sparsity in the solutions while the constraint set models the noise tolerance and incorporates other prior information for data fitting. A common approach to solving this class of constrained optimization problems is the penalty method. However, there is little theory on exact penalization for problems with nonconvex and non-Lipschitz objective functions. In this paper, we study the existence of exact penalty parameters for local minimizers, stationary points and ϵ-minimizers under suitable assumptions. Moreover, we discuss a penalty method whose subproblems are solved via a nonmonotone proximal gradient method with a suitable update scheme for the penalty parameters, and prove convergence of the algorithm to a KKT point of the constrained problem. Preliminary numerical results demonstrate the efficiency of the penalty method for finding sparse solutions of underdetermined linear systems.
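The proximal gradient machinery behind the subproblem solver can be sketched on a simpler surrogate. This is an illustrative simplification, not the paper's method: instead of a nonconvex non-Lipschitz objective with a nonmonotone line search, it runs the classical (monotone) proximal gradient iteration, ISTA, on the ℓ1-regularized least-squares problem min ½‖Ax − b‖² + λ‖x‖₁, which conveys how the prox step produces exactly sparse iterates for an underdetermined system.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrinks toward zero, exact zeros inside [-t, t]."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_l1(A, b, lam, n_iter=500):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (fixed step 1/L)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of the least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))        # underdetermined: 20 equations, 50 unknowns
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]   # sparse ground truth
b = A @ x_true

x_hat = prox_grad_l1(A, b, lam=0.01)
```

Because the soft-threshold sets small coordinates exactly to zero, the iterates are genuinely sparse rather than merely small, which is the behavior the nonconvex non-Lipschitz penalties in the paper are designed to strengthen.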

    Diffraction of a pulse by a three-dimensional corner

    Three-dimensional diffraction of sonic booms by corners of structures.