
    Characterizing Real-Valued Multivariate Complex Polynomials and Their Symmetric Tensor Representations

    In this paper we study multivariate polynomial functions in complex variables and their associated symmetric tensor representations. The focus is on finding conditions under which such complex polynomials/tensors always take real values. We introduce the notions of symmetric conjugate forms and general conjugate forms, and present characteristic conditions for such complex polynomials to be real-valued. As applications of our results, we discuss the relation between nonnegative polynomials and sums of squares in the context of complex polynomials. Moreover, new notions of eigenvalues/eigenvectors for complex tensors are introduced, extending properties of Hermitian matrices. Finally, we discuss an important property of symmetric tensors: the largest absolute value of an eigenvalue of a symmetric real tensor equals its largest singular value, a result known as Banach's theorem. We show that a similar result holds in the complex case as well.
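
    For intuition, here is the simplest (degree-(1,1)) instance of a conjugate form together with its real-valuedness condition; this is a standard linear-algebra fact used as an illustration, not the paper's general tensor characterization:

        f(z) = \sum_{i,j=1}^{n} a_{ij} \bar{z}_i z_j = z^H A z  is real for every z in C^n
            <=>  A = A^H  (A Hermitian),
        since f(z) - \overline{f(z)} = z^H (A - A^H) z must vanish identically.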

    Approximation methods for complex polynomial optimization


    On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography

    This paper deals with improvements to the contrast source inversion method, which is widely used in microwave tomography. First, the method is reviewed, and weaknesses of both the criterion form and the optimization strategy are underlined. Then, two new algorithms are proposed. Both are based on the same criterion, similar to but more robust than the one used in contrast source inversion. The first technique keeps the main characteristics of the contrast source inversion optimization scheme but is based on a better exploitation of the conjugate gradient algorithm. The second technique is based on a preconditioned conjugate gradient algorithm and performs simultaneous updates of sets of unknowns that are normally processed sequentially. Both techniques are shown to be more efficient than the original contrast source inversion.
    Comment: 12 pages, 12 figures, 5 tables
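
    As background for the second technique, the sketch below shows a generic preconditioned conjugate gradient routine of the kind it builds on. This is a minimal illustration assuming a symmetric positive definite system A x = b and an explicit approximate inverse M_inv as the preconditioner; the paper's actual criterion and simultaneous-update structure are not reproduced here.

        import numpy as np

        def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
            """Generic preconditioned conjugate gradient for A x = b,
            with A symmetric positive definite and M_inv an approximate
            inverse of A acting as the preconditioner."""
            x = np.zeros(b.shape[0]) if x0 is None else x0.copy()
            r = b - A @ x                  # residual
            z = M_inv @ r                  # preconditioned residual
            p = z.copy()                   # search direction
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)      # exact step length along p
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv @ r
                rz_new = r @ z
                beta = rz_new / rz         # standard PCG direction update
                p = z + beta * p
                rz = rz_new
            return x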

    Radio interferometric gain calibration as a complex optimization problem

    Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regimes. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms, such as StefCal and peeling, can be understood as special cases of this, and place them in the context of the general formalism. Finally, we present an implementation and some applied results of CohJones, another specialized direction-dependent calibration algorithm derived from the formalism.
    Comment: 18 pages; 6 figures; accepted by MNRAS
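
    The Wirtinger derivatives underpinning this formalism are standard: for a function f of z = x + iy, treating z and \bar{z} as formally independent variables,

        \partial f / \partial z       = (1/2) (\partial f / \partial x - i \, \partial f / \partial y),
        \partial f / \partial \bar{z} = (1/2) (\partial f / \partial x + i \, \partial f / \partial y).

    For a real-valued cost f, the steepest-ascent direction with respect to the complex parameters is 2 \partial f / \partial \bar{z}; the full-complex Jacobian described in the abstract generalizes this to vector-valued residuals.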

    Variational Inference in Nonconjugate Models

    Mean-field variational methods are widely used for approximate posterior inference in many probabilistic models. In a typical application, mean-field methods approximately compute the posterior with a coordinate-ascent optimization algorithm. When the model is conditionally conjugate, the coordinate updates are easily derived and in closed form. However, many models of interest, such as the correlated topic model and Bayesian logistic regression, are nonconjugate. In these models, mean-field methods cannot be directly applied, and practitioners have had to develop variational algorithms on a case-by-case basis. In this paper, we develop two generic methods for nonconjugate models: Laplace variational inference and delta method variational inference. Our methods have several advantages: they yield easily derived variational algorithms for a wide class of nonconjugate models; they extend and unify some of the existing algorithms that have been derived for specific models; and they work well on real-world datasets. We studied our methods on the correlated topic model, Bayesian logistic regression, and hierarchical Bayesian logistic regression.
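
    To make the first method concrete, here is a minimal sketch of the Laplace (Gaussian) approximation that Laplace variational inference builds on, applied to Bayesian logistic regression. The Gaussian prior variance prior_var and the {-1, +1} label convention are assumptions of this example; it shows the basic mode-plus-curvature approximation, not the authors' full coordinate-ascent scheme.

        import numpy as np
        from scipy.optimize import minimize

        def laplace_logistic(X, y, prior_var=1.0):
            """Laplace approximation for Bayesian logistic regression:
            find the MAP mode, then use the inverse Hessian of the
            negative log posterior as the Gaussian covariance.
            X: (n, d) design matrix; y: (n,) labels in {-1, +1}."""
            n, d = X.shape

            def neg_log_post(w):
                margins = y * (X @ w)
                return np.logaddexp(0.0, -margins).sum() + w @ w / (2 * prior_var)

            def grad(w):
                s = 1.0 / (1.0 + np.exp(y * (X @ w)))   # sigma(-margin)
                return -(X.T @ (y * s)) + w / prior_var

            res = minimize(neg_log_post, np.zeros(d), jac=grad, method="L-BFGS-B")
            w_map = res.x
            p = 1.0 / (1.0 + np.exp(-(X @ w_map)))      # predicted probabilities
            H = (X.T * (p * (1.0 - p))) @ X + np.eye(d) / prior_var
            return w_map, np.linalg.inv(H)              # mean and covariance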

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs; their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged out of FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information processing era.
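
    As a toy illustration of gradient-free (metaheuristic) weight optimization, the sketch below trains a one-hidden-layer FNN with a (1+1) evolution strategy. The architecture, mutation scale sigma, and squared-error loss are assumptions made for the example, not a specific method from the review.

        import numpy as np

        def train_fnn_es(X, y, hidden=8, iters=2000, sigma=0.1, seed=0):
            """(1+1) evolution strategy over the flattened weights of a
            one-hidden-layer tanh network for regression.
            X: (n, d) inputs; y: (n,) targets."""
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            theta = rng.normal(0.0, 0.5, size=d * hidden + hidden)

            def loss(t):
                W = t[:d * hidden].reshape(d, hidden)   # input-to-hidden weights
                v = t[d * hidden:]                      # hidden-to-output weights
                return np.mean((np.tanh(X @ W) @ v - y) ** 2)

            best = loss(theta)
            for _ in range(iters):
                cand = theta + rng.normal(0.0, sigma, size=theta.shape)  # mutate
                f = loss(cand)
                if f <= best:                           # keep improving offspring
                    theta, best = cand, f
            return theta, best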

    Simultaneously Sparse Solutions to Linear Inverse Problems with Multiple System Matrices and a Single Observation Vector

    A linear inverse problem is proposed that requires the determination of multiple unknown signal vectors. Each unknown vector passes through a different system matrix, and the results are added to yield a single observation vector. Given the matrices and the lone observation, the objective is to find a simultaneously sparse set of unknown vectors that solves the system. We refer to this as the multiple-system single-output (MSSO) simultaneous sparsity problem. This manuscript contrasts the MSSO problem with other simultaneous sparsity problems and conducts a thorough initial exploration of algorithms with which to solve it. Seven algorithms are formulated that approximately solve this NP-hard problem. Three greedy techniques are developed (matching pursuit, orthogonal matching pursuit, and least squares matching pursuit), along with four methods based on a convex relaxation (iteratively reweighted least squares, two forms of iterative shrinkage, and formulation as a second-order cone program). The algorithms are evaluated across three experiments: the first and second involve sparsity profile recovery in noiseless and noisy scenarios, respectively, while the third deals with magnetic resonance imaging radio-frequency excitation pulse design.
    Comment: 36 pages; manuscript unchanged from July 21, 2008, except for updated references; content appears in September 2008 PhD thesis
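
    A rough sketch of one greedy, OMP-style approach to the MSSO setup is given below. It assumes the unknown vectors share a common sparse support, scores candidate indices by summing squared column-residual correlations across the K systems, and jointly refits all selected coefficients by least squares; the manuscript's seven algorithms use their own specific selection criteria, and msso_omp is an illustrative name.

        import numpy as np

        def msso_omp(A_list, b, n_nonzero):
            """Greedy sketch for b = sum_k A_k x_k with the x_k sharing
            a sparse support. A_list: list of (m, n) system matrices;
            b: (m,) observation vector."""
            K, n = len(A_list), A_list[0].shape[1]
            support, r = [], b.copy()
            for _ in range(n_nonzero):
                # Score each index by its summed squared correlation with
                # the residual across all systems (one simple rule).
                scores = sum(np.abs(A.conj().T @ r) ** 2 for A in A_list)
                scores[support] = -np.inf               # never reselect
                support.append(int(np.argmax(scores)))
                # Jointly refit all coefficients on the current support.
                Phi = np.hstack([A[:, support] for A in A_list])
                coef, *_ = np.linalg.lstsq(Phi, b, rcond=None)
                r = b - Phi @ coef
            X = np.zeros((K, n), dtype=coef.dtype)      # unpack per system
            s = len(support)
            for k in range(K):
                X[k, support] = coef[k * s:(k + 1) * s]
            return X, support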