
    On inequalities for normalized Schur functions

    We prove a conjecture of Cuttler et al. (2011) [A. Cuttler, C. Greene, and M. Skandera; \emph{Inequalities for symmetric means}. European J. Combinatorics, 32 (2011), 745--761] on the monotonicity of \emph{normalized Schur functions} under the usual (dominance) partial order on partitions. We believe that our proof technique may be helpful in obtaining similar inequalities for other symmetric functions. Comment: This version fixes the error of the previous one.
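    For concreteness, here is the monotonicity statement as we read it (a sketch of our understanding; $\trianglelefteq$ denotes the dominance order on partitions of the same size, and $s_\lambda$ is the Schur polynomial in $n$ variables):

    \[
      \lambda \trianglelefteq \mu
      \quad\Longrightarrow\quad
      \frac{s_\lambda(x_1,\ldots,x_n)}{s_\lambda(1,\ldots,1)}
      \;\le\;
      \frac{s_\mu(x_1,\ldots,x_n)}{s_\mu(1,\ldots,1)}
      \qquad \text{for all } x \in \mathbb{R}_+^n.
    \]

    A minimal check with $n = 2$: $(1,1) \trianglelefteq (2)$, $s_{(1,1)}(x,y) = xy$, and $s_{(2)}(x,y) = x^2 + xy + y^2$, so the inequality reads $xy \le (x^2 + xy + y^2)/3$, which is just $(x - y)^2 \ge 0$.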

    Fast projections onto mixed-norm balls with applications

    Joint sparsity offers powerful structural cues for feature selection, especially for variables that are expected to demonstrate "grouped" behavior. Such behavior is commonly modeled via the group lasso, the multitask lasso, and related methods, where feature selection is effected via mixed norms. Several mixed-norm based sparse models have received substantial attention, and for some of them efficient algorithms are also available. Surprisingly, several constrained sparse models still seem to lack scalable algorithms. We address this deficiency by presenting batch and online (stochastic-gradient) optimization methods, both of which rely on efficient projections onto mixed-norm balls. We illustrate our methods by applying them to the multitask lasso. We conclude by mentioning some open problems. Comment: Preprint of paper under review.
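    The workhorse here is the ball projection itself. As a hedged sketch (not necessarily the paper's own algorithm), the projection onto the $\ell_{1,2}$-ball $\{y : \sum_g \|y_g\|_2 \le \tau\}$ reduces to a standard $\ell_1$-ball projection of the vector of group norms, followed by a per-group rescaling:

    ```python
    import numpy as np

    def project_l1(v, radius):
        """Euclidean projection of a nonnegative vector v onto the l1 ball
        {w >= 0 : sum(w) <= radius}, via the standard sort-based algorithm."""
        if v.sum() <= radius:
            return v
        u = np.sort(v)[::-1]                      # sort in decreasing order
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
        theta = (css[k] - radius) / (k + 1.0)     # soft-threshold level
        return np.maximum(v - theta, 0.0)

    def project_l12_ball(x, groups, radius):
        """Project x onto {y : sum_g ||y_g||_2 <= radius}.  `groups` is a
        list of index arrays partitioning the coordinates of x.  The group
        norms are projected onto the l1 ball, then each group is rescaled."""
        norms = np.array([np.linalg.norm(x[g]) for g in groups])
        new_norms = project_l1(norms, radius)
        y = np.zeros_like(x)
        for g, n, m in zip(groups, norms, new_norms):
            if n > 0:
                y[g] = (m / n) * x[g]
        return y
    ```

    The reduction works because the projection preserves each group's direction, so only the group norms need to be optimized, and that subproblem is exactly an $\ell_1$-ball projection.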

    Explicit eigenvalues of certain scaled trigonometric matrices

    In a very recent paper, "\emph{On eigenvalues and equivalent transformation of trigonometric matrices}" (D. Zhang, Z. Lin, and Y. Liu, Linear Algebra Appl. 436, 71--78 (2012)), the authors motivated and discussed a trigonometric matrix that arises in the design of finite impulse response (FIR) digital filters. The eigenvalues of this matrix shed light on the FIR filter design, so obtaining them in closed form was investigated. Zhang \emph{et al.}\ proved that their matrix has rank 4, and they conjectured closed-form expressions for its eigenvalues, leaving a rigorous proof as an open problem. This paper studies trigonometric matrices significantly more general than theirs, deduces their rank, and derives closed-form expressions for their eigenvalues. As a corollary, it yields a short proof of the conjectures in the aforementioned paper. Comment: 7 pages; fixed Lemma 2, tightened inequalities.
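    To see why such matrices have small rank, note that angle-addition identities split trigonometric entries into a few rank-one terms. A minimal NumPy illustration (a generic example of the mechanism, not the specific matrix studied by Zhang et al.):

    ```python
    import numpy as np

    # cos(s - t) = cos(s)cos(t) + sin(s)sin(t) writes M[j, k] = cos(x[j] - x[k])
    # as a sum of two rank-1 terms, so rank(M) <= 2 regardless of the size of M.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, np.pi, size=50)
    M = np.cos(x[:, None] - x[None, :])
    print(np.linalg.matrix_rank(M))   # prints 2 (generically)

    # The same identity gives the low-rank factorization explicitly:
    F = np.column_stack([np.cos(x), np.sin(x)])
    print(np.allclose(M, F @ F.T))    # True
    ```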

    On the matrix square root via geometric optimization

    This paper is triggered by the preprint "\emph{Computing Matrix Squareroot via Non Convex Local Search}" by Jain et al. (arXiv:1507.05854), which analyzes gradient descent for computing the square root of a positive definite matrix. Contrary to the claims of Jain et al., our experiments reveal that Newton-like methods compute matrix square roots rapidly and reliably, even for highly ill-conditioned matrices and without requiring commutativity. We observe that gradient descent converges very slowly, primarily due to tiny step-sizes and ill-conditioning. We derive an alternative first-order method based on geodesic convexity: our method admits a transparent convergence analysis (< 1 page), attains a linear rate, and displays reliable convergence even for rank-deficient problems. Though superior to gradient descent, ultimately our method is also outperformed by a well-known scaled Newton method. Nevertheless, the primary value of our work is conceptual: it shows that for deriving gradient-based methods for the matrix square root, \emph{the manifold geometric view of positive definite matrices can be much more advantageous than the Euclidean view}. Comment: 8 pages, 12 plots; this version contains several more references and more words about the rank-deficient case.
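    For reference, one well-known Newton-type iteration for the matrix square root is the Denman--Beavers iteration, sketched below in its unscaled form (the scaled Newton method the abstract refers to additionally applies a scaling step at each iteration, which we omit here):

    ```python
    import numpy as np

    def sqrtm_denman_beavers(A, iters=30, tol=1e-12):
        """Square root of a positive definite matrix A via the Denman-Beavers
        iteration: Y -> (Y + inv(Z))/2, Z -> (Z + inv(Y))/2, starting from
        Y = A, Z = I.  Then Y converges to A^{1/2} and Z to A^{-1/2}."""
        Y = A.copy()
        Z = np.eye(A.shape[0])
        for _ in range(iters):
            Y_next = 0.5 * (Y + np.linalg.inv(Z))
            Z_next = 0.5 * (Z + np.linalg.inv(Y))
            done = np.linalg.norm(Y_next - Y, 'fro') <= tol * np.linalg.norm(Y, 'fro')
            Y, Z = Y_next, Z_next
            if done:
                break
        return Y

    # Quick check on a random positive definite matrix.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T + 5 * np.eye(5)
    X = sqrtm_denman_beavers(A)
    print(np.allclose(X @ X, A))  # True
    ```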