4,952 research outputs found

    Numerical range for random matrices

    We analyze the numerical range of high-dimensional random matrices, obtaining limit results and corresponding quantitative estimates in the non-limit case. For a large class of random matrices, the numerical range is shown to converge to a disk. In particular, the numerical range of a complex Ginibre matrix almost surely converges to the disk of radius $\sqrt{2}$. Since the spectrum of non-Hermitian random matrices from the Ginibre ensemble asymptotically concentrates in a neighborhood of the unit disk, the outer belt of width $\sqrt{2}-1$ containing no eigenvalues can be seen as a quantification of the non-normality of the complex Ginibre random matrix. We also show that the numerical range of upper triangular Gaussian matrices converges to the same disk of radius $\sqrt{2}$, while all their eigenvalues are equal to zero, and we prove that the operator norm of such matrices converges to $\sqrt{2e}$.
    Comment: 23 pages, 4 figures
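    Not part of the abstract: a minimal NumPy sketch (function names and normalization are ours) of how the $\sqrt{2}$ limit can be checked numerically. The support of the numerical range $W(A)$ in direction $\theta$ is the largest eigenvalue of the Hermitian part of $e^{-i\theta}A$, so sweeping $\theta$ gives the numerical radius; for a Ginibre matrix scaled so that its spectrum fills the unit disk, this radius should be close to $\sqrt{2}$.

```python
import numpy as np

def ginibre(n, rng):
    """Complex Ginibre matrix scaled so its spectrum fills ~ the unit disk."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g / np.sqrt(2 * n)          # each entry has variance 1/n

def numerical_radius(a, n_angles=90):
    """max |z| over the numerical range W(A).

    The support of W(A) in direction theta is the largest eigenvalue of the
    Hermitian part of exp(-1j*theta) * A; maximizing over theta gives the
    numerical radius, since W(A) is compact and convex.
    """
    radii = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        h = np.exp(-1j * theta) * a
        herm = (h + h.conj().T) / 2
        radii.append(np.linalg.eigvalsh(herm)[-1])   # eigvalsh is ascending
    return max(radii)

rng = np.random.default_rng(0)
a = ginibre(400, rng)
print("spectral radius  :", np.max(np.abs(np.linalg.eigvals(a))))  # ~ 1
print("numerical radius :", numerical_radius(a))                   # ~ sqrt(2) ≈ 1.414
```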

    Neutrino mixing, interval matrices and singular values

    We study the properties of singular values of mixing matrices embedded within an experimentally determined interval matrix. We argue that any physically admissible mixing matrix must be a contraction. This condition constrains the interval matrix by imposing correlations on its elements, leaving behind only physical mixings that may unveil signs of new physics in terms of extra neutrino species. We propose a description of the admissible three-dimensional mixing space as a convex hull over experimentally determined unitary mixing matrices parametrized by Euler angles, which allows us to select either unitary or nonunitary mixing matrices. The unitarity-breaking cases are detected through singular values, and we construct unitary extensions, via the theory of unitary matrix dilations, that yield a complete theory of minimal dimensionality larger than three. We discuss further applications to the quark sector.
    Comment: Misprints corrected
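    Not part of the abstract: a short NumPy/SciPy sketch, with names of our own choosing, of the two ingredients the abstract relies on. A matrix is a contraction iff its largest singular value is at most one, and every contraction admits a unitary dilation; the standard 2n x 2n Halmos dilation is used below as a generic stand-in, not necessarily the specific dilation scheme of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def is_contraction(a, tol=1e-12):
    """A matrix is a contraction iff its largest singular value is <= 1."""
    return np.linalg.svd(a, compute_uv=False)[0] <= 1 + tol

def halmos_dilation(a):
    """Standard 2n x 2n unitary (Halmos) dilation of a contraction A:

        U = [[ A,                (I - A A*)^{1/2}],
             [(I - A* A)^{1/2},  -A*             ]]
    """
    n = a.shape[0]
    i_n = np.eye(n)
    d_left = sqrtm(i_n - a @ a.conj().T)
    d_right = sqrtm(i_n - a.conj().T @ a)
    return np.block([[a, d_left], [d_right, -a.conj().T]])

# Toy example: a slightly non-unitary 3x3 "mixing" matrix (illustrative only).
q = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))[0]
a = 0.98 * q
print("contraction      :", is_contraction(a))
u = halmos_dilation(a)
print("dilation unitary :", np.allclose(u.conj().T @ u, np.eye(6), atol=1e-8))
```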

    Construction of aggregation operators with noble reinforcement

    This paper examines disjunctive aggregation operators used in various recommender systems. A specific requirement in these systems is the property of noble reinforcement: allowing a collection of high-valued arguments to reinforce each other while avoiding reinforcement of low-valued arguments. We present a new construction of Lipschitz-continuous aggregation operators with the noble reinforcement property, together with its refinements.
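    Not part of the abstract: a toy Python illustration, of our own devising, of what noble reinforcement means behaviorally. Unlike the paper's construction, this naive thresholded operator is not Lipschitz-continuous (it jumps at the threshold); it only shows high arguments reinforcing each other via the probabilistic-sum t-conorm while low arguments are merely maxed.

```python
def probabilistic_sum(values):
    """Probabilistic-sum t-conorm S(x, y) = x + y - x*y, extended to n arguments."""
    out = 0.0
    for v in values:
        out = out + v - out * v
    return out

def noble_reinforcement_agg(values, threshold=0.7):
    """Toy disjunctive aggregation with a noble-reinforcement flavor.

    If every argument clears the threshold, the high values reinforce each
    other through the probabilistic sum (output >= max); otherwise the
    operator falls back to max, so low values never push the result up.
    """
    if not values:
        return 0.0
    if min(values) >= threshold:
        return probabilistic_sum(values)
    return max(values)

print(noble_reinforcement_agg([0.8, 0.9]))  # 0.98 -> high scores reinforce each other
print(noble_reinforcement_agg([0.2, 0.3]))  # 0.3  -> low scores are not reinforced
```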

    Toward a probability theory for product logic: states, integral representation and reasoning

    The aim of this paper is to extend probability theory from the classical setting to that of product t-norm fuzzy logic. More precisely, we axiomatize a generalized notion of finitely additive probability for product logic formulas, called a state, and show that every state is the Lebesgue integral with respect to a unique regular Borel probability measure. Furthermore, the relation between states and measures is shown to be one-to-one. In addition, we study geometrical properties of the convex set of states and show that the extremal states, i.e., the extremal points of the state space, are exactly the truth-value assignments of the logic. Finally, we axiomatize a two-tiered modal logic for probabilistic reasoning on product logic events and prove soundness and completeness with respect to probabilistic spaces, where the algebra is a free product algebra and the measure is a state in the above sense.
    Comment: 27 pages, 1 figure
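    Not part of the abstract: the integral representation it refers to, written out in generic notation (the symbols are ours; the precise axioms are in the paper).

```latex
% A state s assigns a value in [0,1] to each product-logic formula \varphi.
% The representation theorem says there is a unique regular Borel probability
% measure \mu on the space V of truth-value assignments (evaluations) v with
\[
  s(\varphi) \;=\; \int_{V} v(\varphi)\, \mathrm{d}\mu(v),
\]
% the correspondence s \leftrightarrow \mu is one-to-one, and the extremal
% states are exactly the evaluations themselves, \varphi \mapsto v(\varphi).
```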

    The Rate of Convergence of AdaBoost

    The AdaBoost algorithm was designed to combine many "weak" hypotheses that perform slightly better than random guessing into a "strong" hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the "exponential loss." Unlike previous work, our proofs do not require a weak-learning assumption, nor do they require that minimizers of the exponential loss are finite. Our first result shows that the exponential loss of AdaBoost's computed parameter vector will be at most $\epsilon$ more than that of any parameter vector of $\ell_1$-norm bounded by $B$ in a number of rounds that is at most a polynomial in $B$ and $1/\epsilon$. We also provide lower bounds showing that a polynomial dependence on these parameters is necessary. Our second result is that within $C/\epsilon$ iterations, AdaBoost achieves a value of the exponential loss that is at most $\epsilon$ more than the best possible value, where $C$ depends on the dataset. We show that this dependence of the rate on $\epsilon$ is optimal up to constant factors, i.e., at least $\Omega(1/\epsilon)$ rounds are necessary to achieve within $\epsilon$ of the optimal exponential loss.
    Comment: A preliminary version will appear in COLT 2011
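    Not from the paper: a minimal NumPy sketch of AdaBoost with 1-D threshold stumps that records the exponential loss after each round, the quantity whose convergence rate the abstract discusses. The toy dataset and all names are ours.

```python
import numpy as np

def stump_predictions(x):
    """All threshold stumps over a 1-D feature: sign(x - t) and its negation."""
    thresholds = np.unique(x)
    preds = [np.where(x > t, 1.0, -1.0) for t in thresholds]
    return np.array(preds + [-p for p in preds])

def adaboost_exp_loss(x, y, n_rounds=30):
    """Minimal AdaBoost; returns the exponential loss after each round.

    The exponential loss of the combined margin F is mean(exp(-y * F)).
    """
    h = stump_predictions(x)              # candidate weak hypotheses, shape (m, n)
    f = np.zeros_like(y, dtype=float)     # combined prediction (margin)
    losses = []
    for _ in range(n_rounds):
        w = np.exp(-y * f)
        w = w / w.sum()                   # current example weights
        edges = h @ (w * y)               # weighted edge of each stump
        j = np.argmax(edges)
        gamma = min(edges[j], 1 - 1e-9)   # guard: avoid infinite step if a stump is perfect
        alpha = 0.5 * np.log((1 + gamma) / (1 - gamma))
        f = f + alpha * h[j]
        losses.append(np.mean(np.exp(-y * f)))
    return losses

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = np.where(x + 0.1 * rng.standard_normal(200) > 0, 1.0, -1.0)   # noisy labels
losses = adaboost_exp_loss(x, y)
print("loss after round 1 :", losses[0])
print("loss after round 30:", losses[-1])   # monotonically decreasing
```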