
    Mixed-integer Nonlinear Optimization: a hatchery for modern mathematics

    The second MFO Oberwolfach Workshop on Mixed-Integer Nonlinear Programming (MINLP) took place from 2 to 8 June 2019. MINLP is one of the hardest classes of Mathematical Programming (MP) problems, involving nonlinear functions together with both continuous and integer decision variables. MP is a formal language for describing optimization problems and is traditionally part of Operations Research (OR), which itself sits at the intersection of mathematics, computer science, engineering, and econometrics. The scientific program covered the three announced areas (hierarchies of approximation, mixed-integer nonlinear optimal control, and dealing with uncertainties) with a variety of tutorials, talks, short research announcements, and a special "open problems" session.

    Spectral Properties of Heavy-Tailed Random Matrices

    Classical Random Matrix Theory studies the asymptotic spectral properties of random matrices as their dimensions grow to infinity. In contrast, the non-asymptotic branch of the theory focuses on explicit high-probability estimates that can be obtained for large but fixed-size random matrices. This goal naturally brings into play beautiful methods of high-dimensional probability and geometry, such as the concentration of measure phenomenon. One of the less understood random matrix models is the heavy-tailed model, in which the matrix entries have distributions with slower tail decay than Gaussian, e.g., with only a few finite moments.

    This work is devoted to the study of heavy-tailed matrices and addresses two main questions: invertibility and regularization of the operator norm. First, the invertibility result of Rudelson and Vershynin is generalized from the case of subgaussian matrix entries to the case where only two finite moments are required. Then, it is shown that the operator norm of a matrix can be reduced to the optimal order O(sqrt(n)) if and only if the entries have zero mean and finite variance. We also study constructive ways to perform such regularization. We show that deletion of a few large entries regularizes the operator norm only if all matrix entries have more than two finite moments. In the case of exactly two finite moments, we propose an algorithm that zeroes out a small fraction of the matrix entries to achieve an operator norm of almost optimal order O(sqrt(n * ln ln n)). Finally, if in the latter case the matrix has scaled Bernoulli entries, we obtain a stronger regularization algorithm that provides (a) an O(sqrt(n)) operator norm of the resulting matrix and (b) a simple structure of the "bad" submatrix to be zeroed out.

    PhD thesis, Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146003/1/erebrova_1.pd
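The regularization idea described in the abstract can be illustrated numerically. The following is a minimal sketch, not the thesis's actual algorithm: it generates a matrix with heavy-tailed entries (symmetrized Pareto with tail index 2.5, so the variance is finite but higher moments are not), zeroes out a small fraction of the largest-magnitude entries, and compares the operator (spectral) norm before and after against the sqrt(n) scale. All parameter choices (matrix size, tail index, deletion fraction) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Heavy-tailed entries: symmetrized Pareto (Lomax) with tail index 2.5,
# so the variance is finite but the third moment is not. Random signs
# make the entry distribution symmetric with mean zero.
signs = rng.choice([-1.0, 1.0], size=(n, n))
A = signs * rng.pareto(2.5, size=(n, n))

def opnorm(M):
    # Operator (spectral) norm: the largest singular value of M.
    return np.linalg.norm(M, 2)

before = opnorm(A)

# Zero out the 0.1% of entries with the largest absolute values
# (illustrative deletion fraction).
frac = 0.001
k = int(frac * n * n)
thresh = np.partition(np.abs(A).ravel(), -k)[-k]
B = np.where(np.abs(A) >= thresh, 0.0, A)

after = opnorm(B)
print(f"||A|| = {before:.1f}, ||B|| = {after:.1f}, sqrt(n) = {np.sqrt(n):.1f}")
```

With heavy-tailed entries the operator norm before regularization is dominated by a few very large entries; removing them brings the norm much closer to the sqrt(n) scale, which is the phenomenon the thesis quantifies.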

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany

    Get PDF

    LIPIcs, Volume 244, ESA 2022, Complete Volume
