LIPIcs, Volume 251, ITCS 2023, Complete Volume
Sum-of-squares representations for copositive matrices and independent sets in graphs
A polynomial optimization problem asks for minimizing a polynomial function (cost) subject to a set of constraints (rules) given by polynomial inequalities and equations. Many hard problems in combinatorial optimization, as well as applications in operations research, can be naturally encoded as polynomial optimization problems. A common approach to such computationally hard problems is to consider variations of the original problem that yield an approximate solution and that can be solved efficiently. One such approach for attacking hard combinatorial problems and, more generally, polynomial optimization problems is given by the so-called sum-of-squares approximations. This thesis focuses on studying whether these approximations find the optimal solution of the original problem. We investigate this question in two main settings: 1) copositive programs and 2) parameters dealing with independent sets in graphs. Among our main new results, we characterize the matrix sizes for which sum-of-squares approximations are able to capture all copositive matrices. In addition, we show finite convergence of the sum-of-squares approximations for maximum independent sets in graphs based on their continuous copositive reformulations. We also study sum-of-squares approximations for parameters asking for maximum balanced independent sets in bipartite graphs. In particular, we find connections with the Lovász theta number, and we design eigenvalue bounds for several related parameters when the graphs satisfy certain symmetry properties.
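As a concrete illustration of the continuous copositive reformulations mentioned in the abstract, the stability number of a graph admits the well-known copositive formulation of de Klerk and Pasechnik; the notation below is ours (A_G is the adjacency matrix, J the all-ones matrix, COP_n the cone of n x n copositive matrices), and this is a standard example rather than the thesis's specific construction.

```latex
% Copositive reformulation of the stability number \alpha(G)
% (de Klerk--Pasechnik):
\alpha(G) \;=\; \min\bigl\{\, \lambda \in \mathbb{R} \;:\;
    \lambda\,(A_G + I) - J \in \mathrm{COP}_n \,\bigr\}
% Sum-of-squares approximations replace COP_n by Parrilo's cones
% K_n^{(r)} \subseteq \mathrm{COP}_n, where M \in K_n^{(r)} iff
%   \Bigl(\sum_i x_i^2\Bigr)^{\!r} \cdot \sum_{i,j} M_{ij}\, x_i^2 x_j^2
% is a sum of squares, yielding a hierarchy of tractable bounds.
```

Replacing the copositive cone by a fixed level r of this hierarchy turns the problem into a semidefinite program of polynomial size, which is the sense in which the approximations can be solved efficiently.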
LIPIcs, Volume 261, ICALP 2023, Complete Volume
A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers
We prove a strong composition theorem for junta complexity and show how such
theorems can be used to generically boost the performance of property testers.
The ε-approximate junta complexity of a function f is the smallest integer k
such that f is ε-close to a function that depends only on k variables. A strong
composition theorem states that if f has large ε-approximate junta complexity,
then the composition g∘f has even larger ε'-approximate junta complexity, even
for ε' ≫ ε. We develop a fairly complete understanding of this behavior,
proving that the junta complexity of g∘f is characterized by that of f along
with the multivariate noise sensitivity of g. For the important case of
symmetric functions g, we relate their multivariate noise sensitivity to the
simpler and well-studied case of univariate noise sensitivity.
We then show how strong composition theorems yield boosting algorithms for
property testers: with a strong composition theorem for any class of functions,
a large-distance tester for that class is immediately upgraded into one for
small distances. Combining our contributions yields a booster for junta
testers, and with it new implications for junta testing. This is the first
boosting-type result in property testing, and we hope that the connection to
composition theorems adds compelling motivation to the study of both topics.
Comment: 44 pages, 1 figure, FOCS 202
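To make the definition of ε-approximate junta complexity concrete, here is a minimal brute-force sketch (our own illustration, not the paper's algorithm; all names are ours): for each candidate set of k coordinates, the best k-junta is the majority vote of f over each setting of those coordinates.

```python
# Brute-force epsilon-approximate junta complexity: the smallest k such
# that f is eps-close to a function depending on only k of its n variables.
from itertools import combinations, product

def junta_complexity(f, n, eps):
    """Smallest k such that some k-junta agrees with f on at least a
    (1 - eps) fraction of the 2^n inputs (exhaustive search)."""
    inputs = list(product([0, 1], repeat=n))
    for k in range(n + 1):
        for coords in combinations(range(n), k):
            # The best k-junta on these coordinates takes, for each
            # setting of the chosen coordinates, the majority value of f.
            votes = {}
            for x in inputs:
                key = tuple(x[i] for i in coords)
                votes.setdefault(key, [0, 0])[f(x)] += 1
            errors = sum(min(v) for v in votes.values())
            if errors <= eps * len(inputs):
                return k
    return n

# XOR of 3 variables embedded among 5: depends on exactly 3 coordinates.
f = lambda x: x[0] ^ x[2] ^ x[4]
print(junta_complexity(f, 5, eps=0.0))  # -> 3
```

With eps = 0 this recovers the exact junta size; raising eps lets the search stop at smaller k, mirroring how ε-closeness relaxes the notion of "depends only on k variables." The search is exponential in n and is meant only to pin down the definition.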
LIPIcs, Volume 274, ESA 2023, Complete Volume
Fundamentals
Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through summarization and clustering, to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and to how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.
LIPIcs, Volume 258, SoCG 2023, Complete Volume
Evaluation of optimal solutions in multicriteria models for intelligent decision support
This thesis is framed within optimization and its use for decision making. The logical sequence has been modeling, implementation, solution, and validation, leading to a decision. For this, we have used tools from multicriteria analysis, multiobjective optimization, and artificial intelligence techniques.
The work is structured in two parts (each divided into three chapters), corresponding to the theoretical part and the experimental part. The first part analyzes the context of the field of study with a review of its historical framework; a chapter is then devoted to multicriteria optimization, collecting known models together with original contributions of this work. The third chapter, devoted to artificial intelligence, presents the foundations of statistical learning and the machine-learning and deep-learning techniques needed for the contributions in the second part.
The second part contains seven real cases to which the described techniques have been applied. The first chapter studies two cases: the academic performance of students at the Universidad Industrial de Santander (Colombia) and an objective system for awarding the MVP prize in the NBA. The next chapter applies artificial-intelligence techniques to musical similarity (plagiarism detection on YouTube), to predicting a company's closing price on the New York stock market, and to the automatic classification of acoustic spatial signals in immersive environments. In the last chapter, the power of artificial intelligence is combined with multicriteria analysis techniques to detect university academic failure early (at the Universidad Industrial de Santander), and multicriteria methods are used to establish a ranking of artificial-intelligence models.
To close the thesis, although each chapter contains a partial conclusion, Chapter 8 collects the main conclusions of the whole work together with a fairly exhaustive bibliography of the topics covered. In addition, the work concludes with three appendices containing the programs and tools which, although useful for understanding the thesis, have been placed separately so that the chapters read more fluidly.
High-Dimensional Statistics
These lecture notes were written for the course 18.657, High Dimensional
Statistics at MIT. They build on a set of notes that was prepared at Princeton
University in 2013-14 that was modified (and hopefully improved) over the
years.
Comment: This is the 2017 version of these notes, uploaded to arXiv without any changes