
    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains the scientific program both in overview and in full detail, together with information on the social program, the venue, special meetings, and more.

    Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs

    Laplacian mixture models identify overlapping regions of influence in unlabeled graph and network data in a scalable and computationally efficient way, yielding useful low-dimensional representations. By combining Laplacian eigenspace and finite mixture modeling methods, they provide probabilistic or fuzzy dimensionality reductions or domain decompositions for a variety of input data types, including mixture distributions, feature vectors, and graphs or networks. Provably optimal recovery by the algorithm is shown analytically for a nontrivial class of cluster graphs. Heuristic approximations for scalable high-performance implementations are described and empirically tested. Connections to PageRank and community detection in network analysis demonstrate the wide applicability of this approach. The origins of fuzzy spectral methods, beginning with generalized heat or diffusion equations in physics, are reviewed and summarized. Comparisons to other dimensionality reduction and clustering methods for challenging unsupervised machine learning problems are also discussed. Comment: 13 figures, 35 references.
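    The pipeline sketched in the abstract, embedding nodes with Laplacian eigenvectors and then fitting a finite mixture model in that eigenspace, can be illustrated roughly as follows. This is a generic sketch, not the paper's exact algorithm: the toy graph, the number of components k, and the use of a Gaussian mixture are assumptions made only for the example.

```python
# Hedged sketch: spectral embedding + mixture model for fuzzy graph clustering.
# Not the paper's exact algorithm; the graph, k, and GaussianMixture are illustrative.
import numpy as np
import networkx as nx
from scipy.sparse.csgraph import laplacian
from sklearn.mixture import GaussianMixture

# Toy graph with two loosely connected communities.
G = nx.planted_partition_graph(2, 20, p_in=0.3, p_out=0.02, seed=0)
A = nx.to_numpy_array(G)

# Symmetric normalized Laplacian and its smallest eigenvectors.
L = laplacian(A, normed=True)
eigvals, eigvecs = np.linalg.eigh(L)
k = 2                          # assumed number of mixture components
X = eigvecs[:, :k]             # low-dimensional spectral embedding

# Fit a finite mixture in the eigenspace; the responsibilities give a fuzzy
# (probabilistic) assignment of each node to overlapping regions.
gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
membership = gmm.predict_proba(X)   # shape (n_nodes, k), rows sum to 1
print(membership[:5].round(3))
```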

    Exploring Algorithmic Limits of Matrix Rank Minimization under Affine Constraints

    Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive, and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop, we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equals the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains elusive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions under which the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
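    For context, the convex surrogate the abstract contrasts against can be written down in a few lines: proximal gradient descent on the observed-entry residual plus a nuclear-norm penalty, whose proximal operator is singular value thresholding. The sketch below shows that baseline for matrix completion; it is not the paper's probabilistic PCA-like algorithm, and the problem sizes and penalty weight are assumptions.

```python
# Hedged sketch: nuclear-norm-penalized matrix completion via proximal gradient
# (singular value thresholding). Illustrates the standard convex surrogate, not
# the paper's PCA-like method; data sizes and lambda are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 60, 50, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # true low-rank matrix
mask = rng.random((n, m)) < 0.4                                # observed entries

def svt(Z, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

lam, step = 0.5, 1.0
X = np.zeros((n, m))
for _ in range(500):
    grad = mask * (X - M)              # gradient of 0.5*||P_Omega(X - M)||_F^2
    X = svt(X - step * grad, step * lam)

err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {err:.3f}")
```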

    Essays on strategic trading

    This dissertation discusses various aspects of strategic trading using both analytical modeling and numerical methods. Strategic trading, in short, encompasses models of trading, most notably models of optimal execution and portfolio selection, in which one seeks to rigorously account for the various costs, both explicit and implicit, stemming from the act of trading itself. The strategic trading approach, rooted in the market microstructure literature, contrasts with many classical finance models in which markets are assumed to be frictionless and traders can, for the most part, take prices as given. Introducing trading costs to dynamic models of financial markets tends to complicate matters. First, the objectives of the traders become more nuanced, since overtrading now leads to poor outcomes due to increased trading costs. Second, when trades affect prices and there are multiple traders in the market, the traders start to behave in a more calculated fashion, taking into account both their own objectives and the perceived actions of others. Acknowledging this strategic behavior is especially important when the traders are asymmetrically informed. These new features allow the models discussed to better reflect aspects of real-world trading, such as intraday trading patterns, and enable one to ask and answer new questions, for instance about the interactions between different traders. To analyze the models put forth efficiently, numerical methods must be utilized. This is, as is to be expected, the price one must pay for the added complexity. However, it also opens an opportunity to take a closer look at the numerical approaches themselves. This opportunity is capitalized on, and various novel computational procedures influenced by the growing field of numerical real algebraic geometry are introduced and employed. These procedures are applicable beyond the scope of this dissertation and enable one to sharpen the analysis of dynamic equilibrium models.

    This dissertation examines strategic trading using both analytical and numerical methods. Models of strategic trading, in particular optimal trade execution and portfolio selection, aim to account rigorously for the explicit and implicit costs arising from trading itself. This distinguishes strategic trading models from classical frictionless models. Accounting for these costs in a dynamic analysis of financial markets makes the models more complex. First, traders' objectives become more nuanced, because overly active trading leads to high trading costs and poor returns. Second, the assumption that traders' actions affect prices leads to game-theoretic behavior when there are multiple traders in the market. Accounting for this strategic behavior is of primary importance when information is asymmetric among the traders. Owing to these features, the models discussed in this dissertation allow a more precise analysis of abstracted financial markets, for instance with respect to intraday trading. In addition, the models make it possible to answer new questions, for example about the mutual interactions between traders in dynamic markets. Numerical methods are used to analyze these complex models. This also opens the possibility of examining the methods themselves in more detail, and this opportunity is exploited by considering computational solutions from a fresh perspective based on numerical real algebraic geometry. The new computational procedures presented in the dissertation are broadly applicable, and they make it possible to sharpen the analysis of dynamic equilibrium models.

    Benchopt: Reproducible, efficient and collaborative optimization benchmarks

    Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice. Yet the rapid development of the field poses several challenges: researchers are confronted with a profusion of methods to compare, limited transparency and consensus on best practices, as well as tedious re-implementation work. As a result, validation is often very partial, which can lead to wrong conclusions that slow down the progress of research. We propose Benchopt, a collaborative framework to automate, reproduce and publish optimization benchmarks in machine learning across programming languages and hardware architectures. Benchopt simplifies benchmarking for the community by providing an off-the-shelf tool for running, sharing and extending experiments. To demonstrate its broad usability, we showcase benchmarks on three standard learning tasks: ℓ2-regularized logistic regression, the Lasso, and ResNet18 training for image classification. These benchmarks highlight key practical findings that give a more nuanced view of the state of the art for these problems, showing that for practical evaluation, the devil is in the details. We hope that Benchopt will foster collaborative work in the community, thereby improving the reproducibility of research findings. Comment: Accepted in the proceedings of NeurIPS 2022; the Benchopt library documentation is available at https://benchopt.github.io
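    As a rough illustration of the kind of comparison such a benchmark standardizes, the sketch below hand-times two Lasso solvers on one synthetic problem. It deliberately avoids the Benchopt API (documented at the link above) and uses only NumPy and scikit-learn; the data sizes and regularization level are assumptions made for the example.

```python
# Hedged sketch: hand-rolled comparison of two Lasso solvers on one synthetic
# problem -- the kind of experiment Benchopt automates and makes reproducible.
# This does NOT use the Benchopt API; sizes and lambda are illustrative.
import time
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 500, 2000
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:10] = 3.0
y = X @ w_true + 0.1 * rng.standard_normal(n)
lam = 0.1 * np.max(np.abs(X.T @ y)) / n     # regularization level (assumed)

def objective(w):
    return 0.5 / n * np.sum((y - X @ w) ** 2) + lam * np.sum(np.abs(w))

# Solver 1: scikit-learn's coordinate descent.
t0 = time.perf_counter()
w_cd = Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(X, y).coef_
t_cd = time.perf_counter() - t0

# Solver 2: plain ISTA (proximal gradient descent).
L = np.linalg.norm(X, 2) ** 2 / n           # Lipschitz constant of the smooth part
w = np.zeros(p)
t0 = time.perf_counter()
for _ in range(500):
    w = w - (X.T @ (X @ w - y)) / (n * L)                       # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)       # soft-thresholding
t_ista = time.perf_counter() - t0

print(f"coordinate descent: obj={objective(w_cd):.4f}, time={t_cd:.2f}s")
print(f"ISTA:               obj={objective(w):.4f}, time={t_ista:.2f}s")
```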

    Receding-horizon motion planning of quadrupedal robot locomotion

    Quadrupedal robots are designed to offer efficient and robust mobility on uneven terrain. This thesis investigates combining numerical optimization and machine learning methods to achieve interpretable kinodynamic planning of natural and agile locomotion. The proposed algorithm, called Receding-Horizon Experience-Controlled Adaptive Legged Locomotion (RHECALL), uses nonlinear programming (NLP) with learned initialization to produce long-horizon, high-fidelity, terrain-aware, whole-body trajectories. RHECALL has been implemented and validated on the ANYbotics ANYmal B and C quadrupeds on complex terrain. The proposed optimal-control problem formulation uses the single-rigid-body dynamics (SRBD) model and adopts a direct collocation transcription method, which enables the discovery of aperiodic contact sequences. To generate reliable trajectories, we propose fast-to-compute analytical costs that leverage the discretization and terrain-dependent kinematic constraints. To extend the formulation to receding-horizon planning, we propose a segmentation approach with asynchronous centre-of-mass (COM) and end-effector timings, together with a heuristic initialization scheme that reuses the previous solution. We integrate real-time 2.5D perception data for online foothold selection. Additionally, we demonstrate that a learned stability criterion can be incorporated into the planning framework. To accelerate the convergence of the NLP solver to locally optimal solutions, we propose data-driven initialization schemes trained using supervised and unsupervised behaviour cloning. We demonstrate the computational advantage of these schemes and their ability to leverage a latent space to reconstruct dynamic plan segments that are several seconds long. Finally, in order to apply RHECALL to quadrupeds with significant leg inertias, we derive the more accurate lumped-leg single-rigid-body dynamics (LL-SRBD) and centroidal dynamics (CD) models and their first-order partial derivatives. To facilitate intuitive usage of costs, constraints and initializations, we parameterize these models by Euclidean-space variables. We show that the models can shape the rotational inertia of the robot, which offers the potential to further improve agility.
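    Direct collocation, the transcription method named in the abstract, can be shown on a far simpler system than a quadruped. The sketch below transcribes a one-dimensional double-integrator rest-to-rest maneuver into an NLP with trapezoidal collocation constraints and solves it with SciPy; the system, horizon and cost are assumptions chosen only to expose the transcription pattern, not the thesis's SRBD formulation.

```python
# Hedged sketch: direct collocation on a toy 1-D double integrator, solved as
# an NLP with SciPy -- illustrates the transcription idea, not the thesis's
# single-rigid-body-dynamics formulation. Horizon, cost and bounds are assumed.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 2.0                 # number of collocation nodes, total time
h = T / (N - 1)                # time step between nodes

# Decision vector z = [x_0..x_{N-1}, v_0..v_{N-1}, u_0..u_{N-1}]
def split(z):
    return z[:N], z[N:2 * N], z[2 * N:]

def cost(z):
    _, _, u = split(z)
    return h * np.sum(u ** 2)  # minimize control effort

def defects(z):
    x, v, u = split(z)
    # Trapezoidal collocation: each state must match the integrated dynamics.
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

def boundary(z):
    x, v, _ = split(z)
    # Start at rest at position 0, end at rest at position 1.
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

z0 = np.zeros(3 * N)
res = minimize(cost, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               options={"maxiter": 500})
x_opt, v_opt, u_opt = split(res.x)
print("solved:", res.success, " final position:", round(x_opt[-1], 3))
```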

    Geometric algorithms for component analysis with a view to gene expression data analysis

    The research reported in this thesis addresses the problem of component analysis, which aims at reducing large data to lower dimensions in order to reveal the essential structure of the data. This problem is encountered in almost all areas of science, from physics and biology to finance, economics and psychometrics, where large data sets need to be analyzed. Several paradigms for component analysis are considered, e.g., principal component analysis, independent component analysis and sparse principal component analysis, which are naturally formulated as optimization problems subject to constraints that endow the problem with a well-characterized matrix manifold structure. Component analysis is thus cast in the realm of optimization on matrix manifolds. Algorithms for component analysis are subsequently derived that take advantage of the geometric structure of the problem. When formalizing component analysis in an optimization framework, three main classes of problems are encountered, for which methods are proposed. We first consider the problem of optimizing a smooth function on the set of n-by-p real matrices with orthonormal columns. Then, a method is proposed to maximize a convex function on a compact manifold, which generalizes to this context the well-known power method for computing the dominant eigenvector of a matrix. Finally, we address the issue of solving problems defined in terms of large positive semidefinite matrices in a numerically efficient manner by using low-rank approximations of such matrices. The efficiency of the proposed algorithms for component analysis is evaluated on the analysis of gene expression data related to breast cancer, which encode the expression levels of thousands of genes measured in experiments on hundreds of cancerous cells. Such data provide a snapshot of the biological processes that occur in tumor cells and offer huge opportunities for an improved understanding of cancer. Thanks to an original framework for evaluating the biological significance of a set of components, well-known as well as novel knowledge is inferred about the biological processes that underlie breast cancer. Hence, to summarize the thesis in one sentence: we adopt a geometric point of view to propose optimization algorithms for component analysis which, applied to large gene expression data, make it possible to reveal novel biological knowledge.
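    The first problem class mentioned above, optimizing a smooth function over n-by-p matrices with orthonormal columns (the Stiefel manifold), can be illustrated with a small retracted gradient ascent for PCA, where maximizing trace(X^T A X) recovers the dominant p-dimensional eigenspace. This is a generic sketch with an assumed test matrix, step size and QR retraction, not the algorithms developed in the thesis.

```python
# Hedged sketch: gradient ascent on the Stiefel manifold {X : X^T X = I} for
# f(X) = trace(X^T A X), i.e. PCA of A -- a generic illustration of optimizing
# over matrices with orthonormal columns, not the thesis's specific algorithms.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
B = rng.standard_normal((n, n))
A = B @ B.T                       # symmetric positive semidefinite test matrix

def retract(Y):
    """QR-based retraction back onto the Stiefel manifold."""
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.diag(R))   # fix column signs for uniqueness

X = retract(rng.standard_normal((n, p)))
step = 1e-3
for _ in range(2000):
    G = 2 * A @ X                             # Euclidean gradient of trace(X^T A X)
    G_tan = G - X @ (X.T @ G + G.T @ X) / 2   # project onto the tangent space
    X = retract(X + step * G_tan)             # ascend, then retract

top = np.sort(np.linalg.eigvalsh(A))[-p:]     # reference: top-p eigenvalues of A
print("objective:", round(np.trace(X.T @ A @ X), 3),
      " sum of top eigenvalues:", round(top.sum(), 3))
```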

    The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions covered the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

    Understanding Complexity in Multiobjective Optimization

    This report documents the program and outcomes of Dagstuhl Seminar 15031 "Understanding Complexity in Multiobjective Optimization". The seminar continued the series of four previous Dagstuhl Seminars (04461, 06501, 09041 and 12041) that focused on multiobjective optimization and on strengthening the links between the Evolutionary Multiobjective Optimization (EMO) and Multiple Criteria Decision Making (MCDM) communities. The purpose of the seminar was to bring together researchers from the two communities to take part in a wide-ranging discussion about the different sources and impacts of complexity in multiobjective optimization. The outcome was a clarified view of complexity in the various facets of multiobjective optimization, leading to several research initiatives with innovative approaches for coping with complexity.