
    Positive Semidefiniteness and Positive Definiteness of a Linear Parametric Interval Matrix

    We consider a symmetric matrix whose entries depend linearly on some parameters. The domains of the parameters are compact real intervals. We investigate the problem of checking whether, for each (or some) setting of the parameters, the matrix is positive definite (or positive semidefinite). We state a characterization in the form of equivalent conditions, and also propose some computationally cheap sufficient / necessary conditions. Our results extend the classical results on positive (semi-)definiteness of interval matrices. They may be useful for checking convexity or non-convexity in global optimization methods that are based on the branch-and-bound framework and use interval techniques.
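
    The classical, non-parametric case that these conditions extend admits a well-known cheap sufficient test: a symmetric interval matrix with midpoint A_c and radius Delta is positive definite whenever the smallest eigenvalue of A_c exceeds the spectral radius of Delta. A minimal sketch of that test follows (our own illustration with assumed function names and data, not the parametric conditions proposed in the paper):

        import numpy as np

        def interval_pd_sufficient(A_center, A_radius):
            # Sufficient (not necessary) test: every symmetric matrix in the
            # interval [A_center - A_radius, A_center + A_radius] is positive
            # definite if lambda_min(A_center) > rho(A_radius).
            lam_min = np.linalg.eigvalsh(A_center).min()      # smallest eigenvalue of the midpoint
            rho = np.abs(np.linalg.eigvals(A_radius)).max()   # spectral radius of the radius matrix
            return lam_min > rho

        # Example: a well-conditioned midpoint with small entrywise uncertainty
        A_c = np.array([[4.0, 1.0], [1.0, 3.0]])
        Delta = np.array([[0.2, 0.1], [0.1, 0.2]])
        print(interval_pd_sufficient(A_c, Delta))             # True: PD for every realization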

    A modification of the αBB method for box-constrained optimization and an application to inverse kinematics

    For many practical applications it is important to determine not only a numerical approximation of one globally optimal solution but a representation of the whole set of globally optimal solutions of a non-convex optimization problem. One element of this representation may then be chosen based on additional information that cannot be formulated as a mathematical function or within a hierarchical problem formulation. We present such an application in the field of robotic design. This application problem can be modeled as a smooth box-constrained optimization problem. To determine a representation of the globally optimal solution set with a predefined quality, we modify the well-known αBB method. We illustrate the properties of our modified αBB method and prove its finiteness and correctness.
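
    The standard αBB method, which the paper modifies, drives its branch-and-bound search with convex quadratic underestimators of the objective on each box. A minimal sketch of that underestimator (an illustration of the textbook construction under our own assumed names and data, not the modification described above):

        import numpy as np

        def alpha_bb_underestimator(f, alpha, lower, upper):
            # Textbook alphaBB underestimator on the box [lower, upper]:
            #     L(x) = f(x) - sum_i alpha_i * (x_i - lower_i) * (upper_i - x_i)
            # If alpha_i >= max(0, -0.5 * lambda_min(Hessian of f on the box)),
            # then L is convex and L(x) <= f(x) on the box.
            lower, upper, alpha = (np.asarray(v, dtype=float) for v in (lower, upper, alpha))
            def L(x):
                x = np.asarray(x, dtype=float)
                return f(x) - np.sum(alpha * (x - lower) * (upper - x))
            return L

        # Example: a non-convex 1-D objective on [0, 4]; f''(x) >= -8.8, so alpha = 4.5 suffices
        f = lambda x: np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2
        L = alpha_bb_underestimator(f, alpha=[4.5], lower=[0.0], upper=[4.0])
        print(f(np.array([2.0])), L(np.array([2.0])))         # the underestimator lies below f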

    Nonlinear Dynamic Systems Parameterization Using Interval-Based Global Optimization: Computing Lipschitz Constants and Beyond

    Numerous state-feedback and observer designs for nonlinear dynamic systems (NDS) have been developed in the past three decades. These designs assume that the NDS nonlinearities satisfy one of the following function set classifications: bounded Jacobian, Lipschitz continuity, one-sided Lipschitz, quadratic inner-boundedness, and quadratic boundedness. These function sets are characterized by constant scalars or matrices bounding the NDS' nonlinearities. These constants (i) depend on the NDS' operating region, topology, and parameters, and (ii) are utilized to synthesize observer/controller gains. Unfortunately, there is a near-complete absence of algorithms to compute such bounding constants. In this paper, we develop analytical and then computational methods to compute such constants. First, for every function set classification, we derive analytical expressions for these bounding constants through global maximization formulations. Second, we utilize a derivative-free, interval-based global maximization algorithm built on a branch-and-bound framework to numerically obtain the bounding constants. Third, we showcase the effectiveness of our approaches by computing the corresponding parameters for NDS such as highway traffic networks and synchronous generator models. Comment: IEEE Transactions on Automatic Control.
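
    To give a flavour of the interval-based global maximization step, the following one-dimensional sketch bounds a Lipschitz constant L = max |f'(x)| over a box by repeatedly bisecting the sub-box with the largest interval upper bound until the certified gap closes. It is our own illustration under assumed names and a hand-coded interval enclosure, not the paper's algorithm:

        import heapq, math

        def lipschitz_bb(dfdx_interval, lo, hi, tol=1e-3):
            # Branch-and-bound estimate of L = max_{x in [lo, hi]} |f'(x)|, given an
            # interval extension dfdx_interval(a, b) -> (dlo, dhi) enclosing f' on [a, b].
            def bounds(a, b):
                dlo, dhi = dfdx_interval(a, b)
                upper = max(abs(dlo), abs(dhi))               # |f'| <= upper on [a, b]
                m = 0.5 * (a + b)
                plo, phi = dfdx_interval(m, m)                # point evaluation at the midpoint
                return max(abs(plo), abs(phi)), upper         # (attained lower bound, upper bound)

            best_lower, upper = bounds(lo, hi)
            heap = [(-upper, lo, hi)]                         # max-heap keyed on the upper bound
            while True:
                neg_up, a, b = heapq.heappop(heap)
                if -neg_up - best_lower <= tol:
                    return -neg_up                            # certified within tol of the true L
                m = 0.5 * (a + b)
                for c, d in ((a, m), (m, b)):
                    lower, upper = bounds(c, d)
                    best_lower = max(best_lower, lower)
                    heapq.heappush(heap, (-upper, c, d))

        # Example: f(x) = sin(x) + 0.5 * x**2 on [0, 2], so f'(x) = cos(x) + x;
        # cos is decreasing on [0, 2], giving the enclosure [cos(b) + a, cos(a) + b].
        dfdx = lambda a, b: (math.cos(b) + a, math.cos(a) + b)
        print(lipschitz_bb(dfdx, 0.0, 2.0))                   # close to cos(2) + 2 ~ 1.584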

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO’99, 1999), Hanmer Springs (Let’s GO, 2001), Santorini (Frontiers in GO, 2003), San José (Go’05, 2005), Mykonos (AGO’07, 2007), Skukuza (SAGO’08, 2008), Toulouse (TOGO’10, 2010), Natal (NAGO’12, 2012) and Málaga (MAGO’14, 2014) with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW’16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group of the Algoritmi Research Centre, and the other to the Statistics, Applied Probability and Operational Research Group of the Centre of Mathematics. The event received more than 50 submissions from 15 countries across Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW’16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    Learning discrete and Lipschitz representations

    Learning to embed data into a low dimensional vector space that is more useful for some downstream task is one of the most common problems addressed in the representation learning literature. Conventional approaches to solving this problem typically rely on training neural networks using labelled training data. In order to construct an accurate embedding function that will generalise to data not seen during training, one must either gather a very large training dataset or adequately bias the learning process. This thesis focuses on the task of incorporating new inductive biases into the representation learning paradigm by constraining the set of functions that a learned feature extractor can come from. The first part of this thesis investigates how one can learn a mapping that changes slowly with respect to its input. This is first addressed by deriving the Lipschitz constant of common feed-forward neural network architectures, and subsequently demonstrating how this constant can be constrained during training. Following this, it is investigated how a similar goal can be accomplished when one assumes that the inputs of interest lie near a low dimensional manifold embedded in a high dimensional vector space. This results in an algorithm that takes advantage of an empirical analog to the Lipschitz constant. Experimental results show that these methods have favourable performance compared to other methods commonly used for imposing inductive biases on neural network learning algorithms. In the second part of this thesis, methods for extracting representations using decision tree models are developed. The first method presented is a problem transformation approach that allows one to reuse existing tree induction techniques. The second approach shows how one can incrementally construct decision trees using gradient information as the source of supervision, allowing one to use an ensemble of decision trees as a layer in a neural network. The experimental results indicate that these approaches improve the performance of representation learning on tabular data across multiple tasks.
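
    A common starting point for the Lipschitz analysis in the first part is the layer-wise bound for a feed-forward network with 1-Lipschitz activations (e.g. ReLU): the product of the spectral norms of the weight matrices. A minimal sketch of that standard estimate (with assumed names and random weights, not the derivation or training procedure developed in the thesis):

        import numpy as np

        def mlp_lipschitz_upper_bound(weights):
            # The product of per-layer spectral norms upper-bounds the Lipschitz
            # constant (w.r.t. the Euclidean norm) of W_n @ act(... act(W_1 @ x) ...)
            # whenever every activation is 1-Lipschitz (ReLU, tanh, ...).
            bound = 1.0
            for W in weights:
                bound *= np.linalg.norm(W, ord=2)   # largest singular value of the layer
            return bound

        # Example: a small random 32 -> 64 -> 32 -> 1 network
        rng = np.random.default_rng(0)
        weights = [rng.standard_normal((64, 32)),
                   rng.standard_normal((32, 64)),
                   rng.standard_normal((1, 32))]
        print(mlp_lipschitz_upper_bound(weights))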