
    A MODIFIED PARTICLE SWARM OPTIMIZATION ALGORITHM FOR GENERAL INVERSE ORDERED p-MEDIAN LOCATION PROBLEM ON NETWORKS

    This paper is concerned with a general inverse ordered p-median location problem on networks, where the task is to change (increase or decrease) the edge lengths and vertex weights at minimum cost, subject to given modification bounds, such that a given set of p vertices becomes an optimal solution of the location problem, i.e., an ordered p-median under the new edge lengths and vertex weights. A modified particle swarm optimization algorithm is designed to solve the problem under cost functions based on the sum-type and bottleneck-type Hamming distances and the rectilinear and Chebyshev norms. Computational experiments illustrate the high efficiency of the proposed algorithm.
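
    The four modification-cost measures named in the abstract are standard in inverse location problems. Below is a minimal sketch of how a candidate modification could be priced under each of them; the function name, array-based interface and unit-cost vector `c` are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: the four modification-cost functions mentioned in the abstract,
# evaluated for a candidate vector x of new parameters (edge lengths / vertex
# weights) against the original vector x0, with per-parameter unit costs c.
import numpy as np

def modification_costs(x, x0, c):
    delta = np.abs(x - x0)
    changed = delta > 0                                   # indicator: parameter was modified
    return {
        "sum_hamming": float(np.sum(c * changed)),        # sum-type Hamming cost
        "bottleneck_hamming": float(np.max(c * changed, initial=0.0)),  # bottleneck-type Hamming cost
        "rectilinear": float(np.sum(c * delta)),          # weighted l1 (rectilinear) norm
        "chebyshev": float(np.max(c * delta, initial=0.0)),  # weighted l-infinity (Chebyshev) norm
    }
```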

    Hilbert geometry of the Siegel disk: The Siegel-Klein disk model

    We study the Hilbert geometry induced by the Siegel disk domain, an open bounded convex set of complex square matrices of operator norm strictly less than one. This Hilbert geometry yields a generalization of the Klein disk model of hyperbolic geometry, henceforth called the Siegel-Klein disk model to differentiate it from the classical Siegel upper plane and disk domains. In the Siegel-Klein disk, geodesics are by construction always unique and Euclidean straight, allowing one to design efficient geometric algorithms and data structures from computational geometry. For example, we show how to approximate the smallest enclosing ball of a set of complex square matrices in the Siegel disk domains: we compare two generalizations of the iterative core-set algorithm of Badoiu and Clarkson (BC), in the Siegel-Poincaré disk and in the Siegel-Klein disk, and demonstrate that geometric computing in the Siegel-Klein disk allows one (i) to bypass the time-costly recentering operations to the disk origin required at each iteration of the BC algorithm in the Siegel-Poincaré disk model, and (ii) to approximate the Siegel-Klein distance quickly and numerically with guaranteed lower and upper bounds derived from nested Hilbert geometries.
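
    For readers unfamiliar with the Badoiu-Clarkson core-set algorithm referenced above, here is a minimal Euclidean sketch of its iteration. The paper's generalizations replace the Euclidean distance and straight-line interpolation with Siegel-Poincaré or Siegel-Klein geodesics; the function and variable names below are illustrative assumptions.

```python
# Minimal Euclidean sketch of the Badoiu-Clarkson (BC) core-set iteration for the
# smallest enclosing ball: repeatedly move the current center a fraction 1/(t+1)
# of the way toward the farthest input point.
import numpy as np

def bc_smallest_enclosing_ball(points, iterations=100):
    c = points[0].copy()                         # arbitrary initial center
    for t in range(1, iterations + 1):
        dists = np.linalg.norm(points - c, axis=1)
        farthest = points[np.argmax(dists)]      # farthest point from current center
        c = c + (farthest - c) / (t + 1)         # geodesic (here: straight-line) step
    return c
```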

    Subject index volumes 1–92


    Unconstrained Learning Machines

    With the use of information technology in industry, a new need has arisen: analyzing large-scale data sets and automating data analysis that was once performed by human intuition and simple analog processing machines. The new generation of computer programs now has to outperform its predecessors in detecting complex and non-trivial patterns buried in data warehouses. Improved Machine Learning (ML) techniques such as Neural Networks (NNs) and Support Vector Machines (SVMs) have shown remarkable performance on supervised learning problems over the past couple of decades (e.g., anomaly detection, classification and identification, interpolation and extrapolation).

    Nevertheless, many such techniques have ill-conditioned structures which lack adaptability for processing exotic data or very large amounts of data. Some techniques cannot even process data in an on-line fashion. Furthermore, as the processing power of computers increases, there is a pressing need for ML algorithms to perform supervised learning tasks in less time than previously required, over even larger sets of data, which means that the time and memory complexities of these algorithms must be improved.

    The aim of this research is to construct an improved type of SVM-like algorithm for tasks such as nonlinear classification and interpolation that is more scalable, error-tolerant and accurate. Additionally, this family of algorithms must be able to compute solutions within a controlled time, preferably small with respect to modern computational technologies. These new algorithms should also be versatile enough to have useful applications in engineering, meteorology or quality control.

    This dissertation introduces a family of SVM-based algorithms named Unconstrained Learning Machines (ULMs) which attempt to solve the robustness, scalability and timing issues of traditional supervised learning algorithms. ULMs are not based on geometrical analogies (e.g. SVMs) or on the replication of biological models (e.g. NNs). Their construction is strictly based on statistical considerations taken from the recently developed statistical learning theory. Like SVMs, ULMs use kernel methods extensively in order to process exotic and/or non-numerical objects stored in databases and to search for hidden patterns in data with tailored measures of similarity.

    ULMs are applied to a variety of problems in manufacturing engineering and in meteorology. The robust nonlinear nonparametric interpolation abilities of ULMs allow for the representation of sub-millimetric deformations on the surface of manufactured parts, the selection of conforming objects, and the diagnosis and modeling of manufacturing processes. ULMs also play a role in assimilating the system states of computational weather models, removing the intrinsic noise without any knowledge of the underlying mathematical models and helping establish more accurate forecasts.
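
    As a point of reference for the kernel-method machinery the abstract mentions, the sketch below shows plain RBF-kernel regularized least squares for nonlinear interpolation. This is a generic baseline under assumed parameter names, not the ULM formulation developed in the dissertation.

```python
# Generic kernel-based nonlinear interpolation baseline (RBF-kernel regularized
# least squares). Illustrative only; NOT the ULM algorithm from the dissertation.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Squared Euclidean distances between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fit_kernel_interpolant(X, y, gamma=1.0, reg=1e-6):
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)   # dual coefficients
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha      # predictor on new points
```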

    Variable-sized uncertainty and inverse problems in robust optimization

    In robust optimization, the general aim is to find a solution that performs well over a set of possible parameter outcomes, the so-called uncertainty set. In this paper, we assume that the uncertainty size is not fixed, and instead aim at finding a set of robust solutions that covers all possible uncertainty set outcomes. We refer to these problems as robust optimization with variable-sized uncertainty. We discuss how to construct smallest possible sets of min–max robust solutions and give bounds on their size. A special case of this perspective is to analyze for which uncertainty sets a nominal solution ceases to be a robust solution, which amounts to an inverse robust optimization problem. We consider this problem with a min–max regret objective and present mixed-integer linear programming formulations that can be applied to construct suitable uncertainty sets. Results on both variable-sized uncertainty and inverse problems are further supported with experimental data.
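
    For concreteness, the generic min–max and min–max regret objectives over an uncertainty set U for an uncertain linear cost vector c read as follows (standard notation, not copied from the paper):

```latex
% Min-max robust objective and min-max regret objective over an uncertainty set U:
\min_{x \in X} \; \max_{c \in \mathcal{U}} \; c^\top x
\qquad\text{and}\qquad
\min_{x \in X} \; \max_{c \in \mathcal{U}} \Bigl( c^\top x - \min_{y \in X} c^\top y \Bigr)
```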

    Graph-based Methods for Visualization and Clustering

    The amount of data that we produce and consume is larger than it has been at any point in the history of mankind, and it keeps growing exponentially. All this information, gathered in overwhelming volumes, often comes with two problematic characteristics: it is complex and deprived of semantic context. A common step to address those issues is to embed raw data in lower dimensions, by finding a mapping which preserves the similarity between data points from their original space to a new one. Measuring similarity between large sets of high-dimensional objects is, however, problematic for two main reasons: first, high-dimensional points are subject to the curse of dimensionality, and second, the number of pairwise distances between points is quadratic with respect to the number of data points. Both problems can be addressed by using nearest-neighbour graphs to understand the structure of the data. As a matter of fact, most dimensionality reduction methods use similarity matrices that can be interpreted as graph adjacency matrices. Yet, despite recent progress, dimensionality reduction is still very challenging when applied to very large datasets. Indeed, although recent methods specifically address the problem of scalability, processing datasets of millions of elements remains a very lengthy process.

    In this thesis, we propose new contributions which address the problem of scalability using the framework of Graph Signal Processing (GSP), which extends traditional signal processing to graphs. We do so motivated by the premise that graphs are well suited to represent the structure of the data. In the first part of this thesis, we look at quantitative measures for the evaluation of dimensionality reduction methods. Using tools from graph theory and Graph Signal Processing, we show that specific characteristics related to quality can be assessed by taking measures on the graph, which indirectly validates the hypothesis relating the graph to the structure of the data. The second contribution is a new method for a fast eigenspace approximation of the graph Laplacian. Using principles of GSP and random matrices, we show that an approximated eigensubspace can be recovered very efficiently, which can be used for fast spectral clustering or visualization. Next, we propose a compressive scheme to accelerate any dimensionality reduction technique. The idea is based on compressive sampling and transductive learning on graphs: after computing the embedding for a small subset of data points, we propagate the information everywhere using transductive inference. The key components of this technique are a good sampling strategy to select the subset and the application of transductive learning on graphs. Finally, we address the problem of over-discriminative feature spaces by proposing a hierarchical clustering structure combined with multi-resolution graphs. Using efficient coarsening and refinement procedures on this structure, we show that dimensionality reduction algorithms can be run on intermediate levels and up-sampled to all points, leading to a very fast dimensionality reduction method.

    For all contributions, we provide extensive experiments on both synthetic and natural datasets, including large-scale problems. This allows us to show the pertinence of our models and the validity of our proposed algorithms. Following reproducible research principles, we provide everything needed to repeat the examples and the experiments presented in this work.
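
    As background for the contributions listed above, here is a minimal sketch of the baseline pipeline they build on: a k-nearest-neighbour graph, its graph Laplacian, and a low-dimensional spectral embedding from the smallest non-trivial eigenvectors. The thesis replaces the exact eigendecomposition with faster approximations; parameter names and library choices below are assumptions, not the thesis code.

```python
# Baseline spectral embedding from a k-nearest-neighbour graph.
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def spectral_embedding(X, n_neighbors=10, n_components=2):
    W = kneighbors_graph(X, n_neighbors, mode="connectivity")
    W = 0.5 * (W + W.T)                          # symmetrize the kNN adjacency
    L = laplacian(W, normed=True).toarray()      # normalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)               # exact eigendecomposition (the slow step)
    return vecs[:, 1:n_components + 1]           # skip the trivial constant eigenvector
```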

    Subject Index Volumes 1–200
