Polyhedral Tools for Control
Polyhedral operations play a central role in constrained control. One of the most fundamental operations is projection, which is required by both polytope addition and multiplication. This thesis investigates projection and its relation to multi-parametric linear optimisation for the types of problems that are of particular interest to the control community. The first part of the thesis introduces an algorithm for the projection of polytopes in halfspace form, called Equality Set Projection (ESP). ESP has the desirable property of output sensitivity for non-degenerate polytopes: a linear number of linear programs is needed per output facet of the projection. It is demonstrated that ESP is particularly well suited to control problems, and comparative simulations are given that greatly favour ESP. Part two is an investigation into the multi-parametric linear program (mpLP). The mpLP has received considerable attention in the control literature, as certain model predictive control problems can be posed as mpLPs and thereby pre-solved, eliminating the need for online optimisation. The structure of the solution to the mpLP is studied and an approach is presented that eliminates degeneracy. This approach makes the control input continuous, preventing chattering, which is a significant problem in control with a linear cost. Four new enumeration methods are presented that have benefits for various control problems, and comparative simulations demonstrate that they outperform existing codes. The third part studies the relationship between projection and multi-parametric linear programs. It is shown that projections can be posed as mpLPs and mpLPs as projections, demonstrating the fundamental nature of both of these problems. The output of a multi-parametric linear program that has been solved for the MPC control inputs offline is a piecewise linear controller defined over a union of polyhedra.
The online work is then to determine which region the current measured state is in and to apply the appropriate linear control law. This final part introduces a new method of searching for the appropriate region by posing the problem as a nearest-neighbour search. This search can be done in logarithmic time, and we demonstrate speed increases from 20 Hz to 20 kHz for a large example system.
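The online point-location step can be sketched as a nearest-neighbour query. The sketch below is illustrative, not the thesis's algorithm: each region is represented by a hypothetical interior point (e.g. its Chebyshev centre), a k-d tree finds the nearest representative in roughly logarithmic time, and the matching affine law is evaluated. All matrices and centres here are made up for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical piecewise-affine controller: region i is represented by a
# point centres[i] and applies the law u = F[i] @ x + g[i].
centres = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
F = [np.array([[0.5, 0.0]]), np.array([[-0.5, 0.0]]), np.array([[0.0, -1.0]])]
g = [np.array([0.1]), np.array([-0.1]), np.array([0.0])]

tree = cKDTree(centres)  # search structure built offline

def control(x):
    """Locate the region of state x by nearest-neighbour search, apply its law."""
    _, i = tree.query(x)          # nearest region representative
    return F[i] @ x + g[i]

u = control(np.array([0.9, 0.1]))
```

Locating regions via their centres is only a heuristic in general; the thesis constructs a search structure for which the nearest-neighbour answer is guaranteed to identify the correct polyhedron.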
Sixteen space-filling curves and traversals for d-dimensional cubes and simplices
This article describes sixteen different ways to traverse d-dimensional space
recursively in a way that is well-defined for any number of dimensions. Each of
these traversals has distinct properties that may be beneficial for certain
applications. Some of the traversals are novel; some have been known in
principle but had not been described adequately for an arbitrary number of
dimensions; and some are already well known. This article is the first to present
them all in a consistent notation system. Furthermore, with this article, tools
are provided to enumerate points in a regular grid in the order in which they
are visited by each traversal. In particular, we cover: five discontinuous
traversals based on subdividing cubes into 2^d subcubes: Z-traversal (Morton
indexing), U-traversal, Gray-code traversal, Double-Gray-code traversal, and
Inside-out traversal; two discontinuous traversals based on subdividing
simplices into 2^d subsimplices: the Hill-Z traversal and the Maehara-reflected
traversal; five continuous traversals based on subdividing cubes into 2^d
subcubes: the Base-camp Hilbert curve, the Harmonious Hilbert curve, the Alfa
Hilbert curve, the Beta Hilbert curve, and the Butz-Hilbert curve; four
continuous traversals based on subdividing cubes into 3^d subcubes: the Peano
curve, the Coil curve, the Half-coil curve, and the Meurthe curve. All of these
traversals are self-similar in the sense that the traversal in each of the
subcubes or subsimplices of a cube or simplex, on any level of recursive
subdivision, can be obtained by scaling, translating, rotating, reflecting
and/or reversing the traversal of the complete unit cube or simplex.
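The simplest of the cube traversals above, the Z-traversal (Morton indexing), can be sketched directly: the Morton index of a grid point interleaves the bits of its d coordinates, and visiting points in increasing index order produces the Z-order. The bit-ordering convention below (first coordinate supplies the least significant bit of each group) is one common choice, not necessarily the article's.

```python
def morton_index(coords, bits):
    """Interleave the bits of d integer coordinates into one Morton index.

    The Z-traversal visits the points of a (2**bits)^d grid in increasing
    order of this index.  coords[0] supplies the least significant bit of
    each interleaved group of d bits.
    """
    d = len(coords)
    index = 0
    for b in range(bits):
        for k, c in enumerate(coords):
            index |= ((c >> b) & 1) << (b * d + k)
    return index

# Visit a 4x4 grid (d = 2, bits = 2) in Z-order:
order = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda p: morton_index(p, 2))
```

The first four points visited form the lower-left 2x2 subcube, reflecting the self-similarity described above: the traversal recurses into each of the 2^d subcubes in turn.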
Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference
We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF
inference problems. The core of our method is a very efficient bounding
procedure, which combines scalable semidefinite programming (SDP) and a
cutting-plane method for seeking violated constraints. In order to further
speed up the computation, several strategies have been exploited, including
model reduction, warm start and removal of inactive constraints.
We analyze the performance of the proposed method under different settings,
and demonstrate that our method either outperforms or performs on par with
state-of-the-art approaches. Especially when the connectivities are dense or
when the relative magnitudes of the unary costs are low, we achieve the best
reported results. Experiments show that the proposed algorithm achieves better
approximation than the state-of-the-art methods within a variety of time
budgets on challenging non-submodular MAP-MRF inference problems.
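The objective that such solvers bound can be stated concretely. The toy model below is illustrative only (the graph, labels, and costs are invented): MAP inference on a pairwise MRF minimises the total unary plus pairwise cost over all joint labelings, which brute force can do only for tiny instances; a B&C method prunes this search with bounds instead.

```python
from itertools import product

# A tiny pairwise MRF (illustrative numbers, including a non-submodular
# pairwise table for edge (a, b)).
labels = [0, 1]
unary = {                # unary[node][label]
    'a': [0.0, 1.0],
    'b': [0.5, 0.2],
    'c': [1.0, 0.0],
}
pairwise = {             # pairwise[(u, v)][(label_u, label_v)]
    ('a', 'b'): {(0, 0): 0.0, (0, 1): 0.6, (1, 0): 0.6, (1, 1): 0.9},
    ('b', 'c'): {(0, 0): 0.3, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.3},
}
nodes = sorted(unary)

def energy(assignment):
    """Total MRF energy of a joint labeling (lower is better)."""
    e = sum(unary[v][assignment[v]] for v in nodes)
    e += sum(costs[(assignment[u], assignment[v])]
             for (u, v), costs in pairwise.items())
    return e

# Exhaustive MAP inference over all |labels|^|nodes| assignments.
best = min((dict(zip(nodes, labs))
            for labs in product(labels, repeat=len(nodes))),
           key=energy)
```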
The Polytope of Optimal Approximate Designs
For many statistical experiments, there exists a multitude of optimal
designs. If we consider models with uncorrelated observations and adopt the
approach of approximate experimental design, the set of all optimal designs
typically forms a multivariate polytope. In this paper, we mathematically
characterize the polytope of optimal designs. In particular, we show that its
vertices correspond to the so-called minimal optimum designs. Consequently, we
compute the vertices for several classical multifactor regression models of the
first and the second degree. To this end, we use software tools based on
rational arithmetic; therefore, the computed list is accurate and complete. The
polytope of optimal experimental designs, and its vertices, can be applied in
several ways. For instance, it can aid in constructing cost-efficient and
efficient exact designs.
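Since the vertices of the polytope are the minimal optimum designs, every optimal design is a convex combination of them, and membership in the polytope can be checked with a small feasibility LP. The sketch below uses a square as a stand-in for a computed vertex list; the numbers are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def in_polytope(point, vertices):
    """Check whether `point` is a convex combination of `vertices`.

    Solves the feasibility LP: find weights w >= 0 with sum(w) = 1 and
    vertices.T @ w = point.  The vertices stand in for the minimal
    optimum designs of the text.
    """
    v = np.asarray(vertices, dtype=float)
    p = np.asarray(point, dtype=float)
    n = v.shape[0]
    A_eq = np.vstack([v.T, np.ones(n)])        # combination + normalisation
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)
    return res.status == 0                      # 0 = feasible (optimal)

# A square with vertices at (+-1, +-1) as a toy "optimal design polytope":
verts = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```

The paper works in exact rational arithmetic; the floating-point LP here is only a quick numerical check.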
On-line force capability evaluation based on efficient polytope vertex search
Ellipsoid-based manipulability measures are often used to characterize the force/velocity task-space capabilities of robots. While computationally simple, this approach is a coarse approximation that underestimates the true capabilities. Force/velocity polytopes appear to be a more appropriate representation of the robot's task-space capabilities. However, due to the computational complexity of the associated vertex search problem, the polytope approach is mostly restricted to offline use, e.g. as a tool aiding robot mechanical design, robot placement in the workspace, and offline trajectory planning. In this paper, a novel on-line polytope vertex search algorithm is proposed. It exploits the parallelotope geometry of the actuator constraints. The proposed algorithm significantly reduces the complexity and computation time of the vertex search problem in comparison to commonly used algorithms. In order to highlight the on-line capability of the proposed algorithm and its potential for robot control, a challenging experiment with two collaborating Franka Emika Panda robots, carrying a load of 12 kilograms, is presented. In this experiment, the load distribution is adapted on-line, as a function of the configuration-dependent task-space force capability of each robot, in order to avoid, as much as possible, the saturation of their capacities.
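The geometric fact the algorithm exploits can be illustrated with a brute-force sketch: actuator torque limits form a box (a parallelotope), the achievable task-space forces are its image under a linear map, and every vertex of that image is the image of a box vertex. The Jacobian and limits below are invented for illustration, and the 2^n enumeration shown is exactly what the paper's on-line algorithm is designed to avoid.

```python
import numpy as np
from itertools import product

def force_polytope_vertices(J, tau_min, tau_max):
    """Candidate vertices of the task-space force polytope.

    With joint torques tau constrained to the box [tau_min, tau_max],
    achievable end-effector forces are f = pinv(J.T) @ tau.  Vertices of
    the image polytope are images of box vertices, so enumerating the
    2^n torque corners yields a superset of the true vertices.
    """
    JT_inv = np.linalg.pinv(J.T)
    corners = product(*zip(tau_min, tau_max))   # all torque-limit corners
    return [JT_inv @ np.array(c) for c in corners]

# Illustrative 2-joint planar Jacobian and unit torque limits:
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
verts = force_polytope_vertices(J, [-1.0, -1.0], [1.0, 1.0])
```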
Dense subgraph mining with a mixed graph model
In this paper we introduce a graph clustering method based on
dense bipartite subgraph mining. The method applies a mixed
graph model (both standard and bipartite) in a three-phase
algorithm. First a seed mining method is applied to find seeds
of clusters, the second phase consists of refining the seeds,
and in the third phase vertices outside the seeds are clustered.
The method is able to detect overlapping clusters, can handle
outliers, and is applicable without restrictions on the degrees of
vertices or the size of the clusters. The running time of the
method is polynomial. A theoretical result is introduced on
density bounds of bipartite subgraphs with size and local
density conditions. Test results on artificial datasets and
social interaction graphs are also presented.
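A standard way to find dense seeds, given here only as a generic illustration and not as the paper's three-phase algorithm, is Charikar's greedy peeling: repeatedly delete a minimum-degree vertex and keep the densest intermediate subgraph, measuring density as edges per vertex.

```python
def densest_subgraph(edges):
    """Greedy 2-approximation of the densest subgraph (Charikar peeling).

    Repeatedly removes a minimum-degree vertex and remembers the densest
    intermediate subgraph, with density = |E| / |V|.  Runs in polynomial
    time; a generic seed-finding illustration only.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    m = len(edges)
    best_density, best_nodes = 0.0, set(adj)
    while adj:
        density = m / len(adj)
        if density > best_density:
            best_density, best_nodes = density, set(adj)
        u = min(adj, key=lambda x: len(adj[x]))   # minimum-degree vertex
        m -= len(adj[u])                           # its edges disappear
        for w in adj.pop(u):
            adj[w].discard(u)
    return best_nodes, best_density

# A K4 clique with a pendant vertex 5: the clique is the densest part.
nodes, density = densest_subgraph(
    [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)])
```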
Global Optimization of the Maximum K-Cut Problem
In graph theory, the maximum k-cut (max-k-cut) problem is a representative problem of the class of NP-hard combinatorial optimization problems. It arises in many industrial applications, and the objective of this problem is to partition the vertices of a given graph into at most k parts such that the total weight of the cut is maximized. The methods proposed in the literature to optimally solve the max-k-cut usually employ the associated semidefinite programming (SDP) relaxation in a branch-and-bound framework. In comparison with the linear programming (LP) relaxation, the SDP relaxation is stronger but suffers from high CPU times. Therefore, methods based on SDP cannot solve large problems. This thesis introduces
an efficient branch-and-bound method to solve the max-k-cut problem by using tightened SDP and LP relaxations.
This thesis presents three approaches to improve the solutions of the problem. The first approach focuses on identifying relevant classes of inequalities to tighten the relaxations of the max-k-cut. This approach carries out an experimental study of four classes of inequalities from the literature: clique, general clique, wheel and bicycle wheel. In order to include these inequalities, we employ a cutting plane algorithm (CPA) to add only the most important inequalities in practice and we design several separation routines to find violations in a relaxed solution. Computational results suggest that the wheel inequalities are the strongest by far. Moreover, the inclusion of these
inequalities in the max-k-cut improves the bound of the SDP formulation by more than 2%. The second approach introduces the SDP-based constraints to strengthen the LP relaxation. Moreover, the CPA is improved by exploiting the early-termination technique of an interior-point method.
Computational results show that the LP relaxation with the SDP-based inequalities outperforms the SDP relaxations for many instances, especially for a large number of partitions (k ≥ 7). The third approach investigates the branch-and-bound method using both previous approaches. Four components of the branch-and-bound are considered. First, four heuristic methods are presented to find a feasible solution: the iterative clustering heuristic, the multiple operator heuristic, the variable neighborhood search, and the greedy randomized adaptive search procedure. The second component analyzes the dichotomic and polytomic strategies to split a subproblem. The third component studies five branching rules. Finally, for the node selection, we consider the following
strategies: best-first search, depth-first search, and breadth-first search. For each component, we provide computational tests for different values of k. Computational results show that the proposed exact method is able to uncover many solutions. Each one of these three approaches contributed to the design of an efficient method to solve the max-k-cut problem. Moreover, the proposed approaches can be extended to solve generic mixed-integer SDP problems.
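The max-k-cut objective itself is easy to state in code. The sketch below solves a toy instance exactly by enumerating all k^n part assignments; the weighted triangle is invented for illustration, and branch-and-bound methods like the one described above prune this search with SDP and LP bounds rather than enumerating it.

```python
from itertools import product

def max_k_cut(weights, n, k):
    """Exact max-k-cut by enumeration (tiny instances only).

    weights maps an edge (u, v), with vertices 0..n-1, to its weight.
    The cut value of an assignment is the total weight of edges whose
    endpoints land in different parts.
    """
    def cut_value(assign):
        return sum(w for (u, v), w in weights.items()
                   if assign[u] != assign[v])
    return max(cut_value(a) for a in product(range(k), repeat=n))

# Weighted triangle: with k = 2 some edge must stay uncut, so the best
# 2-cut sacrifices the lightest edge; with k = 3 every edge can be cut.
w = {(0, 1): 3.0, (1, 2): 2.0, (0, 2): 1.0}
best2 = max_k_cut(w, 3, 2)
best3 = max_k_cut(w, 3, 3)
```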