179 research outputs found
AAR-based decomposition algorithm for non-linear convex optimisation
Decomposition techniques for computational limit analysis
Limit analysis is relevant in many practical engineering areas, such as the design of mechanical structures or the analysis of soil mechanics. The theory of limit analysis assumes a rigid, perfectly plastic material to model the collapse of a solid that is subjected to a static load distribution.
Within this context, the problem of limit analysis considers a continuum subjected to a fixed force distribution consisting of both volume and surface loads. The objective is then to obtain the maximum multiple of this force distribution that causes the collapse of the body; this multiple is usually called the collapse multiplier. It can be obtained analytically by solving an infinite-dimensional nonlinear optimisation problem. Thus the computation of the multiplier requires two steps: the first is to discretise the corresponding analytical problem by introducing finite-dimensional spaces, and the second is to solve a nonlinear optimisation problem, which represents the major difficulty and challenge in the numerical solution process.
Solving this optimisation problem, which may become very large and computationally expensive in three-dimensional problems, is the second important step. Recent techniques have allowed scientists to determine upper and lower bounds of the load factor under which the structure will collapse. Despite the attractiveness of these results, their application to practical examples is still hampered by the size of the resulting optimisation problem. A remedy is to use decomposition methods and to parallelise the corresponding optimisation problem.
The aim of this work is to present a decomposition technique that can reduce the memory requirements and computational cost of this type of problem. For this purpose, we exploit an important feature of the underlying optimisation problem: the objective function contains a single scalar variable. The main contributions of the thesis are rewriting the constraints of the problem as the intersection of appropriate sets, and proposing efficient algorithmic strategies to iteratively solve the decomposition algorithm.
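In discretised lower-bound limit analysis, the optimisation problem has, schematically, the following form (generic notation, not necessarily the thesis's exact formulation); the single scalar variable in the objective is the load multiplier itself:

```latex
\max_{\lambda,\;\boldsymbol{\sigma}} \; \lambda
\qquad \text{subject to} \qquad
\mathbf{B}^{\mathsf{T}} \boldsymbol{\sigma} = \lambda\, \mathbf{f},
\qquad \boldsymbol{\sigma} \in \mathcal{B},
```

where $\boldsymbol{\sigma}$ collects the discretised stresses, $\mathbf{B}^{\mathsf{T}}\boldsymbol{\sigma}=\lambda\,\mathbf{f}$ expresses equilibrium with the scaled loads, and $\mathcal{B}$ is the convex set defined by the yield condition. The convex but nonlinear constraint set is what the decomposition rewrites as an intersection of simpler sets.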
AAR-based decomposition method for lower-bound limit analysis
Despite the recent progress in optimisation techniques, finite-element stability analysis of realistic three-dimensional problems is still hampered by the size of the resulting optimisation problem. Current solvers may take a prohibitive computational time, if they give a solution at all. The possible remedies are the design of adaptive remeshing techniques, and the decomposition of the system of equations or of the optimisation problem. This paper concentrates on the last approach and presents an algorithm especially suited for limit analysis. Optimisation problems in limit analysis are in general convex but non-linear, a fact that renders the design of decomposition techniques especially challenging. The efficiency of general approaches such as Benders or Dantzig–Wolfe is not always satisfactory and strongly depends on the structure of the optimisation problem. This work presents a new method that is based on rewriting the feasibility region of the global optimisation problem as the intersection of two subsets. By resorting to the averaged alternating reflections (AAR) method in order to find the distance between the sets, the optimisation problem is successfully solved in a decomposed manner. Some representative examples illustrate the application of the method and its efficiency with respect to other well-known decomposition algorithms.
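The two-set feasibility idea behind the AAR scheme can be sketched as follows; the sets here are two lines in the plane, chosen purely for illustration and not the limit-analysis sets from the paper:

```python
# Averaged alternating reflections (AAR) for two convex sets:
#   x_{k+1} = (x_k + R_B(R_A(x_k))) / 2,  with  R_S(x) = 2 P_S(x) - x.
# Illustrative sets: A = {y = 0} (the x-axis), B = {y = x},
# so A ∩ B = {(0, 0)}.

def project_A(p):
    """Projection onto the x-axis."""
    x, y = p
    return (x, 0.0)

def project_B(p):
    """Projection onto the line y = x."""
    x, y = p
    m = (x + y) / 2.0
    return (m, m)

def reflect(project, p):
    """Reflection R_S(x) = 2 P_S(x) - x."""
    q = project(p)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

def aar(p, iters=60):
    for _ in range(iters):
        r = reflect(project_B, reflect(project_A, p))
        p = ((p[0] + r[0]) / 2.0, (p[1] + r[1]) / 2.0)
    # When the sets intersect, the "shadow" P_A(x_k) tends to A ∩ B.
    return project_A(p)

sol = aar((3.0, 5.0))  # approaches the intersection point (0, 0)
```

Only projections onto each set separately are ever needed, which is what makes the scheme attractive for decomposition: each subset can be handled by its own (smaller) solver.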
The complexity of general-valued CSPs seen from the other side
The constraint satisfaction problem (CSP) is concerned with homomorphisms
between two structures. For CSPs with restricted left-hand side structures, the
results of Dalmau, Kolaitis, and Vardi [CP'02], Grohe [FOCS'03/JACM'07], and
Atserias, Bulatov, and Dalmau [ICALP'07] establish the precise borderline of
polynomial-time solvability (subject to complexity-theoretic assumptions) and
of solvability by bounded-consistency algorithms (unconditionally) as bounded
treewidth modulo homomorphic equivalence.
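The homomorphism view of the CSP can be made concrete with a small brute-force check for digraphs (an illustrative sketch, exponential in the size of the left-hand structure):

```python
# Brute-force test for a digraph homomorphism h: V(G) -> V(H),
# i.e. a map with (h(u), h(v)) in E(H) for every (u, v) in E(G).
# The decision problem CSP(G, H) asks exactly whether such a map exists.
from itertools import product

def has_homomorphism(vg, eg, vh, eh):
    eh = set(eh)
    for h in product(vh, repeat=len(vg)):
        m = dict(zip(vg, h))
        if all((m[u], m[v]) in eh for (u, v) in eg):
            return True
    return False

# 2-colourability is a homomorphism into an edge (both orientations):
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
k2 = [(0, 1), (1, 0)]
ok = has_homomorphism([0, 1, 2, 3], c4, [0, 1], k2)   # C4 is bipartite
bad = has_homomorphism([0, 1, 2], [(0, 1), (1, 2), (2, 0)], [0, 1], k2)  # C3 is not
```

Restricting the left-hand side means restricting which structures G may appear; bounded treewidth of G (up to homomorphic equivalence) is what makes this search polynomial-time.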
The general-valued constraint satisfaction problem (VCSP) is a generalisation
of the CSP concerned with homomorphisms between two valued structures. For
VCSPs with restricted left-hand side valued structures, we establish the
precise borderline of polynomial-time solvability (subject to
complexity-theoretic assumptions) and of solvability by the k-th level of the
Sherali-Adams LP hierarchy (unconditionally). We also obtain results on related
problems concerned with finding a solution and recognising the tractable cases;
the latter has an application in database theory.Comment: v2: Full version of a FOCS'18 paper; improved presentation and small
correction
Optimization of dispersive coefficients in the homogenization of the wave equation in periodic structures
We study dispersive effects of wave propagation in periodic media, which can be modelled by adding a fourth-order term in the homogenized equation. The corresponding fourth-order dispersive tensor is called the Burnett tensor, and we numerically optimize its values in order to minimize or maximize dispersion. More precisely, we consider the case of a two-phase composite medium with an 8-fold symmetry assumption on the periodicity cell in two space dimensions. We obtain upper and lower bounds for the dispersive properties, along with optimal microgeometries.
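Schematically (signs and scalings vary with the chosen convention, and this is not the paper's exact statement), the higher-order homogenized model augments the usual effective wave equation with a fourth-order Burnett term:

```latex
\partial_{tt} v \;-\; \operatorname{div}\!\left(A^{\ast}\nabla v\right)
\;-\; \varepsilon^{2}\, D^{\ast} \,{:}\, \nabla^{4} v \;=\; f ,
```

where $A^{\ast}$ is the classical homogenized tensor and the fourth-order tensor $D^{\ast}$ (the Burnett tensor) governs dispersion; optimising the microgeometry of the composite changes $D^{\ast}$ and hence the dispersive behaviour.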
On modeling and optimisation of the air traffic flow management problem with en-route capacities
Master of Science in Mathematics, Statistics and Computer Science. University of KwaZulu-Natal, Durban, 2016.
The air transportation industry has witnessed an upsurge in the past ten years, with the number of passengers swelling exponentially. This development has led to a high demand for airport and airspace usage, which consequently places an enormous strain on the aviation industry of a given country. Although an increase in airport capacity would be the logical way to meet this demand, factors such as poor weather conditions and other unforeseen events have made it difficult, if not impossible, to do so. In fact, there is a high probability of capacity reduction in most of the airports and air sectors within these regions. It is no surprise, therefore, that most countries experience congestion almost daily. Congestion interrupts activities in the air transportation network, and this has dire consequences for the air traffic control system as well as the nation's economy, owing to the significant costs incurred by airlines and passengers.
This is against a background where most air traffic managers are met with the challenge of finding optimal scheduling strategies that can minimise delay costs. Current practice and research have shown that there is a high possibility of reducing the effects of congestion on the air traffic control system, as well as the total delay costs incurred, to a minimum through optimal control of flights. Optimal control of these flights can be achieved by assigning either ground-holding or airborne delays, together with any other control actions that mitigate congestion. This exposes a need for adequate air traffic flow management, given that it plays a crucial role in alleviating delay costs.
Air Traffic Flow Management (ATFM) is defined as a set of strategic processes that reduce
air traffic delays and congestion problems. More precisely, it is the regulation of air traffic
in such a way that the available airport and airspace capacity are utilised efficiently without
being exceeded when handling traffic. The problem of managing air traffic so as to ensure
efficient and safe flow of aircraft throughout the airspace is often referred to as the Air
Traffic Flow Management Problem (ATFMP).
This thesis provides a detailed insight into the ATFMP, wherein the existing approaches, methodologies and optimisation techniques that have been (and continue to be) used to address the ATFMP are critically examined. Particular attention is paid to optimisation models for airport capacity and airspace allocation, which are discussed extensively as they depict what is obtainable in the air transportation system. Furthermore, the thesis attempts a comprehensive and up-to-date review that feeds extensively off the literature on the ATFMP. The instances in this literature were mainly derived from North America, Europe and Africa.
Having reviewed current ATFM practices and the existing optimisation models and approaches for solving the ATFMP, the thesis extends the generalised basic model to account for additional modeling variations. Furthermore, deterministic integer programming formulations were developed for reducing air traffic delays and congestion, based on the sector- and path-based approaches already proposed for incorporating rerouting options into the basic ATFMP model. The formulation not only takes into account all the flight phases but also solves for an optimal synthesis of other flow management activities, including rerouting decisions, flight cancellation and penalisation. The claims from the basic ATFMP model were validated on artificially constructed datasets and generated instances. The computational performance of the basic and modified ATFMP reveals that the resulting solutions are completely integral, and that an optimal solution can be obtained within a short computational time, thereby affirming that these models can be used for effective decision making and efficient management of air traffic flow.
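The ground-holding idea underlying such formulations can be illustrated with a toy deterministic model; the flights, costs, capacities and the brute-force search below are invented for illustration, standing in for the thesis's integer programs and an actual IP solver:

```python
# Toy ground-holding problem (illustrative data, not the thesis's model):
# choose an integer ground delay d_f for each flight so that arrivals in
# every period respect airport capacity, minimising the total delay cost
# sum_f c_f * d_f.  Exhaustive search replaces an IP solver here.
from itertools import product

flights = {"F1": 0, "F2": 0, "F3": 1}   # scheduled arrival period
cost = {"F1": 3, "F2": 1, "F3": 2}      # cost per period of ground delay
capacity = {0: 1, 1: 1, 2: 1, 3: 3}     # arrivals allowed per period
max_delay = 3

def feasible(delays):
    arrivals = {}
    for f, d in delays.items():
        t = flights[f] + d
        arrivals[t] = arrivals.get(t, 0) + 1
    return (all(t in capacity for t in arrivals) and
            all(n <= capacity[t] for t, n in arrivals.items()))

best, best_cost = None, float("inf")
for ds in product(range(max_delay + 1), repeat=len(flights)):
    delays = dict(zip(flights, ds))
    c = sum(cost[f] * d for f, d in delays.items())
    if c < best_cost and feasible(delays):
        best, best_cost = delays, c
# Cheapest repair: hold the low-cost flight F2 on the ground for 2 periods.
```

In the real formulations, the decision variables are binary ("flight f has arrived by period t"), which is what yields the integral solutions reported above.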
The Aggregating Algorithm and Regression
Our main interest is in the problem of making predictions in the online mode of learning where at every step in time a signal arrives and a prediction needs to be made before the corresponding outcome arrives. Loss is suffered if the prediction and outcome do not match perfectly. In the prediction with expert advice framework, this protocol is augmented by a pool of experts that produce their predictions before we have to make ours. The Aggregating Algorithm (AA) is a technique that optimally merges these experts so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool.
The AA was applied to the problem of regression, where outcomes are continuous real numbers, to obtain the AA for Regression (AAR) and its kernel version, KAAR. On typical datasets, KAAR's empirical performance is not as good as that of Kernel Ridge Regression (KRR), a popular regression method. KAAR performs better than KRR only when the data is corrupted with a lot of noise or contains severe outliers. To alleviate this, we introduce methods that are a hybrid between KRR and KAAR. Empirical experiments suggest that, in general, these new methods perform as well as or better than both KRR and KAAR.
In the second part of this dissertation we deal with a more difficult problem: we allow the dependence of outcomes on signals to change with time. To handle this we propose two new methods, WeCKAAR and KAARCh. WeCKAAR is a simple modification of one of our methods from the first part of the dissertation that includes decaying weights. KAARCh is an application of the AA to the case where the experts are all the predictors that can change with time. We show that KAARCh suffers a cumulative loss that is almost as good as that of any expert that does not change very rapidly. Empirical results on data with changing dependencies demonstrate that WeCKAAR and KAARCh perform well in practice and are considerably better than Kernel Ridge Regression.
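The flavour of the AAR prediction rule can be seen in a scalar-signal sketch. This is a simplified illustration, not KAAR itself: it follows the Vovk-Azoury-Warmuth-style forecaster, in which the current signal also enters the regularised denominator, shrinking predictions and guarding against noise and outliers:

```python
# Online prediction with scalar signals x_t and outcomes y_t (toy sketch).
# Ridge-style rule:  y_hat = x * S / (a + Q_past)
# AAR-style rule:    y_hat = x * S / (a + Q_past + x^2)   <- extra x^2 term
# where S = sum of past x*y and Q_past = sum of past x^2.

def ridge_predict(xs, ys, x_new, a=1.0):
    num = sum(x * y for x, y in zip(xs, ys))
    den = a + sum(x * x for x in xs)
    return x_new * num / den

def aar_predict(xs, ys, x_new, a=1.0):
    num = sum(x * y for x, y in zip(xs, ys))
    den = a + sum(x * x for x in xs) + x_new * x_new  # current signal too
    return x_new * num / den

xs, ys = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]   # data roughly on y = x
p_ridge = ridge_predict(xs, ys, 2.0)
p_aar = aar_predict(xs, ys, 2.0)            # always shrunk towards zero
```

The extra regularisation is what makes AAR-type predictions more conservative than ridge-type ones, consistent with KAAR winning mainly on noisy or outlier-ridden data.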
Mathematical optimization methods for aircraft conflict resolution in air traffic control
Air traffic control is a very dynamic and heavily constrained environment in which many decisions need to be taken over short periods of time and under uncertainty. Adopting automation under such circumstances can be a crucial initiative to reduce controller workload and improve airspace usage and capacity. Traditional methods for air traffic control have been used exhaustively in the last decades and are reaching their limits; therefore, automated approaches are receiving significant and growing attention. In this thesis, the focus is on obtaining optimal aircraft trajectories that ensure flight safety in the short term by solving optimization problems.
During cruise stage, separation conditions require a minimum of 5 Nautical Miles (NM) horizontally or 1000 feet (ft) vertically between any pair of aircraft. A conflict between two or more aircraft is a loss of separation among these aircraft. Air traffic networks are organized in flight levels which are separated by at least 1000 ft, hence during cruise stage, most conflicts occur among aircraft flying at the same flight level. This thesis presents several mathematical formulations to address the aircraft conflict resolution problem and its variants.
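The separation conditions just stated translate directly into a pairwise conflict test; the coordinate convention below (horizontal positions in NM, altitude in ft) is an assumption for illustration:

```python
# Pairwise separation check per the stated standards: two aircraft are
# separated if they are at least 5 NM apart horizontally OR at least
# 1000 ft apart vertically; a conflict is a loss of both separations.
import math

H_SEP_NM = 5.0
V_SEP_FT = 1000.0

def in_conflict(a, b):
    """a, b: (x_nm, y_nm, altitude_ft) positions of two aircraft."""
    horiz = math.hypot(a[0] - b[0], a[1] - b[1])
    vert = abs(a[2] - b[2])
    return horiz < H_SEP_NM and vert < V_SEP_FT

same_level = in_conflict((0, 0, 35000), (3, 0, 35000))  # 3 NM, same level
stacked = in_conflict((0, 0, 35000), (3, 0, 36000))     # 1000 ft apart
```

As the second case shows, aircraft on adjacent flight levels are separated vertically even when horizontally close, which is why most cruise conflicts arise at the same flight level.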
The core contribution of this research is the development of novel mixed integer programming models for the aircraft conflict resolution problem. New mathematical optimization formulations for the deterministic aircraft conflict resolution problem are analyzed and exact methods are developed. Building on this framework, richer formulations capable of accounting for aircraft trajectory prediction uncertainty and trajectory recovery are proposed.
Results suggest that the formulations presented in this thesis are efficient and competitive with state-of-the-art models, and that they can provide an alternative solution to possibly fill some of the gaps currently present in the literature. Furthermore, the results obtained demonstrate the impact of these models in solving very dense airspace scenarios and their competitiveness with state-of-the-art formulations, without resorting to variable discretization or non-linear components.