20 research outputs found
Piecewise Parallel Optimal Algorithm
This chapter studies a new optimal algorithm that can be implemented in a piecewise parallel manner onboard spacecraft, where the capacity of onboard computers is limited. The proposed algorithm contains two phases. The predicting phase handles open-loop state-trajectory optimization with a simplified system model over an evenly discretized time interval. The tracking phase performs closed-loop optimal tracking control of the optimal reference trajectory with the full system model subject to real space perturbations, using a finite receding-horizon control method. The optimal control problems in both phases are solved by a direct collocation method based on the discretized Hermite–Simpson scheme with coincident nodes. By exploiting the convergence of the system error, the current closed-loop tracking interval and the next open-loop predicting interval are processed simultaneously. Two cases are simulated to validate the effectiveness of the proposed algorithm. The numerical results show that the proposed parallel optimal algorithm is effective for optimal control problems involving complex nonlinear dynamic systems in the aerospace engineering field.
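The Hermite–Simpson direct collocation named above can be sketched in a few lines. The following is a minimal illustration on a hypothetical double-integrator stand-in for the simplified system model, with a minimum-control-effort rest-to-rest transfer; it is not the chapter's implementation, and the dynamics, horizon and cost are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in dynamics: a double integrator x' = [v, u],
# playing the role of the simplified system model in the predicting phase.
def f(x, u):
    return np.array([x[1], u])

N = 10                       # evenly discretized intervals over t in [0, 1]
h = 1.0 / N

def unpack(z):
    x = z[:2 * (N + 1)].reshape(N + 1, 2)   # states at the nodes
    u = z[2 * (N + 1):]                     # controls at the nodes
    return x, u

def defects(z):
    # Hermite-Simpson defect constraints: Simpson quadrature of the
    # dynamics over each interval, with the midpoint state taken from
    # the Hermite interpolant of the endpoint states.
    x, u = unpack(z)
    d = []
    for k in range(N):
        fk, fk1 = f(x[k], u[k]), f(x[k + 1], u[k + 1])
        xm = 0.5 * (x[k] + x[k + 1]) + (h / 8.0) * (fk - fk1)
        um = 0.5 * (u[k] + u[k + 1])
        d.append(x[k + 1] - x[k] - (h / 6.0) * (fk + 4.0 * f(xm, um) + fk1))
    return np.concatenate(d)

def boundary(z):
    # rest-to-rest transfer from position 0 to position 1
    x, _ = unpack(z)
    return np.concatenate([x[0] - [0.0, 0.0], x[-1] - [1.0, 0.0]])

def cost(z):
    _, u = unpack(z)
    return h * np.sum(u ** 2)               # minimum control effort

z0 = np.zeros(2 * (N + 1) + (N + 1))
res = minimize(cost, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x_opt, u_opt = unpack(res.x)
```

For linear dynamics the defects are linear in the decision vector, so the transcription is a quadratic program; a nonlinear spacecraft model would make it a general NLP solved by the same machinery.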
Canonical Duality Theory for Global Optimization problems and applications
The canonical duality theory is studied through a discussion of a general global optimization problem and applications to fundamentally important problems. This general problem is a formulation of the minimization problem with inequality constraints, where the objective function and constraints may be any convex or nonconvex functions satisfying certain decomposition conditions. It covers convex problems, mixed-integer programming problems and many other nonlinear programming problems. The three main parts of the canonical duality theory are the canonical dual transformation, the complementary-dual principle and the triality theory. The complementary-dual principle is further developed; it conventionally states that each critical point of the canonical dual problem corresponds to a KKT point of the primal problem, with the two sharing the same function value. The new result emphasizes that there exists a one-to-one correspondence between KKT points of the dual problem and those of the primal problem, and each pair of corresponding KKT points shares the same function value, which implies that there is truly no duality gap between the canonical dual problem and the primal problem. The triality theory reveals insightful information about global and local solutions. It is shown that as long as the global optimality condition holds, the primal problem is equivalent to a convex problem in the dual space, which can be solved efficiently by existing convex methods; even if the condition does not hold, the convex problem still provides a lower bound at least as good as that given by Lagrangian relaxation. It is also shown that, by examining the canonical dual problem, the hidden convexity of the primal problem is easily observable. The canonical duality theory is then applied to three fundamentally important problems. The first is the spherically constrained quadratic problem, also referred to as the trust region subproblem.
The canonical dual problem is one-dimensional, and it is proved that the primal problem, whether its objective function is convex or nonconvex, is equivalent to a convex problem in the dual space. Moreover, conditions are found that delineate the boundary separating instances into the “hard case” and the “easy case”. A canonical primal-dual algorithm is developed that efficiently solves the problem, including the “hard case”, and can be used as a unified method for similar problems. The second is the binary quadratic problem, a fundamental problem in discrete optimization. The discussion focuses on lower bounds and analytically solvable cases, which are obtained by analyzing the canonical dual problem with perturbation techniques. The third is a general nonconvex problem with log-sum-exp functions and quartic polynomials. It arises widely in engineering science and can be used to approximate nonsmooth optimization problems. The work shows that such problems can still be solved efficiently via the canonical duality approach, even when they are nonconvex and nonsmooth.
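The one-dimensional dual of the trust region subproblem can be sketched concretely. The instance below is a hypothetical illustration (an indefinite 2×2 quadratic, a secular-equation root find, a simple doubling bracket), not the thesis's algorithm; it covers only the "easy case", while the "hard case" the abstract refers to requires the extra treatment developed there.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical trust-region subproblem: min 0.5 x^T Q x - c^T x, ||x|| <= r,
# with a nonconvex (indefinite) Q. The dual variable sigma is a scalar,
# so the dual problem is one-dimensional.
Q = np.array([[-2.0, 0.0], [0.0, 1.0]])
c = np.array([1.0, 1.0])
r = 2.0

lam, V = np.linalg.eigh(Q)      # eigenvalues sorted ascending
b = V.T @ c                     # c expressed in the eigenbasis of Q

def norm_x(sigma):
    # ||x(sigma)|| for the dual stationarity map x(sigma) = (Q + sigma I)^{-1} c
    return np.linalg.norm(b / (lam + sigma))

# On sigma > -lambda_min, ||x(sigma)|| decreases monotonically from +inf
# to 0 (the "easy case": c has a component along the bottom eigenvector),
# so the boundary solution is the root of ||x(sigma)|| = r.
lo = -lam[0] + 1e-8
hi = -lam[0] + 1.0
while norm_x(hi) > r:           # crude doubling bracket for the root
    hi *= 2.0
sigma = brentq(lambda s: norm_x(s) - r, lo, hi)
x = V @ (b / (lam + sigma))     # global minimizer, on the boundary
```

Even though the primal objective is nonconvex here (Q has a negative eigenvalue), the scalar dual search is well-behaved, which is the hidden convexity the abstract describes.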
Spacecraft Trajectory Optimization: A Review of Models, Objectives, Approaches and Solutions
This article is a survey on solving spacecraft trajectory optimization problems. The solution process is decomposed into four key steps: mathematical modeling of the problem, defining the objective functions, developing an approach, and obtaining the solution. Several subcategories of each step are identified and described. Subsequently, important classifications and their characteristics are discussed. Finally, guidance on how to choose an element of each step for a given problem is provided.
Advanced correlation-based character recognition applied to the Archimedes Palimpsest
The Archimedes Palimpsest is a manuscript containing the partial text of seven treatises by Archimedes that were copied onto parchment and bound in the tenth century AD. This work is aimed at providing tools that allow scholars of ancient Greek mathematics to retrieve as much information as possible from images of the remaining degraded text. A correlation pattern recognition (CPR) system has been developed to recognize distorted versions of Greek characters in problematic regions of the palimpsest imagery, which have been obscured by damage from mold and fire, overtext, and natural aging. Feature vectors for each class of characters are constructed using a series of spatial correlation algorithms and corresponding performance metrics. Principal components analysis (PCA) is employed prior to classification to remove features corresponding to filtering schemes that performed poorly for the spatial characteristics of the selected region of interest (ROI). A probability is then assigned to each class, forming a character probability distribution based on relative distances from the class feature vectors to the ROI feature vector in principal component (PC) space. However, the current CPR system does not produce a single classification decision, as is common in most target detection problems, but instead has been designed to provide intermediate results that allow the user to apply his or her own decisions (or evidence) to arrive at a conclusion. To achieve this result, a probabilistic network has been incorporated into the recognition system. A probabilistic network represents a method for modeling the uncertainty in a system, and for this application, it allows information from the existing partial transcription and contextual knowledge from the user to be an integral part of the decision-making process.
The CPR system was designed to provide a framework for future research in the area of spatial pattern recognition by accommodating a broad range of applications and the development of new filtering methods. For example, during preliminary testing, the CPR system was used to confirm the publication date of a fifteenth-century Hebrew colophon, and demonstrated success in the detection of registration markers in three-dimensional MRI breast imaging. In addition, a new correlation algorithm that exploits the benefits of linear discriminant analysis (LDA) and the inherent shift invariance of spatial correlation has been derived, implemented, and tested. Results show that this composite filtering method provides a high level of class discrimination while maintaining tolerance to within-class distortions. With the integration of this algorithm into the existing filter library, this work completes each stage of a cyclic workflow using the developed CPR system, and provides the necessary tools for continued experimentation.
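The PCA-then-distance step of the pipeline can be sketched in a few lines. Everything below is a hypothetical stand-in (synthetic "feature vectors" for three character classes, an inverse-distance probability rule); the real system builds its features from spatial correlation metrics and feeds the distribution into a probabilistic network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: correlation-metric feature vectors for three
# Greek character classes (rows = training exemplars, cols = filter metrics).
classes = {ch: rng.normal(loc=i, scale=0.5, size=(20, 8))
           for i, ch in enumerate("αβγ")}
roi = rng.normal(loc=1.0, scale=0.5, size=8)   # feature vector of a degraded ROI

# PCA on the pooled training features: keep the leading components,
# discarding directions (filtering schemes) with little variance here.
X = np.vstack(list(classes.values()))
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3
P = Vt[:k].T                                   # projection onto k PCs

def project(v):
    return (v - mu) @ P

# Distance from the ROI to each class mean in PC space, converted to a
# character probability distribution by inverse-distance weighting.
dists = {ch: np.linalg.norm(project(roi) - project(F.mean(axis=0)))
         for ch, F in classes.items()}
inv = {ch: 1.0 / d for ch, d in dists.items()}
total = sum(inv.values())
probs = {ch: w / total for ch, w in inv.items()}
```

The point of returning `probs` rather than an argmax mirrors the design described above: the distribution is intermediate evidence for the user and the probabilistic network, not a final classification.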
3D Face Modelling, Analysis and Synthesis
Human faces have always been of special interest to researchers in the computer vision and graphics areas. There has been an explosion in the number of studies on accurately modelling, analysing and synthesising realistic faces for various applications. The importance of human faces emerges from the fact that they are invaluable means of effective communication, recognition, behaviour analysis, conveying emotions, etc. Therefore, addressing the automatic visual perception of human faces efficiently could open up many influential applications in various domains, e.g. virtual/augmented reality, computer-aided surgeries, security and surveillance, and entertainment. However, the vast variability associated with the geometry and appearance of human faces captured in unconstrained videos and images renders their automatic analysis and understanding very challenging even today.
The primary objective of this thesis is to develop novel methodologies of 3D computer vision for human faces that go beyond the state of the art and achieve unprecedented quality and robustness. In more detail, this thesis advances the state of the art in 3D facial shape reconstruction and tracking, fine-grained 3D facial motion estimation, expression recognition and facial synthesis with the aid of 3D face modelling. We give special attention to the case where the input comes from monocular imagery data captured under uncontrolled settings, a.k.a. "in-the-wild" data. Such data are available in abundance nowadays on the internet. Analysing these data pushes the boundaries of currently available computer vision algorithms and opens up many new crucial applications in the industry. We define the four targeted vision problems (3D facial reconstruction and tracking, fine-grained 3D facial motion estimation, expression recognition, facial synthesis) in this thesis as the four 3D-based essential systems for automatic facial behaviour understanding and show how they rely on each other. Finally, to aid the research conducted in this thesis, we collect and annotate a large-scale video dataset of monocular facial performances. All of our proposed methods demonstrate very promising quantitative and qualitative results when compared to the state-of-the-art methods.
Numerical study of regularity in semidefinite programming and applications
This thesis is devoted to the study of regularity in semidefinite programming
(SDP), an important area of convex optimization with a
wide range of applications. The duality theory, optimality conditions
and methods for SDP rely on certain assumptions of regularity that
are not always satisfied. Absence of regularity, i.e., nonregularity, may
affect the characterization of optimality of solutions and SDP solvers
may run into numerical difficulties, leading to unreliable results.
There exist different notions associated with regularity. In this thesis,
we study, in particular, well-posedness, good behaviour and constraint
qualifications (CQs), as well as relations among them. A widely used
CQ in SDP is the Slater condition. This condition guarantees that the
first order necessary optimality conditions in the Karush-Kuhn-Tucker
formulation are satisfied. Current SDP solvers do not check if a problem
satisfies the Slater condition, but work assuming its fulfilment. We
develop and implement in MATLAB numerical procedures to verify if a
given SDP problem is regular in terms of the Slater condition and to determine
the irregularity degree in the case of nonregularity. Numerical
experiments presented in this work show that the proposed procedures
are quite efficient and confirm the conclusions obtained about the relationship
between the Slater condition and other regularity notions.
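The Slater check can be illustrated with a small sketch: the condition holds iff some x makes the constraint matrix strictly positive definite, i.e. iff the maximum of the (concave) smallest eigenvalue of F(x) is positive. The toy problem and the use of a general-purpose optimizer below are illustrative assumptions, not the thesis's MATLAB procedures.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy SDP constraint F(x) = F0 + x1*F1 + x2*F2 >= 0.
# Slater's condition asks for some x with F(x) strictly positive definite.
F0 = np.array([[1.0, 0.0], [0.0, -1.0]])
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
F2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Fs = [F1, F2]

def min_eig(x):
    F = F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))
    return np.linalg.eigvalsh(F)[0]          # smallest eigenvalue of F(x)

# lambda_min(F(x)) is concave in x, so maximizing it is a convex problem;
# a strictly positive optimum certifies the Slater condition.
# Nelder-Mead is used because lambda_min is nonsmooth at eigenvalue ties.
res = minimize(lambda x: -min_eig(x), np.zeros(2), method="Nelder-Mead")
slater_holds = -res.fun > 1e-8
```

When the optimum is zero rather than positive, the problem is nonregular, and the margin by which strict feasibility fails relates to the irregularity degree studied in the thesis.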
A further contribution of the thesis is the development and MATLAB
implementation of an algorithm for generating nonregular SDP
problems with a desired irregularity degree. The database of nonregular
problems constructed using this generator is publicly available and
can be used for testing new SDP methods and solvers.
Another contribution of this thesis concerns an SDP application
to data analysis. We consider a nonlinear SDP model and linear
SDP relaxations for clustering problems and study their regularity. We
show that the nonlinear SDP model is nonregular, while its relaxations
are regular. We suggest an SDP-based algorithm for solving clustering
and dimensionality reduction problems and implement it in R. Numerical
tests on various real-life data sets confirm the speed and efficiency
of this numerical procedure.