133 research outputs found
Implementation of a full Newton-step algorithm for weighted linear complementarity problems
We present a path-following interior-point algorithm for solving the weighted linear complementarity problem from the point of view of its implementation. Two variants were examined, differing only in how the parameter characterizing the points of the central path is updated. The implementation was written in the C++ programming language, and the numerical results obtained confirm the efficiency of the proposed methods.
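The abstract does not reproduce the algorithm itself. As a rough illustration of the kind of step such a path-following method takes, the sketch below (in Python for brevity, although the thesis implementation is in C++; all names and the toy data are assumptions) computes one full Newton step toward a point of the central path of a weighted LCP s = Mx + q, x∘s = w.

```python
import numpy as np

def full_newton_step(M, q, x, s, target):
    """One full Newton step of a path-following interior-point method for the
    weighted LCP  s = M x + q,  x * s = target  (target = mu * w on the
    central path). A sketch under assumed notation, not the thesis code."""
    n = len(x)
    # Linearize the two defining equations:
    #   M dx - ds               = s - (M x + q)   (restore s = M x + q)
    #   diag(s) dx + diag(x) ds = target - x * s  (move toward x * s = target)
    J = np.block([[M, -np.eye(n)],
                  [np.diag(s), np.diag(x)]])
    r = np.concatenate([s - (M @ x + q), target - x * s])
    d = np.linalg.solve(J, r)
    return x + d[:n], s + d[n:]

# Driving the step: shrink the centring parameter mu geometrically (toy data).
n = 3
M = np.eye(n) + 0.1 * np.ones((n, n))   # a small positive definite matrix
q = -np.ones(n)
w = np.ones(n)                          # weight vector of the weighted LCP
x = s = np.ones(n)
for mu in [1.0, 0.5, 0.25, 0.125]:
    x, s = full_newton_step(M, q, x, s, mu * w)
```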
Assessing target centring algorithms for use in near-real-time photogrammetry
Bibliography: leaves 140-146.

Target centring algorithms were investigated for use in the Near-Real-Time Photogrammetry (NRTP) system PHOENICS. PHOENICS, a Photogrammetric Engineering and Industrial digital Camera System, has been developed over the past three years in the Surveying Department of UCT to provide a semi-automatic system for determining three-dimensional coordinates of surfaces and objects using a photogrammetric method. Targets are attached to an object in order to facilitate measurement of the shape, size and orientation of the object. The centre of the target uniquely defines the target coordinate. Target centres (from images of the same object) are used in photogrammetric models to locate the three-dimensional (3-D) coordinates of the target. The accuracy of the target 3-D location depends on the accuracy of the target centring algorithm. A series of sub-algorithms was employed to arrive at a single target centring algorithm, and various combinations of these sub-algorithms were compared in order to obtain the optimal target centring algorithm. Three images were used to test various aspects of the target centring algorithms: their potential accuracy was tested on an image with symmetric synthetic targets; their robustness was tested on an image with artificially blemished targets; and their performance in a real (noisy) environment was tested on an image with real targets on a control frame, captured by PHOENICS. When the target centring algorithms were run on the three images, target location accuracies ranging from 1/10 of a pixel for real images to 1/1000 of a pixel for ideal synthetic targets were obtained.
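For concreteness, one classical sub-algorithm of this family is the grey-value weighted centroid, which reaches sub-pixel accuracy by averaging pixel coordinates weighted by intensity. The sketch below is a generic version of that idea (function name and threshold handling are assumptions, not PHOENICS code):

```python
import numpy as np

def weighted_centroid(window, threshold=0):
    """Grey-value weighted centroid of an image window containing one target:
    a classical sub-pixel target centring sub-algorithm (a sketch only)."""
    # Keep only intensity above the background threshold as the weight.
    w = np.where(window > threshold, window.astype(float) - threshold, 0.0)
    total = w.sum()
    if total == 0:
        raise ValueError("no pixels above threshold in the target window")
    rows, cols = np.indices(w.shape)
    # Intensity-weighted mean of the pixel coordinates -> sub-pixel centre.
    return (rows * w).sum() / total, (cols * w).sum() / total
```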
Quality by Design through multivariate latent structures
The present Ph.D. thesis is motivated by the growing need in most companies, and especially (but not solely) those in the pharmaceutical, chemical, food and bioprocess fields, to increase the flexibility in their operating conditions in order to reduce production costs while maintaining or even improving the quality of their products. To this end, this thesis focuses on the application of the concepts of Quality by Design for the exploitation and development of already existing methodologies, and the development of new algorithms aimed at the proper implementation of tools for the design of experiments, multivariate data analysis and process optimization, especially (but not only) in the context of mixture design.
Part I - Preface, where a summary of the research work done, the main goals it aimed at, and their justification is presented. Some of the most relevant concepts related to the work developed in subsequent chapters are also introduced, such as those regarding design of experiments or latent variable-based multivariate data analysis techniques.
Part II - Mixture design optimization, in which a review of existing mixture design tools for the design of experiments and data analysis via traditional approaches, as well as some latent variable-based techniques, such as Partial Least Squares (PLS) regression, is provided. A kernel-based extension of PLS for mixture design data analysis is also proposed, and the different available methods are compared to each other. Finally, the software MiDAs is briefly presented; it was developed to provide users with a tool to easily approach mixture design problems, construct designs of experiments, analyse the data with different methods, and compare those methods.
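The thesis' own kernel extension of PLS is not reproduced here. As background, the NIPALS-style score extraction used in standard kernel PLS (in the spirit of Rosipal and Trejo, 2001) can be sketched as follows, with all names assumed and the prediction step omitted:

```python
import numpy as np

def kernel_pls_scores(K, Y, n_components, tol=1e-10, max_iter=500):
    """Latent-score extraction for kernel PLS via a NIPALS-style iteration:
    a sketch of the method family the thesis extends, not the MiDAs code.
    K is the (centred) n x n kernel matrix, Y the n x m response matrix."""
    K, Y = K.copy(), Y.copy()
    n = K.shape[0]
    T = np.zeros((n, n_components))
    for a in range(n_components):
        u = Y[:, [0]]                       # initial Y-score
        for _ in range(max_iter):
            t = K @ u                       # latent score in feature space
            t /= np.linalg.norm(t)
            c = Y.T @ t                     # Y-loadings
            u_new = Y @ c
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        T[:, [a]] = t
        P = np.eye(n) - t @ t.T             # deflate K and Y by component t
        K = P @ K @ P
        Y = Y - t @ (t.T @ Y)
    return T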
Part III - Design space and optimization through the latent space, where one of the fundamental issues within the Quality by Design philosophy, the definition of the so-called 'design space' (i.e. the subspace comprising all possible combinations of process operating conditions, raw materials, etc. that guarantee obtaining a product meeting a required quality standard), is addressed. The problem of properly defining the optimization problem is also tackled, not only as a tool for quality improvement but also when it is used for exploration or process flexibilisation purposes, in order to establish an efficient and robust optimization method suited to the nature of the different problems that call for such optimization.
Part IV - Epilogue, where final conclusions are drawn, future perspectives are suggested, and annexes are included.

Palací López, DG. (2018). Quality by Design through multivariate latent structures [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/115489
Pattern recognition and machine learning for magnetic resonance images with kernel methods
The aim of this thesis is to apply a particular category of machine learning and
pattern recognition algorithms, namely kernel methods, to both functional and
anatomical magnetic resonance images (MRI). This work focuses specifically on
supervised learning methods. Both methodological and practical aspects are described
in this thesis.
Kernel methods have a computational advantage for high-dimensional data;
they are therefore ideal for imaging data. The procedures can be broadly divided into
two components: the construction of the kernels and the actual kernel algorithms
themselves. Pre-processed functional or anatomical images can be turned into a
linear kernel or a non-linear kernel. We introduce both kernel regression and kernel
classification algorithms in two main categories: probabilistic methods and
non-probabilistic methods. For practical applications, kernel classification methods
were applied to decode the cognitive or sensory states of the subject from the fMRI
signal and were also applied to discriminate patients with neurological diseases from
healthy controls using anatomical MRI. Kernel regression methods were used to predict
the regressors in the design of fMRI experiments, and clinical ratings from the
anatomical scans.
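As an illustration of the kernel-construction step described above, the sketch below builds a linear kernel from vectorized scans and trains a kernel classifier on it. The shapes and data are invented, and SVC stands in for whichever probabilistic or non-probabilistic classifier is actually used:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 50000))  # 40 pre-processed scans, one feature per voxel
y_train = np.repeat([0, 1], 20)             # two cognitive states to decode
X_test = rng.standard_normal((10, 50000))

K_train = X_train @ X_train.T               # linear kernel between training scans
K_test = X_test @ X_train.T                 # test-versus-train kernel
clf = SVC(kernel="precomputed").fit(K_train, y_train)
pred = clf.predict(K_test)                  # decoded states for the test scans
```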
Information theoretic stochastic search
The MAP-i Doctoral Programme in Informatics, of the Universities of Minho, Aveiro and Porto.

Optimization is the research field that studies the design of algorithms for finding the best solutions to the problems we may throw at them. While the whole domain is practically important, the present thesis focuses on the subfield of continuous black-box optimization, presenting a collection of novel, state-of-the-art algorithms for solving problems in that class. In this thesis, we introduce two novel general-purpose stochastic search algorithms for black-box optimisation. Stochastic search algorithms aim at repeating the type of mutations that led to the fittest search points in a population. We can model those mutations by a stochastic distribution, typically a multivariate Gaussian distribution. The key idea is to iteratively change the parameters of the distribution towards higher expected fitness; however, we leverage information-theoretic trust regions and limit the change of the new distribution. We show how plain maximisation of the fitness expectation without bounding the change of the distribution is destined to fail because of overfitting and the resulting premature convergence. Being derived from first principles, the proposed methods can be elegantly extended to the contextual learning setting, which allows for learning context-dependent stochastic distributions that generate optimal individuals for a given context; i.e., instead of learning one task at a time, we can learn multiple related tasks at once. However, the search distribution typically uses a parametric model with hand-defined context features. Finding good context features is a challenging task, so non-parametric methods are often preferred over their parametric counterparts. We therefore propose a non-parametric contextual stochastic search algorithm that can learn a non-parametric search distribution for multiple tasks simultaneously.
FCT - Fundação para a Ciência e a Tecnologia, as well as funding from the European Union's FP7 under EuRoC grant agreement CP-IP 608849 and from LIACC (UID/CEC/00027/2015) and IEETA (UID/CEC/00127/2015).
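The key update described in the abstract above can be illustrated with a sample-based sketch: candidates are reweighted exponentially by fitness, with the temperature chosen so that the KL divergence between the reweighted and the original sample distribution respects the trust region. This is a simplified stand-in with invented names, not the thesis algorithms themselves:

```python
import numpy as np

def kl_bounded_weights(fitness, epsilon=1.0):
    """Exponential reweighting with the temperature eta chosen by bisection so
    that KL(new weighted distribution || old uniform one) <= epsilon:
    a sample-based caricature of an information-theoretic trust region."""
    n = len(fitness)
    f = fitness - fitness.max()                 # shift for numerical stability
    lo, hi = 1e-6, 1e6                          # bracket for the temperature
    for _ in range(80):                         # log-scale bisection
        eta = np.sqrt(lo * hi)
        w = np.exp(f / eta)
        w /= w.sum()
        kl = np.sum(w * np.log(np.maximum(n * w, 1e-300)))
        if kl > epsilon:
            lo = eta                            # too greedy: flatten the weights
        else:
            hi = eta                            # inside the trust region: push harder
    w = np.exp(f / hi)                          # hi is always feasible
    return w / w.sum()

# One search iteration: sample from the Gaussian, reweight, refit (mean only here).
rng = np.random.default_rng(0)
mean, cov = np.zeros(2), np.eye(2)
xs = rng.multivariate_normal(mean, cov, size=200)
w = kl_bounded_weights(-np.sum(xs**2, axis=1), epsilon=0.5)  # toy fitness
mean = w @ xs                                   # bounded move toward higher fitness
```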
New development of the inclusive-cone-based method for linear optimization
The purpose of this dissertation is to present a simple method for linear optimization, including linear programming and linear semi-infinite programming, which is termed "the inclusive-cone-based method". Using the inclusive cone as an analytic tool, theoretical aspects of linear programming are investigated. Sensitivity analysis in linear programming is examined from the perspective of an inclusive cone. The relationship of inclusiveness between correlated linear programming problems is also studied. New inclusive-cone-based ladder algorithms are proposed to solve linear programming problems in inequality form. Numerical experiments are implemented to show the effectiveness and efficiency of the new linear programming ladder algorithms. To start the ladder method for linear programming problems, a single artificial constraint technique is introduced to find an initial ladder. Further, in the context of a new category of linear programming problems, an inclusive-cone-based solvability criterion is established to determine whether a linear programming problem is inclusive-feasible (i.e., optimal), noninclusive-feasible (i.e., unbounded), inclusive-infeasible or noninclusive-infeasible. The inclusive-cone-based method for linear programming is also generalized to linear semi-infinite programming. An optimality result, based upon the concept of the generalized base point, is established. With this optimality result as a theoretical foundation, a ladder algorithm for solving linear semi-infinite programming problems is developed. The new algorithm has several features: at each iteration it deals with only a small fraction of the constraints; at each iteration it selects the constraint most violated along a "parameterized centreline", by solving a one-dimensional global optimization problem using the efficient bridging algorithm; at each iteration the selection of the incoming constraint has a great degree of freedom, controlled by a parameter arising in the global optimization problem; it can detect infeasibility and unboundedness after a finite number of iterations; and it obviates extra work for feasibility verification, as it handles feasibility and optimality simultaneously. A simple convergence result is presented. The numerical behaviour of the algorithm is examined on several test problems.
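The ladder method itself is not spelled out in the abstract and is not reproduced here. For reference, the problem class it targets, a linear program in inequality form, can be set up and solved with a stock solver as follows (all data made up):

```python
import numpy as np
from scipy.optimize import linprog

# LP in inequality form:  max c'x  subject to  A x <= b  (toy instance).
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [-1.0, 0.0],   # x1 >= 0, written as an inequality row
              [0.0, -1.0]])  # x2 >= 0
b = np.array([4.0, 6.0, 0.0, 0.0])
# linprog minimizes, so negate c; all constraints live in A x <= b.
res = linprog(-c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
print(-res.fun, res.x)       # optimal value 10.0 at x = (2, 2)
```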
Advances in interior point methods and column generation
In this thesis we study how to efficiently combine the column generation technique (CG)
and interior point methods (IPMs) for solving the relaxation of a selection of integer
programming problems. In order to obtain an efficient method a change in the column
generation technique and a new reoptimization strategy for a primal-dual interior point
method are proposed.
It is well-known that the standard column generation technique suffers from unstable
behaviour due to the use of optimal dual solutions that are extreme points of
the restricted master problem (RMP). This unstable behaviour slows down column
generation, so variations of the standard technique that rely on interior points of the
dual feasible set of the RMP have been proposed in the literature. Among these techniques,
there is the primal-dual column generation method (PDCGM), which relies on
sub-optimal and well-centred dual solutions. This technique dynamically adjusts the
column generation tolerance as the method approaches optimality. It also relies on
the notion of the symmetric neighbourhood of the central path, so that sub-optimal and
well-centred solutions are obtained. We provide a thorough theoretical analysis that
guarantees the convergence of the primal-dual approach even though sub-optimal solutions
are used in the course of the algorithm. Additionally, we present a comprehensive
computational study of the solution of linear relaxed formulations obtained after applying
the Dantzig-Wolfe decomposition principle to the cutting stock problem (CSP), the
vehicle routing problem with time windows (VRPTW), and the capacitated lot sizing
problem with setup times (CLSPST). We compare the performance of the PDCGM
with the standard column generation method (SCGM) and the analytic centre cutting
plane method (ACCPM). Overall, the PDCGM achieves the best performance compared
to the SCGM and the ACCPM when solving instances that are challenging from a
column generation perspective. One important characteristic of this column generation
strategy is that no specific tuning is necessary, and the algorithm poses the same level
of difficulty as the standard column generation method. The natural stabilization available
in the PDCGM due to the use of sub-optimal well-centred interior point solutions is a
very attractive feature of this method. Moreover, the larger the instance, the better
the relative performance of the PDCGM in terms of column generation iterations and
CPU time.
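To make the setting concrete, here is a minimal standard column generation loop (the SCGM baseline) for the cutting stock LP relaxation. In the PDCGM, the exact RMP solve below would be replaced by a sub-optimal, well-centred interior-point solve whose tolerance tightens as the method approaches optimality. Everything here is an illustrative sketch, not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def cutting_stock_cg(W, sizes, demand, tol=1e-9):
    """LP relaxation of the cutting stock problem by column generation."""
    m = len(sizes)
    # Start with one pattern per size: cut as many pieces of that size as fit.
    patterns = [np.eye(m)[i] * (W // sizes[i]) for i in range(m)]
    while True:
        A = np.column_stack(patterns)
        # RMP: min 1'x  s.t.  A x >= demand, x >= 0  (here solved exactly).
        res = linprog(np.ones(A.shape[1]), A_ub=-A,
                      b_ub=-np.asarray(demand, float), method="highs")
        y = -res.ineqlin.marginals          # duals of the demand constraints
        # Pricing: unbounded knapsack  max y'a  s.t.  sizes'a <= W, a integer.
        best = np.zeros(W + 1)
        take = np.full(W + 1, -1)
        for w in range(1, W + 1):
            best[w] = best[w - 1]
            for i in range(m):
                if sizes[i] <= w and best[w - sizes[i]] + y[i] > best[w]:
                    best[w], take[w] = best[w - sizes[i]] + y[i], i
        if best[W] <= 1 + tol:              # no column with negative reduced cost
            return res.fun, patterns
        new = np.zeros(m)                   # reconstruct the best pattern
        w = W
        while w > 0:
            if take[w] < 0:
                w -= 1
            else:
                new[take[w]] += 1
                w -= sizes[take[w]]
        patterns.append(new)

# Small illustrative instance: roll width 100, four piece widths with demands.
value, pats = cutting_stock_cg(100, [45, 36, 31, 14], [97, 610, 395, 211])
```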
The second part of this thesis is concerned with the development of a new warmstarting
strategy for the PDCGM. It is well known that taking advantage of the previously
solved RMP could lead to important savings in solving the modified RMP. However,
this is still an open question for applications arising in an integer optimization context
and the PDCGM. Despite the current warmstarting strategy in the PDCGM working
well in practice, it does not guarantee full feasibility restorations nor considers the
quality of the warmstarted iterate after new columns are added. The main motivation
of the design of the new warmstarting strategy presented in this thesis is to close this
theoretical gap. Under suitable assumptions, the warmstarting procedure proposed in this thesis restores primal and dual feasibilities after the addition of new columns in
one step. The direction is determined so that the modi cation of small components at
a particular solution is not large. Additionally, the strategy enables control over the
new duality gap by considering an expanded symmetric neighbourhood of the central
path. As observed from our computational experiments solving CSP and VRPTW, one
can conclude that the warmstarting strategies for the PDCGM are useful when dense
columns are added to the RMP (CSP), since they consistently reduce the CPU time
and also the number of iterations required to solve the RMPs on average. On the other
hand, when sparse columns are added (VRPTW), the coldstart used by the interior
point solver HOPDM becomes very efficient, so warmstarting does not make the task
of solving the RMPs any easier.
Mobile robot navigation using a vision-based approach
PhD Thesis.

This study addresses the issue of vision-based mobile robot navigation in a partially
cluttered indoor environment using a mapless navigation strategy. The work focuses on
two key problems, namely vision-based obstacle avoidance and a vision-based reactive
navigation strategy.
The estimation of optical flow plays a key role in vision-based obstacle avoidance
problems; however, the current view is that this technique is too sensitive to noise and
distortion under real conditions. Accordingly, practical applications in real-time robotics
remain scarce. This dissertation presents a novel methodology for vision-based obstacle
avoidance, using a hybrid architecture. This integrates an appearance-based obstacle
detection method into an optical flow architecture based upon a behavioural control
strategy that includes a new arbitration module. This enhances the overall performance
of conventional optical flow based navigation systems, enabling a robot to successfully
move around without experiencing collisions.
Behaviour-based approaches have become the dominant methodologies for designing
control strategies for robot navigation. Two different behaviour-based navigation
architectures have been proposed for the second problem, using monocular vision as the
primary sensor and equipped with a 2-D range finder. Both utilize an accelerated
version of the Scale Invariant Feature Transform (SIFT) algorithm. The first
architecture employs a qualitative-based control algorithm to steer the robot towards a
goal whilst avoiding obstacles, whereas the second employs an intelligent control
framework. This allows the components of soft computing to be integrated into the
proposed SIFT-based navigation architecture, conserving the same set of behaviours
and system structure of the previously defined architecture. The intelligent framework
incorporates a novel distance estimation technique using the scale parameters obtained
from the SIFT algorithm. The technique employs scale parameters and a corresponding
zooming factor as inputs to train a neural network that determines the physical
distance. Furthermore, a fuzzy controller is designed and integrated into this
framework so as to estimate linear velocity, and a neural network based solution is
adopted to estimate the steering direction of the robot. As a result, this intelligent
approach allows the robot to successfully complete its task in a smooth and robust
manner without experiencing collision.
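The distance estimation idea lends itself to a compact sketch: a small network is trained on (SIFT scale, zooming factor) pairs labelled with known distances. The data below is synthetic and the inverse-scale relation is an assumption for illustration only; the thesis trains on real calibration measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training pairs: (keypoint scale, zooming factor) -> metric distance.
rng = np.random.default_rng(1)
scale = rng.uniform(2.0, 30.0, 500)          # SIFT scale of a matched keypoint
zoom = rng.uniform(1.0, 3.0, 500)            # camera zooming factor
dist = zoom / scale * 10.0 + rng.normal(0, 0.05, 500)  # assumed inverse-scale law

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(np.column_stack([scale, zoom]), dist)
print(net.predict([[12.0, 2.0]]))            # estimated distance in metres
```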
MS Robotics Studio software was used to simulate the systems, and a modified Pioneer
3-DX mobile robot was used for real-time implementation. Several realistic scenarios
were developed and comprehensive experiments conducted to evaluate the performance
of the proposed navigation systems.
KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile
robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant
Feature Transforms, Intelligent framework
Greedy routing and virtual coordinates for future networks
At the core of the Internet, routers are continuously struggling with
ever-growing routing and forwarding tables. Although hardware advances
do accommodate such growth, we anticipate new requirements, e.g. in
data-oriented networking, where each content piece has to be referenced
instead of hosts, such that current approaches relying on global
information will no longer be viable regardless of hardware
progress. In this thesis, we investigate greedy routing methods that
can achieve routing performance similar to today's but use far fewer
resources, relying on local information only. To this end, we
add specially crafted name spaces to the network in which virtual
coordinates represent the addressable entities. Our scheme enables participating
routers to make forwarding decisions using only neighbourhood information,
as the overarching pseudo-geometric name space structure already
organizes and incorporates "vicinity" at a global level.
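The forwarding rule itself is simple enough to sketch: each router compares its neighbours' virtual coordinates with the destination's and forwards to the one that gets closest. The names and the tiny topology below are made up for illustration:

```python
import numpy as np

def greedy_next_hop(coords, neighbours, current, dest):
    """Greedy forwarding on virtual coordinates (a sketch): forward to the
    neighbour whose embedded coordinates are closest to the destination's;
    returning None signals a routing dead-end (local minimum)."""
    best, best_d = None, np.linalg.norm(coords[current] - coords[dest])
    for n in neighbours[current]:
        d = np.linalg.norm(coords[n] - coords[dest])
        if d < best_d:
            best, best_d = n, d
    return best

# Tiny example with hypothetical 2-D virtual coordinates.
coords = {0: np.array([0., 0.]), 1: np.array([1., 0.]),
          2: np.array([1., 1.]), 3: np.array([2., 1.])}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
hop = greedy_next_hop(coords, neighbours, 0, 3)   # -> 1
```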
A first challenge to the application of greedy routing on virtual
coordinates to future networks is that of "routing dead-ends"
that are local minima due to the difficulty of consistent coordinate
attribution. In this context, we propose a routing recovery scheme
based on a multi-resolution embedding of the network in low-dimensional Euclidean spaces.
The recovery is performed by routing greedily on a blurrier view of the network. The
different network detail levels are obtained through the embedding of
clustering levels of the graph. When compared with
higher-dimensional embeddings of a given network, our method shows a
significant diminution of routing failures for similar header and
control-state sizes.
A second challenge to the application of virtual coordinates and
greedy routing to future networks is the support of
"customer-provider" as well as "peering" relationships between
participants, resulting in a differentiated services
environment. Although an application of greedy routing within such a
setting would combine two very common fields of today's networking
literature, such a scenario has, surprisingly, not been studied so
far. In this context, we propose two approaches to address this scenario.
In a first approach, we implement a path-vector protocol similar to
that of BGP on top of a greedy embedding of the network. This allows
each node to build a spatial map associated with each of its
neighbours indicating the accessible regions. Routing is then
performed through the use of a decision-tree classifier taking the
destination coordinates as input. When applied to a real-world dataset
(the CAIDA 2004 AS graph), we demonstrate a compression ratio of up to 40% for
the routing control information at the network's core, as well as a computationally efficient
decision process comparable to methods such as binary trees and tries.
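A per-router forwarding classifier of this kind can be sketched as follows: from examples of (destination coordinates, correct neighbour), a decision tree learns the spatial regions reachable through each neighbour. The training data here is synthetic; in the thesis it would come from the path-vector protocol run on the greedy embedding:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic examples of destination coordinates -> next-hop neighbour index.
rng = np.random.default_rng(0)
dests = rng.uniform(-1, 1, size=(1000, 2))   # destination virtual coordinates
next_hop = (dests[:, 0] > 0).astype(int)     # toy rule: split space into two regions
tree = DecisionTreeClassifier(max_depth=4).fit(dests, next_hop)
print(tree.predict([[0.3, -0.7]]))           # -> forward via neighbour 1
```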
In a second approach, we take inspiration from consensus-finding in social
sciences and transform the three-dimensional distance data structure
(where the third dimension encodes the service differentiation) into a
two-dimensional matrix on which classical embedding tools can be used.
This transformation is achieved by agreeing on a set of
constraints on the inter-node distances guaranteeing an
administratively-correct greedy routing. The computed distances are
also enhanced to encode multipath support. We demonstrate good
greedy routing performance, as well as over 90% satisfaction of multipath constraints,
when relying on the obtained (non-embedded) distances on synthetic datasets.
As various embeddings of the consensus distances do not fully exploit their multipath potential, the use of compression techniques such as transform coding to
approximate the obtained distances allows for better routing performance.