Effective and efficient estimation of distribution algorithms for permutation and scheduling problems.
An Estimation of Distribution Algorithm (EDA) is an evolutionary computation method that learns a probabilistic model of good solutions. Probabilistic models are used to represent relationships between solution variables, which may give useful, human-understandable insights into real-world problems. Developing an effective probabilistic model has also been shown to significantly reduce the number of function evaluations needed to reach good solutions. This is likewise useful for real-world problems, whose representations are often complex and thus need more computation to arrive at good solutions. In particular, many real-world problems are naturally represented as permutations and have expensive evaluation functions. EDAs can, however, be computationally expensive when models are too complex, so there has been much recent work on developing suitable EDAs for permutation representations. EDAs can now produce state-of-the-art performance on some permutation benchmark problems, but their models remain complex and computationally expensive, making them hard to apply to real-world problems. This study investigates some limitations of EDAs in solving permutation and scheduling problems. The focus of this thesis is on addressing redundancies in the random-key representation, preserving diversity in EDAs, simplifying the complexity attributed to the use of multiple local improvement procedures, and transferring knowledge from solving a benchmark project scheduling problem to a similar real-world problem. We achieve state-of-the-art performance on Permutation Flowshop Scheduling Problem benchmarks while significantly reducing both the computational effort required to build the probabilistic model and the number of function evaluations. We also achieve competitive results on project scheduling benchmarks, and the methods adapted for solving a real-world project scheduling problem yield significant improvements.
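The random-key representation mentioned above can be illustrated with a minimal univariate-Gaussian EDA sketch (a toy example, not the thesis' actual model; the objective function and all parameter values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(keys):
    """Random-key decoding: the permutation is the argsort of the keys."""
    return np.argsort(keys)

def objective(perm):
    # Hypothetical toy objective: count positions already in sorted order.
    return int(np.sum(perm == np.arange(len(perm))))

# Minimal EDA loop: sample keys from a univariate Gaussian model,
# select the elite, refit the model to the elite, and repeat.
n, pop, elite, gens = 8, 60, 15, 30
mu, sigma = np.zeros(n), np.ones(n)
for _ in range(gens):
    keys = rng.normal(mu, sigma, size=(pop, n))
    scores = np.array([objective(decode(k)) for k in keys])
    best = keys[np.argsort(scores)[-elite:]]          # elite selection
    mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-3
best_perm = decode(mu)
```

Note how any real vector decodes to a valid permutation, which is also the source of the redundancy the thesis addresses: many distinct key vectors decode to the same permutation.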
Tactical Problems in Vehicle Routing Applications
The class of Vehicle Routing Problems (VRPs) is one of the most
studied topics in the Operations Research community. The vast
majority of the published papers focus on single-period problems,
with a few branches of the literature considering multiperiod
generalisations. All of these problems, though, consider a short
horizon and aim at optimising the decisions at an operational
level, i.e. decisions that will have to be taken in the near
future. One step above are tactical problems, i.e. problems
concerning a longer time horizon. Tactical problems are of
fundamental importance as they directly influence the daily
operations, and therefore a part of the incurred costs, for a long time. The main
focus of this thesis is to study tactical problems arising in
routing applications. The first problem considered concerns the
design of a fleet of vehicles. Transportation providers often
have to design a fleet that will be used for daily operations
across a long-time span. Trucks used for transportation are very
expensive to purchase, maintain, or hire. On the other hand, the
composition of the fleet strongly influences the daily plans, and
therefore costs such as fuel or drivers’ wages. Balancing these
two components is challenging, and optimisation models can lead
to substantial savings or provide a useful basis for informed
decisions.
The second problem presented focuses on the use of a split
deliveries policy in multi-period routing problems. It is known
that the combined optimisation of delivery scheduling and routing
can be very beneficial, and lead to significant reductions in
costs. However, it also adds complexity to the model. The same is
true when split deliveries are introduced. The problem studied
considers the possibility of splitting the deliveries over
different days. An analysis, both theoretical and numerical, of
the impact of this approach on the overall cost is provided.
Finally, a districting problem for routing applications is
considered. These types of problems typically arise when
transportation providers wish to increase their service
consistency. There are several reasons a company may wish to do
so: to strengthen the customer-driver relationship, to increase
drivers’ familiarity with their service area, or to simplify
the management of the service area. A typical approach,
considered here, is to divide the area under consideration into
sectors that will be subsequently assigned to specific drivers.
This type of problem is inherently of a multi-period and tactical
nature. A new formulation is proposed, integrating standard
routing models into the design of territories. This makes it
possible to investigate how operational constraints and other
requirements, such as having a fair workload division amongst
drivers, influence the effectiveness of the approach. An analysis
of the cost of districting, in terms of increased routing cost
and decreased routing flexibility, and of several operational
constraints, is presented.
Optimizing transportation systems and logistics network configurations: From biased-randomized algorithms to fuzzy simheuristics
Transportation and logistics (T&L) are currently highly relevant functions in any competitive industry. Locating facilities or distributing goods to hundreds or thousands of customers are activities with a high degree of complexity, regardless of whether facilities and customers are spread all over the globe or within the same city. A countless number of alternative strategic, tactical, and operational decisions can be made in T&L systems; hence, reaching an optimal solution, e.g. one with the minimum cost or the maximum profit, is a really difficult challenge, even for the most powerful existing computers. Approximate methods, such as heuristics, metaheuristics, and simheuristics, are therefore proposed to solve T&L problems. They do not guarantee optimal results, but they yield good solutions in short computational times. These characteristics become even more important under uncertainty conditions, since uncertainty increases the complexity of T&L problems. Modeling uncertainty entails introducing complex mathematical formulas and procedures; however, the model's realism increases and, therefore, so does its reliability in representing real-world situations. Stochastic approaches, which require the use of probability distributions, are among the most employed approaches to model uncertain parameters. Alternatively, if the real world does not provide enough information to reliably estimate a probability distribution, fuzzy logic approaches become an alternative for modeling uncertainty. Hence, the main objective of this thesis is to design hybrid algorithms that combine fuzzy and stochastic simulation with approximate and exact methods to solve T&L problems spanning operational, tactical, and strategic decision levels. The thesis is organized in layers, each of which enriches the previous one: first, biased-randomized heuristics and metaheuristics are presented for T&L problems with only deterministic parameters; next, Monte Carlo simulation is added to these approaches to model stochastic parameters; finally, fuzzy simheuristics are employed to address fuzzy and stochastic uncertainty simultaneously. A series of numerical experiments is designed to test the proposed algorithms, using benchmark, newly designed, and real-world instances. The results demonstrate the efficiency of the designed algorithms, in both cost and time, as well as their reliability in solving realistic problems that include uncertainty and multiple constraints and conditions enriching all the problems addressed.
Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022
The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.
Quantum Computing for Airline Planning and Operations
Classical algorithms and mathematical optimization techniques have been used extensively by airlines to optimize their profit and ensure that regulations are followed. In this thesis, we explore what role quantum algorithms can play for airlines. Specifically, we have considered two quantum optimization algorithms: the Quantum Approximate Optimization Algorithm (QAOA) and Quantum Annealing (QA). We present a heuristic that integrates these quantum algorithms into the existing classical algorithm currently employed to solve airline planning problems in a state-of-the-art commercial solver. We perform numerical simulations of QAOA circuits and find that linear and quadratic algorithm depth in the input size can be required to obtain a one-shot success probability of 0.5. Unfortunately, we are unable to find performance guarantees. Finally, we perform experiments with D-Wave's newly released QA machine and find that it outperforms the 2000Q for most instances.
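The "one-shot success probability" measured above can be made concrete with a minimal depth-1 QAOA simulation for MaxCut on a single edge, written from the textbook definition of QAOA rather than from the thesis' code:

```python
import numpy as np

# MaxCut on a single edge (0, 1): cut(z) = 1 if the two bits differ.
cut = np.array([0.0, 1.0, 1.0, 0.0])       # basis order 00, 01, 10, 11
plus = np.full(4, 0.5, dtype=complex)      # uniform superposition |++>

def qaoa_success_prob(gamma, beta):
    """One-shot probability of sampling an optimal cut after depth-1 QAOA."""
    state = np.exp(-1j * gamma * cut) * plus              # phase separation
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])   # exp(-i*beta*X)
    state = np.kron(rx, rx) @ state                       # mixer on both qubits
    probs = np.abs(state) ** 2
    return probs[1] + probs[2]                            # optimal strings 01, 10

# Coarse grid search over the two variational angles
best = max(qaoa_success_prob(g, b)
           for g in np.linspace(0, np.pi, 41)
           for b in np.linspace(0, np.pi, 41))
```

For this trivial one-edge instance, depth 1 already suffices to push the one-shot success probability to 1; the thesis' point is that on realistic instances the required depth grows with the input size.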
Dynamic multi-objective optimization using evolutionary algorithms
Dynamic Multi-objective Optimization Problems (DMOPs) offer an opportunity to examine and solve challenging real world scenarios where trade-off solutions between conflicting objectives change over time. Definition of benchmark problems allows modelling of industry scenarios across transport, power and communications networks, manufacturing and logistics. Recently, significant progress has been made in the variety and complexity of DMOP benchmarks and the incorporation of realistic dynamic characteristics. However, significant gaps still exist in standardised methodology for DMOPs, specific problem domain examples and in the understanding of the impacts and explanations of dynamic characteristics. This thesis provides major contributions on these three topics within evolutionary dynamic multi-objective optimization. Firstly, experimental protocols for DMOPs are varied. This limits the applicability and relevance of results produced and conclusions made in the field. A major source of the inconsistency lies in the parameters used to define specific problem instances being examined. The uninformed selection of these has historically held back understanding of their impacts and standardisation in experimental approach to these parameters in the multi-objective problem domain. Using the frequency and severity (or magnitude) of change events, a more informed approach to DMOP experimentation is conceptualized, implemented and evaluated. Establishment of a baseline performance expectation across a comprehensive range of dynamic instances for well-studied DMOP benchmarks is analyzed. To maximize relevance, these profiles are composed from the performance of evolutionary algorithms commonly used for baseline comparisons and those with simple dynamic responses. Comparison and contrast with the coverage of parameter combinations in the sampled literature highlights the importance of these contributions. 
Secondly, the provision of useful and realistic DMOPs in the combinatorial domain is limited in previous literature. A novel dynamic benchmark problem is presented by the extension of the Travelling Thief Problem (TTP) to include a variety of realistic and contextually justified dynamic changes. Investigation of problem information exploitation and its potential application as a dynamic response is a key output of these results; context is provided through comparison to results obtained by adapting existing TTP heuristics. Observation-driven iterative development prompted the investigation of multi-population island model strategies, together with improvements in the approaches to accurately describe and compare the performance of algorithm models for DMOPs, a contribution which is applicable beyond the dynamic TTP. Thirdly, the purpose of DMOPs is to reconstruct realistic scenarios, or features from them, to allow for experimentation and development of better optimization algorithms. However, numerous important characteristics from real systems still require implementation and will drive research and development of algorithms and mechanisms to handle these industrially relevant problem classes. The novel challenges associated with these implementations are significant and diverse, even for a simple development such as consideration of DMOPs with multiple time dependencies. Real world systems with dynamics are likely to contain multiple temporally changing aspects, particularly in energy and transport domains. Problems with more than one dynamic problem component allow for asynchronous changes and a differing severity between components that leads to an explosion in the size of the possible dynamic instance space. Both continuous and combinatorial problem domains require structured investigation into the best practices for experimental design, algorithm application and performance measurement, comparison and visualization. 
The challenges are highlighted, and the key requirements for effective progress and recommendations on experimentation are explored.
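The frequency and severity parameters discussed above are commonly encoded, for instance in FDA-style DMOP benchmarks, through a discrete time variable. A minimal sketch follows; the parameter names reflect the usual convention in the benchmark literature, not necessarily this thesis' notation:

```python
import math

def dmop_time(tau, tau_t=10, n_t=10):
    """Discrete time used by standard DMOP benchmarks: tau is the generation
    counter, tau_t sets the change frequency (generations between changes),
    and n_t sets the severity (smaller n_t means larger jumps in t)."""
    return (1.0 / n_t) * (tau // tau_t)

# A typical time-varying benchmark quantity, e.g. G(t) = sin(0.5 * pi * t),
# stays constant for tau_t generations and then steps by 1 / n_t.
g_values = [math.sin(0.5 * math.pi * dmop_time(tau)) for tau in range(40)]
```

Sweeping tau_t and n_t across a grid of values is exactly the kind of informed instance-parameter selection the thesis argues for.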
Automated application-specific optimisation of interconnects in multi-core systems
In embedded computer systems there are often tasks, implemented as stand-alone devices,
that are both application-specific and compute intensive. A recurring problem
in this area is to design these application-specific embedded systems as close to the
power and efficiency envelope as possible. Work has been done on optimizing
single-core systems and memory organisation, but current methods for achieving system design
goals are proving limited as the system capabilities and system size increase in the
multi- and many-core era. To address this problem, this thesis investigates machine
learning approaches to managing the design space presented in the interconnect design
of embedded multi-core systems. The design space presented is large due to the
system scale and level of interconnectivity, and also features inter-dependent parameters,
further complicating analysis. The results presented in this thesis demonstrate
that machine learning approaches, particularly wkNN and random forest, work well
in handling the complexity of the design space. The benefits of this approach are in
automation, saving time and effort in the system design phase as well as energy and
execution time in the finished system.
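The abstract names wkNN (distance-weighted k-nearest neighbours) among the well-performing models. A self-contained sketch on a synthetic, hypothetical design-space response follows; the thesis' actual data comes from simulated multi-core interconnects and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def wknn_predict(X_train, y_train, x, k=5):
    """Distance-weighted k-NN regression (wkNN): the k nearest design points
    vote with weight 1/distance, so closer points dominate the estimate."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)          # avoid division by zero at d == 0
    return float(np.sum(w * y_train[idx]) / np.sum(w))

# Hypothetical design space: two interconnect parameters (say, link width
# and buffer depth) mapped to an assumed toy latency response.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 1.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]
pred = wknn_predict(X, y, np.array([0.5, 0.5]))
```

The appeal in design-space exploration is that the model needs no training phase: each new simulated design point simply joins `X_train`, and predictions improve incrementally.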
Routing optimization algorithms in integrated fronthaul/backhaul networks supporting multitenancy
This thesis aims to help in the definition and design of the 5th generation of
telecommunications networks (5G) by modelling the different features that characterize
them through several mathematical models. Overall, the aim of these models is to perform
a wide optimization of the network elements, leveraging their newly-acquired capabilities
in order to improve the efficiency of the future deployments both for the users and the
operators. The timeline of this thesis corresponds to the timeline of the research and
definition of 5G networks, and thus in parallel and in the context of several European
H2020 programs. Hence, the different parts of the work presented in this document
match and provide a solution to different challenges that have been appearing during
the definition of 5G and within the scope of those projects, considering the feedback and
problems from the point of view of all the end users, operators and providers.
Thus, the first challenge to be considered focuses on the core network, in particular
on how to integrate fronthaul and backhaul traffic over the same transport stratum.
The solution proposed is an optimization framework for routing and resource placement
that has been developed taking into account delay, capacity and path constraints,
maximizing the degree of Distributed Unit (DU) deployment while minimizing the
supporting Central Unit (CU) pools. The framework and the developed heuristics (to
reduce the computational complexity) are validated and applied to both small-
and large-scale (production-level) networks. This makes them useful to network
operators both for network planning and for dynamically adjusting network
operation over their (virtualized) infrastructure.
Moving closer to the user side, the second challenge considered focuses on the
allocation of services in cloud/edge environments. In particular, the problem tackled
consists of selecting the best location for each Virtual Network Function (VNF)
composing a service in cloud robotics environments, which imply strict delay bounds
and reliability constraints. Robots, vehicles and other end-devices provide significant
capabilities such as actuators, sensors and local computation which are essential for some
services. On the negative side, these devices are continuously on the move and might
lose network connection or run out of battery, which further challenges service delivery in
this dynamic environment. Thus, the performed analysis and proposed solution tackle the mobility and battery restrictions. We further need to account for the temporal aspects and
conflicting goals of reliable, low latency service deployment over a volatile network, where
mobile compute nodes act as an extension of the cloud and edge computing infrastructure.
The problem is formulated as a cost-minimizing VNF placement optimization and an
efficient heuristic is proposed. The algorithms are extensively evaluated from various
aspects by simulation on detailed real-world scenarios.
Finally, the last challenge analyzed focuses on supporting edge-based services, in
particular, Machine Learning (ML) in distributed Internet of Things (IoT) scenarios. The
traditional approach to distributed ML is to adapt learning algorithms to the network, e.g.,
reducing updates to curb overhead. Networks based on intelligent edge, instead, make
it possible to follow the opposite approach, i.e., to define the logical network topology
around the learning task to perform, so as to meet the desired learning performance.
The proposed solution includes a system model that captures such aspects in the context
of supervised ML, accounting for both learning nodes (that perform computations) and
information nodes (that provide data). The problem is formulated to select (i) which
learning and information nodes should cooperate to complete the learning task, and (ii)
the number of iterations to perform, in order to minimize the learning cost while meeting
the target prediction error and execution time. The solution also includes a heuristic
algorithm that is evaluated leveraging a real-world network topology and considering
both classification and regression tasks, and closely matches the optimum, outperforming
state-of-the-art alternatives.
This work has been supported by IMDEA Networks Institute. Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Committee: Pablo Serrano Yáñez-Mingot (chair), Andrés García Saavedra (secretary), Luca Valcarengh (member)