
    An Adaptive Scheme to Generate the Pareto Front Based on the Epsilon-Constraint Method

    We discuss methods for generating or approximating the Pareto set of multiobjective optimization problems by solving a sequence of constrained single-objective problems. The necessity of determining the constraint value a priori is shown to be a serious drawback of the original epsilon-constraint method. We therefore propose a new, adaptive scheme that generates appropriate constraint values during the run. A simple example problem is presented where the running time (measured by the number of constrained single-objective subproblems to be solved) of the original epsilon-constraint method is exponential in the problem size (number of decision variables), although the size of the Pareto set grows only linearly. We prove that, independent of the problem or the problem size, the time complexity of the new scheme is O(k^{m-1}), where k is the number of Pareto-optimal solutions to be found and m is the number of objectives. Simulation results for the example problem as well as for different instances of the multiobjective knapsack problem demonstrate the behavior of the method, and links to reference implementations are provided.
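    To make the adaptive idea concrete, here is a minimal sketch (not the paper's exact scheme) for two objectives: solve min f1 subject to f2 <= eps, then derive the next constraint value from the f2-value of the solution just found, so the eps-values are generated during the run rather than fixed a priori. The objectives f1 and f2, the step delta, and the budget are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    def f1(x):
        # First objective (illustrative stand-in).
        return x[0] ** 2 + x[1] ** 2

    def f2(x):
        # Second objective (illustrative stand-in).
        return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

    def adaptive_epsilon_constraint(x0, delta=0.5, max_points=20):
        # Trace an approximate Pareto front by repeatedly solving
        # min f1(x) s.t. f2(x) <= eps, tightening eps adaptively.
        front, eps = [], 1e6              # first solve: f2 effectively unconstrained
        for _ in range(max_points):
            res = minimize(f1, x0,
                           constraints=[NonlinearConstraint(f2, -np.inf, eps)])
            if not res.success:
                break                     # subproblem infeasible: front exhausted
            front.append((float(f1(res.x)), float(f2(res.x))))
            eps = f2(res.x) - delta       # next constraint value from this solution
            x0 = res.x
        return front

    print(adaptive_epsilon_constraint(np.array([0.0, 0.0])))

    For two objectives (m = 2) each subproblem contributes one front point, so the number of solves grows linearly with the number of points requested, consistent with the O(k^{m-1}) bound.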

    Extended Production Planning of Reconfigurable Manufacturing Systems by Means of Simulation-based Optimization

    Reconfigurable manufacturing systems (RMS) are capable of adjusting their operating point to the requirements of current customer demand with high degrees of freedom. In light of recent events, such as the COVID-19 crisis or the chip crisis, this reconfigurability proves to be crucial for the efficient manufacturing of goods. Reconfigurability thereby aims not only at adjusting production capacities but also at fast integration of new product variants or technologies. However, operating such systems entails considerable manual effort in production planning and control. Simulation-based optimization makes it possible to automate processes in production planning and control, with the advantage of relying on mostly existing models such as material flow simulations. This paper studies the capabilities of the metaheuristics evolutionary algorithm, simulated annealing, and tabu search to automate the search for optimal production reconfiguration strategies. Two distinct use cases are considered: an increase in customer demand and the introduction of a previously unknown product variant. A parametrized material flow simulation is used as a function approximator for the optimizers, with the production system's structure as well as its logic as the optimizers' decision variables. The analysis shows that the metaheuristics find good solutions in a short time with little manual configuration needed. Thus, metaheuristics demonstrate the potential to automate the production planning of RMS. However, the results indicate that the three metaheuristics differ strongly in optimization quality and speed.
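    As a minimal sketch of one of these metaheuristics, the loop below runs simulated annealing against a black-box cost function; simulate() is a stand-in for the parametrized material flow simulation, and the move operator and cooling schedule are illustrative assumptions rather than the paper's settings.

    import math, random

    def simulate(config):
        # Stand-in for the material flow simulation: lower cost is better.
        return sum((c - 3) ** 2 for c in config)

    def neighbour(config):
        # Perturb one configuration parameter by +-1 (illustrative move).
        c = list(config)
        i = random.randrange(len(c))
        c[i] = max(0, c[i] + random.choice((-1, 1)))
        return tuple(c)

    def simulated_annealing(config, t0=10.0, cooling=0.95, steps=500):
        best, best_cost = config, simulate(config)
        cur, cur_cost, t = best, best_cost, t0
        for _ in range(steps):
            cand = neighbour(cur)
            cand_cost = simulate(cand)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
                cur, cur_cost = cand, cand_cost
                if cur_cost < best_cost:
                    best, best_cost = cur, cur_cost
            t *= cooling
        return best, best_cost

    print(simulated_annealing((0, 0, 0, 0)))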

    A bi-criteria evolutionary algorithm for a constrained multi-depot vehicle routing problem

    Most research on the vehicle routing problem (VRP) does not collectively address many of the constraints that real-world transportation companies face in route assignment. Consequently, our primary objective is to explore solutions for real-world VRPs with a heterogeneous fleet of vehicles, multi-depot subcontractors (drivers), and pickup/delivery time-window and location constraints. We use a nested bi-criteria genetic algorithm (GA) to minimize the total time to complete all jobs with the fewest route drivers. Our model explores the issue of weighting the two objectives (total time vs. number of drivers) and provides Pareto-front solutions that can be used to make decisions on a case-by-case basis. Three real-world data sets were used to compare the results of our GA against transportation field experts' job assignments. Across the three data sets, all 21 Pareto-efficient solutions yielded improved overall job completion times. In 57% (12/21) of the cases, the Pareto-efficient solutions also used fewer drivers than the field experts' job allocation strategies.
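    Once the GA has scored candidate route plans, the bi-criteria step reduces to keeping the non-dominated (total time, number of drivers) pairs. A small sketch with made-up scores (not the paper's data):

    def pareto_front(solutions):
        # Keep solutions not dominated by any other; both criteria minimized.
        front = []
        for s in solutions:
            dominated = any(
                o != s and o[0] <= s[0] and o[1] <= s[1] for o in solutions
            )
            if not dominated:
                front.append(s)
        return sorted(set(front))

    plans = [(540, 7), (560, 6), (610, 5), (600, 6), (700, 5)]
    print(pareto_front(plans))  # [(540, 7), (560, 6), (610, 5)]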

    A new method for evaluating the distribution of aggregate claims

    In the present paper, we propose a method of practical utility for calculating the aggregate claims distribution in a discrete framework. It is an approximate method, but unlike the other approximate methods proposed in the literature: the approximation concerns both the counting distribution and the convolution of the severity distributions; the approximation consists neither in truncating the original distribution after a given number of terms nor in replacing it with another distribution or a more general function, but simply in considering only the significant numerical realizations and neglecting the others; and the error of the resulting approximation of the aggregate claims distribution is below a prefixed maximum (10^-6 in our applications). In particular, the probability distribution and the first three moments are exact to within the prefixed maximum error. The proposed method requires neither special assumptions on the counting distribution nor identically distributed severity random variables, and it does not incur underflow or overflow computational problems. It proves to be more flexible, easier, and cheaper than the (exact and approximate) methods using recursion and the Fast Fourier Transform. We show some applications using both a Poisson distribution and a Generalized Pareto mixture of Poisson distributions as the counting distribution. Beyond the specific application proposed in this paper, the method can be applied in many other (life and non-life) actuarial fields where sums of discrete random variables and the calculation of compound distributions are involved. Moreover, it can be extended to multivariate cases.
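    In the same spirit as the method (though not its exact algorithm), a compound distribution can be evaluated by repeated discrete convolution while neglecting realizations whose probability mass falls below a tolerance. The Poisson count and the three-point severity below are illustrative choices.

    import math

    def compound_dist(lam, severity, tol=1e-9, n_max=60):
        # severity: dict {claim amount: probability}. Returns the distribution
        # of S = X1 + ... + XN with N ~ Poisson(lam), pruning mass below tol.
        agg = {}
        conv = {0: 1.0}                     # distribution of X1 + ... + Xn for n = 0
        for n in range(n_max + 1):
            p_n = math.exp(-lam) * lam ** n / math.factorial(n)
            for s, p in conv.items():
                if p_n * p > tol:           # neglect insignificant realizations
                    agg[s] = agg.get(s, 0.0) + p_n * p
            nxt = {}                        # one more convolution with the severity
            for s, p in conv.items():
                for x, q in severity.items():
                    if p * q > tol:
                        nxt[s + x] = nxt.get(s + x, 0.0) + p * q
            conv = nxt
            if not conv:                    # all remaining mass pruned
                break
        return agg

    dist = compound_dist(lam=2.0, severity={1: 0.5, 2: 0.3, 5: 0.2})
    print(sum(dist.values()))                   # ~1: neglected mass is tiny
    print(sum(s * p for s, p in dist.items()))  # ~E[S] = lam * E[X] = 4.2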

    Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing

    Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research provides only limited empirical data from industrial practice about the maintenance costs of automated tests and the factors that affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through an analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported in which interviews about, and empirical work with, Visual GUI Testing were performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors that affect maintenance are observed, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower the overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive economic ROI is reached after automation.
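    The ROI logic can be sketched as a simple break-even calculation of the kind the paper's cost model formalizes; all cost figures below are hypothetical placeholders.

    import math

    def rounds_to_positive_roi(dev_cost, maint_per_round, manual_per_round):
        # Smallest number of test rounds n with
        # dev_cost + n * maint_per_round <= n * manual_per_round.
        saving = manual_per_round - maint_per_round
        if saving <= 0:
            return None   # automation never breaks even under these assumptions
        return math.ceil(dev_cost / saving)

    print(rounds_to_positive_roi(dev_cost=400, maint_per_round=10,
                                 manual_per_round=50))  # 10 rounds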

    Facing-up Challenges of Multiobjective Clustering Based on Evolutionary Algorithms: Representations, Scalability and Retrieval Solutions

    This thesis focuses on multiobjective clustering algorithms, which optimize several objectives simultaneously and return a collection of potential solutions with different trade-offs among the objectives. The goal of the thesis is to design and implement a new multiobjective clustering technique, based on evolutionary algorithms, that faces three current challenges related to such techniques. The first challenge is to properly define the area of possible solutions that is explored in order to find the best solution, which depends on the knowledge representation. The second challenge is to scale up the system by splitting the original data set into several subsets so that the clustering process works with less data. The third challenge is to retrieve the most suitable solution, according to the quality and shape of the clusters, from the most interesting region of the collection of solutions returned by the algorithm.
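    For the third challenge, one common retrieval heuristic (not necessarily the one adopted in the thesis) is to pick the "knee" of a two-objective front: the point farthest from the straight line joining the front's extremes.

    import math

    def knee_point(front):
        # front: list of (obj1, obj2) points forming a Pareto front.
        front = sorted(front)                       # order by the first objective
        (x1, y1), (x2, y2) = front[0], front[-1]
        norm = math.hypot(x2 - x1, y2 - y1)
        def dist(p):
            # Perpendicular distance from p to the extreme-to-extreme line.
            return abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1) / norm
        return max(front[1:-1], key=dist, default=front[0])

    front = [(0.0, 1.0), (0.1, 0.45), (0.3, 0.2), (0.6, 0.12), (1.0, 0.1)]
    print(knee_point(front))  # (0.3, 0.2), the most "bent" point of this front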

    Efficient targeted optimisation for the design of pressure swing adsorption systems for CO2 capture in power plants

    Pressure swing adsorption (PSA) is a cyclic adsorption process for gas separation and purification that can be used in a variety of industrial applications, for example hydrogen purification and dehydration. Due to its low operational cost and its ability to efficiently separate CO2 from flue gas, PSA is a promising candidate for post-combustion carbon capture in power plants, an important link in the Carbon Capture and Storage technology chain. PSA offers many design possibilities, but optimising the performance of a PSA system over a wide range of design choices by experimental means is typically too costly in the time and resources required. To address this challenge, computer experiments are used to emulate the real system and to predict its performance. The system of PDAEs that describes the PSA process behaviour is, however, typically computationally expensive to simulate, especially as the cyclic steady-state condition has to be met. Over the past decade, significant progress has been made in computational strategies for PSA design, but more efficient optimisation procedures are needed. One popular class of optimisation methods is evolutionary algorithms (EAs). EAs are, however, less efficient for computationally expensive models. The use of surrogate models in optimisation is an exciting research direction that allows the strengths of EAs to be exploited for expensive models. A surrogate-based optimisation (SBO) procedure is developed here for the design of PSA systems. The procedure is applicable to constrained and multi-objective optimisation. It relies on Kriging, a popular surrogate model, and is used with EAs. The main application of this work is the design of PSA systems for CO2 capture. A 2-bed/6-step PSA system for CO2 separation is used as an example. The cycle configuration used is sufficiently complex to provide a challenging, multi-criteria example.
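    A minimal sketch of such an SBO loop, using scikit-learn's Gaussian process in place of a dedicated Kriging code, a cheap 1-D objective standing in for the PSA simulator, and expected improvement as the infill criterion; the kernel, budget, and candidate grid are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expensive_sim(x):
        # Placeholder for the expensive PSA process simulation.
        return np.sin(3.0 * x) + 0.5 * x

    X = np.array([[0.1], [0.9], [1.7]])              # initial designs
    y = np.array([expensive_sim(x[0]) for x in X])
    cand = np.linspace(0.0, 2.0, 200).reshape(-1, 1) # candidate designs

    for _ in range(10):                              # budget of 10 expensive runs
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        mu, sd = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sd, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_sim(x_next[0]))

    print(X[np.argmin(y)], y.min())                  # best design under the budget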

    Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes


    Estimation and detection of transmission line characteristics in the copper access network

    Copper access-network operators face the challenge of developing and maintaining cost-effective digital subscriber line (DSL) services that are competitive with other broadband access technologies. The way forward is dictated by the demand for ever-increasing data rates on the twisted-pair copper lines. To meet this demand, relocating the DSL transceivers to cabinets closer to the customers is often necessary, combined with an expansion of the accompanying optical-fiber backhaul network. The equipment of the next-generation copper network is therefore becoming more scattered and geographically distributed, which raises the requirements on automated line qualification with fault detection and localization. This scenario is addressed in the first five papers of this dissertation, whose focus is the estimation and detection of transmission line characteristics in the copper access network. The developed methods apply model-based optimization with an emphasis on low-order modeling and a priori information about the given problem. More specifically, in Paper I a low-order and causal cable model is derived based on the Hilbert transform. This model is successfully applied in three contributions of this dissertation. In Paper II, a class of low-complexity unbiased estimators for the frequency-dependent characteristic impedance is presented that uses one-port measurements only. The characteristic impedance so obtained paves the way for enhanced time-domain reflectometry (TDR) on twisted-pair lines. In Paper III, the problem of estimating a nonhomogeneous and dispersive transmission line is investigated and a space-frequency optimization approach is developed for the DSL application. The accompanying analysis shows which parameters are of interest to estimate and further suggests the concept of capacitive length, which removes the need for a priori knowledge of the physical line length. In Paper IV, two methods are developed for the detection and localization of load coils present in so-called loaded lines. In Paper V, line topology identification is addressed with varying degrees of a priori information, employing a model-based optimization approach that uses multi-objective evolutionary computation based on one/two-port measurements.

    A complement to transceiver relocation that can potentially enhance the total data throughput in the copper access network is dynamic spectrum management (DSM). This promising multi-user transmission technique aims at maximizing the transmission rates, and/or minimizing the power consumption, by mitigating or cancelling the dominant crosstalk interference between twisted-pair lines in the same cable binder. The spectral utilization is thus improved by optimizing the transmit signals so as to minimize the crosstalk interference. However, such techniques rely on accurate information about the (usually) unknown crosstalk channels. This issue is the main focus of Papers VI and VII of this dissertation. Paper VI deals with the estimation of the crosstalk channels between twisted-pair lines: an unbiased estimator for the square magnitude of the crosstalk channels is derived, from which a practical procedure is developed that can be implemented with standardized DSL modems already installed in the copper access network. In Paper VII, the impact such a non-ideal estimator has on the performance of DSM is analyzed and simulated. Finally, in Paper VIII a novel echo cancellation algorithm for DMT-based DSL modems is presented.
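    The estimation idea behind Paper VI can be sketched under simple assumptions: if each received tone is Y = H*X + noise, with known transmit power |X|^2 and noise variance s2, then E[|Y|^2] = |H|^2 * |X|^2 + s2, so averaging |Y|^2 and subtracting the noise power yields an unbiased estimate of |H|^2. A toy single-tone version with illustrative values only:

    import numpy as np

    rng = np.random.default_rng(0)
    H = 0.05 * np.exp(1j * 0.7)       # true crosstalk coupling on one tone
    px, s2 = 1.0, 1e-4                # transmit power |X|^2 and noise variance
    n = 10_000                        # number of DMT symbols averaged

    X = np.sqrt(px) * np.exp(2j * np.pi * rng.random(n))   # unit-power symbols
    noise = np.sqrt(s2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    Y = H * X + noise

    h2_hat = (np.mean(np.abs(Y) ** 2) - s2) / px   # unbiased estimate of |H|^2
    print(h2_hat, abs(H) ** 2)                     # both close to 2.5e-3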