
    Designing an Observer Pattern-Based Architecture for a Multi-objective Metaheuristic Optimization Framework

    Multi-objective optimization with metaheuristics is an active and popular research field, supported by the availability of software frameworks providing algorithms, benchmark problems, quality indicators and other related components. Most of these tools follow a monolithic architecture that frequently leads to a lack of flexibility when a user intends to add new features to the included algorithms. In this paper, we explore a different approach by designing a component-based architecture for a multi-objective optimization framework based on the observer pattern. In this architecture, most of the algorithmic components are observable entities with which a number of observers can naturally be registered. This way, a metaheuristic is composed of a set of observable and observer elements, which can be easily extended without requiring modifications to the algorithm. We have developed a prototype of this architecture and implemented the NSGA-II evolutionary algorithm on top of it as a case study. Our analysis confirms the improvement in flexibility offered by this architecture, pointing out the requirements it imposes and how performance is affected when adopting it. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
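The component-based design described above maps directly onto the classic observer pattern. A minimal Python sketch of that structure (class and method names are illustrative assumptions, not the paper's prototype API):

```python
class Observable:
    """Base class that lets algorithmic components register observers."""
    def __init__(self):
        self._observers = []

    def register(self, observer):
        self._observers.append(observer)

    def notify(self, **data):
        for observer in self._observers:
            observer.update(**data)


class EvaluationStep(Observable):
    """An algorithm step exposed as an observable entity."""
    def run(self, population, problem):
        for solution in population:
            problem.evaluate(solution)
        # Observers (loggers, plotters, stopping criteria, ...) react here,
        # so new behaviour is attached without modifying the algorithm.
        self.notify(population=population)


class ProgressLogger:
    """An example observer: counts iterations and reports progress."""
    def __init__(self):
        self.iterations = 0

    def update(self, population):
        self.iterations += 1
        print(f"iteration {self.iterations}: {len(population)} solutions")
```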

    Big Data Optimization: Algorithmic Framework for Data Analysis Guided by Semantics

    Thesis defense date: 9 November 2018. Over the past decade, the rapid growth of data created in all domains of knowledge (traffic, medicine, social networks, industry, etc.) has highlighted the need to enhance the process of analyzing large data volumes, in order to manage them more easily and to discover new relationships hidden within them. Optimization problems, which are common in industry today, are not unrelated to this trend, so Multi-Objective Optimization Algorithms (MOAs) should bear this new scenario in mind. This means that MOAs have to deal with problems that have either multiple data sources (typically streaming) or huge amounts of data. These features are found, in particular, in Dynamic Multi-Objective Problems (DMOPs), which are related to Big Data optimization problems, mostly with regard to velocity and variability. When dealing with DMOPs, whenever changes in the environment affect the solutions of the problem (i.e., the Pareto set, the Pareto front, or both), and therefore the fitness landscape, the optimization algorithm must react and adapt the search to the new features of the problem. Big Data analytics are long and complex processes, so, with the aim of simplifying them, they are carried out through a series of steps. A typical analysis is composed of data collection, data manipulation, data analysis and, finally, result visualization. In the process of creating a Big Data workflow, the analyst should bear in mind the semantics of the problem domain knowledge and its data. Ontologies are the standard way of describing the knowledge about a domain. As the global target of this PhD Thesis, we are interested in investigating the use of semantics in the process of Big Data analysis, focused not only on machine learning analysis but also on optimization.
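As a rough illustration of the reaction mechanism described for DMOPs, the following Python sketch re-evaluates a few sentinel solutions to detect environment changes and injects diversity when one occurs. This is a common detect-and-react scheme, not the thesis' algorithm; function names and the restart strategy are assumptions:

```python
def dynamic_moa_loop(population, evaluate, sentinels, perturb, steps=100):
    """Sketch of a reaction loop for a dynamic multi-objective problem:
    a few 'sentinel' solutions are re-evaluated each step; if their
    objective values change, the environment (and hence the fitness
    landscape) has changed, and diversity is injected so the search
    can track the new Pareto set/front."""
    old = [evaluate(s) for s in sentinels]
    for _ in range(steps):
        # ... one iteration of the underlying multi-objective algorithm ...
        new = [evaluate(s) for s in sentinels]
        if new != old:                       # fitness landscape changed
            population = [perturb(s) for s in population]  # inject diversity
            old = new
    return population
```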

    Comparing Deep Recurrent Networks Based on the MAE Random Sampling, a First Approach

    Recurrent neural networks have been shown to be good at tackling prediction problems; however, due to their high sensitivity to hyper-parameter configuration, finding an appropriate network is a tough task. Automatic hyper-parameter optimization methods have emerged to find the most suitable configuration for a given problem, but these methods are not generally adopted because of their high computational cost. Therefore, in this study we extend MAE random sampling, a low-cost method to compare single-hidden-layer architectures, to multiple-hidden-layer ones. We validate our proposal empirically and show that it is possible to predict and compare the expected performance of a hyper-parameter configuration in a low-cost way. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This research was partially funded by Ministerio de Economía, Industria y Competitividad, Gobierno de España, and European Regional Development Fund grant numbers TIN2016-81766-REDT (http://cirti.es) and TIN2017-88213-R (http://6city.lcc.uma.es).
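The gist of MAE random sampling is to score an architecture by the error distribution of untrained, randomly weighted networks. A self-contained Python/NumPy sketch under that reading (the simplified Elman cells and sampling details are assumptions, not the authors' exact implementation):

```python
import numpy as np

def random_rnn_mae(series, hidden_sizes, samples=100, seed=0):
    """Score an architecture by the MAE of randomly initialized,
    untrained stacked Elman cells on one-step-ahead prediction."""
    rng = np.random.default_rng(seed)
    x, y = series[:-1], series[1:]           # predict the next value
    maes = []
    for _ in range(samples):
        # draw random weights for every layer of this architecture
        layers, size_in = [], 1
        for n in hidden_sizes:
            layers.append((rng.normal(0, 1, (n, size_in)),   # input map
                           rng.normal(0, 1, (n, n))))        # recurrent map
            size_in = n
        w_out = rng.normal(0, 1, (1, size_in))
        states = [np.zeros(n) for n in hidden_sizes]
        preds = []
        for x_t in x:
            inp = np.array([x_t])
            for i, (w_in, w_rec) in enumerate(layers):
                states[i] = np.tanh(w_in @ inp + w_rec @ states[i])
                inp = states[i]
            preds.append((w_out @ inp).item())
        maes.append(float(np.mean(np.abs(np.array(preds) - y))))
    return np.array(maes)    # compare architectures via this distribution
```

Two architectures can then be compared cheaply, e.g. by the fraction of samples whose MAE falls below a threshold: `(random_rnn_mae(series, [32, 32]) < 0.1).mean()`.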

    Freeze-drying modeling and monitoring using a new neuro-evolutive technique

    This paper focuses on the design of a black-box model for the freeze-drying process of pharmaceuticals. A new methodology based on a self-adaptive differential evolution scheme is combined with a back-propagation algorithm, as a local search method, for the simultaneous structural and parametric optimization of the model, which is represented by a neural network. Using the model of the freeze-drying process, both the temperature and the residual ice content in the product vs. time can be determined off-line, given the values of the operating conditions (the temperature of the heating shelf and the pressure in the drying chamber). This makes it possible to determine whether the maximum temperature allowed by the product is exceeded and when sublimation drying is complete, thus providing a valuable tool for recipe design and optimization. Besides, the black-box model can be applied to monitor the freeze-drying process: in this case, the measurement of product temperature is used as an input variable of the neural network in order to provide an in-line estimation of the state of the product (temperature and residual amount of ice). Various examples are presented and discussed, thus pointing out the strength of the tool.
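A minimal sketch of the hybrid scheme the abstract describes: differential evolution with jDE-style self-adaptation explores the network parameter space globally, while an optional local-search callback (back-propagation in the paper) refines candidates. Names and the specific adaptation rule are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def de_with_local_search(loss, dim, pop_size=20, gens=100, ls=None, seed=0):
    """Differential evolution with jDE-style self-adaptation of F and CR,
    optionally refined by a local-search callback ls (e.g. a few epochs
    of back-propagation over the encoded network weights)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0, 1, (pop_size, dim))
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    fit = np.array([loss(w) for w in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # jDE-style self-adaptation of the control parameters
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + Fi * (b - c)
            cross = rng.random(dim) < CRi
            cross[rng.integers(dim)] = True        # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            if ls is not None:
                trial = ls(trial)                  # local refinement step
            f_trial = loss(trial)
            if f_trial <= fit[i]:                  # greedy replacement
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    return pop[fit.argmin()], fit.min()
```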

    Evolutionary computation for software testing

    A variety of products are undergoing a transformation from a purely mechanical design to one with more and more software and electronic components. A polarizing example is the watch: several decades ago watches were purely mechanical, whereas modern smart watches are almost completely electronic devices that heavily rely on software and offer far more features than just the current time. This change has had a crucial impact on how software is developed. A first attempt to control the rising complexity was the move to agile development practices such as extreme programming or Scrum. The rise in complexity affects not only the development process but also quality assurance and software testing: the more features a product contains, the more tests are necessary to ensure quality standards. Furthermore, agile development practices work iteratively, which leads to repetitive testing and puts additional strain on the testing team.
Within this thesis we aimed to ease the pain of testing, examining a series of subproblems that arise. A key source of complexity is the number of test cases, which we intended to reduce before tests are executed manually or implemented as automated tests. To this end, we examined the test specification and, based on the requirements coverage of the individual tests, identified redundant tests. We relied on a novel metaheuristic called GCAIS, which we improved iteratively. Another task is to control the remaining complexity: testing is often time-critical, and an appropriate subset of the available tests must be chosen to gain quick insight into the status of the device under test. We examined this challenge in two different testing scenarios. The first is semi-automated testing, where engineers execute a set of automated tests locally and closely observe the behaviour of the system under test. We extended GCAIS to compute test suites that satisfy different criteria when provided with sufficient search time. The second use case is fully automated testing in a continuous integration (CI) setting. CI focuses on frequent software build cycles that also include testing, and these builds contain a testing stage that greatly emphasizes speed, so crucial tests must be computed there as well. However, due to the nature of the process, the test suite has to be continuously recomputed for each build, as the software and perhaps even the test cases have changed. It is therefore hard to compute the test suite ahead of time, and these tests have to be determined as part of the CI execution. Thus we switched to a computationally lightweight learning classifier system (LCS) to prioritize and select test cases. We integrated a series of our innovations, such as continuous priorities, experience replay and transfer learning, into an LCS known as XCSF. This enabled us to outperform a state-of-the-art artificial neural network of the kind used by companies such as Netflix. We further investigated how an LCS can be made faster using parallelism, developing generic approaches that may run on any multicore computing device. This is of interest for our CI use case, where the build server's architecture is unknown; the methods are, however, independent of the concrete LCS and not tied to our testing problem.
We identified that many of the challenges faced in the CI use case have been tackled by Organic Computing (OC), for example the need to adapt to an ever-changing environment. Hence we relied on OC design principles to create a system architecture that wraps the developed LCS and integrates it into existing CI processes. The final system is robust and highly autonomous. A side effect of the high degree of autonomy is a high level of automation, which fits CI well. We also give insight into the usability and delivery of the full system to our industrial partner: test engineers can integrate it with a few lines of code and need no knowledge of LCS or OC in order to use it. Another implication of the developed system is that OC's ideas and design principles can be employed outside the field of embedded systems, showing that OC has a greater level of generality. The process of testing and correcting found errors is still only partially automated. We make a first step towards automating the entire process, drawing an analogy to OC's concept of self-healing. As a first proof of concept of this school of thought, we look at touch interfaces, where we can automatically manipulate the software to fulfil the specified behaviour, so that only a minimal amount of manual work is required.
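The coverage-based redundancy criterion mentioned above (dropping tests whose requirements coverage adds nothing) can be sketched in a few lines of Python. This illustrates the criterion only, not GCAIS itself, which searches for minimal covering suites with an artificial immune system:

```python
def redundant_tests(coverage):
    """Flag tests whose covered requirements are contained in another
    test's coverage, so removing them loses no requirements coverage."""
    redundant = set()
    for t, reqs in coverage.items():
        for u, other in coverage.items():
            if u == t or u in redundant:
                continue
            # strictly covered, or identical coverage (keep one of the pair)
            if reqs <= other and (reqs < other or t > u):
                redundant.add(t)
                break
    return redundant

suite = {
    "t1": {"r1", "r2"},
    "t2": {"r1"},            # covered by t1 -> redundant
    "t3": {"r2", "r3"},
}
print(redundant_tests(suite))   # {'t2'}
```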

    A Hybrid Tabu/Scatter Search Algorithm for Simulation-Based Optimization of Multi-Objective Runway Operations Scheduling

    As air traffic continues to increase, air traffic flow management is becoming more challenging: airport capacity must be utilized effectively and efficiently without compromising safety, environmental and economic requirements. Since runways are often the primary limiting factor in airport capacity, runway operations scheduling emerges as an important problem to be solved to alleviate flight delays and air traffic congestion while reducing unnecessary fuel consumption and negative environmental impacts. However, even a moderately sized real-life runway operations scheduling problem tends to be too complex to be solved by analytical methods: all mathematical models for this problem are strongly NP-hard due to the combinatorial nature of the problem. Therefore, it is only possible to solve the practical runway operations scheduling problem by making a large number of simplifications and assumptions in a deterministic context. As a result, most analytical models proposed in the literature suffer from too much abstraction, avoid uncertainties and, in turn, have little applicability in practice. On the other hand, simulation-based methods have the capability to characterize complex and stochastic real-life runway operations in detail, and to cope with the several constraints and stakeholders' preferences that are commonly considered important factors in practice. This dissertation proposes a simulation-based optimization (SbO) approach for the multi-objective runway operations scheduling problem. The SbO approach utilizes a discrete-event simulation model to account for uncertain conditions, and an optimization component to find the best known Pareto set of solutions. The approach explicitly considers uncertainty, in order to decrease the real operational cost of runway operations, as well as fairness among aircraft, as part of the optimization process. Due to the problem's large, complex and unstructured search space, a hybrid Tabu/Scatter Search algorithm is developed, using an elitist strategy to preserve non-dominated solutions, a dynamic update mechanism to produce high-quality solutions and a rebuilding strategy to promote solution diversity. The proposed algorithm is applied to the bi-objective (i.e., maximizing runway utilization and fairness) runway operations schedule optimization as the optimization component of the SbO framework, where the developed simulation model acts as an external function evaluator. To the best of our knowledge, this is the first SbO approach that explicitly considers uncertainties in the development of schedules for runway operations and that also considers fairness as a secondary objective. In addition, computational experiments are conducted using real-life datasets for a major US airport to demonstrate that the proposed approach is effective and computationally tractable in a practical sense. In the experimental design, a statistical design-of-experiments method is employed to analyze the impacts of parameters on the simulation as well as on the optimization component's performance, and to identify the appropriate parameter levels. The results show that the implementation of the proposed SbO approach provides operational benefits when compared to First-Come-First-Served (FCFS) and deterministic approaches, without compromising schedule fairness. It is also shown that the proposed algorithm is capable of generating a set of solutions that represent the inherent trade-offs between the objectives considered.
The proposed decision-making algorithm might be used as part of decision support tools to aid air traffic controllers in solving the real-life runway operations scheduling problem.
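At the core of such an SbO loop sits an elitist archive fed by a stochastic evaluator. A minimal Python sketch, assuming objectives are averaged over simulation replications and minimized (names and the averaging scheme are illustrative, not the dissertation's):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, candidate, simulate, replications=5):
    """Elitist archive step of a simulation-based optimization loop:
    score a candidate schedule by averaging replications of a stochastic
    simulation (the external evaluator), then keep it only if it is
    non-dominated, discarding any archive member it dominates."""
    scores = [simulate(candidate) for _ in range(replications)]
    n_obj = len(scores[0])
    obj = tuple(sum(s[i] for s in scores) / replications
                for i in range(n_obj))
    if any(dominates(a_obj, obj) for _, a_obj in archive):
        return archive                     # dominated: discard candidate
    archive = [(s, o) for s, o in archive if not dominates(obj, o)]
    archive.append((candidate, obj))
    return archive
```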

    A Model-Based Framework for the Smart Manufacturing of Polymers

    It is hard to point to a daily activity in which polymeric materials or plastics are not involved. The synthesis of polymers occurs by reacting small molecules together to form, under certain conditions, long molecules. In polymer synthesis, it is mandatory to assure uniformity between batches, high quality of end-products, efficiency, minimum environmental impact, and safety. Establishing operating conditions capable of achieving all of these objectives together remains a major challenge. In this dissertation, different model-centric strategies are combined, assessed, and tested for two polymerization systems. The first system is the synthesis of polyacrylamide in aqueous solution using potassium persulfate as initiator in a semi-batch reactor. In this system, the proposed framework integrates nonlinear modelling, dynamic optimization, advanced control, and nonlinear state estimation. The objectives include the achievement of desired polymer characteristics through feedback control and complete monitoring during the reaction. The estimated properties are close to experimental values, and there is a visible noise reduction. A 42% average improvement in set-point accomplishment is observed when comparing feedback control combined with a hybrid discrete-time extended Kalman filter (h-DEKF) against feedback control alone. The 4-state geometric observer (GO) with passive structure, another state estimation strategy, shows the best performance: besides achieving smooth signal processing, the observer improves the estimation of the final molecular weight distribution by 52% when compared with the h-DEKF. The second system corresponds to the copolymerization of ethylene with 1,9-decadiene using a metallocene catalyst in a semi-batch reactor. The evaluated operating conditions consider different diene concentrations and reaction temperatures. Initially, the nonlinear model is validated, followed by a global sensitivity analysis, which permits the selection of the important parameters. Afterwards, the most important kinetic parameters are estimated online using an extended Kalman filter (EKF), a variation of the GO that uses a preconditioner, and a data-driven strategy referred to as the retrospective cost model refinement (RCMR) algorithm. The first two strategies improve the measured signal but fail to predict other properties. The RCMR algorithm demonstrates an adequate estimation of the unknown parameters, and the estimates converge close to theoretical values without requiring prior knowledge.
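Several of the estimators mentioned above build on the extended Kalman filter. A generic Python/NumPy sketch of one discrete-time EKF predict/update cycle, assuming the process model f, measurement model h, and their Jacobians come from the reactor model (not reproduced here):

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a discrete-time extended Kalman filter,
    the kind of estimator used to reconstruct polymer properties from
    noisy measurements. x: state estimate, P: covariance, u: inputs,
    z: measurement, Q/R: process/measurement noise covariances."""
    # predict: propagate state and covariance through the nonlinear model
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # update: correct with the measurement (e.g. reactor temperature)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```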

    Tracking control of redundant mobile manipulator: An RNN based metaheuristic approach

    In this paper, we propose a topology of Recurrent Neural Network (RNN) based on a metaheuristic optimization algorithm for the tracking control of a mobile manipulator while enforcing nonholonomic constraints. Traditional approaches to the tracking control of mobile robots usually require the computation of the Jacobian inverse or the linearization of the robot's mathematical model. The proposed algorithm uses a nature-inspired optimization approach to solve the nonlinear optimization problem directly, without any further transformation. First, we formulate the tracking control as a constrained optimization problem. The optimization problem is formulated at the position level to avoid the computationally expensive Jacobian inversion, and the nonholonomic limitation is enforced by adding equality constraints to the formulated optimization problem. We then present the Beetle Antennae Olfactory Recurrent Neural Network (BAORNN) algorithm to solve the optimization problem efficiently using very few mathematical operations. We present a theoretical analysis of the proposed algorithm and show that its computational cost is linear in the number of degrees of freedom (DOFs), i.e., O(m). Additionally, we prove its stability and convergence. Extensive simulation results are presented using a simulated model of the IIWA14, a 7-DOF industrial manipulator, mounted on a differentially driven cart. Comparison results with the particle swarm optimization (PSO) algorithm are also presented to demonstrate the accuracy and numerical efficiency of the proposed controller. The results show that the proposed algorithm is several times (around 75 in the worst case) faster in execution than PSO, and suitable for real-time implementation. Tracking results for three different trajectories (circular, rectangular, and rhodonea paths) are presented.
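The beetle antennae search rule underlying BAORNN can be stated compactly: probe the cost at two antenna positions and step toward the cheaper side, at O(m) cost per iteration in the number of joints m. A minimal Python sketch of plain beetle antennae search (the paper's recurrent dynamics and constraint handling are omitted; parameter names are assumptions):

```python
import numpy as np

def bas_step(x, cost, antenna=0.1, step=0.05, rng=None):
    """One iteration of beetle antennae search: sample the cost at two
    'antenna' points along a random unit direction and move the solution
    toward the side with the lower cost."""
    rng = rng or np.random.default_rng()
    b = rng.normal(size=x.shape)
    b /= np.linalg.norm(b)                  # random unit probing direction
    left = cost(x + antenna * b)
    right = cost(x - antenna * b)
    return x - step * np.sign(left - right) * b
```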

    Meta-parametric design: Developing a computational approach for early stage collaborative practice

    Computational design is the study of how programmable computers can be integrated into the process of design. It is not simply the use of pre-compiled computer-aided design software that aims to replicate the drawing board, but rather the development of computer algorithms as an integral part of the design process. Programmable machines have begun to challenge traditional modes of thinking in architecture and engineering, placing further emphasis on process ahead of the final result. Just as Darwin and Wallace had to think beyond form and inquire into the development of biological organisms to understand evolution, so computational methods enable us to rethink how we approach the design process itself. The subject is broad and multidisciplinary, with influences from design, computer science, mathematics, biology and engineering. This thesis begins similarly wide in its scope, addressing both the technological aspects of computational design and its application to several case study projects in professional practice. By learning through participant observation in combination with secondary research, it is found that design teams can be most effective at the early stage of projects by engaging with the additional complexity this entails. At this concept stage, computational tools such as parametric models are found to have insufficient flexibility for wide design exploration. In response, an approach called Meta-Parametric Design is proposed, inspired by developments in genetic programming (GP). By moving to a higher level of abstraction as computational designers, a Meta-Parametric approach is able to adapt to changing constraints and requirements whilst maintaining an explicit record of process for collaborative working.
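One way to read the Meta-Parametric proposal is that genetic-programming-style operators act on the parametric model's own component graph rather than on its parameter sliders. A toy Python sketch under that reading (component names and the list-based genome are assumptions, far simpler than a real parametric graph):

```python
import random

# A genome here is just an ordered list of modelling components; GP-style
# mutation varies the model structure itself, not only its parameters.
COMPONENTS = ["extrude", "loft", "twist", "array", "scale"]

def random_graph(depth=3):
    """Create a random component chain of the given depth."""
    return [random.choice(COMPONENTS) for _ in range(depth)]

def mutate(graph):
    """Structurally mutate a component chain: swap, insert, or delete."""
    g = list(graph)
    op = random.choice(["swap", "insert", "delete"])
    if op == "swap":
        g[random.randrange(len(g))] = random.choice(COMPONENTS)
    elif op == "insert":
        g.insert(random.randrange(len(g) + 1), random.choice(COMPONENTS))
    elif len(g) > 1:                        # keep at least one component
        del g[random.randrange(len(g))]
    return g

print(mutate(random_graph()))
```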