2,427 research outputs found

    Evolutionary Optimization for Active Debris Removal Mission Planning

    Active debris removal missions require accurate planning to maximize mission payoff by reaching the maximum number of potential orbiting targets in a given region of space. This problem is known to be computationally demanding, and the present paper provides a technique for preliminary mission planning based on a novel evolutionary optimization algorithm that identifies the best sequence of debris to be captured and/or deorbited. A permutation-based encoding is introduced that can handle multiple spacecraft trajectories. An original archipelago structure is also adopted to improve the algorithm's ability to explore the search space. As a further contribution, several crossover and mutation operators and migration schemes are tested in order to identify the best set of algorithm parameters for the considered class of optimization problems. The algorithm is numerically tested on a fictitious cloud of debris in the neighborhood of Sun-synchronous orbits, including cases with multiple chasers.
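    The permutation encoding and archipelago structure described above can be sketched as follows. This is a minimal illustration with a random cost matrix standing in for real orbital-transfer costs; the operators and migration scheme are simple placeholder choices, not the paper's tuned ones:

```python
import random

# Hypothetical setup: a "transfer cost" matrix between debris objects.
# In the paper these costs would come from orbital-transfer estimates.
random.seed(0)
N = 8  # number of debris targets
COST = [[abs(i - j) + random.random() for j in range(N)] for i in range(N)]

def tour_cost(perm):
    """Total transfer cost of visiting debris in the given order."""
    return sum(COST[perm[k]][perm[k + 1]] for k in range(len(perm) - 1))

def swap_mutation(perm):
    """Simple permutation operator: exchange two positions."""
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def evolve_island(pop, generations=50):
    """Steady-state evolution inside one island of the archipelago."""
    for _ in range(generations):
        child = swap_mutation(random.choice(pop))
        worst = max(range(len(pop)), key=lambda k: tour_cost(pop[k]))
        if tour_cost(child) < tour_cost(pop[worst]):
            pop[worst] = child
    return pop

def archipelago(n_islands=4, pop_size=6, epochs=5):
    """Several islands evolve independently; best tours migrate in a ring."""
    islands = [[random.sample(range(N), N) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(pop) for pop in islands]
        bests = [min(pop, key=tour_cost) for pop in islands]
        for i, pop in enumerate(islands):
            pop[random.randrange(pop_size)] = bests[i - 1][:]  # ring migration
    return min((min(pop, key=tour_cost) for pop in islands), key=tour_cost)

best = archipelago()
```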

    Reinforcement learning in continuous state- and action-space

    Reinforcement learning in the continuous state-space poses the problem of the inability to store the values of all state-action pairs in a lookup table, due to both storage limitations and the inability to visit all states sufficiently often to learn the correct values. This can be overcome with the use of function approximation techniques with generalisation capability, such as artificial neural networks, to store the value function. When this is applied, we can select the optimal action by comparing the values of each possible action; however, when the action-space is continuous this is not possible. In this thesis we investigate methods to select the optimal action when artificial neural networks are used to approximate the value function, through the application of numerical optimization techniques. Although it has been stated in the literature that gradient-ascent methods can be applied to the action selection [47], it is also stated that solving this problem would be infeasible, and it is therefore claimed that it is necessary to utilise a second artificial neural network to approximate the policy function [21, 55]. The major contributions of this thesis include the investigation of the applicability of action selection by numerical optimization methods, including gradient-ascent along with other derivative-based and derivative-free numerical optimization methods, and the proposal of two novel algorithms based on the application of two alternative action selection methods: NM-SARSA [40] and Nelder-Mead-SARSA. We empirically compare the proposed methods to state-of-the-art methods from the literature on three continuous state- and action-space control benchmark problems: minimum-time full swing-up of the Acrobot; the Cart-Pole balancing problem; and a double-pole variant. We also present novel results from the application of the existing direct policy search method genetic programming to the Acrobot benchmark problem [12, 14].
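    The core idea of selecting a continuous action by numerically optimizing an approximate value function can be sketched as follows. This is an illustration only: a fixed toy function stands in for a trained network, and golden-section search stands in for the derivative-free optimizers (such as Nelder-Mead) studied in the thesis:

```python
import math

# Stand-in for a trained value-function network: Q(s, a) over a continuous
# action a in [-1, 1]. For this toy Q the maximiser is a = 0.5 * s.
def q_value(state, action):
    return -(action - 0.5 * state) ** 2 + math.cos(state)

def select_action(state, lo=-1.0, hi=1.0, iters=40):
    """Derivative-free action selection: golden-section search maximising
    Q(state, .) over the action interval, no policy network required."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if q_value(state, c) >= q_value(state, d):
            b = d  # maximum lies in [a, d]
        else:
            a = c  # maximum lies in [c, b]
    return (a + b) / 2
```

Golden-section search assumes Q is unimodal in the action, which holds for this toy example; the thesis's point is that such inner optimizations are cheap enough to run at every action-selection step.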

    Convex optimization of launch vehicle ascent trajectories

    This thesis investigates the use of convex optimization techniques for the ascent trajectory design and guidance of a launch vehicle. An optimized mission design and the implementation of a minimum-propellant guidance scheme are key to increasing the rocket's carrying capacity and cutting the costs of access to space. However, the complexity of the launch vehicle optimal control problem (OCP), due to the high sensitivity to the optimization parameters and the numerous nonlinear constraints, makes the application of traditional optimization methods somewhat unappealing, as either significant computational costs or accurate initialization points are required. Instead, recent convex optimization algorithms theoretically guarantee convergence in polynomial time regardless of the initial point. The main challenge consists in converting the nonconvex ascent problem into an equivalent convex OCP. To this end, lossless and successive convexification methods are employed on the launch vehicle problem to set up a sequential convex optimization algorithm that converges to the solution of the original problem in a short time. Motivated by the computational efficiency and reliability of the devised optimization strategy, the thesis also investigates the suitability of the convex optimization approach for the computational guidance of a launch vehicle upper stage in a model predictive control (MPC) framework. Since MPC is based on recursively solving an OCP onboard to determine the optimal control actions, the resulting guidance scheme is not only performance-oriented but also intrinsically robust to model uncertainties and random disturbances, thanks to the closed-loop architecture. The characteristics of real-world launch vehicles are taken into account by considering rocket configurations inspired by SpaceX's Falcon 9 and ESA's VEGA as case studies. Extensive numerical results prove the convergence properties and the efficiency of the approach, establishing convex optimization as a promising tool for launch vehicle ascent trajectory design and guidance algorithms.
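    The successive-convexification idea, i.e. repeatedly solving a convex approximation of the nonconvex problem inside a trust region, can be sketched on a one-dimensional toy objective. This shows only the iteration pattern (linearize, solve, accept or shrink), not the thesis's launch-vehicle formulation:

```python
def f(x):
    # toy nonconvex objective standing in for the ascent-trajectory cost
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def successive_linearization(x0, radius=0.5, iters=100):
    """Solve a sequence of convex (here, linear) subproblems, each valid
    only within a trust region around the current iterate."""
    x = x0
    for _ in range(iters):
        g = grad_f(x)
        # convex subproblem: minimise f(x) + g*d subject to |d| <= radius;
        # for a linear model the optimum sits on the trust-region boundary
        d = -radius if g > 0 else radius
        if f(x + d) < f(x):
            x = x + d          # accept the step
        else:
            radius *= 0.5      # model was poor: shrink region, re-linearise
    return x
```

In the real algorithm each subproblem is a full convex OCP solved by an interior-point method; the trust region and re-linearization loop play the same role as here.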

    Learning the Designer's Preferences to Drive Evolution

    This paper presents the Designer Preference Model, a data-driven solution that seeks to learn from user-generated data in a Quality-Diversity Mixed-Initiative Co-Creativity (QD MI-CC) tool, with the aim of modelling the user's design style to better assess the tool's procedurally generated content with respect to that user's preferences. Through this approach, we aim to increase the user's agency over the generated content in a way that neither stalls the user-tool reciprocal stimuli loop nor fatigues the user with periodical suggestion handpicking. We describe the details of this novel solution, as well as its implementation in the MI-CC tool the Evolutionary Dungeon Designer. We present and discuss our findings from the initial tests carried out, spotting the open challenges for this combined line of research that integrates MI-CC with Procedural Content Generation through Machine Learning. Comment: 16 pages. Accepted and to appear in proceedings of the 23rd European Conference on the Applications of Evolutionary and bio-inspired Computation, EvoApplications 202

    Investigating hybrids of evolution and learning for real-parameter optimization

    In recent years, more and more advanced techniques have been developed in the field of hybridizing evolution and learning, which means that more applications can benefit from this progress. One example of these advanced techniques is the Learnable Evolution Model (LEM), which adopts learning as a guide for the general evolutionary search. Despite this trend and the progress in LEM, there are still many ideas and attempts that deserve further investigation and testing. For this purpose, this thesis develops a number of new algorithms that combine more learning algorithms with evolution in different ways. With these developments, we expect to understand the effects of and relations between evolution and learning, and also to achieve better performance in solving complex problems. The machine learning algorithms combined with the standard Genetic Algorithm (GA) are the supervised learning method k-nearest-neighbours (KNN), the Entropy-Based Discretization (ED) method, and the decision tree learning algorithm ID3. We test these algorithms on various real-parameter function optimization problems, especially the functions of the CEC 2005 special session on real-parameter function optimization. Additionally, a medical cancer chemotherapy treatment problem is solved in this thesis by some of our hybrid algorithms. The performance of these algorithms is compared with standard genetic algorithms and other well-known contemporary evolution-and-learning hybrid algorithms, among them the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and variants of the Estimation of Distribution Algorithm (EDA). Some important results have been derived from our experiments on these developed algorithms. 
    Among them, we found that even some very simple learning methods, hybridized properly with the evolutionary procedure, can provide significant performance improvements; and when more complex learning algorithms are incorporated with evolution, the resulting algorithms are very promising and compete very well against state-of-the-art hybrid algorithms, both on well-defined real-parameter function optimization problems and on a practical evaluation-expensive problem.
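    A LEM-style hybridization of a GA with KNN might look like the following sketch, in which the learner classifies individuals as good or bad and offspring predicted to be bad are rejected. The fitness function, operators, and parameters here are illustrative assumptions, not the thesis's implementations:

```python
import random

random.seed(1)

def fitness(x):
    # sphere function (minimisation), a standard real-parameter benchmark
    return sum(v * v for v in x)

def knn_predict(point, examples, k=3):
    """Tiny KNN: examples is a list of (vector, label), label in {'good','bad'}."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(examples, key=lambda e: dist(e[0], point))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

def lem_style_ga(dim=3, pop_size=20, generations=30):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        half = pop_size // 2
        # label the current population: top half "good", bottom half "bad"
        examples = ([(p, "good") for p in pop[:half]] +
                    [(p, "bad") for p in pop[half:]])
        children = []
        while len(children) < half:
            a, b = random.sample(pop[:half], 2)
            child = [(u + v) / 2 + random.gauss(0, 0.1) for u, v in zip(a, b)]
            # learning as a guide: only admit offspring KNN predicts as "good"
            if knn_predict(child, examples) == "good":
                children.append(child)
        pop = pop[:half] + children
    return min(pop, key=fitness)

best = lem_style_ga()
```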

    Fuzzy Differential Evolution Algorithm

    The Differential Evolution (DE) algorithm is a powerful search technique for solving global optimization problems over continuous space. The search initialization of this algorithm does not adequately capture vague preliminary knowledge from the problem domain. This thesis proposes a novel Fuzzy Differential Evolution (FDE) algorithm, as an alternative approach, in which vague information about the search space can be represented and used to deliver a more efficient search. The proposed FDE algorithm utilizes fuzzy set theory concepts to modify the traditional DE algorithm's search initialization and mutation components. FDE, alongside other key DE features, is implemented in a convenient decision support system software package. Four benchmark functions are used to demonstrate the performance of the new FDE and its practical utility. Additionally, the application of the algorithm is illustrated through a water management case study problem. The new algorithm shows faster convergence for most of the benchmark functions.
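    The fuzzy initialization idea, i.e. encoding vague prior knowledge as a fuzzy number and biasing the initial DE population toward it, can be sketched as follows. The triangular fuzzy number, benchmark function, and DE parameters are illustrative assumptions, not the thesis's exact components:

```python
import random

random.seed(2)

def sphere(x):
    # benchmark objective (minimisation)
    return sum(v * v for v in x)

def triangular_sample(lo, peak, hi):
    """Sample from a triangular fuzzy number (lo, peak, hi), encoding the
    vague prior belief that good solutions lie near `peak`."""
    return random.triangular(lo, hi, peak)

def fuzzy_de(dim=4, pop=15, gens=60, F=0.6, CR=0.9, prior=0.5):
    # fuzzy initialisation: bias the initial population toward the prior guess
    P = [[triangular_sample(-5, prior, 5) for _ in range(dim)]
         for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            # classic DE/rand/1 mutation + binomial crossover
            a, b, c = random.sample([p for j, p in enumerate(P) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR
                     else P[i][d] for d in range(dim)]
            if sphere(trial) <= sphere(P[i]):   # greedy selection
                P[i] = trial
    return min(P, key=sphere)

best = fuzzy_de()
```

Here only initialization is fuzzified; the thesis also modifies the mutation component.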

    Multivariate discretization of continuous valued attributes.

    The area of knowledge discovery and data mining is growing rapidly. Feature discretization is a crucial issue in Knowledge Discovery in Databases (KDD), or data mining, because most data sets used in real-world applications have features with continuous values. Discretization is performed as a preprocessing step of data mining to make data mining techniques useful for these data sets. This thesis addresses the discretization issue by proposing a multivariate discretization (MVD) algorithm. It begins with a number of common discretization algorithms, such as equal-width discretization, equal-frequency discretization, Naïve discretization, entropy-based discretization, chi-square discretization, and orthogonal hyperplanes, and then compares the results achieved by the multivariate discretization (MVD) algorithm with the accuracy results of these other algorithms. This thesis is divided into six chapters. It covers a few common discretization algorithms, tests these algorithms on real-world data sets varying in size and complexity, and shows how data visualization techniques can be effective in determining the degree of complexity of a given data set. We have examined the multivariate discretization (MVD) algorithm on the same data sets. After that, we have classified the discretized data using artificial neural networks: a single-layer perceptron and a multilayer perceptron with the back-propagation algorithm. We have trained the classifier using the training data set and tested its accuracy using the testing data set. Our experiments lead to better accuracy results with some data sets and lower accuracy results with others, subject to the degree of data complexity. We have then compared the accuracy results of the multivariate discretization (MVD) algorithm with those achieved by the other discretization algorithms, and found that the MVD algorithm produces good accuracy results in comparison with them.
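    Two of the baseline algorithms mentioned above, equal-width and equal-frequency binning, can be sketched in a few lines (a minimal illustration of the baselines, not the thesis's MVD algorithm):

```python
def equal_width_bins(values, k):
    """Equal-width discretisation: split the value range into k bins of
    identical width and map each value to its bin index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a constant feature
    return [min(int((v - lo) / width), k - 1) for v in values]

def equal_frequency_bins(values, k):
    """Equal-frequency discretisation: each bin receives (roughly) the
    same number of values, determined by rank order."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    per = len(values) / k
    for rank, i in enumerate(order):
        bins[i] = min(int(rank / per), k - 1)
    return bins
```

These are univariate: each feature is binned in isolation, which is exactly the limitation a multivariate method addresses by considering features jointly.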

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms to aid in the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I will describe two recent examples---one having to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other having to do with selecting good clusters or communities from a data graph (representing a social or information network)---that drew on ideas from both areas and that may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems. Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201
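    The column-selection example rests on statistical leverage scores: the squared row norms of the top-k right singular vectors, which measure how much each column contributes to the best rank-k subspace. A minimal sketch on synthetic data (the matrix and its signal structure are illustrative assumptions, not the SNP data from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data matrix: 20 samples x 10 features, where columns 0 and 1 carry
# almost all of the signal and the rest is small noise.
A = rng.normal(size=(20, 10)) * 0.1
A[:, 0] += np.linspace(-3.0, 3.0, 20)
A[:, 1] += np.sin(np.linspace(0.0, 6.0, 20)) * 3.0

def leverage_scores(A, k):
    """Leverage of each column relative to the best rank-k subspace:
    squared row norms of the top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return (Vt[:k] ** 2).sum(axis=0)

scores = leverage_scores(A, k=2)
top_columns = np.argsort(scores)[::-1][:2]  # the two most influential columns
```

Columns with high leverage are the natural candidates for a CUR-type decomposition; in randomized variants they are sampled with probability proportional to these scores rather than picked deterministically.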

    Effective retrieval and new indexing method for case based reasoning: Application in chemical process design

    In this paper we try to improve the retrieval step of case-based reasoning for preliminary design. This improvement deals with three major parts of our CBR system. First, in the preliminary design step, some uncertainties, such as imprecise or unknown values, remain in the description of the problem, because they need a deeper analysis to be withdrawn. To deal with this issue, the description of the problem at hand is softened with fuzzy set theory: features are described with a central value, a percentage of imprecision, and a relation with respect to the central value. These additional data allow us to build a domain of possible values for each attribute. With this representation, the calculation of the similarity function is affected, so the characteristic function is used to calculate the local similarity between two features. Second, we focus our attention on the main goal of the retrieve step in CBR: finding relevant cases for adaptation. In this second part, we discuss the assumption of similarity used to find the most appropriate case. We highlight that in some situations this classical similarity must be complemented with further knowledge to facilitate case adaptation. To avoid failure during the adaptation step, we implement a method that couples the similarity measurement with an adaptability one, in order to approximate the cases' utility more accurately. The latter gives deeper information for the reuse of cases. In the last part, we present a generic indexing technique for the case base and a new algorithm for searching for relevant cases in memory. The sphere indexing algorithm is a domain-independent index with performance equivalent to that of decision trees, but its main strength is that it puts the current problem at the center of the search area, avoiding boundary issues. All these points are discussed and exemplified through the preliminary design of a chemical engineering unit operation.
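    The fuzzy local-similarity computation described above (a feature given as a central value plus a percentage of imprecision) might be sketched as follows; the triangular membership function and the weighted aggregation are plausible assumptions, not necessarily the paper's exact formulas:

```python
def triangular_membership(x, center, imprecision):
    """Membership of value x in the fuzzy feature (center, +/- imprecision%)."""
    spread = abs(center) * imprecision / 100.0
    if spread == 0:
        return 1.0 if x == center else 0.0
    return max(0.0, 1.0 - abs(x - center) / spread)

def local_similarity(problem_feature, case_value):
    """Local similarity between a stored (crisp) case value and the fuzzy
    problem feature, via the membership/characteristic function."""
    center, imprecision = problem_feature
    return triangular_membership(case_value, center, imprecision)

def global_similarity(problem, case, weights=None):
    """Weighted average of the local similarities across all attributes."""
    sims = [local_similarity(p, c) for p, c in zip(problem, case)]
    weights = weights or [1.0] * len(sims)
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)
```

A retrieve step would then rank stored cases by `global_similarity`, before the adaptability measure discussed in the paper refines that ranking.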

    Automatic Identification and Representation of the Cornea–Contact Lens Relationship Using AS-OCT Images

    [Abstract] The clinical study of the cornea–contact lens relationship is widely used in the process of adapting the scleral contact lens (SCL) to the ocular morphology of patients. In that sense, the measurement of the adjustment between the SCL and the cornea can be used to study the comfort or potential damage that the lens may produce in the eye. The current analysis procedure implies the manual inspection of anterior segment optical coherence tomography (AS-OCT) images by clinical experts. This process presents several limitations, such as the inability to obtain complex metrics, the inaccuracy of manual measurements, and the need for a time-consuming and tedious process by the expert, among others. This work proposes a fully automatic methodology for the extraction of the areas of interest in the study of the cornea–contact lens relationship and the measurement of representative metrics that allow clinicians to quantitatively measure the adjustment between the lens and the eye. In particular, three distance metrics are herein proposed: vertical, normal to the tangent of the region of interest, and by the nearest point. Moreover, the images are classified to characterize the analysis as belonging to the central cornea, peripheral cornea, limbus or sclera (regions where the inner layer of the lens has already joined the cornea). Finally, the methodology graphically presents the results of the identified segmentations using an intuitive visualization that facilitates the analysis and diagnosis of the patients by the clinical experts. This work is supported by the Instituto de Salud Carlos III, Government of Spain, and FEDER funds of the European Union through the DTS18/00136 research project, and by the Ministerio de Ciencia, Innovación y Universidades, Government of Spain, through the DPI2015-69948-R and RTI2018-095894-B-I00 research projects. Moreover, this work has received financial support from the European Union (European Regional Development Fund, ERDF) and the Xunta de Galicia, Grupos de Referencia Competitiva, Ref. ED431C 2016-047.
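    Of the three distance metrics, the nearest-point one is straightforward to sketch for discretised layer boundaries. The curve representation as point lists is an assumption; the paper works on boundaries segmented from AS-OCT images:

```python
import math

def nearest_point_distance(point, curve):
    """Distance from a point to the closest sample of a discretised curve."""
    return min(math.dist(point, q) for q in curve)

def nearest_distances(lens_curve, cornea_curve):
    """Per-point cornea-lens clearance along the lens boundary, using the
    nearest-point metric (one value per lens sample)."""
    return [nearest_point_distance(p, cornea_curve) for p in lens_curve]
```

The vertical metric would instead pair points sharing the same x-coordinate, and the normal-to-tangent metric would cast a ray perpendicular to the local tangent; the nearest-point metric is the simplest of the three and lower-bounds the other two.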