
    Genetic Programming to Optimise 3D Trajectories

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Trajectory optimisation is the task of finding the optimal route connecting a start and an end point. The suitability of a trajectory depends on non-intersection with any obstacles as well as on predefined performance metrics. In the context of UAVs, the goal is to minimise the cost of the route, in terms of energy or time, while avoiding restricted flight zones. Artificial intelligence techniques, including evolutionary computation, have been applied to trajectory optimisation with varying degrees of success. This thesis explores the use of genetic programming (GP) to optimise trajectories in 3D space by encoding 3D geographic trajectories as syntax trees representing a curve. A comprehensive review of the relevant literature is presented, covering the theory and techniques of GP as well as the principles and challenges of 3D trajectory optimisation. The main contribution of this work is the development and implementation of a novel GP algorithm that uses function trees to encode 3D geographical trajectories. The trajectories are validated and evaluated using a real-world dataset and multiple objectives. The results demonstrate the effectiveness of the proposed algorithm, which outperforms existing methods in terms of speed, automaticity, and robustness. Finally, insights and recommendations for future research in this area are provided, highlighting the potential for GP to be applied to other complex optimisation problems in engineering and science.
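    To make the encoding concrete, the sketch below shows one way a 3D trajectory can be represented by expression trees and scored against obstacles. It is a minimal illustration under assumed primitives and an assumed penalty-based fitness, not the thesis's actual implementation.

```python
import math
import random

# A trajectory is encoded by two expression trees y(t), z(t); x runs
# linearly with the parameter t so every candidate connects the endpoints.
PRIMITIVES = {"add": 2, "mul": 2, "sin": 1}

def random_tree(depth=3):
    """Grow a random expression tree over the parameter t."""
    if depth == 0 or random.random() < 0.3:
        return ("t",) if random.random() < 0.5 else ("const", random.uniform(-2, 2))
    op = random.choice(list(PRIMITIVES))
    return (op,) + tuple(random_tree(depth - 1) for _ in range(PRIMITIVES[op]))

def evaluate(tree, t):
    """Recursively evaluate a syntax tree at parameter value t."""
    op = tree[0]
    if op == "t":
        return t
    if op == "const":
        return tree[1]
    args = [evaluate(child, t) for child in tree[1:]]
    return {"add": lambda a, b: a + b,
            "mul": lambda a, b: a * b,
            "sin": lambda a: math.sin(a)}[op](*args)

def fitness(individual, obstacles, samples=50):
    """Cost = path length plus a large penalty per obstacle intersection."""
    y_tree, z_tree = individual
    pts = [(t, evaluate(y_tree, t), evaluate(z_tree, t))
           for t in (i / samples for i in range(samples + 1))]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    hits = sum(1 for p in pts for (c, r) in obstacles if math.dist(p, c) < r)
    return length + 1000.0 * hits

# Example: score one random candidate against a spherical no-fly zone.
random.seed(0)
ind = (random_tree(), random_tree())
print(fitness(ind, obstacles=[((0.5, 0.0, 0.0), 0.2)]))
```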

    The Impact of Armed Conflict on Agricultural Production: The Case of Libya, 1970-2017

    I examine the long-term impacts of a recent civil war on the agricultural sector in Libya. Through the associated destruction and market disruption, armed conflict affects the agricultural sector in complex ways, including reducing future growth potential by eroding physical and environmental capital. Libya, with its arid climate, low soil fertility, and low agricultural productivity, has an underdeveloped agricultural sector, which reduces the inherent complexity of investigating this impact. However, governmental interest in the agricultural sector has been inconsistent, as dominant oil revenue compensates for agricultural deficits through large subsidies. This absence of attention and oversight has resulted in a lack of quality agricultural data, making it difficult to develop beneficial policies to improve sector growth. For its simplicity and ease of interpretation, a Cobb-Douglas style production function with a Solow-Swan modification is used to characterize the agricultural sector. Though limited, data were collected from FAO and ILO on land, irrigation, fertilizer, machinery, and labor in Libya spanning 1970 to 2017, covering periods of stability and conflict, in order to estimate agricultural sector growth relative to the status quo. To account for the long-term impacts of conflict on growth, inputs are divided into environmental capital, physical capital, and labor. Next, elasticity parameters are estimated through an OLS regression of the Cobb-Douglas production function before and after the conflict. A Chow/QLR test is used to confirm the existence and timing of the structural break in the production function arising at the onset of the 2011 conflict. Finally, post-conflict growth rates are compared using the pre-conflict and conflict regression parameters. Changes in the estimated parameters from the start of the conflict were significant at the 5% level for both the labor and physical inputs, while the change in the environmental elasticity parameter was not significant. The conflict elasticity estimates were -0.518, -0.803, and -18.9 for the physical, environmental, and labor inputs, compared to their pre-conflict values of 0.107, 0.146, and 1.315, respectively. The two key questions are whether the growth path can recover to pre-conflict levels and what production losses accrue over the period the sector takes to return to those pre-conflict rates. A preliminary cost analysis was applied to estimate the investment required to generate an increase in agricultural GDP. The most cost-efficient way to increase production after the conflict (under the assumption of a return to pre-conflict elasticities) is to increase the quantity and quality of fertilizer used. Increasing machinery is the least efficient way to grow sector GDP. This may reflect two realities in Libya: weak soil quality and inefficient use of machinery, due to diseconomies of size with smaller plots. Lessons from other post-conflict countries suggest that a necessary but insufficient condition for rebuilding the sector is the application of good agricultural policies. New policies could improve agricultural returns enough to surpass losses due to the conflict if post-conflict productivity is improved. These policies must be combined with good management and reliable data to effect positive changes within the sector.
    In Libya's case, the primary post-conflict policies should include improving data collection and focusing on increased education and training to enhance the agricultural sector's rehabilitation. I estimated three specific post-conflict scenarios: business-as-usual (BAU), convergence between the pre-conflict and post-conflict growth paths within 50 years, and convergence within 20 years. Based on the experiences of other post-conflict countries, Libya's agricultural production will likely converge back to the pre-conflict agricultural GDP trajectory within 10-15 years, so long as there is a minimal transition period and agricultural policies are consistent and well managed. The expected cost to the economy is measured by the discounted difference between the pre-conflict trajectory GDP and the estimated post-conflict GDP up to the convergence point. For the likely 20-year convergence, the estimated opportunity cost is 25.0 billion (2010 USD). Should the sector return to business-as-usual, the present discounted value of the conflict is 49.0 billion (2010 USD). The impact of the conflict is lessened by poor productivity before the conflict: it appears that the conflict slowed business-as-usual but did not significantly erode environmental capital, which would have further crippled the recovery.
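    The estimation strategy lends itself to a compact summary in equations; the notation below (physical capital K_p, environmental capital K_e, labor L) is illustrative rather than the thesis's own.

```latex
% Cobb-Douglas production function and its log-linear OLS form:
\[
  Y_t = A\, K_{p,t}^{\alpha}\, K_{e,t}^{\beta}\, L_t^{\gamma}
  \qquad\Longrightarrow\qquad
  \ln Y_t = \ln A + \alpha \ln K_{p,t} + \beta \ln K_{e,t} + \gamma \ln L_t + \varepsilon_t
\]
% Chow test for a structural break at the 2011 onset of conflict, where
% RSS_p is the pooled residual sum of squares, RSS_1 and RSS_2 are the
% sub-period sums, k the number of parameters, n_1 + n_2 the sample size:
\[
  F = \frac{(\mathrm{RSS}_p - \mathrm{RSS}_1 - \mathrm{RSS}_2)/k}
           {(\mathrm{RSS}_1 + \mathrm{RSS}_2)/(n_1 + n_2 - 2k)}
\]
```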

    Evolutionary Algorithms in Engineering Design Optimization

    Evolutionary algorithms (EAs) are population-based global optimizers which, owing to their characteristics, have made it straightforward to solve many real-world optimization problems over the last three decades, particularly in engineering fields. Their main advantages are the following: they impose no requirements on the objective/fitness function (continuity, differentiability, convexity, etc.), and they are not hindered by discrete and/or mixed variables or by the need for uncertainty quantification in the search. Moreover, they can handle more than one objective function simultaneously through evolutionary multi-objective optimization algorithms. This set of advantages, together with the continuously increasing computing capability of modern computers, has expanded their application in research and industry. From the application point of view, this Special Issue welcomes all engineering fields, such as aerospace and aeronautical, biomedical, civil, chemical and materials science, electronic and telecommunications, energy and electrical, manufacturing, logistics and transportation, mechanical, naval architecture, reliability, robotics, and structural engineering. Within the EA field, the integration of innovative and improved aspects into algorithms for solving real-world engineering design problems in the abovementioned application fields is welcomed and encouraged, including the following: parallel EAs, surrogate modelling, hybridization with other optimization techniques, and multi-objective and many-objective optimization.
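    As a minimal sketch of the loop such algorithms share, the snippet below evolves a population using only fitness evaluations, so the objective may be discontinuous or involve mixed variables; the function names and operator choices are illustrative assumptions.

```python
import random

def evolve(fitness, new_individual, mutate, crossover,
           pop_size=50, generations=100):
    """Generic (mu + lambda)-style loop: only fitness *evaluations* are
    needed, never derivatives, so the objective may be discontinuous."""
    pop = [new_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection, then variation by crossover and mutation.
        parents = [min(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(pop_size)]
        # Survivor selection over parents and offspring combined.
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return min(pop, key=fitness)

# Toy usage: minimize a discontinuous 1-D function that would defeat
# gradient-based methods.
best = evolve(
    fitness=lambda x: abs(x - 3) + (5 if x < 0 else 0),
    new_individual=lambda: random.uniform(-10, 10),
    mutate=lambda x: x + random.gauss(0, 0.5),
    crossover=lambda a, b: (a + b) / 2,
)
print(best)
```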

    Virtual sensing and sensors selection for efficient temperature monitoring in indoor environments

    Real-time estimation of temperatures in indoor environments is critical for several reasons, including the upkeep of comfort levels, the fulfillment of legal requirements, and energy efficiency. Unfortunately, installing an adequate number of sensors at the desired locations to ensure uniform monitoring of the temperature in a given premise may be troublesome. Virtual sensing is a set of techniques for replacing a subset of physical sensors with virtual ones, allowing the monitoring of unreachable locations, reducing sensor deployment costs, and providing a fallback solution for sensor failures. In this paper, we deal with temperature monitoring in an open-space office, where a set of physical sensors is deployed at uneven locations. Our main goal is to develop a black-box virtual sensing framework, completely independent of the physical characteristics of the considered scenario, that can in principle be adapted to any indoor environment. We first perform a systematic analysis of various distance metrics that can be used to determine the best sensors on which to base temperature monitoring. Then, following a genetic programming approach, we design a novel metric that combines and summarizes the information brought by the considered distance metrics, outperforming their effectiveness. Thereafter, we propose a general and automatic approach to the problem of determining the best subset of sensors worth keeping in a given room. Leveraging the selected sensors, we then conduct a comprehensive assessment of different strategies for predicting the temperatures observed by physical sensors based on other sensors' data, also evaluating the reliability of the generated outputs. The results show that, at least in the given scenario, the proposed black-box approach is capable of automatically selecting a subset of sensors and of deriving a virtual sensing model for accurate and efficient monitoring of the environment. (Brunello, A.; Urgolo, A.; Pittino, F.; Montvay, A.; Montanari, A.)
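    A rough outline of the pipeline (sensor selection, then a regression-based virtual sensor) is sketched below. Note that the paper evolves its own combined metric via genetic programming, whereas this sketch substitutes a plain correlation score; all names and thresholds are assumptions.

```python
import numpy as np

def select_sensors(readings, k):
    """Black-box selection: keep the k physical sensors whose readings
    best track all the others, scored here by the average absolute
    correlation with the remaining sensors (a stand-in for the paper's
    GP-evolved metric)."""
    n = readings.shape[1]
    corr = np.abs(np.corrcoef(readings, rowvar=False))
    scores = (corr.sum(axis=0) - 1) / (n - 1)   # exclude self-correlation
    return list(np.argsort(scores)[::-1][:k])

def fit_virtual_sensor(readings, kept, target):
    """Linear virtual sensor: predict the removed sensor `target` from
    the kept ones via ordinary least squares."""
    X = np.column_stack([readings[:, kept], np.ones(len(readings))])
    coef, *_ = np.linalg.lstsq(X, readings[:, target], rcond=None)
    return coef

# Toy usage with synthetic data: 8 sensors, 500 time steps.
rng = np.random.default_rng(0)
base = rng.normal(21, 1, size=(500, 1))              # shared room temperature
data = base + rng.normal(0, 0.3, size=(500, 8))      # per-sensor variation
kept = select_sensors(data, k=3)
removed = [i for i in range(8) if i not in kept][0]
print(kept, fit_virtual_sensor(data, kept, removed))
```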

    Inclusive genetic programming

    The promotion and maintenance of population diversity in a Genetic Programming (GP) algorithm has been shown to be an important part of the evolutionary process. Such diversity maintenance improves the exploration capabilities of the GP algorithm, which in turn improves the quality of the solutions found by avoiding local optima. This paper further investigates and demonstrates the efficacy of a GP heuristic proposed in previous work: Inclusive Genetic Programming (IGP). This heuristic can be classified as a niching technique that performs the evolutionary operations of crossover, mutation, and selection by considering individuals belonging to different niches, in order to maintain and exploit a certain degree of diversity in the population, instead of evolving the niches separately to find different local optima. A comparison between a standard formulation of GP and IGP is carried out on nine different benchmarks drawn from synthetic and real-world data. The results highlight how greater diversity in the population, measured in terms of entropy, leads to better results on both training and test data, showing that an improvement in generalization capability is also achieved.
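    The distinguishing move of IGP, drawing crossover parents from different niches rather than evolving niches in isolation, can be sketched as follows; the niche criterion and data representation here are illustrative assumptions.

```python
import random
from collections import defaultdict

def partition_into_niches(population, niche_key):
    """Group individuals by some structural feature (e.g. tree depth);
    the real IGP niching criterion may differ -- this is illustrative."""
    niches = defaultdict(list)
    for ind in population:
        niches[niche_key(ind)].append(ind)
    return list(niches.values())

def inclusive_mating_pairs(population, niche_key, n_pairs):
    """Parents are deliberately drawn from *different* niches so that
    crossover mixes genetic material across niches and the population
    keeps a high entropy, instead of optimizing each niche separately."""
    niches = partition_into_niches(population, niche_key)
    pairs = []
    for _ in range(n_pairs):
        if len(niches) >= 2:
            a, b = random.sample(range(len(niches)), 2)
            pairs.append((random.choice(niches[a]), random.choice(niches[b])))
        else:
            pairs.append(tuple(random.choices(population, k=2)))
    return pairs

# Toy usage: individuals stood in for by (depth, genome) tuples.
pop = [(random.randint(1, 5), random.random()) for _ in range(20)]
print(inclusive_mating_pairs(pop, niche_key=lambda ind: ind[0], n_pairs=3))
```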

    Controlling the Growth of Individuals' Size in Genetic Programming

    Genetic programming (GP) is a hyperheuristic optimization approach that has been applied to a wide range of problems involving symbolic representations or complex data structures. However, the method can be severely hindered by the increased computational resources required and by premature convergence caused by uncontrolled code growth. We introduce HARM-GP, a novel operator equalization approach that adaptively shapes the genotype size distribution of individuals in order to effectively control code growth. Its probabilistic nature minimizes the overhead on the evolutionary process, while its generic formulation allows the approach to remain independent of the problem and of the genetic operators used. Comparative results are provided over twelve problems with different dynamics and against nine other algorithms taken from the literature. They show that HARM-GP is excellent at controlling code growth while maintaining good overall performance. The results also demonstrate the effectiveness of HARM-GP at limiting overtraining and overfitting in real-world supervised learning problems.
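    The flavor of operator equalization can be conveyed with a small accept/reject sketch: an offspring enters the population with a probability that depends on how under-represented its size bin is relative to a target distribution. HARM-GP adapts that distribution dynamically; the fixed target below is a simplifying assumption.

```python
import random
from collections import Counter

def equalized_accept(offspring_size, target_hist, current_hist, total):
    """Accept an offspring with a probability proportional to how
    under-represented its size bin is versus the target distribution.
    HARM-GP adapts `target_hist` over the run -- here it is fixed."""
    want = target_hist.get(offspring_size, 0.0)
    have = current_hist[offspring_size] / max(total, 1)
    if want == 0.0:
        return False                      # size bin ruled out entirely
    return random.random() < min(1.0, want / max(have, 1e-9))

# Toy usage: bias the accepted population toward small trees (sizes 1..10),
# which is how code growth (bloat) gets suppressed.
target = {s: (11 - s) / sum(range(1, 11)) for s in range(1, 11)}
current, accepted = Counter(), []
for _ in range(1000):
    size = random.randint(1, 20)          # stand-in for a bred offspring
    if equalized_accept(size, target, current, len(accepted)):
        current[size] += 1
        accepted.append(size)
print(sorted(Counter(accepted).items()))
```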

    A Corpus-driven Approach toward Teaching Vocabulary and Reading to English Language Learners in a U.S.-based K-12 Context through a Mobile App

    To ease teachers' decisions about which vocabulary instruction should focus on, a recent line of research argues that pedagogically prepared word lists may offer the most efficient order for learning vocabulary, with an optimized context for instruction in each of four K-12 content areas (math, science, social studies, and language arts), by providing English Language Learners (ELLs) with the most frequent words in each area. Educators and school experts have acknowledged the need to develop new materials, including computerized enhanced texts and effective strategies aimed at improving ELLs' mastery of academic and STEM-related lexicon. Not all words in a language are equal in their role in comprehending the language and expressing ideas or thoughts. For this study, I used a corpus-driven approach operationalized through a text analysis method. I built two corpora, the Teacher's U.S. Corpus (TUSC) and the Science and Math Academic Corpus for Kids (SMACK), with a focus on word lemmas rather than the inflectional and derivational variants of word families. To create the corpora, I collected and analyzed a total of 122 textbooks commonly used in the states of Florida and California. Recruiting, scanning, and converting the textbooks was carried out over a period of more than two years, from October 2014 to March 2017. In total, this school corpus contains 10,519,639 running words and 16,344 lemmas saved in 16,315 Word document pages. From the corpora, I developed six word lists: three frequency-based word lists (high-, mid-, and low-frequency), academic and STEM-related word lists, and an essential word list (EWL). I then used the word lists as the database for a mobile app, Vocabulary in Reading Study (VIRS), available on the App Store (iOS) and Google Play (Android), alongside a website (www.myvirs.com). I also developed a new K-12 dictionary targeting the vocabulary needs of ELLs in the K-12 context. This is a frequency-based dictionary which categorizes words into three groups of high-, medium-, and low-frequency words, with two separate sections for academic and STEM words. The dictionary has 16,500 lemmas with their derivational and inflectional forms.
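    The core corpus step, counting lemma frequencies and cutting the vocabulary into frequency bands, can be sketched as follows; the coverage thresholds and the identity lemmatizer are stand-in assumptions, not the dissertation's actual cut-offs.

```python
from collections import Counter
import re

def frequency_bands(texts, lemmatize=lambda w: w, high=0.80, mid=0.95):
    """Count lemma frequencies over a corpus and split the vocabulary
    into high-/mid-/low-frequency bands by cumulative coverage of
    running words. The 80%/95% thresholds are illustrative; a real
    lemmatizer (rather than the identity stand-in) would map
    inflectional variants to one lemma."""
    counts = Counter(lemmatize(w) for text in texts
                     for w in re.findall(r"[a-z']+", text.lower()))
    total = sum(counts.values())
    bands, cumulative = {"high": [], "mid": [], "low": []}, 0
    for lemma, n in counts.most_common():
        cumulative += n
        coverage = cumulative / total
        key = "high" if coverage <= high else "mid" if coverage <= mid else "low"
        bands[key].append(lemma)
    return bands

# Toy usage on a two-"textbook" corpus.
corpus = ["The cell divides and the cell grows.",
          "Plants use energy from the sun to grow."]
print(frequency_bands(corpus))
```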

    Advanced Techniques for Search-Based Program Repair

    Debugging and repairing software defects costs the global economy hundreds of billions of dollars annually and accounts for as much as 50% of programmers' time. To tackle the burgeoning expense of repair, researchers have proposed novel techniques to automatically localise and repair such defects. Collectively, these techniques are referred to as automated program repair. Despite promising early results, recent studies have demonstrated that existing automated program repair techniques are considerably less effective than previously believed. Current approaches are limited in terms of the number and kinds of bugs they can fix, the size of patches they can produce, or the programs to which they can be applied. To become economically viable, automated program repair needs to overcome all of these limitations. Search-based repair is the only approach to program repair that may be applied to any bug or program without assuming the existence of formal specifications. Despite its generality, current search-based techniques are restricted: they are either efficient or capable of fixing multiple-line bugs, but no existing technique is both. Furthermore, most techniques rely on the assumption that the material necessary to craft a repair already exists within the faulty program. By using existing code to craft repairs, the size of the search space is vastly reduced compared to generating code from scratch. However, recent results, which show that almost all repairs generated by a number of search-based techniques can be explained as deletion, lead us to question whether this assumption is valid. In this thesis, we identify the challenges facing search-based program repair and demonstrate ways of tackling them. We explore if and how knowledge of candidate patch evaluations can be used to locate the source of bugs. We use software repository mining techniques to discover the form of a better repair model capable of addressing a greater number of bugs. We conduct a theoretical and empirical analysis of existing search algorithms for repair before demonstrating a more effective alternative inspired by greedy algorithms. To ensure reproducibility, we propose and use a methodology for conducting high-quality automated program repair research. Finally, we assess our progress towards solving the challenges of search-based program repair and reflect on the future of the field.
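    For readers unfamiliar with the paradigm, a generate-and-validate search of the kind the thesis analyzes can be sketched in a few lines; the edit operators and test oracle below are deliberately simplified stand-ins, not any particular tool's implementation.

```python
import random

def search_based_repair(program_lines, donor_statements, run_tests,
                        max_candidates=1000):
    """Generate-and-validate repair: mutate the faulty program by
    deleting a statement or inserting one drawn from elsewhere in the
    codebase (the reuse assumption the thesis questions), and keep the
    first candidate that passes the whole test suite."""
    for _ in range(max_candidates):
        patch = list(program_lines)
        line = random.randrange(len(patch))
        if random.random() < 0.5:
            del patch[line]                                       # deletion edit
        else:
            patch.insert(line, random.choice(donor_statements))   # insertion edit
        if run_tests(patch):
            return patch
    return None

# Toy usage: 'programs' are lists of statements; the oracle simply checks
# for a required statement, standing in for a real test suite.
buggy = ["x = 1", "x = x - 1", "return x"]
donors = ["x = x + 1", "x = 0"]
fixed = search_based_repair(buggy, donors, run_tests=lambda p: "x = x + 1" in p)
print(fixed)
```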