    A PARETO-FRONTIER ANALYSIS OF PERFORMANCE TRENDS FOR SMALL REGIONAL COVERAGE LEO CONSTELLATION SYSTEMS

    As satellites become smaller, cheaper, and quicker to manufacture, constellation systems will be an increasingly attractive means of meeting mission objectives. Optimizing satellite constellation geometries is therefore a topic of considerable interest. As constellation systems become more achievable, providing coverage to specific regions of the Earth will become more commonplace. Small countries or companies that currently cannot afford large and expensive constellation systems will, now or in the near future, be able to afford their own systems to meet their individual requirements for small coverage regions. The focus of this thesis was to optimize constellation geometries for small coverage regions, with the design limited to 1-6 satellites in a Walker-delta configuration at an altitude of 200-1500 km, providing remote sensing coverage with a minimum ground elevation angle of 60 degrees. Few Pareto frontiers have previously been developed and analyzed to show the tradeoffs among such performance metrics, especially for this type of constellation system. The performance metrics focus on geometric coverage and include revisit time, daily visibility time, constellation altitude, ground elevation angle, and the number of satellites. The objective space containing these performance metrics was characterized for five regions at latitudes of 0, 22.5, 45, 67.5, and 90 degrees. In addition, the effect of the minimum ground elevation angle on the achievable performance of this type of constellation system was studied. Finally, the traditional Walker-delta pattern constraint was relaxed to allow for asymmetrical designs, which were compared against the Walker-delta pattern to see how it performs relative to a more relaxed design space. The goal of this thesis was to provide a framework as well as to obtain and analyze Pareto frontiers for the performance of small regional coverage LEO constellation systems, with an in-depth analysis of the trends in both the design and objective spaces of the obtained frontiers. A variation on the εNSGA-II algorithm, an evolutionary multi-objective optimizer that extends Kalyanmoy Deb's NSGA-II, was used along with a MATLAB/STK interface to produce these Pareto frontiers. The algorithm proved very efficient at obtaining the various frontiers, and the study successfully characterized the design and solution space surrounding small LEO remote sensing constellation systems providing small regional coverage.
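
    The thesis's own εNSGA-II/STK toolchain is not reproduced in this listing, but the core object it produces, a Pareto frontier, is simple to illustrate. The sketch below is a minimal MATLAB non-dominated filter over made-up design objectives (revisit time, altitude, satellite count, all treated here as minimized); the values are illustrative stand-ins, not the thesis's actual data.

```matlab
% Minimal Pareto-frontier filter: keep designs not dominated by any other
% (all objectives minimized). Objective values are random stand-ins for
% (revisit time [min], altitude [km], number of satellites).
rng(1);
F = [rand(50,1)*120, 200 + rand(50,1)*1300, randi([1 6],50,1)];

n = size(F,1);
dominated = false(n,1);
for i = 1:n
    for j = 1:n
        if j ~= i && all(F(j,:) <= F(i,:)) && any(F(j,:) < F(i,:))
            dominated(i) = true;   % design j dominates design i
            break
        end
    end
end
frontier = F(~dominated,:);        % the non-dominated set
fprintf('%d of %d designs are Pareto-optimal\n', sum(~dominated), n);
```

    εNSGA-II builds on this basic dominance test with epsilon-dominance archiving and adaptive population sizing, which is what makes it practical for the larger design spaces the thesis explores.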

    Evolutionary model type selection for global surrogate modeling

    Due to the scale and computational complexity of currently used simulation codes, global surrogate models (metamodels) have become indispensable tools for exploring and understanding the design space. Due to their compact formulation they are cheap to evaluate and thus readily facilitate visualization, design space exploration, rapid prototyping, and sensitivity analysis. They can also be used as accurate building blocks in design packages or larger simulation environments. Consequently, there is great interest in techniques that facilitate the construction of such approximation models while minimizing the computational cost and maximizing model accuracy. Many surrogate model types exist (Support Vector Machines, Kriging, Neural Networks, etc.), but no type is optimal in all circumstances, nor is there any hard theory available that can help make this choice. In this paper we present an automatic approach to the model type selection problem. We describe an adaptive global surrogate modeling environment with adaptive sampling, driven by speciated evolution. Different model types are evolved cooperatively using a Genetic Algorithm (heterogeneous evolution) and compete to approximate the iteratively selected data. In this way the optimal model type and complexity for a given data set or simulation code can be determined dynamically. Its utility and performance are demonstrated on a number of problems where it outperforms traditional sequential execution of each model type.
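
    The paper's speciated GA is more elaborate than a listing can show, but the underlying selection criterion, different surrogate types competing on the same data, can be sketched in MATLAB. Below, a hypothetical 1-D simulator is approximated by two candidate model types, and holdout error decides which type survives; the test function, sample counts, and RBF width are all assumptions made for illustration.

```matlab
% Minimal model-type selection by holdout error, as a stand-in for the
% paper's speciated-GA competition between surrogate types.
f = @(x) sin(3*x) .* exp(-0.3*x);            % stand-in "simulation code"
xtr = linspace(0, 4, 15)';  ytr = f(xtr);    % training samples
xvl = linspace(0.1, 3.9, 40)'; yvl = f(xvl); % holdout samples

% Candidate type 1: cubic polynomial
p = polyfit(xtr, ytr, 3);
err_poly = mean((polyval(p, xvl) - yvl).^2);

% Candidate type 2: Gaussian RBF interpolant with assumed width s
s = 0.5;
K  = exp(-(xtr - xtr').^2 / (2*s^2));        % training kernel matrix
w  = K \ ytr;                                % interpolation weights
Kv = exp(-(xvl - xtr').^2 / (2*s^2));        % holdout kernel matrix
err_rbf = mean((Kv*w - yvl).^2);

if err_rbf < err_poly, best = 'RBF'; else, best = 'cubic polynomial'; end
fprintf('poly MSE %.2e, RBF MSE %.2e -> keep %s\n', err_poly, err_rbf, best);
```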

    Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring a large integration step to impute over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multitype branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by assuming only that the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for hematopoiesis and transposable element evolution.
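
    For context, the classical baseline the abstract contrasts against is direct matrix exponentiation: for a CTMC with generator matrix Q, the matrix of transition probabilities over elapsed time t is P(t) = e^{Qt}. A toy birth-death sketch in MATLAB, with invented rates, shows the approach and why it only scales to small state spaces (expm costs roughly cubic time in the number of states).

```matlab
% Transition probabilities of a CTMC via matrix exponentiation,
% P(t) = expm(Q*t). Toy birth-death chain on states 0..N with assumed
% per-individual birth rate b, death rate d, and a small immigration term.
N = 20; b = 0.4; d = 0.3;
Q = zeros(N+1);
for n = 0:N-1
    Q(n+1, n+2) = b*n + 0.1;      % birth (plus immigration so 0 is not absorbing)
end
for n = 1:N
    Q(n+1, n) = d*n;              % death
end
Q = Q - diag(sum(Q,2));           % diagonal makes each row sum to zero
P = expm(Q * 2.0);                % transition probabilities over t = 2
fprintf('P(X(2)=5 | X(0)=3) = %.4f\n', P(4,6));
```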

    Orbital Constellation Design and Analysis Using Spherical Trigonometry and Genetic Algorithms: A Mission Level Design Tool for Single Point Coverage on Any Planet

    Recent interest surrounding large scale satellite constellations has increased analysis efforts to create the most efficient designs. Multiple studies have successfully optimized constellation patterns using equations-of-motion propagation methods and genetic algorithms to arrive at optimal solutions. However, these approaches are computationally expensive for large scale constellations, making them impractical for quick iterative design analysis. A minimalist algorithm and efficient computational method could therefore improve solution times. This thesis provides a tool for single-target constellation optimization using spherical trigonometry propagation and an evolutionary genetic algorithm based on a multi-objective optimization function. Each constellation is evaluated on a normalized fitness scale. The performance objective functions are based on average coverage time, average revisits, and a minimized number of satellites. To reach a wider audience, the design tool was written in traditional MATLAB and does not require any additional toolboxes. To keep the tool efficient, spherical trigonometry propagation is used to evaluate constellations for both coverage time and revisits over a single target. This approach was chosen to avoid solving complex ordinary differential equations for each satellite over a long period of time. By converting the satellite and planetary target into vectors of latitude and longitude in a common celestial sphere (i.e., ECI), the angle between each pair of vectors can be calculated in three-dimensional space. Comparing this angle against a maximum view angle, controlled by the elevation angle of the target and the satellite's altitude, determines coverage time and the number of revisits during a single orbital period. Traditional constellations are defined by an altitude (a), an inclination (I), and the Walker-delta pattern notation T/P/F, where T is the number of satellites, P is the number of orbital planes, and F indirectly defines the phasing offset between satellites in adjacent planes. Assuming circular orbits, these five parameters describe any possible constellation design. The optimization algorithm uses these parameters as evolutionary traits to iterate through the solution space, passing the best traits from one generation to the next and slowly converging the population towards an optimal solution. Using tournament-style selection, multi-parent recombination, and mutation techniques, each generation of children improves on the last as evaluated against the three performance objectives listed above. The evolutionary algorithm iterates through 100 generations (G) with a population (n) of 100. The results of this study explore optimal constellation designs for seven targets evenly spaced from 0° to 90° latitude on Earth, Mars, and Jupiter. Each test case reports the ten best constellations found by fitness. Scatterplots of the constellation design solution space and the breakdown of the multi-objective fitness function are provided to showcase convergence of the evolutionary genetic algorithm. The results highlight the ratio between constellation altitude and planetary radius as the most influential factor in achieving optimal constellations, owing to the larger field-of-view ratio achievable over smaller planetary bodies. The multi-objective fitness function, however, influences constellation design the most, because it is the main optimization driver. All future constellation optimization problems should therefore critically determine the best multi-objective fitness function for the specific study or mission.
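
    A minimal version of the visibility test described above can be written in a few lines of toolbox-free MATLAB. The altitude, minimum elevation angle, and positions below are invented for illustration; the maximum Earth central angle follows from the standard spherical geometry of the satellite-target-Earth-center triangle.

```matlab
% Coverage test: the target is visible when the central angle between the
% satellite and target unit vectors is below a maximum view angle lam_max
% set by the minimum elevation angle el and the satellite altitude h.
Re = 6378.137;                        % Earth radius, km
h  = 800;  el = deg2rad(10);          % assumed altitude and min elevation
lam_max = acos(Re/(Re+h) * cos(el)) - el;    % max Earth central angle

% Unit vectors from latitude/longitude on a common celestial sphere
sph2vec = @(lat,lon) [cos(lat)*cos(lon); cos(lat)*sin(lon); sin(lat)];
sat = sph2vec(deg2rad(44), deg2rad(-121));   % satellite sub-point (assumed)
tgt = sph2vec(deg2rad(45), deg2rad(-120));   % ground target (assumed)

lam = acos(max(-1, min(1, dot(sat, tgt))));  % central angle, clamped
fprintf('central angle %.2f deg, limit %.2f deg, visible: %d\n', ...
        rad2deg(lam), rad2deg(lam_max), lam <= lam_max);
```

    Sweeping this test over sampled epochs of each orbit is what yields the coverage-time and revisit counts that feed the fitness function.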

    Prescriptive formalism for constructing domain-specific evolutionary algorithms

    It has been widely recognised in the computational intelligence and machine learning communities that the key to understanding the behaviour of learning algorithms is to understand what representation is employed to capture and manipulate knowledge acquired during the learning process. However, traditional evolutionary algorithms have tended to employ a fixed representation space (binary strings), in order to allow the use of standardised genetic operators. This approach leads to complications for many problem domains, as it forces a somewhat artificial mapping between the problem variables and the canonical binary representation, especially when there are dependencies between problem variables (e.g. problems naturally defined over permutations). This often obscures the relationship between genetic structure and problem features, making it difficult to understand the actions of the standard genetic operators with reference to problem-specific structures. This thesis instead advocates m..
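
    A small example of the representation issue raised above: when solutions are naturally permutations (tours, orderings), a domain-specific operator such as a swap mutation keeps every offspring feasible, whereas a bit-flip on a forced binary encoding offers no such guarantee. A minimal MATLAB sketch:

```matlab
% Swap mutation on a permutation genotype: the offspring is always a
% valid permutation, so no repair step or penalty is needed.
perm = randperm(8);                 % candidate solution over 8 elements
ij = randperm(8, 2);                % two distinct positions to swap
child = perm;
child(ij) = child(fliplr(ij));      % swap -> still a valid permutation
disp(perm); disp(child);
```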

    "Going back to our roots": second generation biocomputing

    Researchers in the field of biocomputing have, for many years, successfully "harvested and exploited" the natural world for inspiration in developing systems that are robust, adaptable and capable of generating novel and even "creative" solutions to human-defined problems. However, in this position paper we argue that the time has now come for a reassessment of how we exploit biology to generate new computational systems. Previous solutions (the "first generation" of biocomputing techniques), whilst reasonably effective, are crude analogues of actual biological systems. We believe that a new, inherently inter-disciplinary approach is needed for the development of the emerging "second generation" of bio-inspired methods. This new modus operandi will require much closer interaction between the engineering and life sciences communities, as well as a bidirectional flow of concepts, applications and expertise. We support our argument by examining, in this new light, three existing areas of biocomputing (genetic programming, artificial immune systems and evolvable hardware), as well as an emerging area (natural genetic engineering) which may provide useful pointers as to the way forward.

    A Markov chain approach to ABM calibration

    Agent-based models (ABMs) are nowadays widely used; however, the lack of general methods and rules for their calibration still prevents their potential from being fully exploited. Models of this kind can rarely be studied analytically; more often they are studied by simulation. Reference [1] shows that many computer simulation models, such as ABMs, can be represented as Markov chains. Exploiting this idea, we illustrate an example of how to calibrate an ABM when it can be reformulated as a Markov chain.
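
    As a toy illustration of this idea (not the paper's actual model), consider an ABM whose aggregate dynamics reduce to a two-state Markov chain with one behavioural parameter p. Calibration then amounts to choosing p so that the chain's stationary distribution matches observed long-run state shares; all numbers below are invented.

```matlab
% Calibrate parameter p of a two-state Markov-chain reduction of an ABM
% by matching its stationary distribution to "observed" state shares.
target = [0.7 0.3];                   % observed long-run state shares
best_p = NaN; best_err = Inf;
for p = 0:0.01:1
    P = [1-p, p; 0.5, 0.5];           % transition matrix implied by p
    [V, D] = eig(P');                 % stationary dist = left eigenvector
    [~, k] = min(abs(diag(D) - 1));   % eigenvalue closest to 1
    pi_ = real(V(:,k))'; pi_ = pi_ / sum(pi_);
    err = norm(pi_ - target);
    if err < best_err, best_err = err; best_p = p; end
end
fprintf('calibrated p = %.2f (error %.3f)\n', best_p, best_err);
```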