
    A Methodology to Enable Concurrent Trade Space Exploration of Space Campaigns and Transportation Systems

    Space exploration campaigns detail the ways and means to achieve the goals of our human spaceflight programs. Significant strategic, financial, and programmatic investments over long timescales are required to execute them, and these investments must be justified to decision makers. To enable an informed down-selection, many alternative campaign designs are presented at the conceptual level, each as a set and sequence of individual missions that meets the technical and programmatic goals and constraints of the campaign. Each mission is executed by in-space transportation systems, which deliver crew or cargo payloads to various destinations. The design of each transportation system is highly dependent on campaign goals, and even small changes in subsystem design parameters can prompt significant changes in the overall campaign strategy. However, the current state of the art describes campaign and vehicle design processes that are generally performed independently, which limits the ability to assess these sensitive interactions. The objective of this research is to establish a methodology for space exploration campaign design that represents transportation systems as collections of subsystems and integrates their design process to enable concurrent trade space exploration. More specifically, the goal is to identify existing campaign and vehicle design processes to use as a foundation for improvement and eventual integration. Over the past two decades, researchers have adapted terrestrial logistics and supply chain optimization processes to the space campaign design problem by accounting for the challenges that accompany space travel. Fundamentally, a space campaign is formulated as a network design problem in which destinations, such as orbits or surfaces of planetary bodies, are represented as nodes, with the routes between them as arcs. The objective of this design problem is to optimize the flow of commodities within the network using the available transportation systems.
Given the dynamic nature and the number of commodities involved, each campaign can be modeled as a time-expanded, generalized multi-commodity network flow and solved using mixed-integer programming. To address the challenge of modeling complex concepts of operations (ConOps), this formulation was extended to include paths as sets of arcs, further enabling the inclusion of vehicle stacks and payload transfers in the campaign optimization process. Further, given this research's focus on transportation systems, the typically fixed orbital nodes in the logistics network are modified to represent ranges of orbits, categorized by their characteristic energy. This enables the vehicle design process to vary each orbit in the mission to find the best one per vehicle. By extension, once the processes are integrated, the arc costs of dV and dT are updated at each iteration. Once campaign goals and external constraints are included, the formulated campaign design process generates alternatives at the conceptual level, where each alternative identifies an optimal set and sequence of missions to perform. Representing transportation systems as collections of subsystems introduces challenges in the design of each vehicle, with a high degree of coupling between the subsystems as well as with the driving mission. Additionally, the sizing of each subsystem can have many inputs and outputs linked across the system, resulting in a complex multi-disciplinary analysis and optimization problem. By leveraging the ontology within the Dynamic Rocket Equation Tool (DYREQT), this problem can be solved rapidly by defining each system as a hierarchy of elements and subelements, the latter corresponding to external subsystem-level sizing models. DYREQT also enables the construction of individual missions as a series of events, which can be directly driven and generated by the mission set found by the campaign optimization process.
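As an illustrative sketch of the time-expanded network idea (not the thesis's full generalized multi-commodity mixed-integer formulation), a single-commodity version reduces to a shortest-path search over (location, time-step) states with dV arc costs. All node names, dV values, and durations below are hypothetical:

```python
import heapq

# Illustrative time-expanded logistics network: each state is a
# (location, time-step) pair; arcs carry a delta-v cost (km/s) and a
# duration in time steps.  All names and numbers are made up.
ARCS = {
    "LEO":  [("TLI", 3.1, 1)],           # trans-lunar injection burn
    "TLI":  [("NRHO", 0.9, 2)],          # insertion into halo orbit
    "NRHO": [("LLO", 0.7, 1), ("SURF", 2.8, 1)],
    "LLO":  [("SURF", 1.9, 1)],
    "SURF": [],
}

def min_dv_path(src, dst, horizon):
    """Dijkstra over the time-expanded graph: states are (loc, t)."""
    best = {(src, 0): 0.0}
    heap = [(0.0, src, 0, [src])]
    while heap:
        dv, loc, t, path = heapq.heappop(heap)
        if loc == dst:
            return dv, path
        for nxt, cost, dt in ARCS[loc]:
            state = (nxt, t + dt)
            if t + dt <= horizon and dv + cost < best.get(state, float("inf")):
                best[state] = dv + cost
                heapq.heappush(heap, (dv + cost, nxt, t + dt, path + [nxt]))
    return None

dv, path = min_dv_path("LEO", "SURF", horizon=6)
print(path, round(dv, 2))   # ['LEO', 'TLI', 'NRHO', 'LLO', 'SURF'] 6.6
```

In the real formulation the mixed-integer program chooses commodity flows and vehicle assignments jointly, but the time expansion (duplicating each node per time step) is the same device shown here.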
This process produces sized vehicles iteratively using the mission input, subsystem-level sizing models, and the ideal rocket equation. The literature review of campaign and vehicle design processes identifies the different pieces of the overall methodology, but not its structure. The specific iterative solver, the corresponding convergence criteria, and the initialization scheme are the primary areas of experimentation in this thesis. Using NASA's reference 3-element Human Landing System campaign, the results of these experiments show that the methodology performs best when initialized with the vehicle sizing and synthesis process and a path guess that minimizes dV. Further, a converged solution is found faster using non-linear Gauss-Seidel fixed-point iteration rather than Jacobi iteration, together with a set of convergence criteria that covers vehicle masses and mission data. To show improvement over the state of the art, and to demonstrate how the methodology enables concurrent trade studies, it is applied at scale in a demonstration based on NASA's Design Reference Architecture 5.0. The LH2 Nuclear Thermal Propulsion (NTP) option is traded against NH3 and H2O at the vehicle level to show the impacts of alternative propellants on vehicle sizing and campaign strategy. Martian surface stay duration is traded at the campaign level through two options: long-stay and short-stay. The methodology produced four alternative campaigns over the course of two weeks, providing data about the launch and aggregation strategy, mission profiles, high-level figures of merit, and subsystem-level vehicle sizes for each alternative. As expected, given their lower specific impulses, the alternative NTP propellants showed significant growth in the overall mass required to execute each campaign, subsequently reflected in the number of drop tanks and launches.
Further, the short-stay campaign option showed an overall required mass similar to that of its long-stay counterpart, but higher overall costs despite requiring fewer elements. Both trade studies supported the overall hypothesis that integrating the campaign and vehicle design processes addresses the coupling between them and directly shows the impacts of their sensitivities on each other. As a result, the research objective was fulfilled by producing a methodology that addresses the key gaps identified in the current state of the art.
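The core sizing loop described above, a fixed-point iteration coupling the ideal rocket equation with subsystem mass models, can be sketched as follows. The inert-mass fraction, fixed mass, and NTP stage numbers are illustrative assumptions, not DYREQT's actual models:

```python
import math

G0 = 9.80665          # standard gravity, m/s^2

def size_stage(dv, isp, payload, inert_frac, fixed_mass, tol=1e-9):
    """Fixed-point iteration between the ideal rocket equation and a
    simple inert-mass model (a stand-in for subsystem-level sizing):
    propellant load drives tank/structure mass, which drives propellant."""
    mass_ratio = math.exp(dv / (G0 * isp))   # ideal rocket equation
    m_prop = 0.0
    for _ in range(200):
        m_dry = fixed_mass + inert_frac * m_prop          # structure + tanks
        m_prop_new = (m_dry + payload) * (mass_ratio - 1.0)
        if abs(m_prop_new - m_prop) < tol:
            break
        m_prop = m_prop_new
    return m_prop, m_dry

# Hypothetical LH2 NTP stage: 4 km/s burn, Isp 900 s, 20 t payload.
m_prop, m_dry = size_stage(dv=4000.0, isp=900.0, payload=20000.0,
                           inert_frac=0.12, fixed_mass=3000.0)
print(round(m_prop), round(m_dry))
```

Because the inert fraction couples dry mass back into the propellant requirement, a lower specific impulse inflates the converged masses faster than the rocket equation alone suggests, which is the mechanism behind the drop-tank growth noted for the NH3 and H2O trades.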

    Geometric Data Analysis: Advancements of the Statistical Methodology and Applications

    Data analysis has become fundamental to our society and comes in multiple facets and approaches. Nevertheless, research and applications have focused primarily on data from Euclidean vector spaces. Consequently, the majority of methods applied today are not suited to more general data types. Driven by needs from fields like image processing, (medical) shape analysis, and network analysis, more and more attention has recently been given to data from non-Euclidean spaces, particularly (curved) manifolds. This has led to the field of geometric data analysis, whose methods explicitly take the structure (for example, the topology and geometry) of the underlying space into account. This thesis contributes to the methodology of geometric data analysis by generalizing several fundamental notions from multivariate statistics to manifolds. We focus on two different viewpoints. First, we use Riemannian structures to derive a novel regression scheme for general manifolds that relies on splines of generalized Bézier curves. It can accurately model non-geodesic relationships, for example, time-dependent trends with saturation effects or cyclic trends. Since Bézier curves can be evaluated with the constructive de Casteljau algorithm, working with data from manifolds of high dimension (for example, a hundred thousand or more) is feasible. Relying on the regression scheme, we further develop a hierarchical statistical model for an adequate analysis of longitudinal data in manifolds, and a method to control for confounding variables. Second, we focus on data that is not only manifold- but even Lie group-valued, which is frequently the case in applications. This can only be achieved by endowing the group with an affine connection structure that is generally not Riemannian. Utilizing it, we derive generalizations of several well-known dissimilarity measures between data distributions that can be used for various tasks, including hypothesis testing.
Invariance under data translations is proven, and for one measure a connection to continuous distributions is given. A further central contribution of this thesis is that it demonstrates use cases for all of these notions in real-world applications, particularly in problems from shape analysis in medical imaging and archaeology. We replicate or further quantify several known findings on shape changes of the femur under osteoarthritis and of the right hippocampus under Alzheimer's disease. Furthermore, in an archaeological application, we obtain new insights into the construction principles of ancient sundials. Last but not least, we use the geometric structure underlying human brain connectomes to predict cognitive scores. Utilizing a sample selection procedure, we obtain state-of-the-art results.
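The constructive de Casteljau algorithm referenced above can be sketched in the Euclidean case, where the manifold generalization replaces each linear interpolation with a geodesic segment:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation of the control points.  On a manifold, each
    (1-t)*a + t*b step would instead walk along the geodesic from a to b."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Cubic Bezier in the plane; the endpoints are interpolated exactly.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.0))   # [0.0, 0.0]
print(de_casteljau(ctrl, 1.0))   # [4.0, 0.0]
print(de_casteljau(ctrl, 0.5))   # [2.0, 1.5]
```

Because the algorithm touches only pairwise interpolations, its cost grows linearly in the ambient dimension, which is why very high-dimensional shape spaces remain tractable.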

    Moving Horizon Estimation for the Two-tank System

    This thesis presents the application and evaluation of Moving Horizon Estimation (MHE) for the nonlinear two-tank system. MHE is an iterative optimization-based approach that continuously updates the state estimates by solving an optimization problem over a fixed-size, receding horizon. Linear and nonlinear MHE-based estimators are designed and implemented in Matlab for evaluation in a simulation environment and in Simulink for on-line realization and validation. The linear and nonlinear MHE are compared with the Kalman filter and Extended Kalman filter through extensive simulations and experimental validation, assessing their accuracy, efficiency, and overall performance. The results of the two-tank state and unmeasured-disturbance estimation show the benefits of the MHE approach.
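The receding-horizon idea can be illustrated with a deliberately simple scalar linear example, where the horizon least-squares problem collapses to a one-parameter fit. The two-tank MHE of the thesis instead solves a nonlinear program at every step; this toy only shows the horizon mechanics:

```python
def mhe_estimate(ys, a):
    """Moving-horizon estimate for the scalar linear system
        x[k+1] = a * x[k],   y[k] = x[k] + noise.
    Every state in the horizon equals a**k times the horizon-initial
    state, so the least-squares problem over the window has a closed
    form: fit x0, then propagate it to the current time."""
    num = sum((a ** k) * y for k, y in enumerate(ys))
    den = sum((a ** k) ** 2 for k in range(len(ys)))
    x0 = num / den                        # optimal horizon-initial state
    return x0 * a ** (len(ys) - 1)        # estimate at the newest sample

# Noise-free check: data generated by the model is recovered exactly.
a, x_true = 0.9, 5.0
ys = [x_true * a ** k for k in range(8)]  # sliding window of 8 samples
x_hat = mhe_estimate(ys, a)
print(x_hat)
```

As new measurements arrive, the oldest sample is dropped and the fit repeated, which is exactly the fixed-size receding-horizon update; nonlinear dynamics simply replace the closed form with a numerical optimizer.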

    Duality, Derivative-Based Training Methods and Hyperparameter Optimization for Support Vector Machines

    In this thesis we consider the application of Fenchel's duality theory and gradient-based methods to the training and hyperparameter optimization of Support Vector Machines. We show that the dualization of convex training problems is theoretically possible in a rather general formulation. For training problems with a special structure (for instance, standard training problems) we find that the resulting optimality conditions admit a concrete interpretation. This approach leads immediately to the well-known notion of support vectors and to a formulation of the Representer Theorem. The proposed theory is applied to several examples, so that dual formulations of training problems and the associated optimality conditions can be derived straightforwardly. Furthermore, we consider different formulations of the primal training problem which are equivalent under certain conditions. We also argue that the relation of the corresponding solutions to the solution of the dual training problem is not always intuitive. Based on these findings, we consider the application of customized optimization methods to the primal and dual training problems. A particular realization of Newton's method is derived which can be used to solve the primal training problem accurately. Moreover, we introduce a general convergence framework covering different types of decomposition methods for the solution of the dual training problem. In doing so, we are able to generalize well-known convergence results for the SMO method. Additionally, we discuss the complexity of the SMO method and motivate a shrinking strategy that reduces the computational effort. In a final theoretical part, we consider the problem of hyperparameter optimization. We argue that this problem can be handled efficiently by means of gradient-based methods if the training problems are formulated appropriately.
Finally, we evaluate the theoretical results concerning the training and hyperparameter optimization approaches in practice on several example training problems.
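To make the primal training problem concrete, here is a minimal subgradient-descent solver for the standard soft-margin hinge-loss objective. This is a deliberately simple stand-in for the Newton- and SMO-type methods discussed above, with made-up toy data:

```python
def train_linear_svm(xs, ys, lam=0.01, epochs=200):
    """Subgradient descent on the primal problem
        min_{w,b}  (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i*(w.x_i + b)),
    using the standard diminishing step size 1/(lam*t)."""
    w, b = [0.0] * len(xs[0]), 0.0
    for t in range(1, epochs + 1):
        eta = 1.0 / (lam * t)
        gw = [lam * wi for wi in w]          # gradient of the regularizer
        gb = 0.0
        for x, y in zip(xs, ys):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:                   # hinge active: point violates margin
                gw = [gi - y * xi / len(xs) for gi, xi in zip(gw, x)]
                gb -= y / len(xs)
        w = [wi - eta * gi for wi, gi in zip(w, gw)]
        b -= eta * gb
    return w, b

# Linearly separable toy data; the trained hyperplane should classify it.
xs = [(-2.0, 1.0), (-1.5, 0.5), (1.5, -0.5), (2.0, -1.0)]
ys = [-1, -1, 1, 1]
w, b = train_linear_svm(xs, ys)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in xs]
print(preds)   # [-1, -1, 1, 1]
```

The points with active hinge terms at the optimum are precisely the support vectors; the dual and SMO views reorganize the same problem around the multipliers attached to those points.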

    Learning Stable Koopman Models for Identification and Control of Dynamical Systems

    Learning models of dynamical systems from data is a widely studied problem in control theory and machine learning. One recent approach to modelling nonlinear systems considers the class of Koopman models, which embed the nonlinear dynamics in a higher-dimensional linear subspace. Learning a Koopman embedding allows the analysis and control of nonlinear systems using tools from linear systems theory. Many methods have recently been proposed for data-driven learning of such Koopman embeddings, but most do not consider the stability of the Koopman model. Stability is an important and desirable property for models of dynamical systems: unstable models tend to be non-robust to input perturbations and can produce unbounded outputs, both of which are undesirable when the model is used for prediction and control. In addition, recent work has shown that stability guarantees may act as a regularizer for model fitting. As such, a natural direction is to construct Koopman models with inherent stability guarantees. Two new classes of Koopman models are proposed that bridge the gap between Koopman-based methods and learning stable nonlinear models. The first model class is guaranteed to be stable, while the second is guaranteed to be stabilizable, with an explicit stabilizing controller that renders the model stable in closed loop. Furthermore, these models are unconstrained in their parameter sets, thereby enabling efficient optimization via gradient-based methods. Theoretical connections between the stability of Koopman models and forms of nonlinear stability such as contraction are established. To demonstrate the effect of the stability guarantees, the stable Koopman model is applied to a system identification problem, while the stabilizable model is applied to an imitation learning problem. Experimental results show empirically that the proposed models achieve better performance than prior methods without stability guarantees.
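A one-dimensional sketch shows what a Koopman embedding and its stability mean. The system and observable below are chosen (as an assumption for illustration) so that the lift is exactly linear, and a scalar least-squares fit, the 1-D case of extended dynamic mode decomposition, recovers a stable linear model:

```python
import math

# Nonlinear system x+ = sqrt(x) on (0, 1).  The observable g(x) = log(x)
# linearizes it exactly: g(x+) = log(sqrt(x)) = 0.5 * log(x), so the
# Koopman model in the lifted coordinate is g+ = A*g with A = 0.5.
xs = [0.2, 0.4, 0.6, 0.8]                   # sampled states
g  = [math.log(x) for x in xs]              # lifted states
gp = [math.log(math.sqrt(x)) for x in xs]   # lifted successor states

# Scalar least squares A = <g+, g> / <g, g>  (EDMD in one dimension).
A = sum(a * b for a, b in zip(gp, g)) / sum(a * a for a in g)
print(A)   # ~0.5, so |A| < 1: the fitted linear model is stable
```

In higher dimensions A becomes a matrix, and the stability guarantee discussed above amounts to parameterizing that matrix so its spectral radius (or a contraction certificate) is bounded below one by construction rather than checked after fitting.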

    Data-Driven Exploration of Coarse-Grained Equations: Harnessing Machine Learning

    In scientific research, understanding and modeling physical systems often involves working with complex equations called partial differential equations (PDEs). These equations are essential for describing the relationships between variables and their derivatives, allowing us to analyze a wide range of phenomena, from fluid dynamics to quantum mechanics. Traditionally, the discovery of PDEs relied on mathematical derivation and expert knowledge. However, the advent of data-driven approaches and machine learning (ML) techniques has transformed this process, revolutionizing the task of uncovering the complex equations that describe physical systems. The primary goal of this thesis is to develop methodologies that can automatically extract simplified equations by training models on available data. ML algorithms can learn underlying patterns and relationships within the data, making it possible to extract simplified equations that capture the essential behavior of the system. This study considers three distinct learning categories: black-box, gray-box, and white-box learning. The initial phase of the research focuses on black-box learning, where no prior information about the equations is available. Three different neural network architectures are explored: a multi-layer perceptron (MLP), a convolutional neural network (CNN), and a hybrid architecture combining a CNN with long short-term memory (CNN-LSTM). These networks are applied to uncover the non-linear equations of motion associated with phase-field models, covering both non-conserved and conserved order parameters. The second part of the study addresses explicit equation discovery in gray-box learning scenarios, where a portion of the equation is unknown.
The framework employs eXtended Physics-Informed Neural Networks (X-PINNs) and incorporates domain decomposition in space to uncover a segment of the widely known Allen-Cahn equation. Specifically, the Laplacian part of the equation is assumed to be known, while the objective is to discover the non-linear component. Symbolic regression techniques are then applied to deduce the precise mathematical expression of the unknown segment. The final part of the thesis focuses on white-box learning, aiming to uncover equations that offer a detailed understanding of the studied system. Specifically, a coarse parametric ordinary differential equation (ODE) is introduced to capture the spreading-radius behavior of calcium-magnesium-aluminosilicate (CMAS) droplets. Using the Physics-Informed Neural Network (PINN) framework, and assuming that all terms of the ODE are known, the unknown parameters of the equation are determined, facilitating precise estimation. This approach significantly improves our comprehension of the spreading dynamics of CMAS droplets.
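As a minimal illustration of the symbolic-regression step, suppose the unknown reaction term is sought in a candidate library {u, u^2, u^3}. A plain least-squares fit over sampled data then recovers the coefficients of the Allen-Cahn nonlinearity f(u) = u - u^3. This toy replaces the X-PINN pipeline with ordinary least squares on synthetic samples:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (plain Python)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            M[r] = [x - fac * y for x, y in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Candidate library {u, u^2, u^3}; data generated by f(u) = u - u^3.
us = [-1.0, -0.5, 0.5, 1.0, 2.0]
Phi = [[u, u * u, u ** 3] for u in us]      # library matrix
f = [u - u ** 3 for u in us]                # observed reaction term
G = [[sum(Phi[k][i] * Phi[k][j] for k in range(len(us))) for j in range(3)]
     for i in range(3)]                     # normal equations: Phi^T Phi
rhs = [sum(Phi[k][i] * f[k] for k in range(3 + 2)) for i in range(3)]
c = solve(G, rhs)
print([round(ci, 6) for ci in c])   # [1.0, 0.0, -1.0]
```

Recovering the exact sparse coefficient pattern (1, 0, -1) is what turns a black-box fit into an explicit, human-readable term; sparsity-promoting variants of this regression handle noisy data.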

    Survey on Motion Planning for Multirotor Aerial Vehicles in Plan-based Control Paradigm

    In general, optimal motion planning can be performed both locally and globally. The choice between local and global planning techniques depends mainly on whether the environmental conditions are dynamic or static. Hence, the most suitable choice is local planning, either alone or alongside global planning. When designing optimal motion planners, whether local or global, the key metrics to bear in mind are execution time, asymptotic optimality, and quick reaction to dynamic obstacles. Such planning approaches can address these target metrics more efficiently than alternatives such as path planning followed by smoothing. Thus, the foremost objective of this study is to analyse the related literature in order to understand how the formulation of the motion planning problem, especially trajectory planning, impacts the listed metrics when applied to generating optimal trajectories in real time for Multirotor Aerial Vehicles. As a result of this research, the trajectory planning problem is broken down into a set of subproblems, and the methods for addressing each subproblem are identified and described in detail. Finally, the most prominent results from 2010 to 2022 are summarized and presented in the form of a timeline.
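One building block that recurs throughout the multirotor trajectory-planning literature is the closed-form minimum-jerk (quintic) rest-to-rest polynomial, which meets position, velocity, and acceleration boundary conditions with zero start and end velocity and acceleration. A sketch with illustrative boundary values:

```python
def min_jerk(x0, xf, T):
    """Closed-form minimum-jerk rest-to-rest trajectory on [0, T]:
    x(t) = x0 + (xf - x0)*(10 s^3 - 15 s^4 + 6 s^5) with s = t/T.
    Start and end velocity and acceleration are zero by construction."""
    def x(t):
        s = t / T
        return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    def v(t):
        s = t / T
        return (xf - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    return x, v

x, v = min_jerk(0.0, 2.0, T=4.0)
print(x(0.0), x(4.0))   # 0.0 2.0  (endpoints interpolated)
print(v(0.0), v(4.0))   # 0.0 0.0  (rest-to-rest)
```

Piecing such polynomial segments together through waypoints, and optimizing segment times, is one common answer to the execution-time and smoothness metrics surveyed above; sampling-based and optimization-based planners trade these properties differently.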