Three-dimensional CFD simulations with large displacement of the geometries using a connectivity-change moving mesh approach
This paper deals with three-dimensional (3D) numerical simulations involving 3D moving geometries with large displacements on unstructured meshes. Such simulations are of great value to industry, but remain very time-consuming. A robust moving mesh algorithm coupling an elasticity-like mesh deformation solution and mesh optimizations was proposed in previous works, which removes the need for global remeshing when performing large displacements. The optimizations, and in particular generalized edge/face swapping, preserve the initial quality of the mesh throughout the simulation. We propose to integrate an Arbitrary Lagrangian Eulerian compressible flow solver into this process to demonstrate its capabilities in a full CFD computation context. This solver relies on a local enforcement of the discrete geometric conservation law to preserve the order of accuracy of the time integration. The displacement of the geometries is either imposed or driven by fluid–structure interaction (FSI). In the latter case, the six-degrees-of-freedom approach for rigid bodies is considered. Finally, several 3D imposed-motion and FSI examples are given to validate the proposed approach, in both academic and industrial configurations.
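As a much-simplified illustration of the deformation step described above: an elasticity-like mesh update propagates an imposed boundary displacement smoothly into the interior of the mesh. The sketch below is an assumption of this summary, not the paper's solver; it uses plain Laplacian relaxation on a small 1D vertex chain, where the real method solves an elasticity-like system on a 3D unstructured mesh.

```python
import numpy as np

def laplacian_deform(verts, neighbors, is_boundary, n_iter=200):
    """Relax each interior vertex toward the mean of its neighbors.

    A toy stand-in for the elasticity-like mesh deformation step:
    boundary vertices are held fixed (after being displaced by the
    moving geometry) and interior vertices follow smoothly.
    """
    v = verts.copy()
    for _ in range(n_iter):
        for i, nbrs in enumerate(neighbors):
            if not is_boundary[i]:
                v[i] = v[nbrs].mean(axis=0)
    return v

# 1D chain of 5 vertices; the two ends are "boundary" vertices.
verts = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
is_boundary = [True, False, False, False, True]

verts[4] += 1.0  # imposed displacement of the moving geometry
deformed = laplacian_deform(verts, neighbors, is_boundary)
# Interior vertices redistribute evenly between the fixed ends,
# so no vertex spacing collapses despite the boundary motion.
```

The relaxation converges to the linear interpolation between the two boundary positions, which is exactly the behavior that lets the mesh absorb a large displacement without global remeshing.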
Doctor of Philosophy dissertation
With modern computational resources rapidly advancing towards exascale, large-scale simulations useful for understanding natural and man-made phenomena are becoming increasingly accessible. As a result, the size and complexity of data representing such phenomena are also increasing, making the role of data analysis in propelling science even more integral. This dissertation presents research addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single "correct" reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, with serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of "correctness" of features for certain goals by revealing the phenomena of importance in the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and more scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature.
The residual numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency the most fundamental characteristic of computational analysis, one that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis, along with uncertainty visualization of the unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more general reference frames and more sophisticated domain discretizations.
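A concrete instance of such a fundamental law is topological: the Poincaré index of an isolated critical point (+1 for sources, sinks, and centers; -1 for saddles) is invariant, and a consistent analysis must not create or destroy index. The winding-number computation below is a plain numerical illustration of that law, not the dissertation's combinatorial algorithm.

```python
import numpy as np

def poincare_index(vec_field, center, radius=0.5, n=256):
    """Winding number of the vector field along a circle around `center`.

    Sums the (wrapped) angle differences of the vector along the loop;
    the total divided by 2*pi is the Poincare index: +1 for sources,
    sinks, and centers, -1 for saddles, 0 if no critical point is
    enclosed.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = center + radius * np.stack([np.cos(t), np.sin(t)], axis=1)
    v = np.array([vec_field(p) for p in pts])
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(np.append(ang, ang[0]))          # close the loop
    d = (d + np.pi) % (2.0 * np.pi) - np.pi      # wrap to (-pi, pi]
    return int(round(d.sum() / (2.0 * np.pi)))

source = lambda p: p                           # v = (x, y), index +1
saddle = lambda p: np.array([p[0], -p[1]])     # v = (x, -y), index -1
```

A purely numerical pipeline can lose or spuriously create such indices through discretization error; a combinatorial representation guarantees they are conserved by construction.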
Within-Cluster Variability Exponent for Identifying Coherent Structures in Dynamical Systems
We propose a clustering-based approach for identifying coherent flow
structures in continuous dynamical systems. We first treat a particle
trajectory over a finite time interval as a high-dimensional data point and
then cluster these data from different initial locations into groups. The
method then uses the normalized standard deviation or mean absolute deviation
to quantify the deformation. Unlike the usual finite-time Lyapunov exponent
(FTLE), the proposed algorithm considers the complete traveling history of the
particles. We also suggest two extensions of the method. To improve the
computational efficiency, we develop an adaptive approach that constructs
different subsamples of the whole particle trajectory based on a finite time
interval. To start the computation in parallel with the flow trajectory data
collection, we also develop an on-the-fly approach to improve the solution as
we continue to provide more measurements for the algorithm. The method can
efficiently compute the proposed within-cluster variability exponent (WCVE)
over a different time interval by modifying the available data points.
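The core idea (treat a trajectory's complete history as one high-dimensional data point, cluster those points, and measure within-cluster spread) can be sketched as follows. The toy field dx/dt = sin(x), the crude 2-means loop, and the plain standard deviation used here are simplifications assumed for illustration, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory(x0, steps=50, dt=0.1):
    """Forward-Euler history of dx/dt = sin(x); the basins of the stable
    points -pi and +pi act as two 'coherent' sets."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * np.sin(xs[-1]))
    return np.array(xs)

# Each complete trajectory is one high-dimensional data point.
seeds = rng.uniform(-3.0, 3.0, size=200)
data = np.stack([trajectory(s) for s in seeds])

# Crude 2-means on trajectory vectors, seeded with one history per basin.
centers = np.stack([trajectory(-2.0), trajectory(2.0)])
for _ in range(20):
    dists = ((data[:, None, :] - centers[None]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.stack([data[labels == k].mean(axis=0) for k in (0, 1)])

# Within-cluster spread: small values flag trajectories that travel
# together, i.e. candidates for coherent structures.
spread = np.array([data[labels == k].std() for k in (0, 1)])
```

Because the whole history enters the distance computation, two particles that start close but later separate end up in different clusters, which is the advantage over pointwise endpoint measures the abstract contrasts with the FTLE.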
Integration-free Learning of Flow Maps
We present a method for learning neural representations of flow maps from
time-varying vector field data. The flow map is pervasive within the area of
flow visualization, as it is foundational to numerous visualization techniques,
e.g. integral curve computation for pathlines or streaklines, as well as
computing separation/attraction structures within the flow field. Yet
bottlenecks in flow map computation, namely the numerical integration of vector
fields, can easily inhibit their use within interactive visualization settings.
In response, in our work we seek neural representations of flow maps that are
efficient to evaluate, while remaining scalable to optimize, both in
computation cost and data requirements. A key aspect of our approach is that we
can frame representation learning not as optimizing for samples of the flow
map, but rather as enforcing a self-consistency criterion on flow map
derivatives, one that eliminates the need for flow map samples, and thus
numerical integration, altogether. Central to realizing this is a novel neural
network
design for flow maps, coupled with an optimization scheme, wherein our
representation only requires the time-varying vector field for learning,
encoded as instantaneous velocity. We show the benefits of our method over
prior works in terms of accuracy and efficiency across a range of 2D and 3D
time-varying vector fields, while showing how our neural representation of flow
maps can benefit unsteady flow visualization techniques such as streaklines,
and the finite-time Lyapunov exponent.
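The derivative criterion referred to above can be written as dPhi/dT = v(Phi), with Phi the flow map and v the velocity field, and it connects directly to the FTLE through the flow map gradient. On a closed-form example both facts can be checked without any numerical integration. The linear saddle field below is an analytically tractable stand-in chosen for this sketch; the paper itself learns Phi with a neural network.

```python
import numpy as np

# Steady linear field v(x) = A @ x with a saddle: eigenvalues +1 and -1.
A = np.array([[0.0, 1.0], [1.0, 0.0]])

def flow_map_jacobian(T):
    """Closed-form flow map gradient F = d(phi)/dx = exp(A*T).

    For this particular A, A @ A = I, so
    exp(A*T) = cosh(T) * I + sinh(T) * A."""
    return np.cosh(T) * np.eye(2) + np.sinh(T) * A

def ftle(T):
    """Finite-time Lyapunov exponent from the Cauchy-Green tensor F^T F."""
    F = flow_map_jacobian(T)
    lam_max = np.linalg.eigvalsh(F.T @ F).max()
    return np.log(lam_max) / (2.0 * T)

# Self-consistency check in lieu of integration: dF/dT must equal A @ F.
T, h = 1.0, 1e-5
dF = (flow_map_jacobian(T + h) - flow_map_jacobian(T - h)) / (2.0 * h)
```

For this saddle the FTLE equals the largest eigenvalue of A (here 1) for every T, and the finite-difference derivative of F matches A @ F, which is exactly the residual a derivative-based training criterion would drive to zero.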
Reduced order modeling of convection-dominated flows, dimensionality reduction and stabilization
We present methodologies for reduced order modeling of convection-dominated flows. Accordingly, three main problems are addressed.
Firstly, an optimal manifold is realized to enhance the reducibility of convection-dominated flows. We design a low-rank auto-encoder specifically to reduce the dimensionality of solutions arising from convection-dominated nonlinear physical systems. Although existing nonlinear manifold learning methods seem to be compelling tools for reducing the dimensionality of data characterized by a large Kolmogorov n-width, they typically lack a straightforward mapping from the latent space to the high-dimensional physical space. Also, because the latent variables are often hard to interpret, many of these methods are dismissed in the reduced order modeling of dynamical systems governed by partial differential equations (PDEs). This deficiency matters to the extent that linear methods, such as principal component analysis (PCA) and Koopman operators, are still prevalent. Accordingly, we propose an interpretable nonlinear dimensionality reduction algorithm. An unsupervised learning problem is constructed that learns a diffeomorphic spatio-temporal grid which registers the output sequence of the PDEs on a non-uniform time-varying grid. The Kolmogorov n-width of the mapped data on the learned grid is minimized.
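The large-Kolmogorov-n-width difficulty with linear methods is easy to reproduce: for a translating (convected) bump, the POD/PCA singular values decay slowly, while a fixed-in-space bump with decaying amplitude is exactly rank one. The snapshot setup below is an illustrative assumption, not the thesis's test case.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
shifts = np.linspace(0.0, 0.5, 100)
bump = lambda c: np.exp(-((x - c) / 0.01) ** 2)  # narrow Gaussian at c

# Convection: the bump translates, so snapshots barely overlap.
traveling = np.stack([bump(0.25 + s) for s in shifts])
# Diffusion-like surrogate: fixed bump, decaying amplitude -> rank 1.
decaying = np.stack([np.exp(-s) * bump(0.25) for s in shifts])

def modes_for_99(snapshots):
    """Number of PCA/POD modes needed for 99% of the snapshot energy."""
    s = np.linalg.svd(snapshots, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(energy, 0.99) + 1)
```

The traveling case needs an order of magnitude more modes than the stationary one, which is the reducibility gap that the registration onto a learned spatio-temporal grid is designed to close.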
Secondly, reduced order models are constructed on the realized manifolds. We project the high-fidelity models onto the learned manifold, leading to a time-varying system of equations. Moreover, as a data-driven, model-free architecture, recurrent neural networks are trained on the learned manifold, showing the versatility of the proposed framework.
Finally, a stabilization method is developed to maintain the stability and accuracy of the projection-based ROMs on the learned manifold a posteriori. We extend the eigenvalue-reassignment stabilization method for linear time-invariant ROMs to the more general case of linear time-varying systems. Through a post-processing step, the ROMs are controlled using a constrained nonlinear least-squares minimization problem. The controller and the input signals are defined at the algebraic level, using left and right singular vectors of the reduced system matrices. The proposed stabilization method is general and applicable to a large variety of linear time-varying ROMs.
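For the time-invariant starting point, eigenvalue reassignment admits a compact sketch: diagonalize the reduced operator, reflect any right-half-plane eigenvalue across the imaginary axis, and reassemble. The direct reflection rule below is one simple choice assumed for illustration; the thesis's constrained least-squares controller for time-varying systems is considerably more general.

```python
import numpy as np

def reflect_unstable(A):
    """Reassign eigenvalues: flip any eigenvalue with positive real part
    into the left half-plane while keeping the eigenvectors.

    A toy LTI version of eigenvalue-reassignment stabilization; for a
    complex eigenvalue a+bi with a > 0 the reflection -conj(a+bi)
    gives -a+bi, preserving the oscillation frequency.
    """
    lam, V = np.linalg.eig(A)
    lam = np.where(lam.real > 0, -lam.conj(), lam)
    return (V @ np.diag(lam) @ np.linalg.inv(V)).real

A = np.array([[0.5, 1.0],
              [0.0, -2.0]])   # one unstable mode (eigenvalue 0.5)
A_stab = reflect_unstable(A)
```

The stable eigenvalue is left untouched and only the unstable one is moved, so the stabilized ROM stays as close as possible to the original dynamics.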
Visual Analysis of Large Particle Data
Particle simulations are a well-established and widely used numerical method in research and engineering. For example, particle simulations are used to study fuel atomization in aircraft turbines, and the formation of the universe is investigated by simulating dark matter particles. The amounts of data produced are immense: current simulations contain trillions of particles that move and interact with each other over time. Visualization offers great potential for the exploration, validation, and analysis of scientific datasets and their underlying models. However, its focus is usually on structured data with a regular topology. Particles, by contrast, move freely through space and time; this point of view is known in physics as the Lagrangian frame of reference. Although particles can be converted from the Lagrangian into a regular Eulerian frame of reference, such as a uniform grid, doing so for a large number of particles entails considerable effort. Moreover, this conversion usually causes a loss of precision along with increased memory consumption. In this dissertation, I will investigate new visualization techniques based specifically on the Lagrangian view, enabling efficient and effective visual analysis of large particle data.
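The Lagrangian-to-Eulerian conversion mentioned above can be sketched with a nearest-grid-point deposit; the particle count, grid resolution, and unit masses below are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lagrangian data: free-moving particles in the unit cube.
positions = rng.uniform(0.0, 1.0, size=(100_000, 3))
mass = np.full(len(positions), 1.0)

# Nearest-grid-point deposit onto a 16^3 uniform (Eulerian) grid:
# each particle's mass is binned into the cell containing it.
edges = [np.linspace(0.0, 1.0, 17)] * 3
density, _ = np.histogramdd(positions, bins=edges, weights=mass)
```

Each cell now stores only an aggregate, so sub-cell positions are lost (the loss of precision noted above), and the grid occupies memory even where no particles are, which is why techniques operating directly in the Lagrangian view are attractive.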