
    Numerical Simulation in Automotive Design


    Time-resolved velocity and pressure field quantification in a flow-focusing device for ultrafast microbubble production

    Flow-focusing devices have gained great interest in the past decade due to their capability to produce monodisperse microbubbles for diagnostic and therapeutic medical ultrasound applications. However, up-scaling production to industrial scale requires a paradigm shift from single-chip operation to highly parallelized systems. Parallelization gives rise to fluidic interactions between nozzles that, in turn, may lead to decreased monodispersity. Here, we study the velocity and pressure field fluctuations in a single flow-focusing nozzle during bubble production. We experimentally quantify the velocity field inside the nozzle at 100-ns time resolution, and a numerical model provides insight into both the oscillatory velocity and pressure fields. Our results demonstrate that, at the length scale of the flow-focusing channel, the velocity oscillations propagate on a fluid-dynamical time scale (order of microseconds), whereas the dominant pressure oscillations are linked to the bubble pinch-off and propagate on a much faster time scale (order of nanoseconds). Comment: 30 pages, 7 figures
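    A back-of-the-envelope check of the two time scales the abstract contrasts, as a minimal Python sketch; the channel length scale, jet velocity, and sound speed below are assumed representative values, not figures taken from the paper:

        # Assumed representative values; the paper's actual geometry and flow may differ.
        L = 20e-6   # flow-focusing channel length scale, m
        U = 10.0    # liquid velocity in the channel, m/s
        c = 1500.0  # speed of sound in water, m/s

        print(f"hydrodynamic time scale L/U = {L / U:.1e} s")  # ~2e-6 s (microseconds)
        print(f"acoustic time scale     L/c = {L / c:.1e} s")  # ~1.3e-8 s (nanoseconds)

    The ratio of the two scales is the channel-scale Mach number U/c, consistent with the clean separation between convected velocity oscillations and acoustically propagated pinch-off pressure pulses that the abstract reports.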

    Austrian High-Performance-Computing meeting (AHPC2020)

    This booklet is a collection of abstracts presented at the AHPC2020 conference.

    HPC-enabling technologies for high-fidelity combustion simulations

    With the increase in computational power in the last decade and the forthcoming Exascale supercomputers, a new horizon in computational modelling and simulation is envisioned in combustion science. Considering the multiscale and multiphysics characteristics of turbulent reacting flows, combustion simulations are considered one of the most computationally demanding applications running on cutting-edge supercomputers. Exascale computing opens new frontiers for the simulation of combustion systems, as more realistic conditions can be achieved with high-fidelity methods. However, efficient use of these computing architectures requires methodologies that can exploit all levels of parallelism. The efficient utilization of the next generation of supercomputers needs to be considered from a global perspective, that is, involving physical modelling and numerical methods together with methodologies based on High-Performance Computing (HPC) and hardware architectures. This review introduces recent developments in numerical methods for large-eddy simulations (LES) and direct numerical simulations (DNS) of combustion systems, with a focus on computational performance and algorithmic capabilities. Due to the broad scope, a first section is devoted to the fundamentals of turbulent combustion, followed by a general description of state-of-the-art computational strategies for solving these problems. These applications require advanced HPC approaches to exploit modern supercomputers, which is addressed in the third section. The increasing complexity of new computing architectures, with tightly coupled CPUs and GPUs as well as high levels of parallelism, requires new parallel models and algorithms exposing the required level of concurrency. Advances in dynamic load balancing, vectorization, GPU acceleration and mesh adaptation have made it possible to achieve highly efficient combustion simulations with data-driven methods in HPC environments. Dedicated sections therefore cover the use of high-order methods for reacting flows, the integration of detailed chemistry, and two-phase flows. Final remarks and directions for future work are given at the end. The research leading to these results has received funding from the European Union's Horizon 2020 Programme under the CoEC project, grant agreement No. 952181, and the CoE RAISE project, grant agreement No. 951733.
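    Vectorization is one of the enabling techniques the review highlights for integrating detailed chemistry. As a hedged illustration only (a generic one-step Arrhenius rate with made-up parameters, not a mechanism from the review), evaluating the rate over all mesh cells as whole-array operations is what lets compilers and GPUs exploit SIMD-style concurrency:

        import numpy as np

        # Hypothetical one-step Arrhenius parameters (illustrative, not from the review)
        A, beta, Ea = 1.1e10, 0.0, 1.5e5   # pre-exponential, temperature exponent, activation energy (J/mol)
        R = 8.314                          # universal gas constant (J/mol/K)

        rng = np.random.default_rng(1)
        T = rng.uniform(800.0, 2200.0, size=1_000_000)   # per-cell temperatures, K
        Y_fuel = np.full_like(T, 0.05)                   # per-cell fuel mass fraction

        # One fused pass over the whole mesh: maps directly onto SIMD lanes or GPU threads
        k = A * T**beta * np.exp(-Ea / (R * T))          # rate constant per cell
        wdot = k * Y_fuel                                # chemical source term per cell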

    Deep Learning for Inverting Borehole Resistivity Measurements

    The Earth's subsurface is composed of different materials, mainly porous rocks that may contain minerals and are filled with salt water and/or hydrocarbons. The formations these materials create are generally irregular, with materials of different properties mixed within the same stratum. One of the main objectives in geophysics is to determine the petrophysical properties of the Earth's subsurface. In this way, companies can locate hydrocarbon reserves to maximize their production, or discover optimal locations for hydrogen storage or CO2 sequestration. For this purpose, companies record electromagnetic measurements using Logging-While-Drilling (LWD) tools, which are able to acquire data while the drilling process is under way. The recorded data are processed to produce a map of the Earth's subsurface. Based on the generated map, the operator adjusts the trajectory of the logging tool in real time to keep exploring exploitation targets, including oil and gas reservoirs, and to maximize the subsequent productivity of the available reserves. This real-time adjustment technique is called geosteering. Nowadays, geosteering plays an essential role in geophysics. However, it requires solving inverse problems in real time. This is challenging, since inverse problems are usually ill-posed. There exist multiple traditional methods to solve inverse problems, mainly gradient-based or statistics-based methods. However, these methods have severe limitations. In particular, they often need to solve the forward problem hundreds of times for each set of measurements, which is computationally expensive in three-dimensional (3D) problems. To overcome these limitations, we propose the use of Deep Learning (DL) techniques to solve inverse problems. Although the training stage of a Deep Neural Network (DNN) may be time-consuming, once the network is properly trained it can predict the solution in a fraction of a second, facilitating real-time geosteering operations. In the first part of this thesis, we investigate appropriate loss functions for training a DNN when dealing with an inverse problem. In addition, to properly train a DNN that approximates the inverse solution, we need a large dataset containing the solution of the forward problem for many different Earth models. To create such a dataset, we need to solve a Partial Differential Equation (PDE) thousands of times. Building the dataset may be time-consuming, especially for two- and three-dimensional problems, since solving PDEs with traditional methods, such as the Finite Element Method (FEM), is computationally expensive. We therefore want to reduce the computational cost of building the database needed to train the DNN. For this, we propose the use of refined Isogeometric Analysis (rIGA) methods. We also explore the possibility of using DL techniques to solve PDEs, which is the main computational bottleneck when solving inverse problems. Our main goal is to develop a fast forward simulator for solving parametric PDEs. As a first step, in this thesis we analyze the quadrature problems that appear when solving PDEs using DNNs and propose different integration methods to overcome these limitations.
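    The "appropriate loss functions" question can be made concrete with a toy example. The sketch below uses a hypothetical linear forward operator standing in for the electromagnetic LWD simulator, and a pseudo-inverse standing in for the trained DNN; it shows a misfit loss evaluated in measurement space, i.e. re-simulating the predicted model and comparing against the recorded measurements rather than comparing models directly:

        import numpy as np

        rng = np.random.default_rng(0)
        F = rng.normal(size=(8, 4))   # toy linear forward operator (assumption, not the real simulator)

        def forward(m):
            """Forward problem: Earth model -> measurements."""
            return F @ m

        def measurement_misfit_loss(inverse_map, d):
            # Penalize ||forward(inverse(d)) - d||^2 instead of a model-space error:
            # an ill-posed inverse problem may admit several models that explain the
            # data equally well, and this loss does not punish any of them.
            return np.sum((forward(inverse_map(d)) - d) ** 2)

        inverse_map = lambda d: np.linalg.pinv(F) @ d   # stand-in for the trained DNN
        d = forward(np.array([1.0, -2.0, 0.5, 3.0]))    # synthetic measurements
        print(measurement_misfit_loss(inverse_map, d))  # ~0 for consistent data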

    Deep Learning for Inverting Borehole Resistivity Measurements

    There exist multiple traditional methods to solve inverse problems, mainly gradient-based or statistics-based methods. However, these methods have severe limitations. In particular, they often need to compute the forward problem hundreds of times, which is computationally expensive in three-dimensional (3D) problems. In this dissertation, we propose the use of Deep Learning (DL) techniques to solve inverse problems. Although the training stage of a Deep Neural Network (DNN) may be time-consuming, after the network is properly trained it can forecast the solution in a fraction of a second, facilitating real-time operations. In the first part of this dissertation, we investigate appropriate loss functions to train a DNN when dealing with an inverse problem. Additionally, to properly train a DNN that approximates the inverse solution, we require a large dataset containing the solution of the forward problem. To create such a dataset, we need to solve a Partial Differential Equation (PDE) thousands of times. Building a dataset may be time-consuming, especially for two- and three-dimensional problems, since solving PDEs using traditional methods, such as the Finite Element Method (FEM), is computationally expensive. Thus, we want to reduce the computational cost of building the database needed to train the DNN. For this, we propose the use of refined Isogeometric Analysis (rIGA) methods. In addition, we explore the possibility of using DL techniques to solve PDEs, which is the main computational bottleneck when solving inverse problems. Our main goal is to develop a fast forward simulator for solving parametric PDEs. As a first step, in this dissertation we analyze the quadrature problems that appear while solving PDEs using DNNs and propose different integration methods to overcome these limitations.
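    The quadrature problem mentioned at the end has a simple failure mode worth seeing once: if the PDE-residual loss is integrated with a fixed quadrature rule, a network can drive the discrete loss to zero with a residual that merely vanishes at the quadrature nodes. A minimal sketch, with a toy residual chosen by hand (not an example from the dissertation):

        import numpy as np

        N = 10
        x_fixed = (np.arange(N) + 0.5) / N        # fixed midpoint quadrature nodes on (0, 1)
        r = lambda x: np.sin(2 * np.pi * N * x)   # a residual that vanishes exactly at those nodes

        print(np.mean(r(x_fixed) ** 2))           # discrete loss: 0.0 -- looks "solved"
        x_mc = np.random.default_rng(2).uniform(0.0, 1.0, 10_000)
        print(np.mean(r(x_mc) ** 2))              # Monte Carlo estimate: ~0.5, the true integral

    Resampling the integration points (or mixing integration rules) removes the incentive to hide the residual between nodes, which is the kind of remedy such integration methods target.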

    Analysis of hybrid parallelization strategies: simulation of Anderson localization and Kalman Filter for LHCb triggers

    This thesis presents two experiences of hybrid programming applied to condensed matter and high-energy physics. The two projects differ in various aspects, but both aim to analyse the benefits of using accelerated hardware to speed up calculations in current research scenarios. The first project enables massive parallelism in a simulation of the Anderson localisation phenomenon in a disordered quantum system. The code represents a Hamiltonian in momentum space, executes a diagonalization of the corresponding matrix using linear algebra libraries, and finally analyses the energy-level spacing statistics averaged over several realisations of the disorder. The implementation combines different parallelization approaches in a hybrid scheme. The averaging over the ensemble of disorder realisations exploits massive parallelism through a master-slave configuration based on both multi-threading and the Message Passing Interface (MPI). This framework is designed and implemented to interface easily with similar applications commonly adopted in scientific research, for example Monte Carlo simulations. The diagonalization uses multi-core and GPU hardware, interfacing with the MAGMA, PLASMA or MKL libraries. Access to the libraries is modular to guarantee portability, maintainability and extensibility in the near future. The second project is the development of a Kalman Filter, including its porting to GPU architectures and auto-vectorization, for online LHCb triggers. The developed codes provide information about the viability and advantages of applying GPU technologies in the first triggering step of the Large Hadron Collider beauty (LHCb) experiment. The optimisations introduced in both the CPU and GPU codes delivered a relevant speedup of the Kalman Filter. The two GPU versions, in CUDA and OpenCL, have similar performance and are adequate to be considered for the upgrade and the corresponding implementations in the Gaudi framework. In both projects we implement optimisation techniques in the CPU code. This report presents extensive benchmark analyses of the correctness and performance of both projects.
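    The analysis pipeline of the first project (diagonalize an ensemble of disordered Hamiltonians, then average a spectral statistic) can be sketched serially in a few lines of Python. The dense GOE-like matrix below is a stand-in for the thesis's momentum-space Hamiltonian, and the MPI master-slave layer is replaced by a plain loop:

        import numpy as np

        rng = np.random.default_rng(0)
        n, realisations = 200, 100                 # toy matrix size and ensemble size
        r_means = []
        for _ in range(realisations):              # in the thesis, distributed via MPI master-slave
            H = rng.normal(size=(n, n))
            H = (H + H.T) / 2                      # random symmetric Hamiltonian (GOE-like stand-in)
            E = np.linalg.eigvalsh(H)              # diagonalization (MAGMA/PLASMA/MKL in the thesis)
            s = np.diff(E)                         # energy-level spacings
            r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
            r_means.append(r.mean())
        # Mean spacing ratio: ~0.53 for delocalized (GOE) spectra vs ~0.39 for localized (Poisson)
        print(np.mean(r_means))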

    A parallel algorithm for deformable contact problems

    In the field of nonlinear computational solid mechanics, contact problems deal with the deformation of separate bodies that interact when they come into contact. Usually, these problems are formulated as constrained minimization problems which may be solved using optimization techniques such as the penalty method, Lagrange multipliers, or the Augmented Lagrangian method. This classical approach is based on node connectivities between the contacting bodies. These connectivities are created through the construction of contact elements introduced for the discretization of the contact interface, which incorporate the contact constraints into the global weak form. These methods are well known and widely used for solving contact problems in engineering and science. As parallel computing platforms are nowadays widely available, solving large engineering problems on high-performance computers is a real possibility for any engineer or researcher. Due to the memory and compute power that contact problems demand, they are good candidates for parallel computation. Realistic industrial and scientific contact problems involve different physical domains and a large number of degrees of freedom, so algorithms designed to run efficiently on high-performance computers are needed. Nevertheless, parallelizing the numerical solution methods that arise from the classical optimization techniques and discretization approaches presents some drawbacks which must be considered. Mainly, for general contact cases where sliding occurs, the introduction of contact elements requires updating the mesh graph every fixed number of time steps. From the point of view of the domain decomposition method for the parallel solution of numerical problems, this is a major drawback due to its computational expense, since dynamic repartitioning must be performed to redistribute the updated mesh graph across the processors. On the other hand, some of the optimization techniques dynamically modify the number of degrees of freedom in the problem by introducing Lagrange multipliers as unknowns. In this work we introduce a Dirichlet-Neumann type parallel algorithm for the numerical solution of nonlinear frictional contact problems, with a strong focus on its computational implementation. Among its main characteristics, there is no need to update the mesh graph during the simulation, as no contact elements are used. Also, no additional degrees of freedom are introduced into the system, since no Lagrange multipliers are required. In this algorithm the bodies in contact are treated separately, in a segregated way. The coupling between the contacting bodies is performed through the transfer of boundary conditions at the contact zone. From a computational point of view, this feature allows the use of a multi-code approach. Furthermore, the algorithm can be interpreted as a black-box method, as it solves each body separately, even with different computational codes. In addition, the contact algorithm proposed in this thesis can also be formulated as a general fixed-point solver for the solution of interface problems. This generalization gives us the theoretical basis to extrapolate and implement numerical techniques already developed and widely tested in the field of fluid-structure interaction (FSI) problems, especially those related to ensuring and accelerating convergence.
We describe the parallel implementation of the proposed algorithm and analyze its parallel behaviour and performance in both validation and realistic test cases executed on HPC machines using several processors.
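    A minimal sketch of the fixed-point view of the algorithm, on a one-dimensional toy problem: two linear springs meeting at a single interface point, body A solved with a Dirichlet condition (imposed interface position) and body B with a Neumann condition (the traction A returns), with Aitken dynamic relaxation borrowed from FSI to accelerate convergence. The spring model and all parameter values are illustrative assumptions, not the thesis's solver:

        kA, kB = 4.0, 1.0   # assumed spring stiffnesses of the two bodies
        a, b = 1.0, 0.0     # unloaded interface positions of bodies A and B

        def solve_A_dirichlet(u):
            """Body A under an imposed interface position u; returns the interface traction."""
            return -kA * (u - a)

        def solve_B_neumann(t):
            """Body B loaded by traction t at the interface; returns its interface position."""
            return b + t / kB

        u, omega, r_prev = 0.5, 0.5, None   # initial guess and initial relaxation factor
        for it in range(50):
            g = solve_B_neumann(solve_A_dirichlet(u))   # one Dirichlet-Neumann sweep
            r = g - u                                   # interface residual
            if abs(r) < 1e-12:
                break
            if r_prev is not None:
                omega = -omega * r_prev / (r - r_prev)  # Aitken dynamic relaxation update
            u += omega * r
            r_prev = r

        exact = (kA * a + kB * b) / (kA + kB)           # equilibrium of the coupled springs
        print(it, u, exact)                             # converges to the exact interface position

    With kA/kB > 1 the unrelaxed sweep diverges, which is exactly the regime where the FSI-style convergence acceleration pays off.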