
    Topology optimization for additive manufacture

    Additive manufacturing (AM) offers a way to manufacture highly complex designs with potentially enhanced performance, as it is free from many of the constraints associated with traditional manufacturing. However, current design and optimisation tools, which were developed much earlier than AM, do not allow efficient exploration of AM's design space. Among these tools is a set of numerical methods/algorithms often used in the field of structural optimisation called topology optimisation (TO). These powerful techniques emerged in the 1980s and have since been used to achieve structural solutions with superior performance to those of other types of structural optimisation. However, such solutions are often constrained during optimisation to minimise structural complexity, thereby ensuring that they can be manufactured via traditional methods. With the advent of AM, it is necessary to restructure these techniques to maximise AM's capabilities. Such restructuring should involve identification and relaxation of the optimisation constraints within the TO algorithms that restrict design for AM. These constraints include the initial design, the optimisation parameters and the mesh characteristics of the optimisation problem being solved. A typical TO run, on a mesh with given characteristics, evolves an assumed initial design towards one with improved structural performance. It was anticipated that the complexity and performance of a solution would be affected by the optimisation constraints. This work restructured a TO algorithm called bidirectional evolutionary structural optimisation (BESO) for AM. MATLAB and MSC Nastran were coupled to study and investigate BESO for both two- and three-dimensional problems. It was observed that certain parameter values promote the realisation of complex structures, and this could be further enhanced by including an adaptive meshing strategy (AMS) in the TO. Such a strategy reduced the degrees of freedom that would otherwise be required, without the AMS, to achieve the same solution quality.
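    The abstract above describes the hard-kill BESO update (add or remove elements by ranked sensitivity under an evolving volume target) only in prose; the thesis itself couples MATLAB with MSC Nastran for the finite element analysis. Below is a minimal Python sketch of just that update step, with a synthetic sensitivity field standing in for the FE solve; the function names and parameter values are illustrative and not taken from the work.

```python
# Minimal sketch of a hard-kill BESO update step (illustrative only; the thesis
# couples MATLAB with MSC Nastran for the actual finite element analysis).
# A synthetic sensitivity field stands in for FE-derived element sensitivities.
import numpy as np

def beso_update(density, sensitivity, target_vol, evol_rate=0.02):
    """One BESO iteration: move the active volume fraction towards target_vol
    and keep the elements with the highest sensitivity numbers."""
    vol = density.mean()
    if vol > target_vol:
        new_vol = max(vol * (1.0 - evol_rate), target_vol)
    else:
        new_vol = min(vol * (1.0 + evol_rate), target_vol)
    n_keep = int(round(new_vol * density.size))
    # Rank elements by sensitivity and switch the best n_keep of them "on".
    threshold = np.sort(sensitivity.ravel())[::-1][n_keep - 1]
    return (sensitivity >= threshold).astype(float)

# Toy example: a 40x20 element grid with a smooth synthetic sensitivity field.
rng = np.random.default_rng(0)
nx, ny = 40, 20
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
sens = np.exp(-10 * ((x - 0.5) ** 2 + (y - 0.5) ** 2)) + 0.05 * rng.random((nx, ny))

rho = np.ones((nx, ny))                # start from a full design domain
for _ in range(50):
    rho = beso_update(rho, sens, target_vol=0.4)
print("final volume fraction:", rho.mean())
```

    In a full BESO loop the sensitivities would be recomputed from a fresh FE solve (and filtered) after every update, which is where the MATLAB/MSC Nastran coupling described in the abstract comes in.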

    Modeling and Simulation of Compositional Engineering in SiGe Films Using Patterned Stress Fields

    Semiconductor alloys such as silicon-germanium (SiGe) offer attractive environments for engineering quantum-confined structures that are the basis for a host of current and future optoelectronic devices. Although vertical stacking of such structures is routinely achieved via heteroepitaxy, lateral manipulation has proven much more challenging. I describe a new approach that suggests that a patterned elastic stress field, generated with an array of nanoscale indenters in an initially compositionally uniform SiGe substrate, will drive atomic interdiffusion, leading to compositional patterns in the near-surface region of the substrate. While this approach may offer a potentially efficient and robust pathway to producing laterally ordered arrays of quantum-confined structures, the process depends on a large set of parameters, making it difficult to explore with costly experiments alone and necessitating detailed computational analysis. First, I review computational approaches to simulating the long length and time scales required for this process, and I develop and present a mesoscopic model based on coarse-grained lattice kinetic Monte Carlo that quantitatively describes the atomic interdiffusion processes in a SiGe alloy film subjected to applied stress. I show that the model provides predictions that are quantitatively consistent with experimental measurements, and I examine the impact of basic indenter geometries on the patterning process. Second, I extend the model to investigate the impact of several process parameters, such as more complicated indenter shapes and pitches. I find that certain indenter configurations produce compositional patterns that are favorable for use as lateral arrays of quantum-confined structures. Third, I measure a set of important physical parameters, the so-called "activation volumes", which describe the impact of stress on diffusion. The values of these parameters are not well established in the literature; I make quantitative connections to the range of values found there and characterize the effects of different stress states on the overall patterning process. Finally, I conclude with ideas about alternative pathways to quantum-confined structure generation and possible extensions of the framework developed.
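    As a rough illustration of the stress-biased interdiffusion idea described above (not the coarse-grained lattice kinetic Monte Carlo model of the thesis), the sketch below runs a toy Metropolis exchange of Si and Ge atoms on a 2D lattice in which a patterned stress field biases swaps through an activation-volume term; all parameter values are invented for illustration.

```python
# Toy 2D lattice Monte Carlo sketch of stress-biased Si/Ge interdiffusion
# (illustrative stand-in for the coarse-grained lattice kinetic Monte Carlo
# model described above; all parameter values are invented).
import numpy as np

rng = np.random.default_rng(1)
N = 64            # lattice is N x N sites with periodic boundaries
kT = 0.0259       # eV (room temperature; illustrative)
V_act = 0.5       # "activation volume" coupling to stress, eV per unit stress (illustrative)

# 0 = Si, 1 = Ge; start from a compositionally uniform random alloy (~50% Ge).
lattice = (rng.random((N, N)) < 0.5).astype(int)

# Patterned stress field mimicking a row of indenters: stripes along x (illustrative).
x = np.arange(N)
stress = 0.2 * np.cos(2 * np.pi * x / 16)[None, :] * np.ones((N, 1))

NEIGHBOURS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def attempt_swap(lat):
    """Attempt one nearest-neighbour Si<->Ge exchange; the stress bias drives
    Ge towards regions of higher (tensile) stress."""
    i, j = rng.integers(0, N, size=2)
    di, dj = NEIGHBOURS[rng.integers(4)]
    i2, j2 = (i + di) % N, (j + dj) % N
    if lat[i, j] == lat[i2, j2]:
        return
    ge_from = (i, j) if lat[i, j] == 1 else (i2, j2)
    ge_to = (i2, j2) if lat[i, j] == 1 else (i, j)
    dE = -V_act * (stress[ge_to] - stress[ge_from])   # energy change for the Ge move
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        lat[i, j], lat[i2, j2] = lat[i2, j2], lat[i, j]

for _ in range(200_000):
    attempt_swap(lattice)

# Column-averaged Ge fraction reveals the emerging lateral composition pattern.
print(np.round(lattice.mean(axis=0), 2))
```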

    High performance scientific computing in applications with direct finite element simulation

    Prediction of separated flow, including stall, of a full aircraft with computational fluid dynamics (CFD) is considered one of the grand challenges to be solved by 2030, according to NASA. The nonlinear Navier-Stokes equations provide the mathematical formulation for fluid flow in three-dimensional space; however, classical solutions, existence, and uniqueness are still missing. Since brute-force computation is intractable for predictive simulation of a full aircraft, one can use direct numerical simulation (DNS); however, it is prohibitively expensive as it needs to resolve turbulence at scales of order Re^(9/4). Other methods, such as the statistically averaged Reynolds-Averaged Navier-Stokes (RANS), spatially averaged Large Eddy Simulation (LES), and hybrid Detached Eddy Simulation (DES), require fewer degrees of freedom, but all of them have to be tuned to benchmark problems and, moreover, the mesh near the walls has to be very fine to resolve the boundary layers, which makes the computational cost very high. Above all, the results are sensitive to, for example, explicit parameters in the method, the mesh, etc. As a resolution to this challenge, we present here the adaptive, time-resolved Direct FEM Solution (DFS) methodology with numerical tripping, as a predictive, parameter-free family of methods for turbulent flow. We solved the JAXA Standard Model (JSM) aircraft model at a realistic Reynolds number, presented as part of the High Lift Prediction Workshop 3. We predicted lift Cl within 5% error versus experiment, drag Cd within 10% error, and stall within 1° of the angle of attack. The workshop identified a likely experimental error of order 10% for the drag results. The simulation is 10 times faster and cheaper compared with traditional or existing CFD approaches. The efficiency comes mainly from the slip boundary condition that allows coarse meshes near the walls, goal-oriented adaptive error control that refines the mesh only where needed, and large time steps using a Schur-type fixed-point iteration method, without compromising the accuracy of the simulation results. We also present a generalisation of DFS to variable density, validated against the well-established MARIN benchmark problem; the results show good agreement with experimental results in the form of pressure sensors. Later, we used this methodology to solve two applications in multiphase flow problems: one concerns a flash rainwater storage tank (Bilbao water consortium), and the second concerns the design of a nozzle for 3D printing. In the rainwater storage tank, we predicted that the water height in the tank has a significant influence on how the flow behaves downstream of the tank door (valve). For the 3D printing, we developed an efficient design with a focused jet flow to prevent oxidation and heating at the tip of the nozzle during a melting process. Finally, we present here the parallelism on multiple GPUs and the embedded Kalray system architecture.
Almost all supercomputers today have heterogeneous architectures, such as CPU+GPU or other accelerators, and it is therefore essential to develop computational frameworks that take advantage of them. As we have seen, CFD began to be developed later, in the 1960s, once computational power became available, so it is essential to use and test these accelerators for CFD computations. GPUs have a different architecture compared with traditional CPUs; technically, a GPU has many more cores than a CPU, which makes it a good option for parallel computing. For multiple GPUs, we developed a stencil computation, applied to the simulation of geological folds. We explored halo computation and used CUDA streams to optimise computation and communication time. The resulting performance gain was 23% for four GPUs with the Fermi architecture, and the corresponding improvement obtained on four Kepler GPUs was 47%. This research was carried out at the Basque Center for Applied Mathematics (BCAM) within the CFD Computational Technology (CFDCT) and also at the School of Electrical Engineering and Computer Science (Royal Institute of Technology, Stockholm, Sweden). It is supported by Fundación Obra Social "la Caixa", the Severo Ochoa Excellence research centre 2014-2018 SEV-2013-0323, the Severo Ochoa Excellence research centre 2018-2022 SEV-2017-0718, the BERC program 2014-2017, the BERC program 2018-2021, the MSO4SC European project, and Elkartek. This work has been performed using the computing infrastructure of SNIC (Swedish National Infrastructure for Computing).
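    The goal-oriented adaptive error control mentioned above boils down, at each step, to marking the cells that carry the largest error indicators and refining only those. The sketch below illustrates that mark-and-refine loop on a 1D mesh with a toy interpolation-error indicator; it is not the DFS implementation, and the indicator, marking fraction, and tolerance are invented for illustration.

```python
# Minimal sketch of the "refine only where needed" step behind goal-oriented
# adaptive error control (illustrative; not the DFS implementation). Cells with
# the largest error indicators are bisected until a toy tolerance is met
# (or a maximum number of levels is reached).
import numpy as np

def error_indicators(nodes, u=lambda x: np.tanh(20 * (x - 0.5))):
    """Toy cell-wise indicator: deviation of u at each cell midpoint from the
    linear interpolant (a stand-in for a duality-based error estimate)."""
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    return np.abs(u(mid) - 0.5 * (u(nodes[:-1]) + u(nodes[1:])))

def refine_marked(nodes, indicators, fraction=0.3):
    """Bisect the fraction of cells carrying the largest error indicators."""
    n_mark = max(1, int(fraction * len(indicators)))
    marked = np.argsort(indicators)[-n_mark:]
    midpoints = 0.5 * (nodes[marked] + nodes[marked + 1])
    return np.sort(np.concatenate([nodes, midpoints]))

nodes = np.linspace(0.0, 1.0, 11)      # start from a coarse uniform mesh
for level in range(10):
    eta = error_indicators(nodes)
    print(f"level {level}: {len(nodes) - 1:4d} cells, estimated error {eta.sum():.3e}")
    if eta.sum() < 1e-2:
        break
    nodes = refine_marked(nodes, eta)
```

    The printed history shows the mesh growing only near the steep front while the estimated error drops, which is the behaviour that makes coarse-where-possible meshes affordable.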

    High Performance Scientific Computing in Applications with Direct Finite Element Simulation

    To predict separated flow, including stall, of a full aircraft with Computational Fluid Dynamics (CFD) is considered one of the grand challenges to be solved by 2030, according to NASA [1]. The nonlinear Navier-Stokes equations provide the mathematical formulation for fluid flow in three-dimensional space. However, classical solutions, existence, and uniqueness are still missing. Since brute-force computation is intractable for predictive simulation of a full aircraft, one can use Direct Numerical Simulation (DNS); however, it is prohibitively expensive as it needs to resolve turbulent scales of order Re^(9/4). Other methods, such as the statistically averaged Reynolds-Averaged Navier-Stokes (RANS), spatially averaged Large Eddy Simulation (LES), and hybrid Detached Eddy Simulation (DES), require fewer degrees of freedom, but all of them have to be tuned to benchmark problems and, moreover, the mesh near the walls has to be very fine to resolve the boundary layers, which makes the computational cost very high. Above all, the results are sensitive to, e.g., explicit parameters in the method, the mesh, etc. As a resolution to the challenge, here we present the adaptive, time-resolved Direct FEM Solution (DFS) methodology with numerical tripping, as a predictive, parameter-free family of methods for turbulent flow. We solved the JAXA Standard Model (JSM) aircraft model at a realistic Reynolds number, presented as part of the High Lift Prediction Workshop 3. We predicted lift Cl within 5% error vs. experiment, drag Cd within 10% error, and stall within 1° of the angle of attack. The workshop identified a likely experimental error of order 10% for the drag results. The simulation is 10 times faster and cheaper when compared to traditional or existing CFD approaches. The efficiency mainly comes from the slip boundary condition that allows coarse meshes near walls, goal-oriented adaptive error control that refines the mesh only where needed, and large time steps using a Schur-type fixed-point iteration method, without compromising the accuracy of the simulation results. As a follow-up, we were invited to the Fifth High Order CFD Workshop, where the approach was validated for a tandem sphere problem (low Reynolds number turbulent flow) in which a second sphere is placed a certain distance downstream from a first sphere. The results capture the expected slipstream phenomenon, with approximately 2% error. A comparison with the higher-order frameworks Nek5000 and PyFR was done. The PyFR framework has demonstrated high effectiveness for GPUs with an unstructured mesh, which is a hard problem in this field; this is achieved by an explicit time-stepping approach. Our study showed that our large time step approach enabled approximately 3 orders of magnitude larger time steps than the explicit time steps in PyFR, which made our method more effective for solving the whole problem. We also presented a generalization of DFS to variable density and validated it against the well-established MARIN benchmark problem. The results show good agreement with experimental results in the form of pressure sensors. Later, we used this methodology to solve two applications in multiphase flow problems. One has to do with a flash rainwater storage tank (Bilbao water consortium), and the second is about designing a nozzle for 3D printing.
In the flash rainwater storage tank, we predicted that the water height in the tank has a significant influence on how the flow behaves downstream of the tank door (valve). For the 3D printing, we developed an efficient design with a focused jet flow to prevent oxidation and heating at the tip of the nozzle during a melting process. Finally, we presented here the parallelism on multiple GPUs and the embedded Kalray system architecture. Almost all supercomputers today have heterogeneous architectures, such as CPU+GPU or other accelerators, and it is therefore essential to develop computational frameworks to take advantage of them. For multiple GPUs, we developed a stencil computation, applied to the simulation of geological folds. We explored halo computation and used CUDA streams to optimize computation and communication time. The resulting performance gain was 23% for four GPUs with the Fermi architecture, and the corresponding improvement obtained on four Kepler GPUs was 47%. The Kalray architecture is designed for low energy consumption; here we tested the Jacobi method with different communication strategies. Additionally, visualization is a crucial area for scientific simulations. We developed an automated visualization framework, in which we could see that task parallelization is more than 10 times faster than data parallelization. We have also used our DFS in a cloud computing setting to validate the simulation against the local cluster simulation. Finally, we recommend an easy pre-processing tool to support DFS simulation.
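    The multi-GPU stencil work summarised above rests on a standard domain decomposition with halo (ghost) rows exchanged between neighbouring subdomains, with CUDA streams used to overlap that exchange with the interior update. The serial NumPy sketch below mimics only the data movement of that pattern (split, exchange halos, update interior); the stream overlap itself is not reproduced, and all sizes are illustrative.

```python
# Sketch of the halo-exchange pattern behind the multi-GPU stencil computation
# (illustrative, serial NumPy stand-in; on GPUs the boundary exchange and the
# interior update would run on separate CUDA streams so they overlap).
import numpy as np

def split_with_halos(field, n_parts):
    """Split a 2D field into horizontal slabs, each padded with one halo row."""
    slabs = np.array_split(field, n_parts, axis=0)
    return [np.pad(s, ((1, 1), (0, 0)), mode="edge") for s in slabs]

def exchange_halos(slabs):
    """Copy neighbouring boundary rows into each slab's halo rows."""
    for k in range(len(slabs)):
        if k > 0:
            slabs[k][0, :] = slabs[k - 1][-2, :]      # top halo <- upper neighbour
        if k < len(slabs) - 1:
            slabs[k][-1, :] = slabs[k + 1][1, :]      # bottom halo <- lower neighbour

def jacobi_step(slab):
    """5-point Jacobi update of the interior of one padded slab."""
    new = slab.copy()
    new[1:-1, 1:-1] = 0.25 * (slab[:-2, 1:-1] + slab[2:, 1:-1] +
                              slab[1:-1, :-2] + slab[1:-1, 2:])
    return new

# Toy problem: relax a random field, decomposed over 4 "devices".
rng = np.random.default_rng(2)
field = rng.random((64, 64))
slabs = split_with_halos(field, 4)
for _ in range(100):
    exchange_halos(slabs)
    slabs = [jacobi_step(s) for s in slabs]
# Strip the halos and reassemble the global field.
result = np.vstack([s[1:-1, :] for s in slabs])
print(result.shape, float(result.mean()))
```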

    Coupling of particle simulation and lattice Boltzmann background flow on adaptive grids

    The lattice-Boltzmann method as well as classical molecular dynamics are established and widely used methods for the simulation and research of soft matter. Molecular dynamics is a computer simulation technique on microscopic scales solving the multi-body kinetic equations of the involved particles. The lattice-Boltzmann method describes the hydrodynamic interactions of fluids, gases, or other soft matter on a coarser scale. Many applications, however, are multi-scale problems and require a coupling of both methods. A basic concept for short-ranged interactions in molecular dynamics is the linked cells algorithm, which scales as O(N) for homogeneously distributed particles. Spatially adaptive methods for the lattice-Boltzmann scheme are used in order to reduce costly scaling effects on the runtime and memory of large-scale simulations. As the basis for this work, the highly flexible simulation software ESPResSo is used and extended. The adaptive lattice-Boltzmann scheme implemented in ESPResSo uses a domain decomposition with tree-based grids along the space-filling Morton curve using the p4est software library. However, coupling the regular particle simulation with the adaptive lattice-Boltzmann method on highly parallel computer architectures is a challenging issue that raises several problems. In this work, an approach for the domain decomposition of the linked cells algorithm based on space-filling curves and the p4est library is presented. In general, the grids for molecular dynamics and fluid simulations are not equal. Thus, strategies to distribute differently refined grids on parallel processes are explained, including a parallel algorithm to construct the finest common tree using p4est. Furthermore, methods for interpolation and extrapolation on adaptively refined grids, which are needed for the viscous coupling of particles with the fluid, are discussed. The ESPResSo simulation software is augmented by the developed methods for particle-fluid coupling as well as the Morton curve based domain decompositions in a minimally invasive manner. The original ESPResSo implementation for regular particle and fluid simulations is used as a reference for the developed algorithms.
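    The domain decompositions described above follow the space-filling Morton (Z-order) curve used by p4est. As a self-contained illustration, independent of p4est and ESPResSo, the sketch below interleaves integer cell coordinates into Morton keys and splits the resulting ordering into contiguous per-process chunks; the grid size and process count are arbitrary.

```python
# Sketch of 2D Morton (Z-order) encoding, the ordering used by p4est-style
# domain decompositions described above (illustrative; independent of p4est).
def part1by1(n: int) -> int:
    """Spread the bits of a 16-bit integer so they occupy the even positions."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(ix: int, iy: int) -> int:
    """Interleave the bits of the integer cell coordinates (ix, iy)."""
    return part1by1(ix) | (part1by1(iy) << 1)

# Order the cells of a small grid along the Z-curve and split them into
# contiguous chunks, mimicking an equal-size partition across processes.
cells = [(ix, iy) for ix in range(4) for iy in range(4)]
cells.sort(key=lambda c: morton2d(*c))
n_procs = 4
chunk = len(cells) // n_procs
partition = [cells[r * chunk:(r + 1) * chunk] for r in range(n_procs)]
for r, part in enumerate(partition):
    print(f"rank {r}: {part}")
```

    Because cells adjacent along the Z-curve tend to be spatially close, cutting the curve into contiguous chunks yields compact subdomains, which is what makes this ordering attractive for distributing both the particle cells and the fluid grid.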

    Large-scale parallelised boundary element method electrostatics for biomolecular simulation

    Large-scale biomolecular simulations require a model of particle interactions capable of incorporating the behaviour of large numbers of particles over relatively long timescales. If water is modelled as a continuous medium then the most important intermolecular forces between biomolecules can be modelled as long-range electrostatics governed by the Poisson-Boltzmann Equation (PBE). We present a linearised PBE solver called the "Boundary Element Electrostatics Program" (BEEP). BEEP is based on the Boundary Element Method (BEM), in combination with a recently developed O(N) Fast Multipole Method (FMM) algorithm which approximates the far-field integrals within the BEM, yielding a method which scales linearly with the number of particles. BEEP improves on existing methods by parallelising the underlying algorithms for use on modern cluster architectures, as well as taking advantage of recent progress in the field of GPGPU (General Purpose GPU) programming, to exploit the highly parallel nature of graphics cards. We found the stability and numerical accuracy of the BEM/FMM method to be highly dependent on the choice of surface representation and integration method. For real proteins we demonstrate the critical level of surface detail required to produce converged electrostatic solvation energies, and introduce a curved surface representation based on Point-Normal G1-continuous triangles which we find generally improves numerical stability compared to a simpler surface constructed from planar triangles. Despite our improvements upon existing BEM methods, we find that it is not possible to directly integrate BEM surface solutions to obtain intermolecular electrostatic forces. It is, however, practicable to use the total electrostatic solvation energy calculated by BEEP to drive a Monte Carlo simulation.
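    For orientation, the linearised Poisson-Boltzmann problem underlying BEEP involves the screened-Coulomb (Yukawa) kernel exp(-kr)/r. The sketch below is a brute-force O(N^2) evaluation of that kernel between point charges, i.e. the kind of far-field sum that the FMM inside BEEP reduces to O(N); it is not BEEP's API, and the charge configuration and Debye length are invented for illustration.

```python
# Direct O(N^2) evaluation of the screened-Coulomb (linearised Poisson-Boltzmann,
# i.e. Yukawa) kernel between point charges. This is the brute-force version of
# the far-field sums that an FMM accelerates to O(N); illustrative only.
import numpy as np

def screened_coulomb_energy(pos, q, kappa, eps_rel=80.0):
    """Pairwise energy sum_{i<j} q_i q_j exp(-kappa r_ij) / (4 pi eps0 eps_rel r_ij).

    pos   : (N, 3) charge positions in metres
    q     : (N,) charges in coulombs
    kappa : inverse Debye length in 1/m
    """
    eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(q), k=1)
    rij = r[iu]
    qq = (q[:, None] * q[None, :])[iu]
    return np.sum(qq * np.exp(-kappa * rij) / (4 * np.pi * eps0 * eps_rel * rij))

# Toy example: 100 random unit charges in a 10 nm box, 1 nm Debye length.
rng = np.random.default_rng(3)
e = 1.602176634e-19
pos = rng.random((100, 3)) * 10e-9
q = rng.choice([-e, e], size=100)
print("screened electrostatic energy [J]:", screened_coulomb_energy(pos, q, kappa=1e9))
```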

    Brain and Human Body Modeling 2020

    This open access book describes modern applications of computational human modeling in an effort to advance neurology, cancer treatment, and radio-frequency studies, including regulatory, safety, and wireless communication fields. Readers working on any application that may expose human subjects to electromagnetic radiation will benefit from this book's coverage of the latest models and techniques available to assess a given technology's safety and efficacy in a timely and efficient manner. The book describes computational human body phantom construction and application; explains new practices in computational human body modeling for electromagnetic safety and exposure evaluations; and includes a survey of modern applications for which computational human phantoms are critical.

    Numerical methods for hydraulic fracture propagation: a review of recent trends

    Development of numerical methods for hydraulic fracture simulation has accelerated in the past two decades. Recent advances in hydraulic fracture modeling and simulation are driven by increased industry and research activity in oil and gas, a drive toward consideration of more complex behaviors associated with layered and naturally fractured rock formations, and a deepening understanding of the underlying mathematical model and its intrinsic challenges. Here we review the basic approaches being employed. Some of these comprise enhancements of classical methods, while others are imported from other fields of mechanics but are completely new in their application to hydraulic fracturing. After a description of the intrinsic challenges associated with the mechanics of fluid-driven fractures, we discuss both continuum and meso-scale numerical methods as well as engineering models, which typically make use of additional assumptions to reduce computational cost. We pay particular attention to the verification and validation of numerical models, which is increasingly enabled by an ever-expanding library of laboratory experiments and analytical solutions for simple geometries in a number of different propagation regimes. A number of challenges remain and are amplified by the drive toward fully coupled, three-dimensional hydraulic fracture modeling that accounts for host-rock heterogeneity. In the context of such a drive to complex models, we argue that best-practice development, including careful verification and validation, is vital to ensure that progress is constrained by the appropriate underlying physics and mathematics, with constant attention to identifying conditions under which simpler models suffice for the intended modeling purposes.
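    For readers unfamiliar with the underlying mathematical model referred to above, the plane-strain (KGD-type) formulation that many of the reviewed continuum methods discretise couples quasi-static elasticity, lubrication flow, and a propagation criterion; in standard notation (not taken from this abstract, and omitting leak-off) it can be written as:

```latex
% Net pressure from plane-strain elasticity, E' = E/(1-\nu^2), crack on |x| < \ell(t):
p(x,t) - \sigma_0 \;=\; -\frac{E'}{4\pi}\int_{-\ell(t)}^{\ell(t)} \frac{\partial w/\partial s}{s - x}\,\mathrm{d}s
% Lubrication (Reynolds) equation for fracture width w and fluid pressure p,
% with viscosity \mu and injection rate Q_0 at the wellbore:
\frac{\partial w}{\partial t} \;=\; \frac{\partial}{\partial x}\!\left( \frac{w^{3}}{12\mu}\,\frac{\partial p}{\partial x} \right) + Q_0\,\delta(x)
% Propagation criterion at the moving tips x = \pm\ell(t):
K_I \;=\; K_{Ic}
```

    The nonlocal elasticity operator, the degenerate nonlinearity of the lubrication equation near the tips, and the moving-boundary propagation condition are the intrinsic challenges the review refers to, and they are what the different numerical approaches handle in different ways.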