
    Direct FEM computation of turbulent multiphase flow in 3D printing nozzle design

    In this paper, we present a nozzle design for 3D printing using FEniCS-HPC as the mathematical modeling and simulation tool. In recent years 3D printing, or Additive Manufacturing (AM), has emerged as a key technology and is already in use in many industries. 3D printing is considered a sustainable, eco-friendly production method, since it minimizes material waste during production, and many industries are replacing traditional part manufacturing with optimized 3D printing technology. For 3D printing to be efficient, the nozzle design must be optimized. Here we design a nozzle for titanium; since titanium is a metal, it must be shielded by an inert gas during the process, which makes this a multiphase flow problem. FEniCS-HPC is a high-level mathematical tool in which one can easily modify the mathematical equations according to the physics, and it scales well on massively parallel supercomputer architectures. The problem is modelled with the Direct FEM/General Galerkin methodology for turbulent incompressible variable-density flow in FEniCS-HPC.
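    The variable-density model invoked above is not written out in the abstract. As a hedged transcription of the standard form of the incompressible variable-density Navier-Stokes equations (symbols ours: density rho, velocity u, pressure p, viscosity mu, body force f), not an excerpt from the paper:

```latex
% Incompressible variable-density Navier-Stokes equations (standard form)
\begin{aligned}
  \partial_t \rho + (u \cdot \nabla)\rho &= 0, \\
  \rho\left(\partial_t u + (u \cdot \nabla) u\right) + \nabla p
    - \nabla \cdot \bigl(2\mu\,\varepsilon(u)\bigr) &= \rho f, \\
  \nabla \cdot u &= 0,
\end{aligned}
```

    where \(\varepsilon(u) = \tfrac{1}{2}(\nabla u + \nabla u^{T})\) is the strain-rate tensor; the transported density distinguishes the molten-metal and inert-gas phases.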

    Time-resolved Adaptive Direct FEM Simulation of High-lift Aircraft Configurations

    Our simulation methodology is referred to as Direct FEM Simulation (DFS), or General Galerkin (G2), and uses a finite element method (FEM) with piecewise linear approximation in space and time, and with numerical stabilization in the form of a weighted least squares method based on the residual. The incompressible Navier-Stokes Equations (NSE) are discretized directly, without applying any filter. Thus, the method does not result in Large Eddy Simulation (LES) filtered solutions, but is instead an approximation of a weak solution satisfying the weak form of the NSE. In G2 we have a posteriori error estimates for quantities of interest that can be expressed as functionals of a weak solution. These a posteriori error estimates, which form the basis for our adaptive mesh refinement algorithm, are based on the solution of an associated adjoint problem with a goal quantity (the aerodynamic forces in this work) as data, similarly to an optimal control problem. We provide references to related work below. The methodology and software have been previously validated for a number of turbulent flow benchmark problems, including one of the HiLiftPW-2 high Reynolds number cases. The DFS method is implemented in the Unicorn solver, which uses the open source software framework FEniCS-HPC, designed for automated solution of partial differential equations on massively parallel architectures using the FEM. In this chapter we present adaptive results from the Third AIAA High Lift Prediction Workshop in Denver, Colorado, based on our DFS methodology and Unicorn/FEniCS-HPC software. We show that the methodology quantitatively and qualitatively captures the main features of the experiment, namely the aerodynamic forces and the stall mechanism, with a novel numerical tripping, at a much coarser mesh resolution and lower computational cost than the standard in the field.
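    The stabilized weak form and the adjoint-based estimate summarized above can be sketched as follows; this is our transcription of the generic G2 formulation from the literature, not an excerpt from this chapter. With discrete velocity U and pressure P, test functions v and q, and a mesh-dependent stabilization parameter delta:

```latex
% G2 / Direct FEM: residual-based least-squares stabilized weak form (sketch)
\begin{aligned}
 &\bigl(\partial_t U + U\cdot\nabla U,\, v\bigr)
  + \bigl(2\nu\,\varepsilon(U),\, \varepsilon(v)\bigr)
  - \bigl(P,\, \nabla\cdot v\bigr)
  + \bigl(\nabla\cdot U,\, q\bigr) \\
 &\qquad + \delta\,\bigl(\underbrace{\partial_t U + U\cdot\nabla U + \nabla P - f}_{\text{residual } R(U,P)},\;
    U\cdot\nabla v + \nabla q\bigr)
  = (f,\, v).
\end{aligned}

% Adjoint-weighted a posteriori error indicator driving mesh refinement,
% for a goal functional M (here the aerodynamic forces):
\bigl| M(\hat u) - M(\hat U) \bigr| \;\lesssim\;
  \sum_{K} h_K\, \lVert R(U,P) \rVert_K\, \lVert \nabla \varphi \rVert_K
```

    where \(\varphi\) solves the adjoint problem with \(M\) as data; the cells \(K\) with the largest indicators are marked for refinement.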

    U.S. Humanitarian Demining Research and Development Program (HD R&D)

    The anti-tank mine threat on access roads in eastern Angola is the greatest impediment to infrastructural rehabilitation, economic recovery and social development in that area. The authors discuss the methods and equipment used by DanChurchAid to verify and clear roads in the Moxico and Lunda Sul provinces.

    Direct FEM large scale computation of turbulent multiphase flow in urban water systems and marine energy

    High-Reynolds-number turbulent incompressible multiphase flow represents a large class of engineering problems of key relevance to society. Here we describe our work on modeling two such problems: (1) the Consorcio de Aguas Bilbao Bizkaia is constructing a new storm-tank system with an automatic cleaning mechanism, based on periodically flushing tank water out through a tunnel; (2) in the framework of the collaboration between BCAM - Basque Center for Applied Mathematics and Tecnalia R&I, the interaction of the sea flow with a semi-submersible floating offshore wind platform is investigated computationally. We also study the MARIN benchmark, modeling breaking waves over objects in marine environments. Both of these problems are modeled with the Direct FEM/General Galerkin methodology for turbulent incompressible variable-density flow [1,2] in the FEniCS software framework.

    Towards HPC-Embedded Case Study: Kalray and Message-Passing on NoC

    Today one of the most important challenges in HPC is the development of computers with low power consumption. In this context, new embedded many-core systems have recently emerged. One of them is Kalray. Unlike other many-core architectures, Kalray is not a co-processor but is self-hosted. One interesting feature of the Kalray architecture is its Network on Chip (NoC) interconnect. Typically, communication in many-core architectures is carried out via shared memory; in Kalray, however, communication among processing elements can also take place via message-passing on the NoC. One of the main motivations of this work is to present the main constraints of working with the Kalray architecture. In particular, we focus on memory management and communication, and we assess the use of the NoC and of shared memory on Kalray. Unlike shared memory, message-passing on the NoC is not transparent from the programmer's point of view, and synchronization among processing elements and the NoC is another challenge on the Kalray processor. Although synchronization using message-passing is more complex and time-consuming than using shared memory, we obtain an overall speedup close to 6 when using message-passing on the NoC with respect to shared memory. Additionally, we measured the power consumption of both approaches. Despite being faster, the NoC approach draws about 50% more power in watts than the shared-memory approach; however, the reduction in execution time from using the NoC has an important positive impact on the overall energy consumption as well.
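    Kalray's actual NoC message-passing API is not shown in the abstract, so the following is only a generic, language-agnostic illustration of the two communication models the paper contrasts, written here with Python threads (all function names are ours): in the message-passing style the worker sees data only through explicit send/receive channels, while in the shared-memory style workers write to one shared structure under a lock.

```python
import threading
import queue

def squares_via_messages(values):
    """Message-passing style: the worker receives work and returns
    results only through explicit queues (no shared state)."""
    inbox, outbox = queue.Queue(), queue.Queue()

    def worker():
        while True:
            item = inbox.get()
            if item is None:          # sentinel: no more work
                return
            outbox.put(item * item)

    t = threading.Thread(target=worker)
    t.start()
    for v in values:
        inbox.put(v)
    inbox.put(None)
    t.join()
    return sorted(outbox.get() for _ in values)

def squares_via_shared_memory(values):
    """Shared-memory style: all workers append to one shared list,
    with explicit lock-based synchronization."""
    results, lock = [], threading.Lock()

    def worker(x):
        with lock:                    # explicit synchronization cost
            results.append(x * x)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)
```

    Both styles compute the same result; the trade-off the paper measures on Kalray is that the message-passing path needs more explicit synchronization code but exploits the NoC hardware for a large speedup.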

    Deconvolving Instrumental and Intrinsic Broadening in Excited State X-ray Spectroscopies

    Intrinsic and experimental mechanisms frequently lead to broadening of spectral features in excited-state spectroscopies. For example, intrinsic broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy elements where the core-hole lifetime is very short. On the other hand, nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are more limited by instrumental resolution. Here, we demonstrate that the Richardson-Lucy (RL) iterative algorithm provides a robust method for deconvolving instrumental and intrinsic resolutions from typical XAS and XRS data. For the K-edge XAS of Ag, we find nearly complete removal of ~9.3 eV FWHM broadening from the combined effects of the short core-hole lifetime and instrumental resolution. We are also able to remove nearly all instrumental broadening in an XRS measurement of diamond, with the resulting improved spectrum comparing favorably with prior soft x-ray XAS measurements. We present a practical methodology for applying the RL algorithm to these problems, emphasizing the importance of testing the stability of the deconvolution process against noise amplification, perturbations in the initial spectra, and uncertainties in the core-hole lifetime. Comment: 35 pages, 13 figures.
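    The abstract names the Richardson-Lucy iteration but does not reproduce it. As an illustrative sketch of the basic RL update for a 1D spectrum (function and variable names ours, not the authors' code), assuming a known, normalized broadening kernel on the same energy grid:

```python
import numpy as np

def richardson_lucy(measured, kernel, n_iter=200):
    """Richardson-Lucy deconvolution of a 1D spectrum.

    measured : observed (broadened) non-negative spectrum
    kernel   : broadening kernel (instrumental + lifetime), same grid
    """
    kernel = kernel / kernel.sum()      # normalize to unit area
    mirrored = kernel[::-1]             # adjoint of the convolution
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, mirrored, mode="same")
    return estimate

# Synthetic demonstration: a narrow line, Gaussian-broadened, then recovered.
x = np.linspace(-10.0, 10.0, 401)
sharp = np.exp(-x**2 / 0.05)            # narrow "true" spectral line
kern = np.exp(-x**2 / 2.0)              # broadening kernel
kern /= kern.sum()
broad = np.convolve(sharp, kern, mode="same")
dec = richardson_lucy(broad, kern, n_iter=300)
```

    Note this noiseless sketch is unregularized; as the paper stresses, on real data the iteration must be stopped before noise amplification dominates.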

    Reconstructing phylogenetic level-1 networks from nondense binet and trinet sets

    Binets and trinets are phylogenetic networks with two and three leaves, respectively. Here we consider the problem of deciding whether there exists a binary level-1 phylogenetic network displaying a given set T of binary binets or trinets over a taxon set X, and of constructing such a network whenever it exists. We show that this problem is NP-hard for trinets but polynomial-time solvable for binets. Moreover, we show that the problem remains polynomial-time solvable for inputs consisting of binets and trinets as long as the cycles in the trinets have size three. Finally, we present an O(3^{|X|} poly(|X|)) time algorithm for general sets of binets and trinets. The latter two algorithms generalise to instances containing level-1 networks with arbitrarily many leaves, and thus provide some of the first supernetwork algorithms for computing networks from a set of rooted level-1 phylogenetic networks.

    The Incidence and Clinical Relevance of Graft Hypertrophy After Matrix-Based Autologous Chondrocyte Implantation

    Background: Graft hypertrophy is the most common complication of periosteal autologous chondrocyte implantation (p-ACI). Purpose: The aim of this prospective study was to analyze the development, incidence, and persistence of graft hypertrophy after matrix-based autologous chondrocyte implantation (mb-ACI) in the knee joint over a 2-year postoperative course. Study Design: Case series; Level of evidence, 4. Methods: Between 2004 and 2007, a total of 41 patients with 44 isolated cartilage defects of the knee were treated with the mb-ACI technique. The mean age of the patients was 35.8 years (standard deviation [SD], 11.3 years), and the mean body mass index was 25.9 (SD, 4.2; range, 19-35.3). The cartilage defects were arthroscopically classified as Outerbridge grades III and IV. The mean cartilage defect area was 6.14 cm2 (SD, 2.3 cm2). Postoperative clinical and magnetic resonance imaging (MRI) examinations were conducted at 3, 6, 12, and 24 months to analyze the incidence and course of graft hypertrophy. Results: Graft hypertrophy developed in 25% of the patients treated with mb-ACI within the first postoperative year; 16% of the patients developed grade 2 hypertrophy, and 9% developed grade 1 hypertrophy. Graft hypertrophy occurred primarily in the first 12 months and regressed in most cases within 2 years. The International Knee Documentation Committee (IKDC) and visual analog scale (VAS) scores improved over the 2-year postoperative follow-up. There was no difference in clinical results, as measured by the IKDC and VAS pain scores, between patients with and without graft hypertrophy. Conclusion: Unlike classic p-ACI, the mb-ACI technique does not lead to graft hypertrophy requiring treatment. The frequency of occurrence of graft hypertrophy after p-ACI and mb-ACI is comparable. Graft hypertrophy can be considered a temporary overgrowth of regenerative cartilage tissue rather than true graft hypertrophy. It is therefore usually not a persistent or systematic complication in the treatment of circumscribed cartilage defects with mb-ACI.