Direct FEM computation of turbulent multiphase flow in 3D printing nozzle design
In this paper, we present a nozzle design for 3D printing using FEniCS-HPC as
the mathematical modeling and simulation tool. In recent years, 3D printing, or
Additive Manufacturing (AM), has become an emerging technology and is already
in use in many industries. 3D printing is considered a sustainable,
eco-friendly production method, since it minimizes material waste during
production. Many industries are replacing their traditional parts and product
manufacturing with optimized, smart 3D printing technology. For 3D printing to
be efficient, the nozzle design must be optimized. Here we design the nozzle
for titanium; since titanium is a metal, it must be shielded by an inert gas
during the process, which makes this a multiphase flow problem. FEniCS-HPC is
a high-level mathematical tool in which one can easily modify the mathematical
equations according to the physics, and it shows good scalability on massively
parallel supercomputer architectures. The problem is modeled with the Direct
FEM/General Galerkin methodology for turbulent incompressible variable-density
flow in FEniCS-HPC.
Time-resolved Adaptive Direct FEM Simulation of High-lift Aircraft Configurations
Our simulation methodology is referred to as Direct FEM Simulation (DFS), or General Galerkin (G2) and uses a finite element method (FEM) with piecewise linear approximation in space and time, and with numerical stabilization in the form of a weighted least squares method based on the residual. The incompressible Navier-Stokes Equations (NSE) are discretized directly, without applying any filter. Thus, the method does not result in Large Eddy Simulation (LES) filtered solutions, but is instead an approximation of a weak solution satisfying the weak form of the NSE. In G2 we have a posteriori error
estimates for quantities of interest that can be expressed as functionals of a weak solution.
These a posteriori error estimates, which form the basis for our adaptive mesh refinement algorithm, are based on the solution of an associated adjoint problem with a goal quantity (the aerodynamic forces in this work) as data, similarly to an optimal control problem.
We provide references to related work below. The methodology and software have been previously validated for a number of turbulent flow benchmark problems, including one of the HiLiftPW-2 high Reynolds number cases.
The DFS method is implemented in the Unicorn solver, which uses the open source software framework FEniCS-HPC, designed for automated solution of partial differential equations on massively parallel architectures using the FEM.
In this chapter we present adaptive results from the Third AIAA High Lift Prediction Workshop in Denver, Colorado, based on our DFS methodology and Unicorn/FEniCS-HPC software. We show that the methodology quantitatively and qualitatively captures the main features of the experiment - the aerodynamic forces and the stall mechanism, with a novel numerical tripping - at a much coarser mesh resolution and lower computational cost than the standard in the field.
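The stabilized formulation described above can be sketched in a generic cG(1)cG(1)/G2 form (the symbols and stabilization parameters here are illustrative conventions, not copied from the paper):

```latex
% Find (u_h, p_h) such that for all test functions (v, q):
(\partial_t u_h + u_h \cdot \nabla u_h, v) + (\nu \nabla u_h, \nabla v)
  - (p_h, \nabla \cdot v) + (\nabla \cdot u_h, q)
  + SD_\delta(u_h, p_h; v, q) = (f, v),
% with residual-based weighted least-squares stabilization
SD_\delta = \delta_1\,(u_h \cdot \nabla u_h + \nabla p_h - f,\;
                       u_h \cdot \nabla v + \nabla q)
          + \delta_2\,(\nabla \cdot u_h, \nabla \cdot v).
% The a posteriori error estimate for a goal functional M pairs the
% cellwise residual with the adjoint solution \varphi (M as data):
|M(u) - M(u_h)| \lesssim \sum_K \|R(u_h)\|_K \, \|\varphi - \pi_h \varphi\|_K.
```

Cells with a large residual-adjoint product are the ones marked for refinement by the adaptive algorithm.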
U.S. Humanitarian Demining Research and Development Program (HD R&D)
The anti-tank mine threat on access roads in eastern Angola is the greatest impediment to infrastructural rehabilitation, economic recovery and social development in that area. The authors discuss the method and equipment used by DanChurchAid to verify and clear roads in Moxico and Lunda Sul provinces.
Improving automatic music transcription through key detection
In this paper, a method for automatic transcription of polyphonic music is proposed that exploits key information. The proposed system performs key detection using a matching technique with distributions of pitch class pairs, called Zweiklang profiles. The automatic transcription system is based on probabilistic latent component analysis, supporting templates from multiple instruments, as well as tuning deviations and frequency modulations. Key information is incorporated into the transcription system using Dirichlet priors during the parameter update stage. Experiments are performed on a polyphonic, multiple-instrument dataset of Bach chorales, where it is shown that incorporating key information improves multi-pitch detection and instrument assignment performance.
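The core idea of biasing a probabilistic latent component analysis (PLCA) update with a Dirichlet prior can be sketched in a minimal single-layer form. This is an illustrative toy, not the authors' multi-instrument system: `plca_with_prior`, `key_prior` and `alpha` are hypothetical names, and the prior here simply adds pseudo-counts from a key profile to the activation update.

```python
import numpy as np

def plca_with_prior(V, n_z, key_prior, alpha=1.0, n_iter=50, seed=0):
    """Minimal PLCA: V[f, t] ~ sum_z P(f|z) P(z|t), with a Dirichlet
    prior (profile key_prior, strength alpha) biasing P(z|t)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_z)); W /= W.sum(axis=0, keepdims=True)  # P(f|z)
    H = rng.random((n_z, T)); H /= H.sum(axis=0, keepdims=True)  # P(z|t)
    Vn = V / V.sum()  # treat the spectrogram as a distribution
    for _ in range(n_iter):
        R = W @ H + 1e-12          # model reconstruction P(f, t)
        Q = Vn / R                 # ratio used by the E-step posterior
        Wc = W * (Q @ H.T)         # expected counts for P(f|z)
        Hc = H * (W.T @ Q)         # expected counts for P(z|t)
        W = Wc / Wc.sum(axis=0, keepdims=True)
        # MAP-style M-step: pseudo-counts from the (assumed) key profile
        Hc = Hc + alpha * key_prior[:, None]
        H = Hc / Hc.sum(axis=0, keepdims=True)
    return W, H
```

Raising `alpha` pulls the activations toward the key profile, which is the mechanism by which detected-key information can steer multi-pitch estimates.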
Direct FEM large scale computation of turbulent multiphase flow in urban water systems and marine energy
High-Reynolds number turbulent incompressible multiphase flow represents a large class of engineering problems of key relevance to society. Here we describe our work on modeling two such problems: 1. The Consorcio de Aguas Bilbao Bizkaia is constructing a new storm-tank system with automatic cleaning, based on periodically flushing tank water out through a tunnel. 2. In the framework of the collaboration between BCAM - Basque Center for Applied Mathematics and Tecnalia R & I, the interaction of the sea flow with a semi-submersible floating offshore wind platform is computationally investigated. We study the MARIN benchmark, which models breaking waves over objects in marine environments. Both of these problems are modeled with the Direct FEM/General Galerkin methodology for turbulent incompressible variable-density flow [1, 2] in the FEniCS software framework.
Towards HPC-Embedded Case Study: Kalray and Message-Passing on NoC
Today one of the most important challenges in HPC is the development of computers with low power consumption. In this context, new embedded many-core systems have recently emerged. One of them is Kalray. Unlike other many-core architectures, Kalray is not a co-processor: it is self-hosted. One interesting feature of the Kalray architecture is its Network on Chip (NoC) interconnect. Typically, communication in many-core architectures is carried out via shared memory. In Kalray, however, communication among processing elements can also take place via message passing on the NoC. One of the main motivations of this work is to present the main constraints of programming the Kalray architecture; in particular, we focus on memory management and communication. We assess the use of the NoC and of shared memory on Kalray. Unlike shared memory, the implementation of message passing on the NoC is not transparent from the programmer's point of view. Synchronization among processing elements and the NoC is another of the challenges in the Kalray processor. Although synchronization using message passing is more complex and time-consuming than using shared memory, we obtain an overall speedup close to 6 when using message passing on the NoC with respect to shared memory. Additionally, we have measured the power consumption of both approaches. Despite being faster, the NoC approach draws about 50% more power in watts than the approach that exploits shared memory. However, the reduction in execution time from using the NoC has an important impact on the overall energy consumption as well.
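The programming-model contrast the abstract evaluates can be illustrated with a small host-side sketch using Python's standard multiprocessing module. This is a generic analogue only, not Kalray's actual NoC API: the channel here is a POSIX pipe standing in for the NoC, and the function names are invented for illustration.

```python
from multiprocessing import Pipe, Process, Value

def worker_msg(conn):
    # Message-passing style: every transfer is an explicit send/receive
    # over a channel (on Kalray, this role is played by the NoC).
    x = conn.recv()
    conn.send(x * 2)
    conn.close()

def worker_shm(cell):
    # Shared-memory style: read and modify a shared cell in place;
    # synchronization is a lock rather than an explicit message.
    with cell.get_lock():
        cell.value *= 2

if __name__ == "__main__":
    # Message-passing variant: communication is visible in the code.
    parent, child = Pipe()
    p = Process(target=worker_msg, args=(child,))
    p.start()
    parent.send(21)
    result = parent.recv()
    p.join()

    # Shared-memory variant: communication is implicit in shared state.
    cell = Value("i", 21)
    q = Process(target=worker_shm, args=(cell,))
    q.start()
    q.join()
```

The explicit channel management in the first variant is exactly the non-transparency the abstract attributes to message passing on the NoC; the payoff it reports is a speedup of roughly 6x at the cost of higher instantaneous power.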
Deconvolving Instrumental and Intrinsic Broadening in Excited State X-ray Spectroscopies
Intrinsic and experimental mechanisms frequently lead to broadening of
spectral features in excited-state spectroscopies. For example, intrinsic
broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy
elements where the core-hole lifetime is very short. On the other hand,
nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are
more limited by instrumental resolution. Here, we demonstrate that the
Richardson-Lucy (RL) iterative algorithm provides a robust method for
deconvolving instrumental and intrinsic resolutions from typical XAS and XRS
data. For the K-edge XAS of Ag, we find nearly complete removal of ~9.3 eV FWHM
broadening from the combined effects of the short core-hole lifetime and
instrumental resolution. We are also able to remove nearly all instrumental
broadening in an XRS measurement of diamond, with the resulting improved
spectrum comparing favorably with prior soft x-ray XAS measurements. We present
a practical methodology for implementing the RL algorithm to these problems,
emphasizing the importance of testing for stability of the deconvolution
process against noise amplification, perturbations in the initial spectra, and
uncertainties in the core-hole lifetime.
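The Richardson-Lucy iteration applied above can be sketched in one dimension as follows. This is a generic implementation of the standard RL update, not code from the paper; the function name, the flat initial estimate, and the kernel handling are illustrative choices.

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=200):
    """Richardson-Lucy deconvolution of a 1-D spectrum d given a known
    broadening kernel psf. Each iteration multiplies the estimate by the
    back-projected ratio of data to the reblurred estimate."""
    psf = psf / psf.sum()               # normalize the kernel
    psf_m = psf[::-1]                   # mirrored kernel for back-projection
    u = np.full_like(d, d.mean())       # flat, positive initial estimate
    for _ in range(n_iter):
        reblur = np.convolve(u, psf, mode="same") + 1e-12
        u = u * np.convolve(d / reblur, psf_m, mode="same")
    return u
```

In practice, as the abstract emphasizes, the iteration count must be chosen by testing stability against noise amplification: too many iterations sharpen noise into spurious features, so one monitors the deconvolved spectrum under perturbed inputs before trusting fine structure.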
Reconstructing phylogenetic level-1 networks from nondense binet and trinet sets
Binets and trinets are phylogenetic networks with two and three leaves, respectively. Here we consider the problem of deciding if there exists a binary level-1 phylogenetic network displaying a given set T of binary binets or trinets over a taxon set X, and constructing such a network whenever it exists. We show that this is NP-hard for trinets but polynomial-time solvable for binets. Moreover, we show that the problem is still polynomial-time solvable for inputs consisting of binets and trinets as long as the cycles in the trinets have size three. Finally, we present an O(3^{|X|} poly(|X|)) time algorithm for general sets of binets and trinets. The latter two algorithms generalise to instances containing level-1 networks with arbitrarily many leaves, and thus provide some of the first supernetwork algorithms for computing networks from a set of rooted level-1 phylogenetic networks.