
    A fast strong coupling algorithm for the partitioned fluid–structure interaction simulation of BMHVs

    The numerical simulation of Bileaflet Mechanical Heart Valves (BMHVs) has gained strong interest in recent years as a design and optimisation tool. In this paper, a strong coupling algorithm for the partitioned fluid–structure interaction simulation of a BMHV is presented. The convergence of the coupling iterations between the flow solver and the leaflet motion solver is accelerated by using a Jacobian containing the derivatives of the pressure and viscous moments acting on the leaflets with respect to the leaflet accelerations. This Jacobian is calculated numerically from the coupling iterations. An error analysis is performed to derive a criterion for the selection of usable coupling iterations. The algorithm is successfully tested on two 3D cases of a BMHV and compared with existing coupling schemes. The developed coupling scheme outperforms these existing schemes in both the number of coupling iterations needed per time step and CPU time.
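    The coupling strategy described above can be illustrated with a scalar sketch: a fixed-point coupling iteration accelerated by a derivative estimated numerically from previous coupling iterations. The functions `moment` and `coupled_accel` are hypothetical surrogates for the flow solver and the leaflet motion solver, not the paper's 3D solvers.

```python
def moment(accel):
    # hypothetical flow-solver response: moment on the leaflet as a
    # nonlinear function of the prescribed leaflet acceleration
    return -2.0 * accel + 0.1 * accel**2 + 1.0

def coupled_accel(m, inertia=1.0):
    # hypothetical leaflet motion solver: acceleration implied by a moment
    return m / inertia

def quasi_newton_coupling(a0=0.0, tol=1e-10, max_iter=50):
    """Fixed-point iteration a -> coupled_accel(moment(a)), accelerated
    with a derivative estimated numerically from past coupling iterations."""
    a, a_prev, r_prev = a0, None, None
    for k in range(max_iter):
        r = coupled_accel(moment(a)) - a        # coupling residual
        if abs(r) < tol:
            return a, k
        if a_prev is None:
            a_next = a + r                      # first step: plain fixed point
        else:
            drda = (r - r_prev) / (a - a_prev)  # secant estimate of d(residual)/d(accel)
            a_next = a - r / drda               # quasi-Newton update
        a_prev, r_prev, a = a, r, a_next
    return a, max_iter

accel, iters = quasi_newton_coupling()
```

    In this toy problem the secant-accelerated iteration converges in a handful of steps even though the plain fixed-point map is divergent (its slope is about -2 near the solution), which mirrors the acceleration effect the paper reports over unaccelerated coupling schemes.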

    Validation and Benchmarking of a Practical Free Magnetic Energy and Relative Magnetic Helicity Budget Calculation in Solar Magnetic Structures

    In earlier works we introduced and tested a nonlinear force-free (NLFF) method designed to self-consistently calculate the free magnetic energy and relative magnetic helicity budgets of the corona of observed solar magnetic structures. The method requires, in principle, only a single photospheric or low-chromospheric vector magnetogram of a quiet-Sun patch or an active region, and performs its calculations in the absence of three-dimensional magnetic and velocity-field information. In this work we rigorously validate this method using three-dimensional coronal magnetic fields. Benchmarking employs both synthetic three-dimensional magnetohydrodynamic simulations and nonlinear force-free field extrapolations of the active-region solar corona. We find that our time-efficient NLFF method provides budgets that differ from those of more demanding semi-analytical methods by a factor of ~3, at most. This difference is expected from the physical concept and construction of the method. Temporal correlations show more discrepancies, which are, however, markedly improved for more complex, massive active regions, reaching correlation coefficients of the order of, or exceeding, 0.9. In conclusion, we argue that our NLFF method can be reliably used for routine and fast calculation of free magnetic energy and relative magnetic helicity budgets in targeted parts of the solar magnetized corona. As explained here and in previous works, this is an asset that can lead to valuable insight into the physics and triggering of solar eruptions.
    Comment: 32 pages, 14 figures, accepted by Solar Physics
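    A minimal sketch of the kind of benchmarking comparison reported above: a fast method's budget time series is compared against a reference series via the median ratio (the factor-of-~3 criterion) and the Pearson temporal correlation. The series below are synthetic placeholders, not solar data.

```python
import numpy as np

def benchmark(fast, reference):
    # compare two budget time series: median element-wise ratio and
    # Pearson correlation coefficient of their temporal evolution
    fast = np.asarray(fast, dtype=float)
    reference = np.asarray(reference, dtype=float)
    ratio = np.median(reference / fast)            # expected within a factor of ~3
    r = float(np.corrcoef(fast, reference)[0, 1])  # temporal correlation
    return ratio, r

fast_series = [1.0, 2.0, 3.0, 4.0, 5.0]   # fast method (arbitrary units)
ref_series = [2.6, 5.1, 7.4, 10.2, 12.9]  # demanding reference method
ratio, r = benchmark(fast_series, ref_series)
```

    Here the two synthetic series differ by a roughly constant factor of ~2.5 while remaining highly correlated, the situation in which the fast method is argued to be a reliable proxy for the demanding one.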

    Improved Semileptonic Form Factor Calculations in Lattice QCD

    We investigate the computational efficiency of two stochastic alternatives to the Sequential Propagator Method used in Lattice QCD calculations of heavy-light semileptonic form factors. In the first method, we replace the sequential propagator, which couples the calculation of two of the three propagators required, with a stochastic propagator so that the calculations of all three propagators are independent. This method is more flexible than the Sequential Propagator Method but introduces stochastic noise. We study the noise to determine when this method becomes competitive, and find that for any practical calculation it is competitive with or superior to the Sequential Propagator Method. We also examine a second stochastic method, the so-called "one-end trick", concluding that it is relatively inefficient in this context. The investigation is carried out on two gauge field ensembles, using the non-perturbatively improved Wilson-Sheikholeslami-Wohlert action with N_f=2 mass-degenerate sea quarks. The two ensembles have similar lattice spacings but different sea quark masses. We use the first stochastic method to extract O(a)-improved, matched lattice results for the semileptonic form factors on the ensemble with lighter sea quarks, extracting f_+(0).
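    The stochastic-propagator idea can be illustrated on a toy linear system: solving M x = η for Z2 noise vectors η and averaging x η^T gives an unbiased estimate of M^{-1}, since E[η η^T] = I, with the noise shrinking like 1/√N. The matrix below is a small random stand-in, not a lattice Dirac operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_noise = 8, 20000
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # toy stand-in operator

# accumulate x eta^T over noise samples; E[eta eta^T] = I gives E[x eta^T] = M^{-1}
est = np.zeros((n, n))
for _ in range(n_noise):
    eta = rng.choice([-1.0, 1.0], size=n)  # Z2 noise vector
    x = np.linalg.solve(M, eta)            # stochastic "propagator" solve
    est += np.outer(x, eta)
est /= n_noise

exact = np.linalg.inv(M)
max_err = np.abs(est - exact).max()        # shrinks like 1/sqrt(n_noise)
```

    This toy shows the trade-off the paper studies: stochastic solves decouple the calculation, but at the cost of statistical noise that must be driven down by averaging over many noise sources.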

    Evaluation of linear classifiers on articles containing pharmacokinetic evidence of drug-drug interactions

    Background. Drug-drug interaction (DDI) is a major cause of morbidity and mortality. [...] Biomedical literature mining can aid DDI research by extracting relevant DDI signals from either the published literature or large clinical databases. However, though drug interaction is an ideal area for translational research, the inclusion of literature mining methodologies in DDI workflows is still very preliminary. One area that can benefit from literature mining is the automatic identification of large numbers of potential DDIs, whose pharmacological mechanisms and clinical significance can then be studied via in vitro pharmacology and in populo pharmaco-epidemiology. Experiments. We implemented a set of classifiers for identifying published articles relevant to experimental pharmacokinetic DDI evidence. These documents are important for identifying causal mechanisms behind putative drug-drug interactions, an important step in the extraction of large numbers of potential DDIs. We evaluate the performance of several linear classifiers on PubMed abstracts under different feature transformation and dimensionality reduction methods. In addition, we investigate the performance benefits of including various publicly available named entity recognition (NER) features, as well as a set of internally developed pharmacokinetic dictionaries. Results. Several classifiers performed well in distinguishing relevant from irrelevant abstracts. The combination of unigram and bigram textual features gave better performance than unigram features alone, and normalization transforms that adjusted for feature frequency and document length improved classification. For some classifiers, such as linear discriminant analysis (LDA), proper dimensionality reduction had a large impact on performance. Finally, the inclusion of NER features and dictionaries was found not to help classification.
    Comment: Pacific Symposium on Biocomputing, 201
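    A minimal sketch of the winning feature configuration, assuming a toy corpus: unigram plus bigram features with L2 length normalization, fed to a simple nearest-centroid linear rule as a stand-in for the linear classifiers evaluated in the paper. All documents below are placeholders, not PubMed abstracts.

```python
from collections import Counter
import math

def features(text):
    # unigram + bigram counts with L2 length normalization
    toks = text.lower().split()
    grams = toks + [f"{a}_{b}" for a, b in zip(toks, toks[1:])]
    counts = Counter(grams)
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {g: v / norm for g, v in counts.items()}

def dot(u, v):
    return sum(w * v.get(g, 0.0) for g, w in u.items())

def centroid(docs):
    # average feature vector of a class
    c = Counter()
    for d in docs:
        for g, w in features(d).items():
            c[g] += w / len(docs)
    return c

# toy placeholder "abstracts" (not real data)
relevant = ["drug interaction pharmacokinetic study",
            "pharmacokinetic drug interaction evidence"]
irrelevant = ["protein folding simulation methods",
              "gene expression microarray analysis"]
c_rel, c_irr = centroid(relevant), centroid(irrelevant)

def classify(text):
    # linear decision rule: sign of the difference of two dot products
    f = features(text)
    return "relevant" if dot(f, c_rel) >= dot(f, c_irr) else "irrelevant"
```

    The bigram features (e.g. `drug_interaction`) capture local word order that unigrams miss, and the L2 normalization keeps long abstracts from dominating the decision, the two effects the Results section credits with improved classification.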

    Achieving High Speed CFD simulations: Optimization, Parallelization, and FPGA Acceleration for the unstructured DLR TAU Code

    Today, large-scale parallel simulations are fundamental tools for handling complex problems. The number of processors in current computation platforms has increased recently, so it is necessary to optimize application performance and to enhance the scalability of massively parallel systems. In addition, new heterogeneous architectures, which combine conventional processors with specific hardware such as FPGAs to accelerate the most time-consuming functions, are considered a strong alternative for boosting performance. In this paper, the performance of the DLR TAU code is analyzed and optimized. The improvement of the code efficiency is addressed through three key activities: optimization, parallelization, and hardware acceleration. First, a profiling analysis of the most time-consuming processes of the Reynolds-Averaged Navier-Stokes flow solver on a three-dimensional unstructured mesh is performed. Then, the scalability of the code is studied and new partitioning algorithms are tested to identify the most suitable ones for the selected applications. Finally, a feasibility study on the application of FPGAs and GPUs for the hardware acceleration of CFD simulations is presented.
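    The profiling step can be sketched as timing each solver phase and ranking the phases by runtime share to locate hot spots. The phase names and toy workloads below are illustrative placeholders, not TAU code internals.

```python
import time

def timed(fn):
    # wall-clock time of a single call
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

# illustrative solver phases with toy workloads
phases = {
    "flux_computation": lambda: sum(i * i for i in range(400000)),
    "gradient_reconstruction": lambda: sum(i for i in range(50000)),
    "boundary_conditions": lambda: sum(1 for _ in range(10000)),
}

times = {name: timed(fn) for name, fn in phases.items()}
total = sum(times.values())
shares = {name: t / total for name, t in times.items()}  # runtime fraction
ranking = sorted(times, key=times.get, reverse=True)     # hot spots first
```

    Ranking phases by runtime share identifies the candidates worth optimizing or offloading to FPGAs/GPUs; in a real solver this pass would use a proper profiler rather than manual timers.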