
    FISH: A 3D parallel MHD code for astrophysical applications

    FISH is a fast and simple ideal magneto-hydrodynamics code that scales to ~10 000 processes for a Cartesian computational domain of ~1000^3 cells. The simplicity of FISH has been achieved by the rigorous application of the operator splitting technique, while second order accuracy is maintained by the symmetric ordering of the operators. Between directional sweeps, the three-dimensional data is rotated in memory so that the sweep is always performed in a cache-efficient way along the direction of contiguous memory. Hence, the code only requires a one-dimensional description of the conservation equations to be solved. This approach also enables an elegant, novel parallelisation of the code that is based on persistent communications with MPI for cubic domain decomposition on machines with distributed memory. This scheme is then combined with an additional OpenMP parallelisation of different sweeps that can take advantage of clusters of shared memory. We document the detailed implementation of a second order TVD advection scheme based on flux reconstruction. The magnetic fields are evolved by a constrained transport scheme. We show that the subtraction of a simple estimate of the hydrostatic gradient from the total gradients can significantly reduce the dissipation of the advection scheme in simulations of gravitationally bound hydrostatic objects. Through its simplicity and efficiency, FISH is as well-suited for hydrodynamics classes as for large-scale astrophysical simulations on high-performance computer clusters. In preparation for the release of a public version, we demonstrate the performance of FISH in a suite of astrophysically orientated test cases.Comment: 27 pages, 11 figures
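
    The memory-rotation idea lends itself to a compact illustration. Below is a minimal NumPy sketch (not FISH itself, which is a parallel production code): a placeholder first-order upwind sweep stands in for the paper's second-order TVD advection step, and the array is transposed between sweeps so that every sweep runs along the contiguous axis, using the symmetric x-y-z-z-y-x ordering.

```python
import numpy as np

def sweep_1d(u, dt):
    """Placeholder 1D update along the last (contiguous) axis.

    Stands in for the paper's TVD advection sweep; here a first-order upwind
    step with unit advection speed and periodic boundaries.
    """
    return u - dt * (u - np.roll(u, 1, axis=-1))

def rotate(u):
    """Cycle the axes forward so the next sweep direction becomes contiguous."""
    return np.ascontiguousarray(np.transpose(u, (2, 0, 1)))

def rotate_back(u):
    """Inverse of rotate()."""
    return np.ascontiguousarray(np.transpose(u, (1, 2, 0)))

def step(u, dt):
    """One time step with symmetric sweep ordering x, y, z, z, y, x."""
    half = 0.5 * dt
    u = sweep_1d(u, half); u = rotate(u)       # x sweep
    u = sweep_1d(u, half); u = rotate(u)       # y sweep
    u = sweep_1d(u, half)                      # z sweep
    u = sweep_1d(u, half); u = rotate_back(u)  # z sweep (second half)
    u = sweep_1d(u, half); u = rotate_back(u)  # y sweep
    u = sweep_1d(u, half)                      # x sweep; orientation restored
    return u

u = np.random.rand(64, 64, 64)
u = step(u, dt=0.1)
```

    Each direction is swept twice with half the time step, so the symmetric ordering is equivalent to Strang splitting and retains second-order accuracy in time.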

    Breakdown of Kolmogorov scaling in models of cluster aggregation with deposition

    The steady state of the model of cluster aggregation with deposition is characterized by a constant flux of mass directed from small masses towards large masses. It can therefore be studied using phenomenological theories of turbulence, such as Kolmogorov's 1941 theory. On the other hand, the large-scale behavior of the aggregation model in dimensions less than or equal to two is governed by a perturbative fixed point of the renormalization group flow, which enables an analytic study of the scaling properties of correlation functions in the steady state. In this paper, we show that the correlation functions have multifractal scaling, which violates linear Kolmogorov scaling. The analytical results are verified by Monte Carlo simulations.Comment: 5 pages, 4 figures

    Nonequilibrium phase transitions in models of adsorption and desorption

    The nonequilibrium phase transition in a system of diffusing, coagulating particles in the presence of a steady input and evaporation of particles is studied. The system undergoes a transition from a phase in which the average number of particles is finite to one in which it grows linearly in time. The exponents characterizing the mass distribution near the critical point are calculated in all dimensions.Comment: 10 pages, 2 figures (to appear in Phys. Rev. E)

    Prolonged decrease in heart rate variability after elective hip arthroplasty

    The pattern of postoperative heart rate variability may provide insight into the response of the autonomic nervous system to anaesthesia and surgery. We have obtained spectral (fast Fourier transform) and non-spectral indices of heart rate variability from electrocardiographic recordings, sampled during continuous perioperative Holter monitoring in 15 otherwise healthy patients with an uncomplicated postoperative course, undergoing elective hip arthroplasty with either spinal or general anaesthesia. In both groups, total spectral energy (0.01-1 Hz), low-frequency spectral energy (0.01-0.15 Hz) and high-frequency spectral energy (0.15-0.40 Hz) decreased after surgery to 32% (95% confidence interval (CI) 10.5; P < 0.01), 29% (95% CI 12.5; P < 0.07) and 33% (95% CI 12.5; P < 0.01) of their preoperative values, respectively, and these indices remained suppressed for up to 5 days. Non-spectral indices decreased to a similar extent. These findings indicate a substantial and prolonged postoperative decrease in both parasympathetic and sympathetic influence on the sinus node
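
    For readers unfamiliar with spectral heart rate variability indices, the sketch below shows one conventional way to obtain band powers from a series of RR intervals (even resampling followed by Welch's periodogram). It is illustrative only, with my own variable names and band edges taken from the abstract, and is not the recording or analysis pipeline used in the study.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch
from scipy.integrate import trapezoid

def hrv_band_powers(rr_s, fs=4.0):
    """Spectral HRV indices from successive RR intervals (in seconds)."""
    rr_s = np.asarray(rr_s, dtype=float)
    t = np.cumsum(rr_s)                           # beat occurrence times
    t_even = np.arange(t[0], t[-1], 1.0 / fs)     # evenly sampled time grid
    rr_even = interp1d(t, rr_s, kind="cubic")(t_even)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))

    def band(lo, hi):
        m = (f >= lo) & (f < hi)
        return trapezoid(pxx[m], f[m])            # integrated power in the band

    return {
        "total (0.01-1.00 Hz)": band(0.01, 1.00),
        "LF (0.01-0.15 Hz)":    band(0.01, 0.15),
        "HF (0.15-0.40 Hz)":    band(0.15, 0.40),
    }

# Example: ~10 minutes of synthetic RR intervals around 0.8 s
rr = 0.8 + 0.05 * np.random.randn(750)
print(hrv_band_powers(rr))
```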

    Gravitational waves from supernova matter

    We have performed a set of 11 three-dimensional magnetohydrodynamical core collapse supernova simulations in order to investigate the dependencies of the gravitational wave signal on the progenitor's initial conditions. We study the effects of the initial central angular velocity and different variants of neutrino transport. Our models are started from a 15 solar mass progenitor and incorporate an effective general relativistic gravitational potential and a finite temperature nuclear equation of state. Furthermore, the electron flavour neutrino transport is tracked by efficient algorithms for the radiative transfer of massless fermions. We find that non- and slowly rotating models show gravitational wave emission due to prompt and lepton-driven convection that reveals details about the hydrodynamical state of the fluid inside the protoneutron stars. Furthermore, we show that protoneutron stars can become dynamically unstable to rotational instabilities at T/|W| values as low as ~2% at core bounce. We point out that the inclusion of deleptonization during the postbounce phase is very important for the quantitative gravitational wave prediction, as it enhances the absolute values of the gravitational wave trains up to a factor of ten with respect to a lepton-conserving treatment.Comment: 10 pages, 6 figures, accepted, to be published in a Classical and Quantum Gravity special issue for MICRA200

    Using state variables to model the response of tumour cells to radiation and heat: a novel multi-hit-repair approach

    In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. Based on the parameters used, populations with up to 4-5 hits are relevant for the calculation of the survival. The model intuitively describes the mathematical behaviour of apoptotic and nonapoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose rate dependencies are described correctly. The model covers the time gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach based on experimental data is needed. However, the model offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the variables of state.
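
    As a toy illustration of the chain structure (an assumed discrete-time form, not the paper's actual equations), the sketch below tracks the fraction of cells carrying k unrepaired hits, moving cells down the chain at a hit rate and back up with a repair probability; cells hit beyond the last compartment are counted as inactivated. The rates, compartment count, and time stepping are illustrative assumptions.

```python
import numpy as np

def evolve_chain(hit_rate, repair_prob, k_max=5, dt=0.01, t_end=10.0):
    """Evolve a multi-hit-repair chain; n[k] = fraction of cells with k hits."""
    n = np.zeros(k_max + 1)
    n[0] = 1.0                          # all cells start undamaged
    dead = 0.0
    for _ in range(int(t_end / dt)):
        hits = hit_rate * dt * n        # downward flux: k -> k+1
        repairs = repair_prob * dt * n  # upward flux:   k -> k-1
        repairs[0] = 0.0                # no repair flux out of the undamaged state
        new = n - hits - repairs
        new[1:] += hits[:-1]            # cells arriving from k-1
        new[:-1] += repairs[1:]         # cells repaired from k+1
        dead += hits[-1]                # overflow of the last compartment = cell kill
        n = new
    return n, dead

n, dead = evolve_chain(hit_rate=1.0, repair_prob=0.5)
print("surviving fraction ~", n.sum())
```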

    Macroscopic Equations of Motion for Two Phase Flow in Porous Media

    The established macroscopic equations of motion for two phase immiscible displacement in porous media are known to be physically incomplete because they do not contain the surface tension and surface areas governing capillary phenomena. Therefore a more general system of macroscopic equations is derived here which incorporates the spatiotemporal variation of interfacial energies. These equations are based on the theory of mixtures in macroscopic continuum mechanics. They include wetting phenomena through surface tensions instead of the traditional use of capillary pressure functions. Relative permeabilities can be identified in this approach which exhibit a complex dependence on the state variables. A capillary pressure function can be identified in equilibrium which shows the qualitative saturation dependence known from experiment. In addition, the new equations allow one to describe the spatiotemporal changes of residual saturations during immiscible displacement.Comment: 15 pages, Phys. Rev. E (1998), in print
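
    For reference, the "established" macroscopic description that the paper argues is incomplete is the conventional two-phase extension of Darcy's law with relative permeabilities and a capillary pressure function; in generic textbook notation (not the paper's), it reads:

```latex
% Conventional two-phase Darcy formulation (generic notation, not the paper's):
% mass balance, generalized Darcy law, and the capillary pressure closure.
\begin{align}
  \phi\,\frac{\partial (\rho_i S_i)}{\partial t}
    + \nabla\!\cdot\!\left(\rho_i \mathbf{v}_i\right) &= 0,
    \qquad i = w, n, \\
  \mathbf{v}_i &= -\,\frac{k\,k_{ri}(S_w)}{\mu_i}
    \left(\nabla p_i - \rho_i \mathbf{g}\right), \\
  p_n - p_w &= P_c(S_w), \qquad S_w + S_n = 1.
\end{align}
```

    The paper's point is that the constitutive functions k_{ri} and P_c in this formulation carry no explicit dependence on interfacial areas or surface tensions, which the generalized mixture-theory equations are designed to supply.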

    Phase Transition in the Takayasu Model with Desorption

    We study a lattice model where particles carrying different masses diffuse, coalesce upon contact, and also unit masses adsorb to a site with rate $q$ or desorb from a site with nonzero mass with rate $p$. In the limit $p=0$ (without desorption), our model reduces to the well-studied Takayasu model where the steady-state single-site mass distribution has a power-law tail $P(m)\sim m^{-\tau}$ for large mass. We show that varying the desorption rate $p$ induces a nonequilibrium phase transition in all dimensions. For fixed $q$, there is a critical $p_c(q)$ such that if $p<p_c(q)$, the steady-state mass distribution is $P(m)\sim m^{-\tau}$ for large $m$, as in the Takayasu case. For $p=p_c(q)$, we find $P(m)\sim m^{-\tau_c}$, where $\tau_c$ is a new exponent, while for $p>p_c(q)$, $P(m)\sim \exp(-m/m^*)$ for large $m$. The model is studied analytically within a mean field theory and numerically in one dimension.Comment: RevTex, 11 pages including 5 figures, submitted to Phys. Rev.
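
    A one-dimensional Monte Carlo realisation of these dynamics is straightforward to sketch. The following is my own illustrative implementation (not the authors' code): diffusion with coalescence is attempted at unit rate per site, while adsorption and desorption of unit masses occur at relative rates q and p.

```python
import random

def simulate(L=200, q=0.1, p=0.05, steps=200_000, seed=1):
    """Random-sequential update of the 1D aggregation model with desorption."""
    random.seed(seed)
    m = [0] * L                            # mass carried by each lattice site
    for _ in range(steps):
        i = random.randrange(L)
        r = random.random() * (1.0 + q + p)
        if r < 1.0:                        # diffusion: hop to a neighbour and coalesce
            j = (i + random.choice((-1, 1))) % L
            m[j] += m[i]
            m[i] = 0
        elif r < 1.0 + q:                  # adsorption of a unit mass
            m[i] += 1
        elif m[i] > 0:                     # desorption of a unit mass (occupied sites only)
            m[i] -= 1
    return m

masses = simulate()
# A histogram of `masses` approximates the steady-state distribution P(m),
# whose tail changes character as p crosses p_c(q).
```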

    A Simulated Annealing Approach to Approximate Bayes Computations

    Approximate Bayes Computations (ABC) are used for parameter inference when the likelihood function of the model is expensive to evaluate but relatively cheap to sample from. In particle ABC, an ensemble of particles in the product space of model outputs and parameters is propagated in such a way that its output marginal approaches a delta function at the data and its parameter marginal approaches the posterior distribution. Inspired by Simulated Annealing, we present a new class of particle algorithms for ABC, based on a sequence of Metropolis kernels associated with a decreasing sequence of tolerances with respect to the data. Unlike other algorithms, our class of algorithms is not based on importance sampling. Hence, it does not suffer from a loss of effective sample size due to resampling. We prove convergence under a condition on the speed at which the tolerance is decreased. Furthermore, we present a scheme that adapts the tolerance and the jump distribution in parameter space according to some mean fields of the ensemble, which preserves the statistical independence of the particles in the limit of infinite sample size. This adaptive scheme aims at converging as close as possible to the correct result with as few system updates as possible, by minimizing the entropy production in the system. The performance of this new class of algorithms is compared against two other recent algorithms on two toy examples.Comment: 20 pages, 2 figures
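
    As a rough illustration of this class of algorithms (not the authors' adaptive scheme, which tunes the tolerance and jump distribution from ensemble mean fields), here is a generic sketch of an ABC-Metropolis update applied independently to each particle under a prescribed, geometrically decreasing tolerance; the scalar parameter, fixed schedule, and all names are illustrative assumptions.

```python
import numpy as np

def abc_anneal(simulate, distance, prior_logpdf, prior_sample, data,
               n_particles=500, n_iter=50, eps0=5.0, decay=0.9, step=0.5):
    """Tolerance-annealed ABC with independent Metropolis-updated particles.

    simulate(theta, rng) -> synthetic data, distance(x, data) -> float,
    prior_logpdf(theta) -> float, prior_sample(rng) -> scalar theta.
    """
    rng = np.random.default_rng(0)
    theta = np.array([prior_sample(rng) for _ in range(n_particles)])
    eps = eps0
    for _ in range(n_iter):
        eps *= decay                               # decreasing tolerance schedule
        for i in range(n_particles):
            prop = theta[i] + step * rng.normal()  # symmetric jump in parameter space
            x = simulate(prop, rng)                # cheap forward simulation
            # Accept if the simulation lands within the current tolerance and
            # the prior ratio passes (symmetric proposal, so no proposal ratio).
            if (distance(x, data) <= eps and
                    np.log(rng.random()) < prior_logpdf(prop) - prior_logpdf(theta[i])):
                theta[i] = prop
    return theta                                   # approximate posterior sample
```

    Because each particle is updated by its own Metropolis kernel, no resampling is involved and the particles remain statistically independent, which is the property the abstract emphasises.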

    The ALPS project release 2.0: Open source software for strongly correlated systems

    We present release 2.0 of the ALPS (Algorithms and Libraries for Physics Simulations) project, an open source software project to develop libraries and application programs for the simulation of strongly correlated quantum lattice models such as quantum magnets, lattice bosons, and strongly correlated fermion systems. The code development is centered on common XML and HDF5 data formats, libraries to simplify and speed up code development, common evaluation and plotting tools, and simulation programs. The programs enable non-experts to start carrying out serial or parallel numerical simulations by providing basic implementations of the important algorithms for quantum lattice models: classical and quantum Monte Carlo (QMC) using non-local updates, extended ensemble simulations, exact and full diagonalization (ED), the density matrix renormalization group (DMRG) both in a static version and a dynamic time-evolving block decimation (TEBD) code, and quantum Monte Carlo solvers for dynamical mean field theory (DMFT). The ALPS libraries provide a powerful framework for programmers to develop their own applications and, for instance, greatly simplify the steps of porting a serial code onto a parallel, distributed memory machine. Major changes in release 2.0 include the use of HDF5 for binary data, evaluation tools in Python, support for the Windows operating system, the use of CMake as the build system, binary installation packages for Mac OS X and Windows, and integration with the VisTrails workflow provenance tool. The software is available from our web server at http://alps.comp-phys.org/.Comment: 18 pages + 4 appendices, 7 figures, 12 code examples, 2 tables
    • 

    corecore