
    City@home: Monte Carlo derivative pricing distributed on networked computers

    Monte Carlo is a powerful and versatile derivative pricing tool, with the main drawback of requiring a large amount of computing time to generate enough realisations of the stochastic process. However, since realisations are independent from each other, the task is “embarrassingly” parallel and the workload can be easily distributed over a large set of processors without the need for fast networking, and thus without an expensive dedicated supercomputer. Such an alternative, much cheaper and more accessible approach can be realised with the BOINC toolkit, distributing the Monte Carlo runs on networked clients running under Windows, Linux or various Unix variants, and collecting the results at the end for a statistical evaluation of the price distribution at the final time. Though it is likely that the clients will belong to the intranet of a large company or institution, we gave our program the evocative name City@home in honour of the paradigmatic SETI@home project. As an application, we present the generation of synthetic high-frequency financial time series for speculative option valuation in the context of uncoupled continuous-time random walks (fractional diffusion), with a Lévy marginal density function for the tick-by-tick log returns and a Mittag-Leffler marginal density function for the waiting times. Lévy deviates are generated with the Chambers-Mallows-Stuck method, Mittag-Leffler deviates with the Kozubowski-Pakes method.
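
    As a concrete illustration of the two deviate generators named above, the following sketch draws symmetric Lévy jumps with the Chambers-Mallows-Stuck formula and Mittag-Leffler waiting times with the Kozubowski-Pakes formula, then accumulates them into one synthetic tick-by-tick path. Python/NumPy, the symmetric (beta = 0) stable case and the unit scale parameters are assumptions for illustration, not details taken from the paper.

    # Sketch only: standard CMS and Kozubowski-Pakes generators; scales and
    # parameter values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng()

    def levy_symmetric(alpha, size):
        """Symmetric alpha-stable deviates via the Chambers-Mallows-Stuck method."""
        phi = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
        w = rng.exponential(1.0, size)                   # unit exponential
        return (np.sin(alpha * phi) / np.cos(phi) ** (1.0 / alpha)
                * (np.cos((1.0 - alpha) * phi) / w) ** ((1.0 - alpha) / alpha))

    def mittag_leffler(beta, size):
        """Mittag-Leffler waiting times via the Kozubowski-Pakes method."""
        u = rng.uniform(0.0, 1.0, size)
        v = rng.uniform(0.0, 1.0, size)
        return (-np.log(u)
                * (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
                   - np.cos(beta * np.pi)) ** (1.0 / beta))

    # One synthetic tick-by-tick path: log returns are Lévy jumps, event times
    # are cumulative Mittag-Leffler waiting times (an uncoupled CTRW).
    n = 10_000
    times = np.cumsum(mittag_leffler(0.9, n))
    log_price = np.cumsum(levy_symmetric(1.7, n))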

    Parallelization of a treecode

    I describe here the performance of a parallel treecode with individual particle timesteps. The code is based on the Barnes-Hut algorithm and runs cosmological N-body simulations on parallel machines with a distributed memory architecture using the MPI message-passing library. For a configuration with a constant number of particles per processor, the scalability of the code was tested up to P=128 processors on an IBM SP4 machine. In the large-P limit the average CPU time per processor necessary for solving the gravitational interactions is ~10% higher than that expected from the ideal scaling relation. The processor domains are determined every large timestep according to a recursive orthogonal bisection, using a weighting scheme which takes into account the total particle computational load within the timestep. The results of the numerical tests show that the load-balancing efficiency L of the code is high (L >= 90%) up to P=32, and decreases to L ~ 80% when P=128. In the latter case it is found that some aspects of the code performance are affected by machine hardware, while the proposed weighting scheme can achieve a load balance as high as L ~ 90% even in the large-P limit. Comment: 30 pages, 3 tables, 9 figures, accepted for publication in New Astronomy
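
    To make the domain-decomposition idea concrete, here is a minimal sketch of weighted recursive orthogonal bisection: particles are split along alternating coordinate axes at the weighted median plane, so that each processor domain receives roughly the same total computational weight. Python/NumPy is used for illustration only (the actual code is an MPI treecode), and the weight array standing in for the per-particle load within the timestep is an assumption.

    import numpy as np

    def orb_split(positions, weights, n_domains, axis=0):
        """Recursively bisect particles so each domain gets a similar total weight."""
        if n_domains == 1:
            return [np.arange(len(positions))]          # particle indices of this domain
        n_left = n_domains // 2
        order = np.argsort(positions[:, axis])          # sort along the current axis
        cum = np.cumsum(weights[order])
        # Cut where the cumulative weight reaches the left side's share of the total.
        cut = np.searchsorted(cum, cum[-1] * n_left / n_domains)
        left, right = order[:cut], order[cut:]
        next_axis = (axis + 1) % positions.shape[1]
        left_parts = orb_split(positions[left], weights[left], n_left, next_axis)
        right_parts = orb_split(positions[right], weights[right], n_domains - n_left, next_axis)
        # Map the recursive (local) indices back to this level's particle indices.
        return [left[p] for p in left_parts] + [right[p] for p in right_parts]

    # Example: heavy-tailed weights mimic the uneven cost of individual timesteps.
    pos = np.random.rand(100_000, 3)
    w = np.random.pareto(2.0, 100_000) + 1.0
    domains = orb_split(pos, w, 8)                      # index arrays, one per processor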

    Direct numerical simulation of particle-laden turbulence in a straight square duct

    Particle-laden turbulent flow through a straight square duct at Re_τ = 300 is studied using direct numerical simulation (DNS) and Lagrangian particle tracking. A parallelized 3-D particle-tracking direct numerical simulation code has been developed to perform the large-scale turbulent particle transport computations reported in this thesis. The DNS code is validated by demonstrating good agreement with published DNS results for the same flow and Reynolds number. Lagrangian particle transport computations are carried out using a large ensemble of passive tracers and finite-inertia particles under the assumption of one-way fluid-particle coupling. Using four different types of initial particle distributions, Lagrangian particle dispersion, concentration and deposition are studied in the turbulent straight square duct: particles are released in a uniform distribution on a cross-sectional plane at the duct inlet, released as particle pairs in the core region of the duct, distributed randomly in the domain, or distributed uniformly in planes at certain heights above the walls. One- and two-particle dispersion statistics are computed and discussed for the low Reynolds number inhomogeneous turbulence present in a straight square duct. New detailed statistics on particle number concentration and deposition are also obtained and discussed.
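
    As a minimal sketch of what one-way coupled Lagrangian tracking with finite particle inertia looks like, the snippet below integrates a Stokes-drag equation of motion with an explicit Euler step; the analytic fluid_velocity field and the response time tau_p are illustrative placeholders, not the numerics used in the thesis.

    import numpy as np

    def advance_particles(x, v, fluid_velocity, tau_p, dt):
        """One step of dx/dt = v, dv/dt = (u(x) - v) / tau_p (Stokes drag only).
        One-way coupling: particles feel the fluid, the fluid ignores the particles.
        As tau_p -> 0 the particles behave as passive tracers following u(x)."""
        u = fluid_velocity(x)               # would be interpolated from the DNS field
        v_new = v + dt * (u - v) / tau_p
        x_new = x + dt * v_new
        return x_new, v_new

    # Toy frozen "fluid" field standing in for the DNS velocity data.
    def fluid_velocity(x):
        return np.stack([np.sin(x[:, 1]), np.cos(x[:, 0]), np.zeros(len(x))], axis=1)

    x = np.random.rand(1000, 3)
    v = fluid_velocity(x)                   # release particles at the local fluid velocity
    for _ in range(100):
        x, v = advance_particles(x, v, fluid_velocity, tau_p=0.5, dt=0.01)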

    Two-Dimensional Hydrodynamic Core-Collapse Supernova Simulations with Spectral Neutrino Transport II. Models for Different Progenitor Stars

    1D and 2D supernova simulations for stars between 11 and 25 solar masses are presented, making use of the Prometheus/Vertex neutrino-hydrodynamics code, which employs a full spectral treatment of the neutrino transport. Multi-dimensional transport aspects are treated by the "ray-by-ray plus" approximation described in Paper I. Our set of models includes a 2D calculation for a 15 solar mass star whose iron core is assumed to rotate rigidly with an angular frequency of 0.5 rad/s before collapse. No important differences were found depending on whether random seed perturbations for triggering convection are included already during core collapse, or whether they are imposed on a 1D collapse model shortly after bounce. Convection below the neutrinosphere sets in about 40 ms p.b. at a density above 10**12 g/cm^3 in all 2D models, and encompasses a layer of growing mass as time goes on. It leads to a more extended proto-neutron star structure with accelerated lepton number and energy loss and significantly higher muon and tau neutrino luminosities, but reduced mean energies of the radiated neutrinos, at times later than ~100 ms p.b. In the case of an 11.2 solar mass star we find that low (l = 1, 2) convective modes cause a probably rather weak explosion by the convectively supported neutrino-heating mechanism after ~150 ms p.b. when the 2D simulation is performed with a full 180 degree grid, whereas the same simulation with a 90 degree wedge fails to explode, like all other models. This sensitivity demonstrates the proximity of our 2D models to the borderline between success and failure, and stresses the need for simulations in 3D, ultimately without the axis singularity of a polar grid. (abridged) Comment: 42 pages, 44 figures; revised according to referee comments; accepted to Astronomy & Astrophysics
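
    Purely as a structural sketch of what a "ray-by-ray" sweep means (not the Prometheus/Vertex implementation, whose details are in Paper I), the loop below treats each angular zone of the polar grid as an independent, spherically symmetric 1D transport problem whose source terms are fed back to the multi-dimensional hydrodynamics; the toy solver and array shapes are assumptions for illustration.

    import numpy as np

    def ray_by_ray_step(density, temperature, solve_1d_transport):
        """density, temperature: arrays of shape (n_theta, n_r); each row is one
        radial ray, solved as its own spherically symmetric transport problem."""
        n_theta = density.shape[0]
        return np.array([solve_1d_transport(density[j], temperature[j])
                         for j in range(n_theta)])

    def solve_1d_transport(rho, T):
        # Toy stand-in returning a single fake net-heating number per ray.
        return float((T ** 2 / (1.0 + rho)).sum())

    rho = np.random.rand(128, 400)          # 128 angular zones, 400 radial zones (illustrative)
    T = np.random.rand(128, 400)
    q = ray_by_ray_step(rho, T, solve_1d_transport)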

    N-body Models of Rotating Globular Clusters

    We have studied the dynamical evolution of rotating globular clusters with direct N-body models. Our initial models are rotating King models; we obtained results for both equal-mass systems and systems composed of two mass components. Previous investigations using a Fokker-Planck solver have revealed that rotation has a noticeable influence on stellar systems like globular clusters, which evolve by two-body relaxation. In particular, it accelerates their dynamical evolution through the gravogyro instability. We have validated the occurrence of the gravogyro instability with direct N-body models. In systems composed of two mass components, mass segregation takes place, which competes with the rotation in the acceleration of the core collapse. The "accelerating" effect of rotation has not been detected in our isolated two-mass N-body models. Last, but not least, we have looked at rotating N-body models in a tidal field within the tidal approximation. It turns out that rotation increases the escape rate significantly. A difference between retrograde and prograde rotating star clusters occurs with respect to the orbit of the star cluster around the Galaxy, which is due to the presence of a "third integral" and chaotic scattering, respectively. Comment: 16 pages, 17 figures, accepted by MNRAS
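
    For contrast with the Fokker-Planck approach mentioned above, a "direct" N-body model sums all pairwise gravitational forces explicitly. The sketch below is generic Python/NumPy with Plummer softening and a kick-drift-kick leapfrog, none of which are claimed to be the paper's production choices; it sets up a toy two-mass-component cluster with solid-body rotation and advances it a few steps.

    import numpy as np

    def accelerations(pos, mass, eps=1e-3):
        """Direct-summation gravitational accelerations with Plummer softening (G = 1)."""
        dx = pos[None, :, :] - pos[:, None, :]              # (N, N, 3) separations
        inv_r3 = ((dx ** 2).sum(-1) + eps ** 2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                       # no self-force
        return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

    def leapfrog(pos, vel, mass, dt, n_steps):
        """Kick-drift-kick leapfrog integration."""
        acc = accelerations(pos, mass)
        for _ in range(n_steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc
        return pos, vel

    # Toy two-mass-component cluster with rigid rotation about the z-axis.
    N = 1000
    pos = np.random.normal(0.0, 1.0, (N, 3))
    mass = np.where(np.random.rand(N) < 0.1, 10.0 / N, 1.0 / N)   # heavy and light stars
    vel = np.cross(np.array([0.0, 0.0, 0.2]), pos)                # solid-body rotation
    pos, vel = leapfrog(pos, vel, mass, dt=0.01, n_steps=10)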