
    The Convergence of Difference Boxes

    We consider an elementary mathematical puzzle known as a difference box in terms of a discrete map from R^4 to R^4 or, canonically, from a subset of the first quadrant of R^2 into itself. We find the map's unique canonical fixed point and answer the general question of how many iterations a given difference box takes to reach zero.
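
    The iteration itself is simple to state: each corner of the box is replaced by the absolute difference of two adjacent corners, and the question is how many steps it takes to reach the all-zero box. The sketch below is a minimal Python illustration of that iteration under the standard difference-box (Ducci) rule; the function names and the step cap are illustrative choices, not taken from the paper.

```python
def difference_box_step(corners):
    """One step of the difference-box (Ducci) map on a 4-tuple:
    each new entry is the absolute difference of two adjacent corners."""
    a, b, c, d = corners
    return (abs(a - b), abs(b - c), abs(c - d), abs(d - a))

def iterations_to_zero(corners, max_steps=1000):
    """Count how many iterations the map takes to reach (0, 0, 0, 0)."""
    steps = 0
    while any(corners) and steps < max_steps:
        corners = difference_box_step(corners)
        steps += 1
    return steps

# Example: the box with corners 1, 5, 7, 11 collapses to zero after a few steps.
print(iterations_to_zero((1, 5, 7, 11)))
```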

    Measuring the three-dimensional shear from simulation data, with applications to weak gravitational lensing

    We have developed a new three-dimensional algorithm, based on the standard P^3M method, for computing deflections due to weak gravitational lensing. We compare the results of this method with those of the two-dimensional planar approach, and rigorously outline the conditions under which the two approaches are equivalent. Our new algorithm uses a Fast Fourier Transform convolution method for speed, and has a variable softening feature to provide a realistic interpretation of the large-scale structure in a simulation. The output values of the code are compared with those from the Ewald summation method, which we describe and develop in detail. With an optimal choice of the high-frequency filtering in the Fourier convolution, the maximum errors, when using only a single particle, are about 7 per cent, with an rms error less than 2 per cent. For ensembles of particles, used in typical N-body simulations, the rms errors are typically 0.3 per cent. We describe how the output from the algorithm can be used to generate distributions of magnification, source ellipticity, shear and convergence for large-scale structure. Comment: 22 pages, LaTeX, 11 figures.
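
    As a rough illustration of the Fourier-convolution step, the sketch below solves a softened Poisson equation for a projected density on a periodic grid and differentiates it to obtain a deflection field. It is a schematic two-dimensional analogue in Python/NumPy, not the paper's three-dimensional P^3M code; the Gaussian factor stands in for the high-frequency filter, and the grid size, box size and softening length are arbitrary choices.

```python
import numpy as np

def deflection_from_density(kappa, box_size, softening):
    """Schematic FFT convolution: solve grad^2 psi = 2*kappa on a periodic
    2D grid and return the deflection field alpha = grad psi.
    A Gaussian factor exp(-k^2 s^2 / 2) stands in for the high-frequency
    (softening) filter discussed in the abstract."""
    n = kappa.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid division by zero; zero mode removed below
    kappa_k = np.fft.fft2(kappa)
    psi_k = -2.0 * kappa_k / k2 * np.exp(-0.5 * k2 * softening**2)
    psi_k[0, 0] = 0.0                   # drop the uniform (zero-k) mode
    alpha_x = np.real(np.fft.ifft2(1j * kx * psi_k))
    alpha_y = np.real(np.fft.ifft2(1j * ky * psi_k))
    return alpha_x, alpha_y

# Example: deflections around a single "particle" (one filled cell).
kappa = np.zeros((128, 128))
kappa[64, 64] = 1.0
ax, ay = deflection_from_density(kappa, box_size=100.0, softening=2.0)
```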

    The Effect of Large-Scale Power on Simulated Spectra of the Lya forest

    We study the effects of box size on ENZO simulations of the intergalactic medium (IGM) at z = 2. We follow statistics of the cold dark matter (CDM) and the Lya absorption. We find that the larger boxes have fewer pixels with significant absorption (flux < 0.96) and more pixels in longer stretches with little or no absorption, and they have wider Lya lines. We trace these effects back to the additional power in larger boxes from longer-wavelength modes. The IGM in our larger boxes is hotter, from increased pressure heating due to faster hydrodynamical infall. When we increase the photoheating in smaller boxes to compensate, their Lya statistics change to mimic those of a box of twice the size. Statistics converge towards their values in the largest (76.8 Mpc) box, except for the most common value of the CDM density, which continues to rise. When we compare with the errors on the data, we find that our 76.8 Mpc box is larger than we need for the mean flux, barely large enough for the column density distribution and the power spectrum of the flux, and too small for the line widths. This box with 75 kpc cells has approximately the same mean flux as QSO spectra, but the Lya lines are too wide by 2.6 km/s, there are too few lines with H I column densities > 10^17 cm^-2, and the power of the flux is too low by 20-50%, from small to large scales. A four times smaller cell size does not resolve these differences, nor do simple changes to the ultraviolet background that drives the H and He II ionization. It is hard to see how simulations using popular cosmological and astrophysical parameters can match Lyman-alpha forest data at z = 2.
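
    The statistics referred to above (the mean flux, the fraction of pixels with significant absorption, and the power spectrum of the flux) can be illustrated with a short NumPy sketch. Everything here is a toy stand-in: the 0.96 flux threshold follows the abstract, but the flux-contrast definition, the normalisation and the fake spectrum are assumptions, not the paper's pipeline.

```python
import numpy as np

def flux_statistics(flux, pixel_km_s):
    """Toy versions of three Lya forest statistics for one simulated
    spectrum: the mean flux, the fraction of pixels with significant
    absorption (flux < 0.96), and the 1D power spectrum of the flux
    contrast delta_F = F/<F> - 1."""
    mean_flux = flux.mean()
    absorbed_fraction = np.mean(flux < 0.96)
    delta = flux / mean_flux - 1.0
    n = flux.size
    power = np.abs(np.fft.rfft(delta))**2 * pixel_km_s / n
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=pixel_km_s)   # wavenumber in s/km
    return mean_flux, absorbed_fraction, k, power

# Example with a fake, mostly transmissive spectrum.
rng = np.random.default_rng(0)
fake_flux = np.clip(1.0 - 0.1 * rng.random(4096), 0.0, 1.0)
print(flux_statistics(fake_flux, pixel_km_s=7.5)[:2])
```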

    GRChombo: Numerical Relativity with Adaptive Mesh Refinement

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial "many-boxes-in-many-boxes" mesh hierarchies and massive parallelism through the Message Passing Interface (MPI). GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3+1 setting, whilst also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique. Comment: 48 pages, 24 figures.
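
    To make the "many-boxes-in-many-boxes" idea concrete, the sketch below defines a toy block-structured hierarchy: each refinement level holds an arbitrary collection of boxes and a cell spacing refined relative to the level below it. This is an illustrative Python data structure only, not GRChombo's Chombo-based implementation, and the box coordinates and refinement ratio in the example are made up.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Box:
    """An axis-aligned block of cells, given by inclusive low/high corner
    indices in the index space of its own refinement level."""
    lo: Tuple[int, int, int]
    hi: Tuple[int, int, int]

@dataclass
class Level:
    """One refinement level: its cell spacing and a list of disjoint boxes."""
    dx: float
    boxes: List[Box] = field(default_factory=list)

def build_hierarchy(base_dx, refinement_ratio, boxes_per_level):
    """Assemble a toy 'many-boxes-in-many-boxes' hierarchy: each finer level
    refines the cell spacing and may contain several boxes nested anywhere
    inside the coarser level's boxes."""
    levels, dx = [], base_dx
    for level_boxes in boxes_per_level:
        levels.append(Level(dx=dx, boxes=[Box(lo, hi) for lo, hi in level_boxes]))
        dx /= refinement_ratio
    return levels

# Example: a coarse 64^3 domain plus two fine boxes around two "black holes".
hierarchy = build_hierarchy(
    base_dx=1.0, refinement_ratio=2,
    boxes_per_level=[
        [((0, 0, 0), (63, 63, 63))],                       # level 0: whole domain
        [((40, 40, 56), (71, 71, 87)),                     # level 1: around hole 1
         ((88, 88, 56), (119, 119, 87))],                  # level 1: around hole 2
    ])
```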

    A New Algorithm for Computing Statistics of Weak Lensing by Large-Scale Structure

    We describe an efficient algorithm for calculating the statistics of weak lensing by large-scale structure based on a tiled set of independent particle-mesh N-body simulations which telescope in resolution along the line of sight. This efficiency allows us to predict not only the mean properties of lensing observables such as the power spectrum, skewness and kurtosis of the convergence, but also their sampling errors for finite fields of view, which are themselves crucial for assessing the cosmological significance of observations. We find that the non-Gaussianity of the distribution substantially increases the sampling errors for the skewness and kurtosis in the several to tens of arcminutes regime, whereas those for the power spectrum are only fractionally increased even out to wavenumbers where shot noise from the intrinsic ellipticities of the galaxies will likely dominate the errors. Comment: 12 pages, 13 figures; minor changes reflect accepted version.
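
    The sampling errors discussed above can be estimated in the simplest possible way by measuring each statistic on every simulated field of view and taking the scatter across the ensemble. The NumPy sketch below does this for the skewness and kurtosis of the convergence; the Gaussian fake maps and the moment definitions are illustrative assumptions, not the paper's tiled particle-mesh machinery.

```python
import numpy as np

def skew_and_kurtosis(kappa_map):
    """Sample skewness and excess kurtosis of one convergence map."""
    x = kappa_map.ravel() - kappa_map.mean()
    var = np.mean(x**2)
    return np.mean(x**3) / var**1.5, np.mean(x**4) / var**2 - 3.0

def ensemble_statistics(maps):
    """Mean and scatter of the skewness and kurtosis over an ensemble of
    convergence maps; the scatter is a simple sampling-error estimate."""
    s, k = np.array([skew_and_kurtosis(m) for m in maps]).T
    return (s.mean(), s.std(ddof=1)), (k.mean(), k.std(ddof=1))

# Example with Gaussian fake maps: both statistics scatter about zero.
rng = np.random.default_rng(1)
fake_maps = rng.normal(size=(50, 64, 64))
print(ensemble_statistics(fake_maps))
```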

    Binary black holes on a budget: Simulations using workstations

    Binary black hole simulations have traditionally been computationally very expensive: current simulations are performed on supercomputers involving dozens if not hundreds of processors; thus, systematic studies of the parameter space of binary black hole encounters still seem prohibitive with current technology. Here we show how the multi-layered refinement level code BAM can be used on dual-processor workstations to simulate certain binary black hole systems. BAM, based on the moving punctures method, provides grid structures composed of boxes of increasing resolution near the center of the grid. In the case of binaries, the highest resolution boxes are placed around each black hole and track them in their orbits until the final merger, when a single set of levels surrounds the black hole remnant. This is particularly useful when simulating spinning black holes, since the gravitational field gradients are larger. We present simulations of binaries with equal-mass black holes with spins parallel to the binary axis and intrinsic magnitude of S/m^2 = 0.75. Our results compare favorably to those of previous simulations of this particular system. We show that the moving punctures method produces stable simulations at maximum spatial resolutions up to M/160 and for durations of up to the equivalent of 20 orbital periods. Comment: 20 pages, 8 figures. Final version, to appear in a special issue of Class. Quantum Grav. based on the New Frontiers in Numerical Relativity Conference, Golm, July 200
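
    The box-placement logic described above can be illustrated in a few lines of Python: before merger the finest boxes are centred on each puncture and follow it, and once the punctures are close enough a single set of nested boxes surrounds the remnant. The merger threshold, the doubling of box widths between levels and all names here are assumptions for illustration, not BAM's actual grid setup.

```python
import numpy as np

def refinement_centres(puncture_1, puncture_2, merge_distance):
    """Toy version of the moving-box logic: before merger the finest boxes
    are centred on each puncture and follow its orbit; once the punctures
    come within `merge_distance` a single set of boxes surrounds the
    remnant.  The merger criterion is illustrative only."""
    p1, p2 = np.asarray(puncture_1), np.asarray(puncture_2)
    if np.linalg.norm(p1 - p2) > merge_distance:
        return [p1, p2]              # two towers of nested boxes
    return [0.5 * (p1 + p2)]         # one tower around the remnant

def box_half_widths(finest_half_width, n_levels):
    """Half-widths of the nested boxes, doubling from the finest level outward."""
    return [finest_half_width * 2**l for l in range(n_levels)]

# Example: early in the inspiral the holes are well separated.
print(refinement_centres((3.0, 0.0, 0.0), (-3.0, 0.0, 0.0), merge_distance=0.5))
print(box_half_widths(finest_half_width=0.75, n_levels=6))
```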

    Nonequilibrium dynamics of a simple stochastic model

    We investigate the low-temperature dynamics of a simple stochastic model, introduced recently in the context of the physics of glasses. The slowest characteristic time at equilibrium diverges exponentially at low temperature. On smaller time scales, the nonequilibrium dynamics of the system exhibits an aging regime. We present an analytical study of the scaling behaviour of the mean energy, of its local correlation and response functions, and of the associated fluctuation-dissipation ratio throughout the regime of low temperature and long times. This analysis includes the aging regime, the convergence to equilibrium, and the crossover behaviour between them. Comment: 36 pages, plain TeX, 7 figures, to be published by Journal of Physics
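
    The quantities studied analytically above (the mean energy and two-time correlations after a quench to low temperature) are the kind of observables one would also record in a direct simulation. The sketch below measures them for a 1D Glauber Ising chain, used here purely as an illustrative stand-in; it is not the specific stochastic model analysed in the paper, and the temperature, system size and times are arbitrary.

```python
import numpy as np

def glauber_ising_chain(n, temperature, t_wait, t_total, seed=0):
    """Toy aging measurement on a 1D Glauber Ising chain quenched to low
    temperature: record the mean energy per spin and the two-time spin
    autocorrelation C(t_wait + t, t_wait)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=n)           # random (infinite-T) initial state
    reference = None
    energies, correlations = [], []
    for t in range(t_total):
        if t == t_wait:
            reference = spins.copy()
        # one Monte Carlo sweep of single-spin Glauber (heat-bath) updates
        for _ in range(n):
            i = rng.integers(n)
            dE = 2.0 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
            if rng.random() < 1.0 / (1.0 + np.exp(dE / temperature)):
                spins[i] = -spins[i]
        energies.append(-np.mean(spins * np.roll(spins, 1)))
        if reference is not None:
            correlations.append(np.mean(spins * reference))
    return np.array(energies), np.array(correlations)

# Example: slow relaxation at T = 0.5 after a deep quench.
E, C = glauber_ising_chain(n=200, temperature=0.5, t_wait=50, t_total=200)
```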