Magnetogenesis from Cosmic String Loops
Large-scale coherent magnetic fields are observed in galaxies and clusters,
but their ultimate origin remains a mystery. We reconsider the prospects for
primordial magnetogenesis by a cosmic string network. We show that the magnetic
flux produced by long strings has been overestimated in the past, and give
improved estimates. We also compute the fields created by the loop population,
and find that it gives the dominant contribution to the total magnetic field
strength on present-day galactic scales. We present numerical results obtained
by evolving semi-analytic models of string networks (including both one-scale
and velocity-dependent one-scale models) in a Lambda-CDM cosmology, including
the forces and torques on loops from Hubble redshifting, dynamical friction,
and gravitational wave emission. Our predictions include the magnetic field
strength as a function of correlation length, as well as the volume covered by
magnetic fields. We conclude that string networks could account for magnetic
fields on galactic scales, but only if coupled with an efficient dynamo
amplification mechanism.
Comment: 10 figures; v3: small typos corrected to match published version.
MagnetiCS, the code described in paper, is available at
http://markcwyman.com/ and
http://www.damtp.cam.ac.uk/user/dhw22/code/index.htm
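The semi-analytic loop evolution sketched in the abstract can be illustrated with a toy integration. The sketch below, which is not the MagnetiCS code, shows two of the named ingredients: loop length shrinking by gravitational-wave emission at the standard rate dl/dt = -Gamma*G*mu, and peculiar velocity damped by Hubble redshifting, dv/dt = -H*v. The values of GAMMA, G_MU, and the matter-era Hubble rate are illustrative assumptions, not the paper's parameters, and dynamical friction is omitted for brevity.

```python
# Toy sketch of the semi-analytic loop evolution: GW emission shrinks the
# loop, Hubble expansion redshifts its peculiar velocity. Parameter values
# are illustrative assumptions, not taken from the paper.

GAMMA = 50.0   # dimensionless GW emission efficiency (typical literature value)
G_MU = 1e-7    # string tension G*mu (illustrative)

def evolve_loop(l0, v0, hubble, t_end, dt=1e-3):
    """Euler-integrate loop length l and peculiar velocity v up to t_end."""
    l, v, t = l0, v0, 0.0
    while t < t_end and l > 0.0:
        l += -GAMMA * G_MU * dt     # length lost to gravitational waves
        v += -hubble(t) * v * dt    # Hubble damping of peculiar velocity
        t += dt
    return max(l, 0.0), v

# matter-era Hubble rate H = 2/(3t), offset to avoid the t = 0 singularity
H = lambda t: 2.0 / (3.0 * (t + 1.0))
l_f, v_f = evolve_loop(l0=1.0, v0=0.5, hubble=H, t_end=10.0)
```

In this toy model the loop shrinks only slightly over the integration time while its velocity is strongly redshifted away, which is the qualitative behaviour the network models rely on.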
Unstable coronal loops: numerical simulations with predicted observational signatures
We present numerical studies of the nonlinear, resistive magnetohydrodynamic
(MHD) evolution of coronal loops. For these simulations we assume that the
loops carry no net current, as might be expected if the loop had evolved due to
vortex flows. Furthermore the initial equilibrium is taken to be a cylindrical
flux tube with line-tied ends. For a given amount of twist in the magnetic
field it is well known that once such a loop exceeds a critical length it
becomes unstable to ideal MHD instabilities. The early evolution of these
instabilities generates large current concentrations. Firstly we show that
these current concentrations are consistent with the formation of a current
sheet. Magnetic reconnection can only occur in the vicinity of these current
concentrations and we therefore couple the resistivity to the local current
density. This has the advantage of avoiding resistive diffusion in regions
where it should be negligible. We demonstrate the importance of this procedure
by comparison with simulations based on a uniform resistivity. From our
numerical experiments we are able to estimate some observational signatures for
unstable coronal loops. These signatures include: the timescale of the loop
brightening; the temperature increase; the energy released and the predicted
observable flow speeds. Finally we discuss to what extent these observational
signatures are consistent with the properties of transient brightening loops.
Comment: 13 pages, 9 figures
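The current-coupled resistivity described in the abstract can be sketched in a few lines: resistivity switches on only where the local current density exceeds a critical threshold, so Ohmic diffusion stays confined to the current concentrations. This is a minimal illustration of the idea, assuming a simple step-function form; the threshold `j_crit` and magnitude `eta0` are placeholder values, not the paper's.

```python
# Minimal sketch of current-dependent (anomalous) resistivity: eta is
# nonzero only where |j| exceeds a critical current density, confining
# resistive diffusion to current concentrations. Parameters are illustrative.
import numpy as np

def anomalous_resistivity(j, j_crit=1.0, eta0=1e-3):
    """Return eta = eta0 where |j| > j_crit, and zero elsewhere."""
    j = np.asarray(j, dtype=float)
    return np.where(np.abs(j) > j_crit, eta0, 0.0)

j = np.array([0.1, 0.5, 2.0, 5.0])   # sample current densities along the loop
eta = anomalous_resistivity(j)
ohmic_heating = eta * j**2           # local Ohmic dissipation eta * j^2
```

Only the two cells above threshold acquire resistivity and heating; in a uniform-resistivity run every cell would diffuse, which is the spurious behaviour the paper's procedure avoids.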
Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code
This paper introduces Tiramisu, a polyhedral framework designed to generate
high performance code for multiple platforms including multicores, GPUs, and
distributed machines. Tiramisu introduces a scheduling language with novel
extensions to explicitly manage the complexities that arise when targeting
these systems. The framework is designed for the areas of image processing,
stencils, linear algebra and deep learning. Tiramisu has two main features: it
relies on a flexible representation based on the polyhedral model and it has a
rich scheduling language allowing fine-grained control of optimizations.
Tiramisu uses a four-level intermediate representation that allows full
separation between the algorithms, loop transformations, data layouts, and
communication. This separation simplifies targeting multiple hardware
architectures with the same algorithm. We evaluate Tiramisu by writing a set of
image processing, deep learning, and linear algebra benchmarks and compare them
with state-of-the-art compilers and hand-tuned libraries. We show that Tiramisu
matches or outperforms existing compilers and libraries on different hardware
architectures, including multicore CPUs, GPUs, and distributed machines.
Comment: arXiv admin note: substantial text overlap with arXiv:1803.0041
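The algorithm/schedule separation central to Tiramisu can be conveyed with a toy example. The code below is not Tiramisu's actual API; it is a hypothetical Python illustration of the principle that the same algorithm (what to compute) can be executed under different schedules (loop structures), here a naive loop and a tiled loop, with identical results.

```python
# Toy illustration (not Tiramisu's API) of separating the algorithm from
# its schedule: the per-element computation is fixed, while two different
# loop structures execute it and must agree on the output.

def algorithm(x):
    return x * 2 + 1                 # what to compute, per element

def schedule_naive(data):
    """Plain single loop over the data."""
    return [algorithm(x) for x in data]

def schedule_tiled(data, tile=4):
    """Same computation, reordered into tiles of size `tile`."""
    out = [0] * len(data)
    for t0 in range((0), len(data), tile):            # outer loop over tiles
        for i in range(t0, min(t0 + tile, len(data))):
            out[i] = algorithm(data[i])               # same algorithm, new order
    return out

data = list(range(10))
assert schedule_naive(data) == schedule_tiled(data)   # schedules agree
```

Because the schedule never touches the algorithm's definition, swapping loop structures (tiling, parallelizing, distributing) cannot change the result, which is what makes retargeting the same algorithm to multiple architectures safe.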
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.
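The copy-reuse mechanism described in the abstract can be sketched as a small cache. This is an illustrative model, not the paper's implementation: a per-location version counter stands in for the compiler's verification that off-processor values remain unmodified, and a cached copy is reused only while its version still matches.

```python
# Sketch (illustrative, not the paper's implementation) of off-processor
# copy reuse: cache fetched remote values and reuse them only while the
# remote location is verified unmodified (modelled by a version counter).

class OffProcessorCache:
    def __init__(self, remote):
        self.remote = remote            # loc -> value ("other processor")
        self.remote_version = {}        # loc -> modification counter
        self.cache = {}                 # loc -> (version_seen, value)
        self.fetches = 0                # count of actual remote fetches

    def write_remote(self, loc, value):
        """A write to a remote location invalidates any stored copy."""
        self.remote[loc] = value
        self.remote_version[loc] = self.remote_version.get(loc, 0) + 1

    def read(self, loc):
        cur = self.remote_version.get(loc, 0)
        if loc in self.cache and self.cache[loc][0] == cur:
            return self.cache[loc][1]   # verified unmodified: reuse the copy
        self.fetches += 1               # otherwise fetch and store a copy
        self.cache[loc] = (cur, self.remote[loc])
        return self.remote[loc]

# two loops touching the same off-processor locations: the second reuses copies
c = OffProcessorCache({"a": 1, "b": 2})
for _ in range(2):
    for loc in ("a", "b"):
        c.read(loc)
assert c.fetches == 2                   # only the first loop actually fetched
c.write_remote("a", 5)                  # modification defeats reuse for "a"
assert c.read("a") == 5 and c.fetches == 3
```

The payoff is the same as in the paper's setting: when several loops read the same off-processor locations and no intervening writes occur, communication happens once rather than once per loop.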