6,289 research outputs found
IllinoisGRMHD: An Open-Source, User-Friendly GRMHD Code for Dynamical Spacetimes
In the extreme violence of merger and mass accretion, compact objects like
black holes and neutron stars are thought to launch some of the most luminous
outbursts of electromagnetic and gravitational wave energy in the Universe.
Modeling these systems realistically is a central problem in theoretical
astrophysics, but has proven extremely challenging, requiring the development
of numerical relativity codes that solve Einstein's equations for the
spacetime, coupled to the equations of general relativistic (ideal)
magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade,
the Illinois Numerical Relativity (ILNR) Group's dynamical spacetime GRMHD code
has proven itself as a robust and reliable tool for theoretical modeling of
such GRMHD phenomena. However, the code was written "by experts, for
experts," with a steep learning curve that would severely hinder
community adoption if it were open-sourced. Here we present IllinoisGRMHD,
which is an open-source, highly-extensible rewrite of the original
closed-source GRMHD code of the ILNR Group. Reducing the learning curve was the
primary focus of this rewrite, with the goal of facilitating community
involvement in the code's use and development, as well as minimizing the
human effort needed to generate new science. IllinoisGRMHD also saves
computer time, generating roundoff-precision identical output to the
original code on adaptive-mesh grids while running nearly twice as fast at
scales of hundreds to thousands of cores.
Comment: 37 pages, 6 figures, single column. Matches published version.
A finite element method with mesh adaptivity for computing vortex states in fast-rotating Bose-Einstein condensates
Numerical computations of stationary states of fast-rotating Bose-Einstein
condensates require high spatial resolution due to the presence of a large
number of quantized vortices. In this paper we propose a low-order finite
element method with mesh adaptivity by metric control, as an alternative
approach to the commonly used high order (finite difference or spectral)
approximation methods. The mesh adaptivity is used with two different numerical
algorithms to compute stationary vortex states: an imaginary time propagation
method and a Sobolev gradient descent method. We first address the basic issue
of the choice of the variable used to compute new metrics for the mesh
adaptivity and show that simultaneous refinement using the real and
imaginary parts of the solution is successful, whereas refinement using
only the modulus of the solution fails for complicated test cases. Then we
suggest an optimized algorithm for adapting the mesh during the evolution of
the solution towards the equilibrium state. Considerable computational time
saving is obtained compared to uniform mesh computations. The new method is
applied to compute difficult cases relevant for physical experiments (large
nonlinear interaction constant and high rotation rates).
Comment: to appear in J. Computational Physics.
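The imaginary time propagation method mentioned above can be sketched in its simplest setting: a normalized gradient flow for the 1D Gross-Pitaevskii equation on a uniform finite-difference grid (a toy stand-in for the paper's adaptive finite elements; the harmonic trap, grid sizes and step counts here are illustrative assumptions).

```python
import numpy as np

def gpe_ground_state(n=256, L=16.0, g=100.0, dt=1e-3, steps=5000):
    """Normalized gradient flow (imaginary-time propagation) for the 1D
    Gross-Pitaevskii equation -0.5*psi'' + (V + g*|psi|^2)*psi = mu*psi
    with harmonic trap V = x^2/2. Toy uniform-grid illustration only."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    V = 0.5 * x**2
    psi = np.exp(-x**2 / 2)                      # Gaussian initial guess
    psi /= np.sqrt(np.sum(psi**2) * dx)          # unit L2 norm
    for _ in range(steps):
        lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
        psi = psi - dt * (-0.5 * lap + (V + g * psi**2) * psi)
        psi /= np.sqrt(np.sum(psi**2) * dx)      # renormalize each step
    # chemical potential mu = <psi| H |psi> at the converged state
    lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
    mu = np.sum(psi * (-0.5 * lap + (V + g * psi**2) * psi)) * dx
    return x, psi, mu
```

With g = 0 the flow converges to the harmonic-oscillator ground state (mu near 0.5), while a repulsive g > 0 broadens the cloud and raises mu. Rotation and vortices require a 2D complex wave function and an angular-momentum term, which is exactly where the paper's mesh adaptivity pays off.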
A new code for orbit analysis and Schwarzschild modelling of triaxial stellar systems
We review the methods used to study the orbital structure and chaotic
properties of various galactic models and to construct self-consistent
equilibrium solutions by Schwarzschild's orbit superposition technique. These
methods are implemented in a new publicly available software tool, SMILE, which
is intended to be a convenient and interactive instrument for studying a
variety of 2D and 3D models, including arbitrary potentials represented by a
basis-set expansion, a spherical-harmonic expansion with coefficients being
smooth functions of radius (splines), or a set of fixed point masses. We also
propose two new variants of Schwarzschild modelling, in which the density of
each orbit is represented by the coefficients of the basis-set or spline
spherical-harmonic expansion, and the orbit weights are assigned in such a way
as to reproduce the coefficients of the underlying density model. We explore
the accuracy of these general-purpose potential expansions and show that they
may be efficiently used to approximate a wide range of analytic density models
and serve as smooth representations of discrete particle sets (e.g. snapshots
from an N-body simulation), for instance, for the purpose of orbit analysis of
the snapshot. For the variants of Schwarzschild modelling, we use two test
cases - a triaxial Dehnen model containing a central black hole, and a model
re-created from an N-body snapshot obtained by a cold collapse. These tests
demonstrate that all modelling approaches are capable of creating equilibrium
models.
Comment: MNRAS, 24 pages, 18 figures. Software is available at
http://td.lpi.ru/~eugvas/smile
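The weight-assignment step in the two Schwarzschild variants reduces, in its simplest form, to a non-negative least-squares problem: find orbit weights w >= 0 whose weighted sum of per-orbit density coefficients reproduces the target expansion coefficients. A minimal sketch with synthetic, randomly generated coefficients (purely illustrative, not SMILE's actual data layout):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_coef, n_orbits = 10, 50       # expansion coefficients, orbit library size
# A[i, j]: contribution of orbit j to density coefficient i (synthetic)
A = rng.random((n_coef, n_orbits))
b = A @ rng.random(n_orbits)    # target coefficients of the density model

# non-negative least squares: minimize ||A w - b||_2 subject to w >= 0
w, residual = nnls(A, b)
```

With more orbits than coefficients the fit is typically exact (residual near zero); a real implementation adds kinematic constraints and regularization terms to the same linear system.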
Optimisation of Mobile Communication Networks - OMCO NET
The mini conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by Research & Enterprise Fund Southampton Solent University.
The conference aims to broaden knowledge of advanced search methods capable of optimising wireless communication networks and to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. It will popularise successful new approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.
Difficulties with Recovering The Masses of Supermassive Black Holes from Stellar Kinematical Data
We investigate the ability of three-integral, axisymmetric, orbit-based
modeling algorithms to recover the parameters defining the gravitational
potential (M/L ratio and black hole mass Mh) in spheroidal stellar systems
using stellar kinematical data. We show that the potential estimation problem
is generically under-determined when applied to long-slit kinematical data of
the kind used in most black hole mass determinations to date. A range of
parameters (M/L, Mh) can provide equally good fits to the data, making it
impossible to assign best-fit values. We illustrate the indeterminacy using a
variety of data sets derived from realistic models as well as published
observations of the galaxy M32. In the case of M32, our reanalysis demonstrates
that data published prior to 2000 are equally consistent with Mh in the range
1.5x10^6-5x10^6 solar masses, with no preferred value in that range. While the
HST/STIS data for this galaxy may overcome the degeneracy in Mh, HST data for
most galaxies do not resolve the black hole's sphere of influence and in these
galaxies the degree of degeneracy allowed by the data may be substantial. We
investigate the effect of enforcing smoothness (regularization) constraints
on the degeneracy, but find no indication that the true potential can be
recovered simply by enforcing smoothness. For a given
smoothing level, all solutions in the minimum-chisquare valley exhibit similar
levels of noise. These experiments affirm that the indeterminacy is real and
not an artifact associated with non-smooth solutions. (Abridged)
Comment: Accepted for publication in The Astrophysical Journal. Changes
include discussion of regularization.
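The degeneracy described above can be illustrated with a deliberately crude toy model (invented here for illustration, not the paper's three-integral machinery): if the data constrain circular speeds only at radii well outside the black hole's sphere of influence, a change in Mh is almost entirely absorbed by a compensating change in M/L, leaving chi-square nearly unchanged.

```python
import numpy as np

# Toy rotation-curve degeneracy: v^2(r) = (M/L)*Lenc(r)/r + Mh/r,
# with enclosed light Lenc(r) = r/(1+r) and G = 1 (all values invented).
r = np.linspace(0.5, 2.0, 10)       # radii outside the BH sphere of influence
f = (r / (1.0 + r)) / r             # d(v^2)/d(M/L): stellar term
g = 1.0 / r                         # d(v^2)/d(Mh): Keplerian BH term
ml_true, mh_true, sigma = 2.0, 0.05, 0.05
v2_obs = ml_true * f + mh_true * g  # noise-free mock data

def chi2(ml, mh):
    return np.sum(((ml * f + mh * g - v2_obs) / sigma) ** 2)

# Best-fit M/L with the black hole removed entirely (Mh = 0):
ml_no_bh = np.sum(f * v2_obs) / np.sum(f * f)
delta_chi2 = chi2(ml_no_bh, 0.0) - chi2(ml_true, mh_true)
```

In this setup delta_chi2 comes out below 1: dropping the black hole altogether is statistically indistinguishable from the true model at the 1-sigma level, so these data alone cannot pin down Mh.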
SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics
We introduce a new relativistic astrophysics code, SpECTRE, that combines a
discontinuous Galerkin method with a task-based parallelism model. SpECTRE's
goal is to achieve more accurate solutions for challenging relativistic
astrophysics problems such as core-collapse supernovae and binary neutron star
mergers. The robustness of the discontinuous Galerkin method allows for the use
of high-resolution shock capturing methods in regions where (relativistic)
shocks are found, while exploiting high-order accuracy in smooth regions. A
task-based parallelism model allows efficient use of the largest supercomputers
for problems with a heterogeneous workload over disparate spatial and temporal
scales. We argue that the locality and algorithmic structure of discontinuous
Galerkin methods will exhibit good scalability within a task-based parallelism
framework. We demonstrate the code on a wide variety of challenging
benchmark problems in (non-)relativistic (magneto-)hydrodynamics, and we
demonstrate its scalability, including strong scaling on the NCSA Blue
Waters supercomputer up to the machine's full capacity of 22,380 nodes
using 671,400 threads.
Comment: 41 pages, 13 figures, and 7 tables. Ancillary data contains
simulation input files.
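A discontinuous Galerkin scheme in its simplest setting, 1D linear advection with piecewise-linear (P1) Legendre modes and an upwind flux, already shows the locality the abstract appeals to: each element's update needs only its face neighbours. A minimal sketch (grid size, CFL number and test profile are illustrative choices, not SpECTRE's):

```python
import numpy as np

def dg_advect(n_elem=50, t_end=1.0, cfl=0.05):
    """Minimal 1D discontinuous Galerkin solver for u_t + u_x = 0 on [0, 1],
    periodic boundaries: modal P1 Legendre basis {1, xi} per element,
    upwind flux at element faces, SSP-RK2 (Heun) time stepping."""
    h = 1.0 / n_elem
    xc = (np.arange(n_elem) + 0.5) * h                 # element centres
    u0 = np.sin(2 * np.pi * xc)                        # mode 0: cell mean
    u1 = (h / 2) * 2 * np.pi * np.cos(2 * np.pi * xc)  # mode 1: slope
    dt = cfl * h
    n_steps = int(round(t_end / dt))

    def rhs(u0, u1):
        f_right = u0 + u1             # upwind trace at each right face
        f_left = np.roll(f_right, 1)  # flux arriving from the left neighbour
        d0 = (f_left - f_right) / h
        d1 = 3.0 * (2.0 * u0 - f_right - f_left) / h
        return d0, d1

    for _ in range(n_steps):          # SSP-RK2
        d0, d1 = rhs(u0, u1)
        a0, a1 = u0 + dt * d0, u1 + dt * d1
        e0, e1 = rhs(a0, a1)
        u0 = 0.5 * (u0 + a0 + dt * e0)
        u1 = 0.5 * (u1 + a1 + dt * e1)
    return xc, u0
```

Because each element communicates only through its face fluxes, the element loop maps naturally onto independent tasks, which is the structural property the task-based parallelism argument rests on.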
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting
big data analytics, efficient parameter estimation and interactive decision
making. Hence, in this article, we review the thirty-year history of ML by
elaborating on supervised learning, unsupervised learning, reinforcement
learning and deep learning. Furthermore, we investigate their employment in
compelling wireless-network applications, including heterogeneous networks
(HetNets), cognitive radio (CR), the Internet of Things (IoT),
machine-to-machine (M2M) networks, and so on. This article aims to help
readers understand the motivation and methodology of the various ML
algorithms, so as to invoke them for hitherto unexplored services and
scenarios of future wireless networks.
Comment: 46 pages, 22 figures.
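As a concrete taste of the reinforcement-learning branch in a wireless setting, consider a stateless Q-learning agent (a multi-armed bandit) that learns which of several channels delivers the best ACK rate; the channel success probabilities and hyperparameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
p_success = np.array([0.2, 0.5, 0.9])  # hypothetical per-channel link quality
q = np.zeros(3)                        # one Q-value per channel (stateless)
eps, lr = 0.1, 0.05                    # exploration rate, learning rate

for _ in range(5000):
    if rng.random() < eps:
        a = int(rng.integers(3))       # explore a random channel
    else:
        a = int(np.argmax(q))          # exploit the best channel so far
    reward = float(rng.random() < p_success[a])  # 1 if the frame is ACKed
    q[a] += lr * (reward - q[a])       # Q-learning update (no next state)

best = int(np.argmax(q))
```

After a few thousand trials the epsilon-greedy agent concentrates on the most reliable channel. Full RL formulations for HetNets or CR add state (interference, queue lengths) and delayed rewards, but the update rule has the same shape.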