Monte-Carlo Simulations of Globular Cluster Evolution - I. Method and Test Calculations
We present a new parallel supercomputer implementation of the Monte-Carlo
method for simulating the dynamical evolution of globular star clusters. Our
method is based on a modified version of Hénon's Monte-Carlo algorithm for
solving the Fokker-Planck equation. Our code allows us to follow the evolution
of a cluster containing up to 5x10^5 stars to core collapse in < 40 hours of
computing time. In this paper we present the results of test calculations for
clusters with equal-mass stars, starting from both Plummer and King model
initial conditions. We consider isolated as well as tidally truncated clusters.
Our results are compared to those obtained from approximate, self-similar
analytic solutions, from direct numerical integrations of the Fokker-Planck
equation, and from direct N-body integrations performed on a GRAPE-4
special-purpose computer with N=16384. In all cases we find excellent agreement
with other methods, establishing our new code as a robust tool for the
numerical study of globular cluster dynamics using a realistic number of stars.
Comment: 35 pages, including 8 figures, submitted to ApJ. Revised version.
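As a purely illustrative sketch of the elementary operation in a Hénon-style
Monte-Carlo step (this is not the paper's code; the function name and the input
sin2_half_beta are ours), a pair of stars has its relative velocity rotated by
an effective deflection angle about a random azimuth, conserving the pair's
momentum and kinetic energy. In the real method sin2_half_beta is computed from
the local density, velocity dispersion, masses and Coulomb logarithm so that
the accumulated perturbations reproduce the Fokker-Planck diffusion
coefficients over the time step.

    import numpy as np

    def henon_encounter(v1, v2, m1, m2, sin2_half_beta, rng):
        # Effective two-body encounter: rotate the relative velocity by the
        # deflection angle beta about a random azimuth psi, keeping |w| fixed
        # (elastic), so the pair's momentum and kinetic energy are conserved.
        mtot = m1 + m2
        vcm = (m1 * v1 + m2 * v2) / mtot
        w = v1 - v2
        wmag = np.linalg.norm(w)

        beta = 2.0 * np.arcsin(np.sqrt(min(sin2_half_beta, 1.0)))
        psi = rng.uniform(0.0, 2.0 * np.pi)

        # Orthonormal basis (e1, e2) perpendicular to the relative velocity.
        e1 = np.cross(w, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(e1) < 1e-12 * wmag:
            e1 = np.cross(w, np.array([0.0, 1.0, 0.0]))
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(w, e1) / wmag

        w_new = (np.cos(beta) * w
                 + np.sin(beta) * wmag * (np.cos(psi) * e1 + np.sin(psi) * e2))

        v1_new = vcm + (m2 / mtot) * w_new
        v2_new = vcm - (m1 / mtot) * w_new
        return v1_new, v2_new

In the full method the pairs are neighbours in radius, positions are resampled
along the orbits between steps, and the deflection angle is drawn so that the
accumulated velocity perturbations reproduce two-body relaxation.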
MYRIAD: A new N-body code for simulations of Star Clusters
We present a new C++ code for collisional N-body simulations of star
clusters. The code uses the Hermite fourth-order scheme with block time steps
to advance the particles in time, while the forces and neighboring particles
are computed using the GRAPE-6 board. Special treatment is used for close
encounters and for binary and multiple sub-systems that either form dynamically
or exist in the initial configuration. The structure of the code is modular and
allows additional physical phenomena, such as stellar and binary evolution,
stellar collisions, and the evolution of close black-hole binaries, to be
treated appropriately. Moreover, it can easily be modified so that the part of
the code that uses GRAPE-6 can be replaced by another module that uses other
accelerating hardware, such as Graphics Processing Units (GPUs). An appropriate
choice of the free parameters gives good accuracy and speed for simulations of
star clusters up to and beyond core collapse. Simulations of Plummer models
consisting of equal-mass stars reached core collapse at t ~ 17 half-mass
relaxation times, which compares very well with existing results, while the
cumulative relative error in the energy remained below 0.001. Comparisons with
published results of other codes for the time of core collapse under different
initial conditions also show excellent agreement. Simulations of King models
with an initial mass function, similar to those found in the literature,
reached core collapse at t ~ 0.17, which is slightly smaller than the result
expected from previous works. Finally, the accuracy of the code becomes
comparable to, and even better than, that of existing codes when a number of
close binary systems are dynamically created in a simulation. This is due to
the high accuracy of the method used for close binary and multiple sub-systems.
Comment: 24 pages, 29 figures, accepted for publication in Astronomy &
Astrophysics.
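For illustration, a minimal shared-timestep sketch of the fourth-order Hermite
predictor-corrector named above, in G = 1 units with direct O(N^2) force
summation. This is not MYRIAD itself: the real code uses individual block time
steps, special treatment of close pairs and GRAPE-6/GPU force evaluation; the
softening parameter eps2 and the function names are assumptions for the sketch.

    import numpy as np

    def acc_jerk(x, v, m, eps2=0.0):
        # Direct-summation acceleration and jerk for all particles (O(N^2)).
        n = len(m)
        a = np.zeros_like(x)
        j = np.zeros_like(x)
        for i in range(n):
            for k in range(n):
                if i == k:
                    continue
                dx = x[k] - x[i]
                dv = v[k] - v[i]
                r2 = dx @ dx + eps2
                r3 = r2 * np.sqrt(r2)
                a[i] += m[k] * dx / r3
                j[i] += m[k] * (dv / r3 - 3.0 * (dx @ dv) * dx / (r2 * r3))
        return a, j

    def hermite_step(x, v, m, dt, eps2=0.0):
        # One shared-timestep fourth-order Hermite predictor-corrector step.
        a0, j0 = acc_jerk(x, v, m, eps2)
        # Predict positions and velocities with a Taylor expansion.
        xp = x + v * dt + a0 * dt**2 / 2.0 + j0 * dt**3 / 6.0
        vp = v + a0 * dt + j0 * dt**2 / 2.0
        # Evaluate force and jerk at the predicted state.
        a1, j1 = acc_jerk(xp, vp, m, eps2)
        # Hermite interpolation gives the 2nd and 3rd force derivatives ...
        a2 = (-6.0 * (a0 - a1) - dt * (4.0 * j0 + 2.0 * j1)) / dt**2
        a3 = (12.0 * (a0 - a1) + 6.0 * dt * (j0 + j1)) / dt**3
        # ... which correct the predicted state to fourth order.
        xc = xp + a2 * dt**4 / 24.0 + a3 * dt**5 / 120.0
        vc = vp + a2 * dt**3 / 6.0 + a3 * dt**4 / 24.0
        return xc, vc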
Comparing compact binary parameter distributions I: Methods
By measuring each merger's sky location, distance, component masses, and
conceivably spins, ground-based gravitational-wave detectors will provide an
extensive and detailed sample of coalescing compact binaries (CCBs) in the
local and, with third-generation detectors, distant universe. These
measurements will distinguish between competing progenitor formation models. In
this paper we develop practical tools to characterize the amount of
experimentally accessible information available for distinguishing between two
a priori progenitor models. Using a simple time-independent model, we
demonstrate that the information content scales strongly with the number of
observations. The exact scaling depends on how significantly the mass
distributions change between similar models. We develop phenomenological
diagnostics to estimate how many models can be distinguished using
first-generation and future instruments. Finally, we emphasize that
multi-observable distributions can be fully exploited only with very precisely
calibrated detectors, search pipelines, parameter estimation, and Bayesian
model inference.
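A minimal sketch of the kind of scaling argument described above, assuming two
hypothetical Gaussian chirp-mass models (the distributions, numbers and
function names are illustrative, not the paper's): for independent events drawn
from model p, the expected log Bayes factor in favour of p over q grows roughly
linearly with the number of observations, at a rate set by the
Kullback-Leibler divergence between the two models.

    import numpy as np
    from scipy import stats

    def expected_log_bayes_factor(n_obs, p, q, rng, n_trials=200):
        # Monte-Carlo estimate of the mean log Bayes factor for p over q
        # after n_obs events drawn from p (measurement errors neglected).
        lbf = np.empty(n_trials)
        for t in range(n_trials):
            x = p.rvs(size=n_obs, random_state=rng)
            lbf[t] = np.sum(p.logpdf(x) - q.logpdf(x))
        return lbf.mean()

    p = stats.norm(loc=8.0, scale=1.5)   # chirp-mass model A (illustrative)
    q = stats.norm(loc=9.0, scale=1.5)   # chirp-mass model B (illustrative)
    rng = np.random.default_rng(1)
    for n in (10, 30, 100):
        print(n, expected_log_bayes_factor(n, p, q, rng))

The real analysis must also fold in selection effects and per-event
parameter-estimation uncertainties, which this sketch ignores.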
The Living Rainforest Sustainable Greenhouses
The Living Rainforest (www.livingrainforest.org) is an educational charity that uses rainforest ecology as a metaphor for communicating general sustainability issues to the public. Its greenhouses and office buildings are to be renovated using the most sustainable methods currently available. This will be realised through the construction of a highly insulating greenhouse covering with a k-value of less than 2 W m-2 K-1, passive seasonal storage of excess summer solar energy in the ground by a ground source heat exchanger (GSHE), and exploitation of this low-grade solar energy for heating in winter by a heat pump. In winter the heat pump will produce cold water to cool the ground, allowing passive cooling in summer via the GSHE. It will be demonstrated that a GSHE is an alternative to an open aquifer in regions where no aquifer is available. The heat pump will deliver the heating baseload, while the peak load will be delivered by a biomass boiler fired with locally sourced, low-cost wood chips. It is expected that the energy saving will be about 75%, resulting in a major cost reduction. The low k-value of the covering is linked to a light transmission of 75%, which is high enough for the demands of the vegetation in The Living Rainforest. Because the indoor greenhouse climate demands are comparable to those of ornamentals, the results will be applicable to commercial ornamental production. In the future, low k-value coverings will also be available with high light transmission, allowing wider application of the results. This paper focuses on the correlation between k-value, light transmission and energy demand in order to investigate the trade-off between light transmittance (a major energy gain) and heat loss. The effects of these design parameters on storage and harvesting capacity are also considered but show low sensitivity. The renovated greenhouse site at The Living Rainforest will show that new greenhouses and ecology can be linked to sustainability, and this will be communicated and demonstrated to the public.
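To make the k-value versus light-transmission trade-off concrete, here is a
crude back-of-the-envelope sketch; every number (glazing area, degree-hours,
winter insolation, solar utilisation fraction) and every name is an
illustrative assumption, not a figure from the project.

    def annual_heating_demand(k_value_w_m2k, light_transmission,
                              area_m2=5000.0,           # glazing area (assumed)
                              degree_hours_kh=80000.0,  # annual inside-outside temperature difference x hours (assumed)
                              winter_insolation_kwh_m2=250.0,  # winter solar energy on the cover (assumed)
                              solar_utilisation=0.5):   # fraction of transmitted gain that offsets heating (assumed)
        # Conduction loss through the cover minus the useful transmitted
        # solar gain, both in kWh per year.
        loss_kwh = k_value_w_m2k * area_m2 * degree_hours_kh / 1000.0
        gain_kwh = light_transmission * winter_insolation_kwh_m2 * area_m2 * solar_utilisation
        return max(loss_kwh - gain_kwh, 0.0)

    # A conventional single-glass cover versus an insulated cover of the
    # kind described above (higher insulation but lower transmission).
    print(annual_heating_demand(6.0, 0.90))
    print(annual_heating_demand(2.0, 0.75))

Even this crude balance shows why a lower k-value can pay off despite the loss
of some transmitted solar gain.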
The Evolution of Globular Clusters in the Galaxy
We investigate the evolution of globular clusters using N-body calculations
and anisotropic Fokker-Planck (FP) calculations. The models include a mass
spectrum, mass loss due to stellar evolution, and the tidal field of the parent
galaxy. Recent N-body calculations have revealed a serious discrepancy between
their results and those of isotropic FP calculations. The main reason for the
discrepancy is the oversimplified treatment of the tidal field employed in the
isotropic FP models. In this paper we perform a series of calculations with
anisotropic FP models that treat the tidal boundary more carefully and compare
these with N-body calculations. The new tidal boundary
condition in our FP model includes one free parameter. We find that a single
value of this parameter gives satisfactory agreement between the N-body and FP
models over a wide range of initial conditions.
Using the improved FP model, we carry out an extensive survey of the
evolution of globular clusters over a wide range of initial conditions varying
the slope of the mass function, the central concentration, and the relaxation
time. The evolution of clusters is followed up to the moment of core collapse
or the disruption of the clusters in the tidal field of the parent galaxy. In
general, our model clusters, calculated with the anisotropic FP model with the
improved treatment for the tidal boundary, live longer than isotropic models.
The difference in the lifetime between the isotropic and anisotropic models is
particularly large when the effect of mass loss via stellar evolution is rather
significant. On the other hand, the difference is small for
relaxation-dominated clusters, which initially have steep mass functions and
high central concentrations.
Comment: 36 pages, 11 figures, LaTeX; added figures and tables; accepted by
ApJ.
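Purely as an illustration of what a tunable tidal boundary can look like (the
actual prescription and parameter in the paper's Fokker-Planck code may differ;
all names, the G = 1 energy criterion and the stripping rule below are ours),
here is a sketch in which stars above the tidal energy are removed over a
time-scale controlled by a single free parameter alpha rather than
instantaneously.

    import numpy as np

    def tidal_radius(m_cluster, m_galaxy, r_orbit):
        # Jacobi (tidal) radius of a cluster on a circular galactic orbit, G = 1.
        return r_orbit * (m_cluster / (3.0 * m_galaxy)) ** (1.0 / 3.0)

    def flag_tidal_escapers(energies, m_cluster, r_tide, dt, t_cross, alpha, rng):
        # Stars with specific energy above the tidal value are removed with a
        # probability such that stripping happens over ~alpha crossing times.
        e_tide = -m_cluster / r_tide
        unbound = energies > e_tide
        p_remove = min(dt / (alpha * t_cross), 1.0)
        return unbound & (rng.random(energies.shape) < p_remove)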
Decision Makers Facing Uncertainty: Theory versus Evidence
We consider three competing normative theories of how to make choices when facing uncertainty: subjective expected utility, maximin utility and minimax regret. In simple decision problems, we compare how decision makers under each of these theories value safe options, freedom of choice and information. We then use these models to predict answers to questions in the European Values Survey and use these predictions, via a latent class analysis, to estimate the distribution of these behaviors across Europe. We find a larger proportion of Bayesians in the Northern countries than in the Southern countries. The opposite is true for maximin utility behavior. Only a few respondents are consistent with minimax regret behavior.
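A small worked example of the three criteria (the payoff numbers and names are
invented for illustration): with a uniform prior over three states, subjective
expected utility, maximin utility and minimax regret each pick a different
action, which is the kind of behavioural separation the survey questions are
designed to exploit.

    import numpy as np

    # Rows are actions, columns are states of the world (illustrative payoffs).
    payoffs = np.array([
        [55.0, 55.0, 55.0],    # safe option
        [150.0, 30.0, 0.0],    # gamble with a large upside
        [110.0, 40.0, 25.0],   # hedged gamble
    ])

    def subjective_expected_utility(payoffs, prior):
        # Bayesian: maximise expected payoff under the subjective prior.
        return int(np.argmax(payoffs @ prior))

    def maximin_utility(payoffs):
        # Maximise the worst-case payoff.
        return int(np.argmax(payoffs.min(axis=1)))

    def minimax_regret(payoffs):
        # Regret = shortfall relative to the best action in each state;
        # minimise the maximum regret across states.
        regret = payoffs.max(axis=0) - payoffs
        return int(np.argmin(regret.max(axis=1)))

    prior = np.full(3, 1.0 / 3.0)
    print(subjective_expected_utility(payoffs, prior))   # -> 1 (the gamble)
    print(maximin_utility(payoffs))                      # -> 0 (the safe option)
    print(minimax_regret(payoffs))                       # -> 2 (the hedged gamble)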
SAD phasing of XFEL data depends critically on the error model.
A nonlinear least-squares method for refining a parametric expression describing the estimated errors of reflection intensities in serial crystallographic (SX) data is presented. This approach, which is similar to that used in the rotation method of crystallographic data collection at synchrotrons, propagates error estimates from photon-counting statistics to the merged data. Here, it is demonstrated that the application of this approach to SX data provides better SAD phasing ability, enabling the autobuilding of a protein structure that had previously failed to be built. Estimating the error in the merged reflection intensities requires the understanding and propagation of all of the sources of error arising from the measurements. One type of error, which is well understood, is the counting error introduced when the detector counts X-ray photons. Thus, if other types of random errors (such as readout noise) as well as uncertainties in systematic corrections (such as from X-ray attenuation) are completely understood, they can be propagated along with the counting error, as appropriate. In practice, most software packages propagate as much error as they know how to model and then include error-adjustment terms that scale the error estimates until they explain the variance among the measurements. If this is performed carefully, then likelihood-based approaches can make optimal use of these error estimates during SAD phasing, increasing the chance of a successful structure solution. In serial crystallography, SAD phasing has remained challenging, with the few examples of de novo protein structure solution each requiring many thousands of diffraction patterns. Here, the effects of different methods of treating the error estimates are examined, and it is shown that a parametric approach including terms proportional to the known experimental uncertainty, the reflection intensity and the squared reflection intensity improves the error estimates enough to allow SAD phasing even from a weak zinc anomalous signal.
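A minimal sketch of the idea (not the software described in the abstract; the
functional form, the group layout and all names are assumptions): refine the
three parameters of an error model containing terms in the counting sigma, the
intensity and the squared intensity, by requiring that groups of repeated
measurements of each unique reflection have a reduced chi-squared of one under
the adjusted sigmas.

    import numpy as np
    from scipy.optimize import least_squares

    def adjusted_sigma(params, intensity, sigma):
        # Parametric error model: terms proportional to the counting variance,
        # the intensity and the squared intensity (exact form assumed here).
        a, b, c = params
        return np.sqrt(a * sigma**2 + b * np.abs(intensity) + (c * intensity)**2)

    def residuals(params, groups):
        # One residual per unique reflection: reduced chi-squared of its
        # repeated measurements under the adjusted sigmas, minus one. Each
        # group is an (intensities, counting_sigmas) pair with >= 2 entries.
        res = []
        for i_obs, sig_obs in groups:
            s = adjusted_sigma(params, i_obs, sig_obs)
            mean = np.average(i_obs, weights=1.0 / s**2)
            res.append(np.sum(((i_obs - mean) / s) ** 2) / (len(i_obs) - 1) - 1.0)
        return np.asarray(res)

    def refine_error_model(groups, start=(1.0, 0.0, 0.0)):
        # Nonlinear least squares over the whole set of unique reflections.
        fit = least_squares(residuals, x0=np.asarray(start), args=(groups,),
                            bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, np.inf]))
        return fit.x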