DISPATCH: A Numerical Simulation Framework for the Exa-scale Era. I. Fundamentals
We introduce a high-performance simulation framework that permits the
semi-independent, task-based solution of sets of partial differential
equations, typically manifesting as updates to a collection of `patches' in
space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks
are controlled by a rank-local `dispatcher' which selects, from a set of tasks
generally much larger than the number of physical cores (or hardware threads),
tasks that are ready for updating. The definition of a task can vary, for
example, with some solving the equations of ideal magnetohydrodynamics (MHD),
others non-ideal MHD, radiative transfer, or particle motion, and yet others
applying particle-in-cell (PIC) methods. Tasks do not have to be grid-based; those that are may use either Cartesian or orthogonal curvilinear
meshes. Patches may be stationary or moving. Mesh refinement can be static or
dynamic. A feature of decisive importance for the overall performance of the
framework is that time steps are determined and applied locally; this allows
potentially large reductions in the total number of updates required in cases
when the signal speed varies greatly across the computational domain, and
therefore a corresponding reduction in computing time. Another feature is a
load balancing algorithm that operates `locally' and aims to simultaneously
minimise load and communication imbalance. The framework generally relies on
already existing solvers, whose performance is augmented when run under the
framework, due to more efficient cache usage, vectorisation, local
time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI
scaling. Comment: 17 pages, 8 figures. Accepted by MNRAS.
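To make the dispatching idea above concrete, here is a minimal Python sketch of a rank-local dispatcher that advances tasks with task-local time steps, selecting a ready task from a pool much larger than the thread count. The Task/Dispatcher names and the readiness rule are illustrative assumptions, not the framework's actual API.

```python
# Illustrative sketch only: task-local time steps and a "ready" selection rule,
# in the spirit of the dispatcher described above (not the actual DISPATCH code).
import heapq
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    t: float = 0.0                 # task-local time
    dt: float = 0.1                # task-local time step (e.g. from a local CFL condition)
    neighbors: list = field(default_factory=list)

    def is_ready(self) -> bool:
        # A task may advance only if no neighbour lags so far behind that
        # guard-zone data could not be interpolated in time.
        return all(nb.t + nb.dt >= self.t + self.dt for nb in self.neighbors)

    def update(self) -> None:
        # Placeholder for the actual solver update (MHD, non-ideal MHD, RT, PIC, ...).
        self.t += self.dt

class Dispatcher:
    """Selects, from a pool much larger than the thread count, a task ready to update."""
    def __init__(self, tasks):
        self.queue = [(task.t, task.tid, task) for task in tasks]
        heapq.heapify(self.queue)

    def step(self):
        deferred, chosen = [], None
        while self.queue:
            _, _, task = heapq.heappop(self.queue)   # task furthest behind in time
            if task.is_ready():
                chosen = task
                break
            deferred.append((task.t, task.tid, task))
        for item in deferred:                        # re-queue tasks that were not ready
            heapq.heappush(self.queue, item)
        if chosen is not None:
            chosen.update()
            heapq.heappush(self.queue, (chosen.t, chosen.tid, chosen))
        return chosen
```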
Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct
imaging survey of 600 stars to discover and characterize young Jovian
exoplanets and their environments. We have developed an automated data
architecture to process and index all data related to the survey uniformly. An
automated and flexible data processing framework, which we term the Data
Cruncher, combines multiple data reduction pipelines together to process all
spectroscopic, polarimetric, and calibration data taken with GPIES. With no
human intervention, fully reduced and calibrated data products are available
less than an hour after the data are taken to expedite follow-up on potential
objects of interest. The Data Cruncher can run on a supercomputer to reprocess
all GPIES data in a single day as improvements are made to our data reduction
pipelines. A backend MySQL database indexes all files, which are synced to the
cloud, and a front-end web server allows for easy browsing of all files
associated with GPIES. To help observers, quicklook displays show reduced data
as they are processed in real-time, and chatbots on Slack post observing
information as well as reduced data products. Together, the GPIES automated
data processing architecture reduces our workload, provides real-time data
reduction, optimizes our observing strategy, and maintains a homogeneously
reduced dataset to study planet occurrence and instrument performance. Comment: 21 pages, 3 figures, accepted in JATIS.
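The "watch, reduce, index" pattern described above can be sketched as follows. The directory layout, the reduce_file stub, and the SQLite stand-in for the MySQL index are assumptions for illustration, not the real Data Cruncher or GPIES database schema.

```python
# Minimal sketch of an automated watch/reduce/index loop (illustrative only).
import sqlite3
import time
from pathlib import Path

RAW_DIR = Path("raw")          # hypothetical incoming-data directory
REDUCED_DIR = Path("reduced")  # hypothetical output directory

def reduce_file(raw_path: Path) -> Path:
    # Placeholder for the spectroscopic / polarimetric / calibration pipelines;
    # here we just copy the bytes forward under a new name.
    out = REDUCED_DIR / (raw_path.stem + "_reduced.fits")
    out.write_bytes(raw_path.read_bytes())
    return out

def index_file(db, raw_path: Path, reduced_path: Path) -> None:
    db.execute("INSERT INTO files (raw, reduced, indexed_at) VALUES (?, ?, ?)",
               (str(raw_path), str(reduced_path), time.time()))
    db.commit()

def main():
    RAW_DIR.mkdir(exist_ok=True)
    REDUCED_DIR.mkdir(exist_ok=True)
    db = sqlite3.connect("survey_index.db")
    db.execute("CREATE TABLE IF NOT EXISTS files (raw TEXT, reduced TEXT, indexed_at REAL)")
    seen = set()
    while True:                          # poll for new raw frames
        for raw in RAW_DIR.glob("*.fits"):
            if raw in seen:
                continue
            reduced = reduce_file(raw)   # no human intervention
            index_file(db, raw, reduced)
            seen.add(raw)
        time.sleep(10)

if __name__ == "__main__":
    main()
```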
Resolving Orbital and Climate Keys of Earth and Extraterrestrial Environments with Dynamics 1.0: A General Circulation Model for Simulating the Climates of Rocky Planets
Resolving Orbital and Climate Keys of Earth and Extraterrestrial Environments
with Dynamics (ROCKE-3D) is a 3-Dimensional General Circulation Model (GCM)
developed at the NASA Goddard Institute for Space Studies for the modeling of
atmospheres of Solar System and exoplanetary terrestrial planets. Its parent
model, known as ModelE2 (Schmidt et al. 2014), is used to simulate modern and
21st Century Earth and near-term paleo-Earth climates. ROCKE-3D is an ongoing
effort to expand the capabilities of ModelE2 to handle a broader range of
atmospheric conditions including higher and lower atmospheric pressures, more
diverse chemistries and compositions, larger and smaller planet radii and
gravity, different rotation rates (slowly rotating to more rapidly rotating
than modern Earth, including synchronous rotation), diverse ocean and land
distributions and topographies, and potential basic biosphere functions. The
first aim of ROCKE-3D is to model planetary atmospheres on terrestrial worlds
within the Solar System such as paleo-Earth, modern and paleo-Mars,
paleo-Venus, and Saturn's moon Titan. By validating the model for a broad range
of temperatures, pressures, and atmospheric constituents we can then expand its
capabilities further to those exoplanetary rocky worlds that have been
discovered in the past and those to be discovered in the future. We discuss the
current and near-future capabilities of ROCKE-3D as a community model for
studying planetary and exoplanetary atmospheres. Comment: Revisions since previous draft. Now submitted to Astrophysical Journal Supplement Series.
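As a rough illustration of the parameter space described above, the sketch below collects the kinds of planetary quantities such a configuration might vary. It is not ROCKE-3D's actual input format, and all names and defaults are assumptions.

```python
# Illustrative container for the planetary parameters a generalised GCM run
# might specify; not ROCKE-3D's rundeck/input format.
from dataclasses import dataclass

@dataclass
class PlanetConfig:
    radius_m: float              # planet radius
    gravity_ms2: float           # surface gravity
    surface_pressure_pa: float   # mean surface pressure
    rotation_period_s: float     # sidereal rotation period
    synchronous: bool = False    # tidally locked / synchronous rotation
    obliquity_deg: float = 23.44
    ocean_fraction: float = 0.71

# Example: a slowly, synchronously rotating Earth-sized planet.
earth_like_locked = PlanetConfig(
    radius_m=6.371e6,
    gravity_ms2=9.81,
    surface_pressure_pa=1.013e5,
    rotation_period_s=10 * 86400.0,
    synchronous=True,
)
```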
An Affine-Invariant Sampler for Exoplanet Fitting and Discovery in Radial Velocity Data
Markov Chain Monte Carlo (MCMC) proves to be powerful for Bayesian inference
and in particular for exoplanet radial velocity fitting because MCMC provides
more statistical information and makes better use of data than common
approaches like chi-square fitting. However, the non-linear density functions
encountered in these problems can make MCMC time-consuming. In this paper, we
apply an ensemble sampler respecting affine invariance to orbital parameter
extraction from radial velocity data. This new sampler has only one free
parameter, and it does not require much tuning for good performance, which is
important for automation. The autocorrelation time of this sampler is
approximately the same for all parameters and far smaller than
Metropolis-Hastings, which means it requires many fewer function calls to
produce the same number of independent samples. The affine-invariant sampler
speeds up MCMC by hundreds of times compared with Metropolis-Hastings in the
same computing situation. This novel sampler would be ideal for projects
involving large datasets such as statistical investigations of planet
distribution. The biggest obstacle to ensemble samplers is the existence of multiple local optima; we present a technique that handles this by clustering the walkers in the ensemble based on their likelihood. We
demonstrate the effectiveness of the sampler on real radial velocity data. Comment: 24 pages, 7 figures, accepted to ApJ.
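An affine-invariant ensemble sampler of this kind (Goodman & Weare 2010) is also what the widely used emcee package implements; below is a minimal sketch of fitting a circular (sinusoidal) radial-velocity orbit with it. The synthetic data, flat box priors, and three-parameter model are assumptions for illustration, not the paper's setup.

```python
# Fit a sinusoidal RV model with the affine-invariant ensemble sampler (emcee).
import numpy as np
import emcee

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 200, 60))                    # observation epochs [days]
K_true, P_true, phi_true, sig = 30.0, 17.3, 1.1, 3.0    # m/s, days, rad, m/s
rv = K_true * np.sin(2 * np.pi * t / P_true + phi_true) + rng.normal(0, sig, t.size)

def log_prob(theta):
    K, P, phi = theta
    if not (0 < K < 200 and 1 < P < 100 and 0 <= phi < 2 * np.pi):
        return -np.inf                                   # flat priors inside the box
    model = K * np.sin(2 * np.pi * t / P + phi)
    return -0.5 * np.sum((rv - model) ** 2 / sig ** 2)

ndim, nwalkers = 3, 32
p0 = np.array([25.0, 17.0, 1.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)  # stretch move; single tuning parameter
sampler.run_mcmc(p0, 5000, progress=False)
samples = sampler.get_chain(discard=1000, flat=True)
print("posterior medians (K, P, phi):", np.median(samples, axis=0))
```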
Exploring heterogeneity of unreliable machines for p2p backup
P2P architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, which are nowadays the standard solution, making backups directly on an organization's workstations should be cheaper (as existing hardware is used), more efficient (as there is no single bottleneck server), and more reliable (as the machines are geographically dispersed).
We present the architecture of a p2p backup system that uses pairwise
replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and so to take advantage of the heterogeneity of the machines and the
network. Such optimization is particularly appealing in the context of backup:
replicas can be geographically dispersed, the load sent over the network can be
minimized, or the optimization goal can be to minimize the backup/restore time.
However, managing the contracts, keeping them consistent and adjusting them in
response to a dynamically changing environment is challenging.
We built a scientific prototype and ran the experiments on 150 workstations
in the university's computer laboratories and, separately, on 50 PlanetLab
nodes. We found that the main factor affecting the quality of the system is
the availability of the machines. Yet, our main conclusion is that it is
possible to build an efficient and reliable backup system on highly unreliable
machines (our computers had just 13% average availability).
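The pairwise replication contracts and placement optimisation described above might look roughly like the sketch below, which greedily prefers highly available peers at distinct sites. Class names, fields, and the strategy itself are assumptions for illustration, not the prototype's actual data model.

```python
# Sketch of pairwise replication contracts with availability-aware placement.
from dataclasses import dataclass

@dataclass(frozen=True)
class Machine:
    host: str
    availability: float   # fraction of time the machine was reachable
    site: str             # e.g. laboratory/building, for geographic dispersion

@dataclass(frozen=True)
class Contract:
    owner: str
    replicator: str

def choose_replicators(owner: Machine, peers: list[Machine], k: int) -> list[Contract]:
    """Greedy strategy: prefer highly available peers at sites not yet used."""
    contracts, used_sites = [], {owner.site}
    for peer in sorted(peers, key=lambda m: m.availability, reverse=True):
        if len(contracts) == k:
            break
        if peer.site in used_sites:
            continue
        contracts.append(Contract(owner=owner.host, replicator=peer.host))
        used_sites.add(peer.site)
    return contracts

peers = [Machine("ws-01", 0.13, "lab-A"), Machine("ws-02", 0.40, "lab-B"),
         Machine("ws-03", 0.35, "lab-B"), Machine("pl-17", 0.80, "planetlab")]
print(choose_replicators(Machine("owner", 0.20, "lab-A"), peers, k=2))
```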
Understanding construction delay analysis and the role of pre-construction programming
Modern construction projects commonly suffer from delay in their completion. The resolution of time and cost claims flowing from such delays remains a difficult undertaking for all project parties. A common approach relied on by contractors and their employers (or their representatives) to resolve this matter involves applying various delay analysis techniques, all of which are based on construction programs originally developed for managing the project. However, evidence from the literature suggests that the reliability of these techniques in ensuring successful claims resolution is often undermined by the nature and quality of the underlying program used. As part of wider research carried out on delay and disruption analysis in practice, this paper reports on an aspect of the study aimed at exploring preconstruction-stage programming issues that affect delay claims resolution. This aspect is based on in-depth interviews with experienced construction planning engineers in the United Kingdom, conducted after an initial large-scale survey on delay and disruption techniques usage. Key findings and conclusions include: (1) most contractors prefer the linked bar chart format for their baseline programs over conventional critical path method (CPM) networks; (2) baseline programs are developed using planning software packages, some of which pose difficulties when employed for most delay analysis techniques, except for simpler ones; (3) manpower loading graphs are not commonly developed as part of the main deliverables during preconstruction-stage planning, so most programs are not subjected to resource loading and leveling and do not accurately reflect planned resource usage on site; this practice has detrimental effects on the reliability of baseline programs when used for resolving delay claims; and (4) baseline program development involves many different experts within construction organizations, as expected, but with very little involvement of the employer or its representative. Active client involvement is, however, quite important, as it would facilitate quick program approval/acceptance before construction, a necessary requirement for early settlement of delay claims, which otherwise are often left unresolved long after the delaying events, with the potential of escalating into expensive disputes. The study results provide a better understanding of the key issues that need attention if improvements are to be made in delay claim resolution. Additional research focusing on testing these results with a much larger sample and rigorous statistical analysis for generalization purposes would help advance the limited knowledge of this subject.
Exoplanet Catalogues
One of the most exciting developments in the field of exoplanets has been the
progression from 'stamp-collecting' to demography, from discovery to
characterisation, from exoplanets to comparative exoplanetology. There is an
exhilaration when a prediction is confirmed, a trend is observed, or a new
population appears. This transition has been driven by the sheer number of known exoplanets, which has been growing exponentially for two decades
(Mamajek 2016). However, the careful collection, scrutiny and organisation of
these exoplanets is necessary for drawing robust, scientific conclusions that
are sensitive to the biases and caveats that have gone into their discovery.
The purpose of this chapter is to discuss and demonstrate important
considerations to keep in mind when examining or constructing a catalogue of
exoplanets. First, we introduce the value of exoplanetary catalogues. There are
a handful of large, online databases that aggregate the available exoplanet
literature and render it digestible and navigable - an ever more complex task
with the growing number and diversity of exoplanet discoveries. We compare and
contrast three of the most up-to-date general catalogues, including the data
and tools that are available. We then describe exoplanet catalogues that were
constructed to address specific science questions or exoplanet discovery space.
Although we do not attempt to list or summarise all the published lists of
exoplanets in the literature in this chapter, we explore the case study of the
NASA Kepler mission planet catalogues in some detail. Finally, we lay out some
of the best practices to adopt when constructing or utilising an exoplanet
catalogue. Comment: 14 pages, 6 figures. Invited review chapter, to appear in "Handbook of Exoplanets", edited by H.J. Deeg and J.A. Belmonte, section editor N. Batalha.
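As a small, hedged example of one such best practice, the snippet below applies explicit, logged selection cuts to a downloaded catalogue table. The file name and column names (pl_name, discoverymethod, disc_year, pl_orbper) follow NASA Exoplanet Archive conventions but are assumptions here, to be adapted to whichever catalogue is actually used.

```python
# Apply explicit, logged selection cuts to a catalogue export (illustrative only).
import pandas as pd

cat = pd.read_csv("planetary_systems.csv", comment="#")  # hypothetical catalogue export

cuts = {
    "transit only": cat["discoverymethod"] == "Transit",
    "period measured": cat["pl_orbper"].notna(),
    "post-2009 discovery": cat["disc_year"] >= 2009,
}

mask = pd.Series(True, index=cat.index)
for name, cut in cuts.items():
    mask &= cut
    print(f"{name:>20s}: {mask.sum()} planets remain")  # record how each cut shapes the sample

sample = cat.loc[mask, ["pl_name", "pl_orbper", "disc_year"]]
```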