On Computer Simulation as a Component in Information Systems Research
Computer simulation is widely regarded as a useful activity during various phases of research. However, depending on its context, the meaning, definition, and focus of the term can vary: in traffic planning, for example, simulation is used to determine useful configurations of a road network, thus focusing on the environment. An entirely different perspective is taken within multi-agent systems. In such settings, the environment of the agents remains static, while the interesting research questions concern the behavior of the agents themselves; the research focuses on the microscopic level and the resulting emergent behavior. This article puts such diverse meanings in the context of a research process that treats descriptive and prescriptive research as two sides of the same coin. We develop a framework to classify different types of simulation, based on the actual research activity they are intended to be used for. Two case studies supplement the framework.
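To make the contrast concrete, here is a minimal, purely illustrative sketch in the agent-focused style: the environment (a fixed one-dimensional lattice) stays static while each agent follows a simple local rule. All names and rules below are hypothetical, not taken from the article.

```python
# Minimal agent-focused simulation sketch (illustrative only): the
# environment is a static lattice; only the agents' behavior evolves.
import random

def step_agents(positions, width):
    """Each agent takes one random step, clipped to the static lattice."""
    return [max(0, min(width - 1, p + random.choice((-1, 1)))) for p in positions]

def run(n_agents=10, width=50, steps=100, seed=1):
    random.seed(seed)
    positions = [random.randrange(width) for _ in range(n_agents)]
    for _ in range(steps):
        positions = step_agents(positions, width)  # environment never changes
    return positions

if __name__ == "__main__":
    print(run())
```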
A jump-growth model for predator-prey dynamics: derivation and application to marine ecosystems
This paper investigates the dynamics of biomass in a marine ecosystem. A stochastic process is defined in which organisms undergo jumps in body size as they catch and eat smaller organisms. Using a systematic expansion of the master equation, we derive a deterministic equation for the macroscopic dynamics, which we call the deterministic jump-growth equation, and a linear Fokker-Planck equation for the stochastic fluctuations. The McKendrick–von Foerster equation, used in previous studies, is shown to be a first-order approximation, appropriate in equilibrium systems where predators are much larger than their prey. The model has a power-law steady state consistent with the approximate constancy of mass density in logarithmic intervals of body mass often observed in marine ecosystems. The behaviours of the stochastic process, the deterministic jump-growth equation and the McKendrick–von Foerster equation are compared using numerical methods. The numerical analysis shows two classes of attractors: steady states and travelling waves.
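For orientation, the McKendrick–von Foerster equation referred to above can be written in a generic size-spectrum form (the paper's specific growth and death rates are not reproduced here):

```latex
% Generic size-spectrum form of the McKendrick--von Foerster equation:
% u(w,t) is the number density of organisms at body mass w,
% g(w,t) the (predation-driven) growth rate, \mu(w,t) the death rate.
\frac{\partial u(w,t)}{\partial t}
  + \frac{\partial}{\partial w}\bigl[ g(w,t)\, u(w,t) \bigr]
  = -\mu(w,t)\, u(w,t)
```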
A-dependence of nuclear transparency in quasielastic A(e,e'p) at high Q^2
The A-dependence of the quasielastic A(e,e'p) reaction has been studied at SLAC with H-2, C, Fe, and Au nuclei at momentum transfers Q^2 = 1, 3, 5, and 6.8 (GeV/c)^2. We extract the nuclear transparency T(A,Q^2), a measure of the average probability that the struck proton escapes from the nucleus A without interaction. Several calculations predict a significant increase in T with momentum transfer, a phenomenon known as Color Transparency. No significant rise within errors is seen for any of the nuclei studied.
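A common operational definition of the transparency (a standard convention, not quoted from this abstract) is the ratio of the measured A(e,e'p) yield to the plane-wave impulse approximation (PWIA) expectation, integrated over a restricted volume V of missing energy E_m and missing momentum p_m:

```latex
% Transparency as a yield ratio over a missing-energy/momentum volume V
% (standard convention; not quoted from the abstract above).
T(A, Q^2) =
  \frac{\int_V \mathrm{d}^3 p_m \, \mathrm{d}E_m \, Y_{\mathrm{exp}}(E_m, \vec{p}_m)}
       {\int_V \mathrm{d}^3 p_m \, \mathrm{d}E_m \, Y_{\mathrm{PWIA}}(E_m, \vec{p}_m)}
```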
Experimental and Theoretical Challenges in the Search for the Quark Gluon Plasma: The STAR Collaboration's Critical Assessment of the Evidence from RHIC Collisions
We review the most important experimental results from the first three years of nucleus-nucleus collision studies at RHIC, with emphasis on results from the STAR experiment, and we assess their interpretation and comparison to theory. The theory-experiment comparison suggests that central Au+Au collisions at RHIC produce dense, rapidly thermalizing matter characterized by: (1) initial energy densities above the critical values predicted by lattice QCD for establishment of a Quark-Gluon Plasma (QGP); (2) nearly ideal fluid flow, marked by constituent interactions of very short mean free path, established most probably at a stage preceding hadron formation; and (3) opacity to jets. Many of the observations are consistent with models incorporating QGP formation in the early collision stages, and have not found ready explanation in a hadronic framework. However, the measurements themselves do not yet establish unequivocal evidence for a transition to this new form of matter. The theoretical treatment of the collision evolution, despite impressive successes, invokes a suite of distinct models, degrees of freedom and assumptions of as yet unknown quantitative consequence. We pose a set of important open questions, and suggest additional measurements, at least some of which should be addressed in order to establish a compelling basis to conclude definitively that thermalized, deconfined quark-gluon matter has been produced at RHIC.
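A standard estimate behind statements like point (1) is the Bjorken formula, which converts the measured transverse energy per unit rapidity into an initial energy density; here τ₀ is an assumed formation time and A_⊥ the transverse overlap area (the specific values used in the RHIC analyses are not reproduced here):

```latex
% Bjorken estimate of the initial energy density (standard formula;
% \tau_0 = assumed formation time, A_\perp = transverse overlap area).
\varepsilon_{\mathrm{Bj}} = \frac{1}{\tau_0 A_{\perp}} \frac{\mathrm{d}E_T}{\mathrm{d}y}
```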
Task swapping networks in distributed systems
In this paper we propose task swapping networks for task reassignment in distributed systems. Some classes of task reassignment are achieved through iterative local task swaps between software agents. We use group-theoretic methods to find a minimum-length sequence of adjacent task swaps leading from a source task assignment to a target task assignment in task swapping networks of several well-known topologies.
This is a preprint; the final version is published in: Int. J. Comput. Math. 90 (2013), 2221-2243 (DOI: 10.1080/00207160.2013.772985).
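As an illustration of the kind of quantity involved (a generic sketch under stated assumptions, not the paper's algorithm): on a path topology, where only agents at adjacent nodes may swap tasks, the minimum number of adjacent swaps carrying a source assignment to a target assignment equals the inversion count of the permutation relating them.

```python
# Generic sketch (not the paper's algorithm): minimum adjacent swaps on a
# path topology equals the inversion count of the relating permutation.

def min_adjacent_swaps(source, target):
    """Minimum adjacent transpositions turning `source` into `target`.

    Both are lists of the same distinct task labels; positions are the
    nodes of a path graph, so only neighbouring positions may swap.
    """
    pos_in_target = {task: i for i, task in enumerate(target)}
    perm = [pos_in_target[task] for task in source]

    def count(a):
        # Count inversions by merge sort in O(n log n).
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, x = count(a[:mid])
        right, y = count(a[mid:])
        merged, inv, i, j = [], x + y, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                inv += len(left) - i  # right[j] jumps over remaining left items
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged, inv

    return count(perm)[1]

if __name__ == "__main__":
    print(min_adjacent_swaps(["t3", "t1", "t2"], ["t1", "t2", "t3"]))  # -> 2
```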
Characterization and intercomparison of aerosol absorption photometers: Result of two intercomparison workshops
Absorption photometers for real-time application have been available since the 1980s, but the use of filter-based instruments to derive information on aerosol properties (absorption coefficient and black carbon, BC) is still a matter of debate. Several workshops have been conducted over the intervening years to investigate the performance of individual instruments. Two workshops with large sets of aerosol absorption photometers were conducted in 2005 and 2007. The data from these instruments were corrected using existing methods before further analysis. The intercomparison shows a large variation between the responses to absorbing aerosol particles for different types of instruments. The unit-to-unit variability can be up to 30% for Particle Soot Absorption Photometers (PSAPs) and Aethalometers, whereas Multi Angle Absorption Photometers (MAAPs) showed a variability of less than 5%. The main reasons for the high variability were identified as variations in sample flow and spot size. Different flow rates were also observed to influence both the absorption response and the instrumental noise. Measurements with non-absorbing particles showed that the current corrections for cross-sensitivity to particle scattering are not sufficient; the remaining cross-sensitivities were found to be a function of the total particle load on the filter. The large variation between the responses to absorbing aerosol particles for different types of instruments indicates that current correction functions for absorption photometers are not adequate.
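As a schematic of the kind of correction these instruments require (the coefficients below are placeholders, not values recommended by the workshops): a raw filter-based absorption coefficient is typically divided by a transmission-dependent loading function and then corrected for cross-sensitivity to scattering.

```python
# Schematic filter-photometer correction (placeholder coefficients a, b, s;
# not the values recommended by the workshops described above).

def corrected_absorption(sigma_raw, transmission, sigma_scattering,
                         a=1.0, b=1.0, s=0.02):
    """Correct a raw filter-based absorption coefficient (e.g. in Mm^-1).

    sigma_raw        -- raw attenuation-derived absorption coefficient
    transmission     -- filter transmission (1.0 for a fresh spot)
    sigma_scattering -- co-measured scattering coefficient (e.g. nephelometer)
    a, b             -- loading-correction parameters (instrument specific)
    s                -- scattering cross-sensitivity fraction
    """
    loading = a * transmission + b   # empirical loading function f(Tr)
    return sigma_raw / loading - s * sigma_scattering

if __name__ == "__main__":
    print(corrected_absorption(12.0, 0.8, 25.0))
```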
The matter power spectrum in redshift space using effective field theory
The use of Eulerian 'standard perturbation theory' (SPT) to describe mass assembly in the early universe has traditionally been limited to modes with k ≤ 0.1 h/Mpc at z = 0. At larger k the SPT power spectrum deviates from measurements made using N-body simulations. Recently, there has been progress in extending the reach of perturbation theory to larger k using ideas borrowed from effective field theory (EFT). We revisit the computation of the redshift-space matter power spectrum within this framework, including for the first time the full one-loop time dependence. We use a resummation scheme proposed by Vlah et al. to account for damping of the baryonic acoustic oscillations due to large-scale random motions, and show that this has a significant effect on the multipole power spectra. We renormalize by comparison to a suite of custom N-body simulations matching the MultiDark MDR1 cosmology. At z = 0 and for scales k ≲ 0.4 h/Mpc we find that the EFT furnishes a description of the real-space power spectrum accurate to ~2%, of the ℓ = 0 mode to ~5%, and of the ℓ = 2, 4 modes to ~25%. We argue that, in the MDR1 cosmology, positivity of the ℓ = 0 mode gives a firm upper limit of k ~ 0.74 h/Mpc for the validity of the one-loop EFT prediction in redshift space using only the lowest-order counterterm. We show that replacing the one-loop growth factors by their Einstein-de Sitter counterparts is a good approximation for the ℓ = 0 mode, but can induce deviations as large as 2% for the ℓ = 2, 4 modes. An accompanying software bundle, distributed under open source licenses, includes Mathematica notebooks describing the calculation, together with parallel pipelines capable of computing both the necessary one-loop SPT integrals and the effective field theory counterterms.
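For orientation, the multipoles mentioned above are Legendre moments of P(k, μ), i.e. P_ℓ(k) = (2ℓ+1)/2 ∫ dμ L_ℓ(μ) P(k, μ). Here is a minimal sketch of that decomposition using a toy Kaiser-like input rather than the paper's one-loop EFT pipeline; all parameter values and the input model are placeholders.

```python
# Generic sketch of the multipole decomposition P_ell(k); the Kaiser-like
# input model and parameter values below are illustrative placeholders.
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def multipole(ell, k, P_kmu, n_mu=32):
    """P_ell(k) = (2*ell+1)/2 * Integral_{-1}^{1} dmu L_ell(mu) P(k, mu)."""
    mu, w = leggauss(n_mu)                     # Gauss-Legendre nodes/weights
    integrand = eval_legendre(ell, mu) * P_kmu(k[:, None], mu[None, :])
    return 0.5 * (2 * ell + 1) * integrand @ w

# Toy input: linear Kaiser model with an arbitrary power-law-like P_lin.
f, b = 0.5, 1.8
P_lin = lambda k: 2e4 * k / (1 + (k / 0.02) ** 2)
P_kmu = lambda k, mu: (b + f * mu ** 2) ** 2 * P_lin(k)

k = np.linspace(0.01, 0.4, 40)
for ell in (0, 2, 4):
    print(ell, multipole(ell, k, P_kmu)[:3])
```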
An updated radiocarbon-based ice margin chronology for the last deglaciation of the North American Ice Sheet Complex
The North American Ice Sheet Complex (NAISC; consisting of the Laurentide, Cordilleran and Innuitian ice sheets) was the largest ice mass to repeatedly grow and decay in the Northern Hemisphere during the Quaternary. Understanding its pattern of retreat following the Last Glacial Maximum is critical for studying many facets of the Late Quaternary, including ice sheet behaviour, the evolution of Holocene landscapes, sea level, atmospheric circulation, and the peopling of the Americas. Currently, the most up-to-date and authoritative margin chronology for the entire ice sheet complex is featured in two publications (Geological Survey of Canada Open File 1574 [Dyke et al., 2003]; ‘Quaternary Glaciations – Extent and Chronology, Part II’ [Dyke, 2004]). These often-cited datasets track ice margin recession in 36 time slices spanning 18 ka to 1 ka (all ages in uncalibrated radiocarbon years) using a combination of geomorphology, stratigraphy and radiocarbon dating. However, by virtue of being over 15 years old, the ice margin chronology requires updating to reflect new work and important revisions. This paper updates the aforementioned 36 ice margin maps to reflect new data from regional studies. We also update the original radiocarbon dataset from the 2003/2004 papers with 1541 new ages to reflect work up to and including 2018. A major revision is made to the 18 ka ice margin, where Banks and Eglinton islands (once considered to be glacial refugia) are now shown to be fully glaciated. Our updated 18 ka ice sheet increased in areal extent from 17.81 to 18.37 million km², an increase of 3.1% in the spatial coverage of the NAISC at that time. Elsewhere, we also summarize, region-by-region, significant changes to the deglaciation sequence. This paper integrates new information provided by regional experts and radiocarbon data into the deglaciation sequence while maintaining consistency with the original ice margin positions of Dyke et al. (2003) and Dyke (2004) where new information is lacking; this is a pragmatic solution to satisfy the needs of a Quaternary research community that requires up-to-date knowledge of the pattern of ice margin recession of what was once the world’s largest ice mass. The 36 updated isochrones are available in PDF and shapefile format, together with a spreadsheet of the expanded radiocarbon dataset (n = 5195 ages) and estimates of uncertainty for each interval.
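A hypothetical usage sketch for the released shapefiles follows; the file name, layer structure, and CRS choice are assumptions for illustration, not taken from the paper's data documentation.

```python
# Hypothetical usage sketch for the released isochrone shapefiles (the file
# name and the equal-area CRS below are assumptions, not from the paper).
import geopandas as gpd

margins = gpd.read_file("ice_margins_18ka.shp")    # one isochrone layer
margins_eq = margins.to_crs(epsg=3573)             # North Pole LAEA (equal-area)
area_km2 = margins_eq.geometry.area.sum() / 1e6    # m^2 -> km^2
print(f"18 ka ice extent: {area_km2 / 1e6:.2f} million km^2")
```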