Weak refinement in Z
An important aspect in the specification of distributed systems is the role of the internal (or unobservable) operation. Such operations are not part of the user interface (i.e. the user cannot invoke them); however, they are essential to our understanding and correct modelling of the system. Various conventions have been employed to model internal operations when specifying distributed systems in Z. If internal operations are distinguished in the specification notation, then refinement needs to deal with them in appropriate ways. However, in the presence of internal operations, standard Z refinement leads to undesirable implementations.
In this paper we present a generalization of Z refinement, called weak refinement, which treats internal operations differently from observable operations when refining a system. We illustrate some of the properties of weak refinement through a specification of a telecommunications protocol.
Specifying and Refining Internal Operations in Z
An important aspect in the specification of distributed systems is the role of the internal (or unobservable) operation. Such operations are not part of the interface to the environment (i.e. the user cannot invoke them); however, they are essential to our understanding and correct modelling of the system. In this paper we are interested in the use of the formal specification notation Z for the description of distributed systems. Various conventions have been employed to model internal operations when specifying such systems in Z. If internal operations are distinguished in the specification notation, then refinement needs to deal with them in appropriate ways. Using an example of a telecommunications protocol, we show that standard Z refinement is inappropriate for refining a system when internal operations are specified explicitly. We present a generalization of Z refinement, called weak refinement, which treats internal operations differently from observable operations when refining a system. We discuss the role of internal operations in a Z specification, and in particular whether an equivalent specification not containing internal operations can be found. The nature of divergence through livelock is also discussed. Keywords: Z; Refinement; Distributed Systems; Internal Operations; Process Algebras; Concurrency
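The distinction the abstract draws between observable and internal operations can be illustrated with a toy transition system: hiding internal steps before comparing traces yields a "weak" notion of refinement. The sketch below is our own simplification (the state names, the two-step protocol, and the trace-based check are illustrative assumptions, not the paper's Z formulation):

```python
# Hypothetical sketch: labelled transition systems with an internal action
# "tau", showing that the concrete system refines the abstract one once
# internal steps are hidden from the observable traces.
TAU = "tau"

def traces(lts, state, depth):
    """Enumerate action sequences of length <= depth starting from `state`."""
    result = {()}
    if depth == 0:
        return result
    for (src, act, dst) in lts:
        if src == state:
            for t in traces(lts, dst, depth - 1):
                result.add((act,) + t)
    return result

def observable(trace):
    """Hide internal steps: the user only sees non-tau actions."""
    return tuple(a for a in trace if a != TAU)

# Abstract system: a message is sent, then delivered.
abstract = [("s0", "send", "s1"), ("s1", "deliver", "s2")]
# Concrete system: an internal routing step occurs between send and deliver.
concrete = [("c0", "send", "c1"), ("c1", TAU, "c2"), ("c2", "deliver", "c3")]

abs_obs = {observable(t) for t in traces(abstract, "s0", 3)}
con_obs = {observable(t) for t in traces(concrete, "c0", 3)}

# Weak (observable) trace refinement: every observable behaviour of the
# concrete system is allowed by the abstract one.
assert con_obs <= abs_obs
```

Comparing raw traces instead of observable ones would reject the concrete system, since `("send", "tau", "deliver")` is not an abstract trace; this is the kind of over-strictness weak refinement is designed to avoid.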
Nonequilibrium scaling explorations on a 2D Z(5)-symmetric model
We have investigated the dynamic critical behavior of the two-dimensional Z(5)-symmetric spin model using short-time Monte Carlo (MC) simulations. We have obtained estimates of some critical points in its rich phase diagram and included, alongside the usual critical lines, a study of the (weak) first-order transition by looking into the order-disorder phase transition. We also investigated the soft-disorder phase transition using empirical methods. A study of the model's behavior along the self-dual critical line has been performed, and special attention has been devoted to the critical bifurcation point, or FZ (Fateev-Zamolodchikov) point. First, using a refinement method and out-of-equilibrium simulations, we were able to localize the parameters of this point. In the second part of our study, we turned our attention to the behavior of the model at the early stage of its time evolution in order to find the dynamic critical exponent z as well as the static critical exponents of the FZ point on square lattices. The values of the static critical exponents and parameters are in good agreement with the exact results, and the dynamic critical exponent is very close to that of the 4-state Potts model. Comment: 11 pages, 7 figures
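The basic object of such a study, a Z(5) spin model on a square lattice updated by Monte Carlo, can be sketched as follows. This is a generic clock-model sketch under assumed couplings E = -J Σ cos(2π(n_i - n_j)/5); the FZ point requires specific Z(5) couplings and the short-time dynamics protocol of the paper, neither of which is reproduced here:

```python
import math, random

# Illustrative Metropolis sweep for a 2D Z(5) clock model (not the paper's
# code). Lattice size, coupling J, and temperature beta are arbitrary.
random.seed(0)
L, J, beta, q = 8, 1.0, 1.0, 5
spins = [[random.randrange(q) for _ in range(L)] for _ in range(L)]

def local_energy(n, i, j):
    """Bond energy of site (i, j) with its four neighbours if it held value n."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        m = spins[(i + di) % L][(j + dj) % L]
        e -= J * math.cos(2 * math.pi * (n - m) / q)
    return e

def sweep():
    """One Metropolis sweep: propose a new clock value at every site."""
    for i in range(L):
        for j in range(L):
            old = spins[i][j]
            new = random.randrange(q)
            dE = local_energy(new, i, j) - local_energy(old, i, j)
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i][j] = new

for _ in range(20):
    sweep()
```

In a short-time dynamics study one would start from a prepared (ordered or disordered) configuration and measure observables over the first few hundred sweeps, rather than waiting for equilibration.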
Weak lensing of the Lyman-alpha forest
The angular positions of quasars are deflected by the gravitational lensing
effect of foreground matter. The Lyman-alpha forest seen in the spectra of
these quasars is therefore also lensed. We propose that the signature of weak
gravitational lensing of the forest could be measured using techniques similar
to those that have been applied to the lensed Cosmic Microwave Background, and which
have also been proposed for application to spectral data from 21cm radio
telescopes. As with 21cm data, the forest has the advantage of spectral
information, potentially yielding many lensed "slices" at different redshifts.
We perform an illustrative idealized test, generating a high resolution angular
grid of quasars (of order arcminute separation), and lensing the
Lyman-alpha forest spectra at redshifts z=2-3 using a foreground density field.
We find that standard quadratic estimators can be used to reconstruct images of
the foreground mass distribution at z~1. There currently exists a wealth of Lya
forest data from quasar and galaxy spectral surveys, with smaller sightline
separations expected in the future. Lyman-alpha forest lensing is sensitive to
the foreground mass distribution at redshifts intermediate between CMB lensing
and galaxy shear, and avoids the difficulties of shape measurement associated
with the latter. With further refinement and application of mass reconstruction
techniques, weak gravitational lensing of the high redshift Lya forest may
become a useful new cosmological probe. Comment: 9 pages, 7 figures, submitted to MNRAS
Intrinsic alignments of galaxies in the Horizon-AGN cosmological hydrodynamical simulation
The intrinsic alignments of galaxies are recognised as a contaminant to weak
gravitational lensing measurements. In this work, we study the alignment of
galaxy shapes and spins at low redshift in Horizon-AGN, an
adaptive-mesh-refinement hydrodynamical cosmological simulation box 100
Mpc/h on a side with an AGN feedback implementation. We find that spheroidal galaxies
in the simulation show a tendency to be aligned radially towards over-densities
in the dark matter density field and other spheroidals. This trend is in
agreement with observations, but the amplitude of the signal depends strongly
on how shapes are measured and how galaxies are selected in the simulation.
Disc galaxies show a tendency to be oriented tangentially around spheroidals in
three-dimensions. While this signal seems suppressed in projection, this does
not guarantee that disc alignments can be safely ignored in future weak lensing
surveys. The shape alignments of luminous galaxies in Horizon-AGN are in
agreement with observations and other simulation works, but we find less
alignment for lower luminosity populations. We also characterize the
systematics of galaxy shapes in the simulation and show that they can be safely
neglected when measuring the correlation of the density field and galaxy
ellipticities. Comment: 20 pages, 23 figures
Testing cosmic-ray acceleration with radio relics: a high-resolution study using MHD and tracers
Weak shocks in the intracluster medium may accelerate cosmic-ray protons and
cosmic-ray electrons differently depending on the angle between the upstream
magnetic field and the shock normal. In this work, we investigate how shock
obliquity affects the production of cosmic rays in high-resolution simulations
of galaxy clusters. For this purpose, we performed a magneto-hydrodynamical
simulation of a galaxy cluster using the mesh refinement code ENZO. We use
Lagrangian tracers to follow the properties of the thermal gas, the cosmic rays
and the magnetic fields over time. We tested a number of different acceleration
scenarios by varying the obliquity-dependent acceleration efficiencies of
protons and electrons, and by examining the resulting hadronic γ-ray and
radio emission. We find that the radio emission does not change significantly
if only quasi-perpendicular shocks are able to accelerate cosmic-ray electrons.
Our analysis suggests that radio-emitting electrons found in relics have
typically been shocked many times before. On the other hand, the hadronic
γ-ray emission from clusters is found to decrease significantly if only
quasi-parallel shocks are allowed to accelerate cosmic-ray protons. This might
reduce the tension with the low upper limits on γ-ray emission from
clusters set by the Fermi satellite. Comment: 16 pages, 17 figures, accepted for publication by MNRAS
LoCuSS: Calibrating Mass-Observable Scaling Relations for Cluster Cosmology with Subaru Weak Lensing Observations
We present a joint weak-lensing/X-ray study of galaxy cluster mass-observable
scaling relations, motivated by the critical importance of accurate calibration
of mass proxies for future X-ray missions, including eROSITA. We use a sample
of 12 clusters at z\simeq0.2 that we have observed with Subaru and XMM-Newton
to construct relationships between the weak-lensing mass (M), and three X-ray
observables: gas temperature (T), gas mass (Mgas), and quasi-integrated gas
pressure (Yx) at overdensities of \Delta=2500, 1000, and 500 with respect to
the critical density. We find that Mgas at \Delta\le1000 appears to be the most
promising mass proxy of the three, because it has the lowest intrinsic scatter
in mass at fixed observable: \sigma_lnM\simeq0.1, independent of cluster
dynamical state. The scatter in mass at fixed T and Yx is a factor of \sim2-3
larger than at fixed Mgas, which is indicative of the structural segregation
that we find in the M-T and M-Yx relationships. Undisturbed clusters are found
to be \sim40% and \sim20% more massive than disturbed clusters at fixed T and
Yx respectively at \sim2\sigma significance. In particular, A1914 - a
well-known merging cluster - significantly increases the scatter and lowers
the normalization of the relation for disturbed clusters. We also investigated
the covariance between intrinsic scatter in M-Mgas and M-T relations, finding
that they are positively correlated. This contradicts the adaptive mesh
refinement simulations that motivated the idea that Yx may be a low scatter
mass proxy, and agrees with more recent smoothed particle hydrodynamic
simulations based on the Millennium Simulation. We also propose a method to
identify a robust mass proxy based on principal component analysis. The
statistical precision of our results are limited by the small sample size and
the presence of the extreme merging cluster in our sample. Comment: 13 pages, 6 figures : ApJ in press : proof version
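The core of a mass-observable calibration is fitting a power-law scaling relation in log space and reading off the intrinsic scatter at fixed observable. A minimal sketch on synthetic data (all numbers, and the choice of ordinary least squares, are our assumptions, not the paper's pipeline):

```python
import math, random

# Synthetic calibration of log M = a + b * log Mgas: generate mock clusters
# with known slope and lognormal scatter, then recover both by least squares.
random.seed(1)
true_a, true_b, scatter = 0.5, 1.0, 0.1
log_mgas = [random.uniform(13.0, 14.0) for _ in range(200)]
log_m = [true_a + true_b * x + random.gauss(0.0, scatter) for x in log_mgas]

n = len(log_mgas)
mx = sum(log_mgas) / n
my = sum(log_m) / n
b = sum((x - mx) * (y - my) for x, y in zip(log_mgas, log_m)) / \
    sum((x - mx) ** 2 for x in log_mgas)
a = my - b * mx

# Residual scatter about the best-fit relation: the analogue of sigma_lnM.
resid = [y - (a + b * x) for x, y in zip(log_mgas, log_m)]
sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))
```

A real calibration would also account for measurement errors on both axes and for covariance between scatters in different relations, which is exactly the complication the abstract highlights for Yx.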
Correct and Efficient Antichain Algorithms for Refinement Checking
The notion of refinement plays an important role in software engineering. It
is the basis of a stepwise development methodology in which the correctness of
a system can be established by proving, or computing, that a system refines its
specification. Wang et al. describe algorithms based on antichains for
efficiently deciding trace refinement, stable failures refinement and
failures-divergences refinement. We identify several issues pertaining to the
soundness and performance of these algorithms and propose new, correct,
antichain-based algorithms. Using a number of experiments we show that our
algorithms outperform the original ones in terms of running time and memory
usage. Furthermore, we show that additional run-time improvements can be
obtained by applying divergence-preserving branching bisimulation minimisation.
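The essence of antichain-based refinement checking is a subset construction over (implementation state, set of specification states) pairs, pruned by subsumption. The following is a sketch in the spirit of those algorithms, restricted to trace refinement of finite labelled transition systems; the data representation and examples are our own, not the paper's implementation:

```python
from collections import deque

# Trace refinement check with antichain pruning. Systems are dicts:
# state -> {action: {successor states}}.
def refines(impl, spec, impl_init, spec_init):
    """Return True iff every trace of impl is also a trace of spec."""
    antichain = []  # visited pairs (s, S) with subset-minimal spec sets S
    queue = deque([(impl_init, frozenset([spec_init]))])
    while queue:
        s, S = queue.popleft()
        # A pair is subsumed by a visited pair with the same impl state and
        # a smaller spec set: if the smaller set cannot fail, neither can S.
        if any(s == t and T <= S for t, T in antichain):
            continue
        antichain[:] = [(t, T) for t, T in antichain if not (t == s and S <= T)]
        antichain.append((s, S))
        for action, succs in impl.get(s, {}).items():
            S2 = frozenset(q for p in S for q in spec.get(p, {}).get(action, ()))
            if not S2:
                return False  # impl performs `action` where spec cannot
            for s2 in succs:
                queue.append((s2, S2))
    return True

spec = {"p0": {"a": {"p1"}}, "p1": {"b": {"p0"}}}
good = {"q0": {"a": {"q1"}}, "q1": {"b": {"q0"}}}
bad  = {"r0": {"a": {"r1"}}, "r1": {"c": {"r0"}}}

assert refines(good, spec, "q0", "p0")
assert not refines(bad, spec, "r0", "p0")
```

Stable failures and failures-divergences refinement extend this scheme with refusal and divergence information on each pair; the subsumption check is where the claimed performance gains (and the soundness pitfalls) live.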
Automated verification of refinement laws
Demonic refinement algebras are variants of Kleene algebras. Introduced by von Wright as a lightweight variant of the refinement calculus, their intended semantics are positively disjunctive predicate transformers, and their calculus is entirely within first-order equational logic. So, for the first time, off-the-shelf automated theorem proving (ATP) becomes available for refinement proofs. We used ATP to verify a toolkit of basic refinement laws. Based on this toolkit, we then verified two classical complex refinement laws for action systems by ATP: a data refinement law and Back's atomicity refinement law. We also present a refinement law for infinite loops that has been discovered through automated analysis. Our proof experiments not only demonstrate that refinement can effectively be automated, they also compare eleven different ATP systems and suggest that program verification with variants of Kleene algebras yields interesting theorem proving benchmarks. Finally, we apply hypothesis learning techniques that seem indispensable for automating more complex proofs.
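The predicate-transformer semantics mentioned above can be made concrete on a tiny state space, where refinement laws are checkable by brute-force enumeration rather than ATP. A sketch under our own assumptions (the three-state space, the example program, and the laws checked are illustrative):

```python
from itertools import combinations

# Programs as weakest-precondition transformers over a finite state space:
# a program maps each postcondition (set of states) to its precondition.
STATES = frozenset({0, 1, 2})

def subsets(s):
    """All postconditions over the state space."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def wp_assign(f):
    """wp of a deterministic assignment given as a state function f."""
    return lambda post: frozenset(x for x in STATES if f(x) in post)

def seq(s, t):
    """wp of sequential composition: wp(s; t, q) = wp(s, wp(t, q))."""
    return lambda post: s(t(post))

skip = lambda post: post

def refines(s, t):
    """s is refined by t iff t establishes every postcondition s does."""
    return all(s(q) <= t(q) for q in subsets(STATES))

prog = wp_assign(lambda x: (x + 1) % 3)

# Two basic laws, checked exhaustively: skip is a unit of composition,
# and refinement is reflexive.
assert refines(seq(skip, prog), prog) and refines(prog, seq(skip, prog))
assert refines(prog, prog)
```

The point of the algebraic approach is that such laws are proved once, equationally, for all models; the enumeration here only illustrates what the predicate-transformer model looks like.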