Integration of BPM systems
New technologies have emerged to support the global economy, where, for instance, suppliers, manufacturers and retailers work together to minimise cost and
maximise efficiency. One of the technologies that has become a buzzword for many businesses is business process management, or BPM. A business process comprises activities
and tasks, the resources required to perform each task, and the business rules linking these activities and tasks. The tasks may be performed by human and/or machine actors.
Workflow provides a way of describing the order of execution and the dependent relationships between the constituting activities of short or long running processes.
Workflow allows businesses to capture not only the information but also the processes that transform that information - the process asset (Koulopoulos, 1995). Applications involving automated, human-centric and collaborative processes across organisations are
inherently different from one organisation to another. Even within the same organisation, applications are adapted over time, as ongoing change to business processes is the norm in today's dynamic business environment. The major differences lie in the specifics of business processes, which change rapidly to match the way businesses operate. In this chapter we introduce and discuss Business Process Management (BPM), with a focus on the integration of heterogeneous BPM systems across multiple organisations. We identify the problems and the main challenges, not only with regard to technologies but also in the social and cultural context, and we discuss the issues that have arisen in our search for solutions.
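A business process as described above (activities and tasks, the actors performing them, and the dependencies linking them) can be sketched as a small task graph. The sketch below is a hypothetical illustration, not any particular BPM system's API; the task names and the `actor` field are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a business process as tasks with dependencies.
@dataclass
class Task:
    name: str
    actor: str                              # "human" or "machine" performer
    depends_on: list = field(default_factory=list)

def execution_order(tasks):
    """Return task names in an order that respects dependencies (topological sort)."""
    order, done = [], set()
    def visit(t):
        if t.name in done:
            return
        for dep in t.depends_on:
            visit(dep)
        done.add(t.name)
        order.append(t.name)
    for t in tasks:
        visit(t)
    return order

# A toy order-fulfilment process mixing machine and human actors.
receive = Task("receive_order", "machine")
check = Task("check_stock", "machine", [receive])
approve = Task("approve_order", "human", [check])
ship = Task("ship_order", "machine", [approve])

print(execution_order([ship, receive]))
# prints ['receive_order', 'check_stock', 'approve_order', 'ship_order']
```

Real workflow engines add business rules, compensation, and long-running state on top of this basic ordering, which is where the cross-organisation integration challenges discussed in the chapter arise.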
When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment
Reinforcement learning (RL) algorithms face two distinct challenges: learning
effective representations of past and present observations, and determining how
actions influence future returns. Both challenges involve modeling long-term
dependencies. The transformer architecture has been very successful at solving
problems that involve long-term dependencies, including in the RL domain.
However, the underlying reason for the strong performance of Transformer-based
RL methods remains unclear: is it because they learn effective memory, or
because they perform effective credit assignment? After introducing formal
definitions of memory length and credit assignment length, we design simple
configurable tasks to measure these distinct quantities. Our empirical results
reveal that Transformers can enhance the memory capacity of RL algorithms,
scaling up to tasks that require memorizing observations many steps in the past.
However, Transformers do not improve long-term credit assignment. In summary,
our results provide an explanation for the success of Transformers in RL, while
also highlighting an important area for future research and benchmark design.
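The decoupling described above can be illustrated with a toy environment: a cue shown at the first step must be reproduced as the action n steps later, so the required memory length is n while the reward immediately follows the decisive action (short credit assignment). This is a hedged sketch of the kind of configurable task the abstract mentions, not the paper's actual benchmark; the class and its interface are invented for the example.

```python
import random

# Hypothetical "recall" task: memory length is configurable via n, while the
# reward arrives right after the decisive action (credit assignment length ~1).
class RecallTask:
    def __init__(self, memory_length, seed=0):
        self.n = memory_length
        self.rng = random.Random(seed)

    def reset(self):
        self.t = 0
        self.cue = self.rng.choice([0, 1])  # the bit the agent must remember
        return self.cue                     # only the first observation is informative

    def step(self, action):
        self.t += 1
        done = self.t >= self.n
        # Reward 1 only if the final action matches the cue seen n steps earlier.
        reward = float(action == self.cue) if done else 0.0
        return -1, reward, done             # -1 = uninformative observation
```

Sweeping `memory_length` while keeping the reward immediate isolates an agent's memory capacity from its credit-assignment ability, which is the measurement the abstract describes.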
Analysis of two-point statistics of cosmic shear: III. Covariances of shear measures made easy
In recent years cosmic shear, the weak gravitational lensing effect by the
large-scale structure of the Universe, has proven to be one of the
observational pillars on which the cosmological concordance model is founded.
Several cosmic shear statistics have been developed in order to analyze data
from surveys. For the covariances of the prevalent second-order measures we
present simple and handy formulae, valid under the assumptions of Gaussian
density fluctuations and a simple survey geometry. We also formulate these
results in the context of shear tomography, i.e. the inclusion of redshift
information, and generalize them to arbitrary data field geometries. We define
estimators for the E- and B-mode projected power spectra and show them to be
unbiased in the case of Gaussianity and a simple survey geometry. From the
covariance of these estimators we demonstrate how to derive covariances of
arbitrary combinations of second-order cosmic shear measures. We then
recalculate the power spectrum covariance for general survey geometries and
examine the bias thereby introduced on the estimators for exemplary
configurations. Our results for the covariances are considerably simpler than,
and analytically shown to be equivalent to, those of the real-space approach presented in
the first paper of this series. We find good agreement with other numerical
evaluations and confirm the general properties of the covariance matrices. The
studies of the specific survey configurations suggest that our simplified
covariances may be employed for realistic survey geometries to good
approximation.
Comment: 15 pages, including 4 figures (Fig. 3 reduced in quality); minor changes, Fig. 4 extended; published in A&A.
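Under the assumptions named in the abstract (Gaussian density fluctuations, simple survey geometry), the covariance of a band-power estimator takes the standard diagonal form Cov_{ll'} = delta_{ll'} 2 (C_l + N_l)^2 / ((2l+1) Delta_l f_sky). The sketch below is an illustrative version of that textbook formula with invented toy spectra, not the paper's exact expressions.

```python
import numpy as np

# Gaussian, simple-geometry band-power covariance (diagonal in multipole):
#   Cov_ll' = delta_ll' * 2 (C_l + N_l)^2 / ((2l + 1) * delta_l * f_sky)
# where N_l is the shape-noise contribution and f_sky the observed sky fraction.
def gaussian_band_power_cov(ell, c_ell, n_ell, delta_ell, f_sky):
    var = 2.0 * (c_ell + n_ell) ** 2 / ((2.0 * ell + 1.0) * delta_ell * f_sky)
    return np.diag(var)

ell = np.array([100.0, 300.0, 1000.0])
c_ell = 1e-9 * (ell / 100.0) ** -1.0      # toy convergence power spectrum
n_ell = np.full_like(ell, 2e-10)          # toy shape-noise level
cov = gaussian_band_power_cov(ell, c_ell, n_ell, delta_ell=50.0, f_sky=0.05)
```

The (2l+1) f_sky factor counts the independent modes per band, which is why the Gaussian covariance shrinks for wider bands and larger surveys; mode coupling from realistic geometries, treated later in the abstract, adds off-diagonal terms on top of this.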
Cosmological constraints from the capture of non-Gaussianity in Weak Lensing data
Weak gravitational lensing has become a common tool to constrain the
cosmological model. The majority of the methods to derive constraints on
cosmological parameters use second-order statistics of the cosmic shear.
Despite their success, second-order statistics are not optimal and degeneracies
between some parameters remain. Tighter constraints can be obtained if
second-order statistics are combined with a statistic that efficiently captures
non-Gaussianity. In this paper, we search for such a statistical tool
and we show that there is additional information to be extracted from
statistical analysis of the convergence maps beyond what can be obtained from
statistical analysis of the shear field. For this purpose, we have carried out
a large number of cosmological simulations along the σ8-Ωm
degeneracy, and we have considered three statistics commonly used for the
characterization of non-Gaussian features: skewness, kurtosis and peak count. To
be able to investigate non-Gaussianity directly in the shear field we have used
the aperture mass definition of these three statistics for different scales.
Then, the results have been compared with the results obtained with the same
statistics estimated in the convergence maps at the same scales. First, we show
that shear statistics give similar constraints to those given by convergence
statistics, if the same scale is considered. In addition, we find that the peak
count statistic is the best to capture non-Gaussianities in the weak lensing
field and to break the σ8-Ωm degeneracy. We show that this
statistical analysis should be conducted in the convergence maps: first,
because there exist fast algorithms to compute the convergence map for
different scales, and secondly because it offers the opportunity to denoise the
reconstructed convergence map, which improves non-Gaussian feature extraction.
Comment: Accepted for publication in MNRAS (11 pages, 5 figures, 9 tables).
A companion to a quasar at redshift 4.7
There is a growing consensus that the emergence of quasars at high redshifts
is related to the onset of galaxy formation, suggesting that the detection of
concentrations of gas accompanying such quasars should provide clues about the
early history of galaxies. Quasar companions have been recently identified at
progressively higher redshifts. Here we report observations of Lyman-α
emission (a tracer of ionised hydrogen) from the companion to a quasar at
redshift z = 4.702, corresponding to a time when the Universe was less than ten per cent
of its present age. We argue that most of the emission arises in a gaseous
nebula that has been photoionised by the quasar, but an additional component of
continuum light -perhaps quasar light scattered from dust in the companion
body, or emission from young stars within the nebula- appears necessary to
explain the observations. These observations may be indicative of the first
stages in the assembly of galaxy-sized structures.
Comment: 8 pages, 4 figures, plain LaTeX. Accepted for publication in Nature.
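The "less than ten per cent of its present age" statement can be checked with a back-of-envelope estimate: in a matter-dominated (Einstein-de Sitter) universe, cosmic age scales as t(z)/t0 = (1+z)^(-3/2). This is a rough sketch under that simplifying assumption, not the cosmology used in the paper.

```python
# Back-of-envelope check, assuming a matter-dominated (Einstein-de Sitter)
# universe, where the age of the universe scales as t(z)/t0 = (1+z)**-1.5.
z = 4.702
age_fraction = (1.0 + z) ** -1.5
print(f"t(z)/t0 ~ {age_fraction:.3f}")   # roughly 0.073, i.e. under ten per cent
```

A full Lambda-CDM calculation shifts this fraction somewhat, but dark energy matters little at z ≈ 4.7, so the order of magnitude (and the "less than ten per cent" claim) is robust.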
Self calibration of photometric redshift scatter in weak lensing surveys
Photo-z errors, especially catastrophic errors, are a major uncertainty for
precision weak lensing cosmology. We find that the shear-(galaxy number)
density and density-density cross correlation measurements between photo-z
bins, available from the same lensing surveys, contain valuable information for
self-calibration of the scattering probabilities between the true-z and photo-z
bins. The self-calibration technique we propose does not rely on cosmological
priors nor parameterization of the photo-z probability distribution function,
and preserves all of the cosmological information available from shear-shear
measurement. We estimate the calibration accuracy through the Fisher matrix
formalism. We find that, for advanced lensing surveys such as the planned stage
IV surveys, the rate of photo-z outliers can be determined with statistical
uncertainties of 0.01-1% for the galaxy samples of such surveys. Among the several sources of
calibration error that we identify and investigate, the galaxy
distribution bias is likely the most dominant systematic error, whereby
photo-z outliers have different redshift distributions and/or bias than
non-outliers from the same bin. This bias affects all photo-z calibration
techniques based on correlation measurements. Variations in galaxy bias can
produce biases in photo-z outlier rates comparable to the statistical
errors of our method, so this galaxy distribution bias may bias the
reconstructed scatters at the several-sigma level, but is unlikely to completely
invalidate the self-calibration technique.
Comment: v2: 19 pages, 10 figures. Added one figure. Expanded discussions. Accepted to MNRAS.
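The scattering probabilities between true-z and photo-z bins that the abstract describes can be written as a matrix P, where P[i, j] is the probability that a galaxy truly in bin i is assigned to photo-z bin j; the observed photo-z bin counts are then n_photo = P^T n_true. The matrix entries and galaxy counts below are invented toy numbers for illustration only.

```python
import numpy as np

# Toy scattering matrix: row i gives the probabilities that a galaxy truly in
# redshift bin i lands in each photo-z bin j (each row sums to 1). Off-diagonal
# entries are the outlier rates the self-calibration aims to determine.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.03, 0.07, 0.90]])

n_true = np.array([1.0e6, 2.0e6, 1.5e6])   # hypothetical counts per true-z bin
n_photo = P.T @ n_true                     # counts observed per photo-z bin
```

Since each row of P sums to one, total galaxy counts are conserved; the self-calibration exploits cross-correlations between photo-z bins to constrain the off-diagonal entries of P without cosmological priors.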
Optimising cosmic shear surveys to measure modifications to gravity on cosmic scales
We consider how upcoming photometric large scale structure surveys can be
optimized to measure the properties of dark energy and possible cosmic scale
modifications to General Relativity in light of realistic astrophysical and
instrumental systematic uncertainties. In particular we include flexible
descriptions of intrinsic alignments, galaxy bias and photometric redshift
uncertainties in a Fisher Matrix analysis of shear, position and position-shear
correlations, including complementary cosmological constraints from the CMB. We
study the impact of survey tradeoffs in depth versus breadth, and redshift
quality. We parameterise the results in terms of the Dark Energy Task Force
figure of merit, and deviations from General Relativity through an analogous
Modified Gravity figure of merit. We find that intrinsic alignments weaken the
dependence of figure of merit on area and that, for a fixed observing time, a
fiducial Stage IV survey plateaus above roughly 10,000 deg^2 for DE and peaks at
about 5,000 deg^2, as the relative importance of IAs at low redshift penalises
wide, shallow surveys. While reducing photometric redshift scatter improves
constraining power, the dependence is shallow. The variation in constraining
power is stronger once IAs are included and is slightly more pronounced for MG
constraints than for DE. The inclusion of intrinsic alignments and galaxy
position information reduces the required prior on photometric redshift
accuracy by an order of magnitude for both the fiducial Stage III and IV
surveys, equivalent to a factor of 100 reduction in the number of spectroscopic
galaxies required to calibrate the photometric sample.
Comment: 13 pages, 6 figures. Fixed an error in equation 19, which changes the right-hand panels of figures 1 and 2 and modifies the conclusions on the results for fixed observing time.
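The Dark Energy Task Force figure of merit used above is commonly defined, up to a convention-dependent constant, as the inverse area of the marginalised (w0, wa) error ellipse, i.e. 1/sqrt(det Cov(w0, wa)) from the inverted Fisher matrix. The sketch below shows that computation on an invented toy Fisher matrix; normalisation conventions vary between analyses.

```python
import numpy as np

# DETF-style figure of merit from a Fisher matrix: invert to get the parameter
# covariance, extract the (w0, wa) sub-block, and take 1/sqrt of its determinant
# (the inverse area of the marginalised error ellipse, up to a constant factor).
def detf_fom(fisher, i=0, j=1):
    cov = np.linalg.inv(fisher)
    sub = cov[np.ix_([i, j], [i, j])]
    return 1.0 / np.sqrt(np.linalg.det(sub))

F = np.array([[400.0, -120.0],
              [-120.0, 60.0]])            # hypothetical Fisher matrix for (w0, wa)
print(detf_fom(F))
```

Inverting before extracting the sub-block is what marginalises over the other parameters; taking the sub-block of F itself would instead fix them, giving an artificially optimistic figure of merit. A "Modified Gravity figure of merit" as in the abstract applies the same construction to the MG parameter pair.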