Akubras to Hard Hats: Easing Skills Shortages through Labour Harmonisation Strategies
This article examines skill and labour shortages within rural agricultural industries in Western Australia. It draws on primary and secondary data, including 600 survey respondents in the sector. The analysis finds that farm workers may be in short supply during the busy seasons yet unemployed during the low seasons. Consequently, it is proposed that a human capability framework be used to encourage farm owners and/or workers to consider the potential for labour-harmonisation (LH) strategies, which would allow workers to transition between working on the land during the busy seasons and in mining during the low seasons. The outcomes of the study are considered in relation to indicators of precarious work, illustrating that LH could ease labour shortages for both the farming and mining sectors while providing benefits for the respective workers, employers, and the region in general.
Computing a rectilinear shortest path amid splinegons in plane
We reduce the problem of computing a rectilinear shortest path between two given points s and t in the splinegonal domain S to the problem of computing a rectilinear shortest path between two points in a polygonal domain. As part of this, we define a polygonal domain P from S and transform a rectilinear shortest path computed in P into a path between s and t amid the splinegon obstacles in S. When S comprises h pairwise disjoint splinegons with a total of n vertices, excluding the time to compute a rectilinear shortest path amid the polygons of P, our reduction algorithm takes O(n + h lg n) time. For the special case of S comprising concave-in splinegons, we devise another algorithm in which the reduction procedure does not rely on the structures used in the algorithm that computes a rectilinear shortest path in a polygonal domain. As part of these results, we characterize a few properties of rectilinear shortest paths amid splinegons, which could be of independent interest.
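The paper's reduction requires its exact geometric constructions, but the underlying path model is easy to illustrate. Below is a minimal, self-contained Python sketch (not the paper's algorithm) that finds a rectilinear shortest path by breadth-first search on a unit grid with rasterised obstacles; the grid, s and t values are illustrative assumptions only.

    from collections import deque

    def rectilinear_shortest_path(grid, s, t):
        # Toy illustration: BFS over a unit grid finds a shortest
        # axis-parallel path amid rasterised obstacle cells. The paper
        # works on exact splinegonal/polygonal domains and is far more
        # efficient; this only demonstrates the rectilinear path model.
        rows, cols = len(grid), len(grid[0])
        prev = {s: None}
        queue = deque([s])
        while queue:
            cell = queue.popleft()
            if cell == t:                       # reconstruct the path
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nxt
                if (0 <= nx < rows and 0 <= ny < cols
                        and grid[nx][ny] == 0 and nxt not in prev):
                    prev[nxt] = cell
                    queue.append(nxt)
        return None                             # t unreachable

    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],                       # 1 marks an obstacle cell
            [0, 0, 0, 0]]
    print(rectilinear_shortest_path(grid, (0, 0), (2, 3)))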
On the Inability of Markov Models to Capture Criticality in Human Mobility
We examine the non-Markovian nature of human mobility by exposing the inability of Markov models to capture criticality in human mobility. In particular, the assumed Markovian nature of mobility was used to establish a theoretical upper bound on the predictability of human mobility (expressed as a minimum error probability limit), based on temporally correlated entropy. Since its inception, this bound has been widely used and empirically validated using Markov chains. We show that recurrent-neural architectures can achieve significantly higher predictability, surpassing this widely used upper bound. To explain this anomaly, we shed light on several underlying assumptions in previous research works that have resulted in this bias. By evaluating mobility predictability on real-world datasets, we show that human mobility exhibits scale-invariant long-range correlations, bearing similarity to a power-law decay. This contrasts with the initial assumption that human mobility follows an exponential decay. This assumption of exponential decay, coupled with Lempel-Ziv compression in computing Fano's inequality, has led to an inaccurate estimation of the predictability upper bound. We show that this approach inflates the entropy, consequently lowering the upper bound on human mobility predictability. We finally highlight that this approach tends to overlook long-range correlations in human mobility, which explains why recurrent-neural architectures designed to handle long-range structural correlations surpass the previously computed upper bound on mobility predictability.
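The bound the paper critiques is obtained by combining a Lempel-Ziv entropy estimate with Fano's inequality (the standard Song et al. pipeline). A minimal Python sketch of that pipeline; the naive O(n^2) matcher and the toy location trace are illustrative assumptions only.

    import math

    def lz_match_len(seq, i):
        # longest prefix of seq[i:] also occurring as a block in seq[:i]
        best = 0
        for j in range(i):
            k = 0
            while i + k < len(seq) and j + k < i and seq[j + k] == seq[i + k]:
                k += 1
            best = max(best, k)
        return best

    def lz_entropy_rate(seq):
        # Lempel-Ziv entropy estimate (bits per symbol):
        # S ~ n * log2(n) / sum_i Lambda_i, Lambda_i = 1 + match length
        n = len(seq)
        lambdas = [lz_match_len(seq, i) + 1 for i in range(n)]
        return n * math.log2(n) / sum(lambdas)

    def max_predictability(S, N):
        # solve S = H(p) + (1 - p) * log2(N - 1) for the bound p, with
        # N distinct locations; bisection on the branch where the
        # right-hand side decreases in p
        def rhs(p):
            return (-p * math.log2(p) - (1 - p) * math.log2(1 - p)
                    + (1 - p) * math.log2(N - 1))
        lo, hi = 1e-9, 1 - 1e-9
        for _ in range(100):
            mid = (lo + hi) / 2
            if rhs(mid) > S:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    trace = "abcabcabdabcabc"                   # toy location sequence
    S = lz_entropy_rate(trace)
    print(S, max_predictability(S, len(set(trace))))

The paper's point is that this estimate is biased for sequences with power-law long-range correlations: the Lempel-Ziv step overstates the entropy, which in turn understates the achievable predictability.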
Responsibility modelling for civil emergency planning
This paper presents a new approach to analysing and understanding civil emergency planning based on the notion of responsibility modelling combined with HAZOPS-style analysis of information requirements. Our goal is to represent complex contingency plans so that they can be more readily understood, inconsistencies can be highlighted, and vulnerabilities discovered. In this paper, we outline the framework for contingency planning in the United Kingdom and introduce the notion of responsibility models as a means of representing the key features of contingency plans. Using a case study of a flooding emergency, we illustrate our approach to responsibility modelling and suggest how it adds value to current textual contingency plans.
Mining, compressing and classifying with extensible motifs
BACKGROUND: Motif patterns of maximal saturation emerged originally in the context of pattern discovery in biomolecular sequences and have recently proven a valuable notion also in the design of data compression schemes. Informally, a motif is a string of intermittently solid and wild characters that recurs more or less frequently in an input sequence or family of sequences. Motif discovery techniques and tools tend to be computationally imposing; however, special classes of "rigid" motifs have been identified whose discovery is affordable in low polynomial time. RESULTS: In the present work, "extensible" motifs are considered, such that each sequence of gaps comes endowed with some elasticity, whereby the same pattern may be stretched to fit segments of the source that match all the solid characters but are otherwise of different lengths. A few applications of this notion are then described. In applications of data compression by textual substitution, extensible motifs are seen to bring savings on the size of the codebook, and hence to improve compression. In germane contexts, in which compressibility is used in its dual role as a basis for structural inference and classification, extensible motifs are seen to support unsupervised classification and phylogeny reconstruction. CONCLUSION: Off-line compression based on extensible motifs can be used advantageously to compress and classify biological sequences.
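For intuition, the elasticity of an extensible motif can be emulated with elastic regular-expression gaps. A minimal Python sketch (the motif encoding and sample sequence are illustrative assumptions, not the paper's data structures):

    import re

    def motif_to_regex(motif):
        # motif = alternating solid strings and (lo, hi) gap ranges,
        # e.g. ["TA", (1, 3), "GC"]: TA, then 1-3 wild chars, then GC
        parts = []
        for item in motif:
            if isinstance(item, tuple):
                parts.append(".{%d,%d}" % item)
            else:
                parts.append(re.escape(item))
        return "".join(parts)

    def occurrences(motif, text):
        # a lookahead group makes overlapping occurrences visible
        pattern = re.compile("(?=(" + motif_to_regex(motif) + "))")
        return [(m.start(), m.group(1)) for m in pattern.finditer(text)]

    # the same motif stretches to fit segments of different lengths
    print(occurrences(["TA", (1, 3), "GC"], "TAAGCTTAGGGGC"))

Here one pattern matches both TAAGC (gap of one) and TAGGGGC (gap of three), which is exactly the elasticity that lets a single codebook entry cover several source segments.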
Novel Results on the Number of Runs of the Burrows-Wheeler-Transform
The Burrows-Wheeler-Transform (BWT), a reversible string transformation, is one of the fundamental components of many current data structures in string processing. It is central in data compression, as well as in efficient query algorithms for sequence data, such as webpages, genomic and other biological sequences, or indeed any textual data. The BWT lends itself well to compression because its number of equal-letter-runs (usually referred to as r) is often considerably lower than that of the original string; in particular, it is well suited for strings with many repeated factors. In fact, much attention has been paid to the parameter r as a measure of repetitiveness, especially to evaluate the performance in terms of both space and time of compressed indexing data structures.
In this paper, we investigate ρ(v), the ratio of r(v) and of the number of runs of the BWT of the reverse of v. Kempa and Kociumaka [FOCS 2020] gave the first non-trivial upper bound as ρ(v) = O(log^2 n), for any string v of length n. However, nothing is known about the tightness of this upper bound. We present infinite families of binary strings for which ρ(v) = Θ(log n) holds, thus giving the first non-trivial lower bound on ρ(n), the maximum over all strings of length n. Our results suggest that r is not an ideal measure of the repetitiveness of the string, since the number of repeated factors is invariant between the string and its reverse. We believe that there is a more intricate relationship between the number of runs of the BWT and the string's combinatorial properties.
Towards a better solution to the shortest common supersequence problem: the deposition and reduction algorithm
BACKGROUND: The problem of finding a Shortest Common Supersequence (SCS) of a set of sequences is an important problem with applications in many areas. It is a key problem in biological sequence analysis. The SCS problem is well known to be NP-complete. Many heuristic algorithms have been proposed. Some heuristics work well on a few long sequences (as in sequence comparison applications); others work well on many short sequences (as in oligo-array synthesis). Unfortunately, most do not work well on large SCS instances with many long sequences. RESULTS: In this paper, we present a Deposition and Reduction (DR) algorithm for solving large SCS instances of biological sequences. There are two processes in our DR algorithm: a deposition process and a reduction process. The deposition process is responsible for generating a small set of common supersequences, and the reduction process shortens these common supersequences by removing some characters while preserving the common supersequence property. Our evaluation on simulated data and on real DNA and protein sequences shows that our algorithm consistently produces the best results compared to many well-known heuristic algorithms, especially on large instances. CONCLUSION: Our DR algorithm provides a partial answer to the open problem of designing efficient heuristic algorithms for the SCS problem on many long sequences. Our algorithm has a bounded approximation ratio. It is efficient in both running time and space complexity, and our evaluation shows that it is practical even for SCS problems on many long sequences.
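A minimal Python sketch of the two phases under simplifying assumptions (a plain majority-merge deposition and character-by-character reduction; the paper's actual deposition and reduction steps are more refined):

    from collections import Counter

    def is_supersequence(sup, seq):
        it = iter(sup)
        return all(ch in it for ch in seq)      # subsequence test

    def deposit(seqs):
        # greedy deposition: repeatedly emit the symbol heading the
        # most unfinished sequences, advancing those sequences
        pos = [0] * len(seqs)
        out = []
        while any(p < len(s) for p, s in zip(pos, seqs)):
            heads = Counter(s[p] for p, s in zip(pos, seqs) if p < len(s))
            ch = heads.most_common(1)[0][0]
            out.append(ch)
            pos = [p + 1 if p < len(s) and s[p] == ch else p
                   for p, s in zip(pos, seqs)]
        return "".join(out)

    def reduce_sup(sup, seqs):
        # reduction: drop any character whose removal preserves the
        # common-supersequence property
        i = 0
        while i < len(sup):
            cand = sup[:i] + sup[i + 1:]
            if all(is_supersequence(cand, s) for s in seqs):
                sup = cand
            else:
                i += 1
        return sup

    seqs = ["ACGT", "CAGT", "AAGT"]
    sup = reduce_sup(deposit(seqs), seqs)
    print(sup, all(is_supersequence(sup, s) for s in seqs))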
Long-term outcome of patients with multiple myeloma after autologous hematopoietic cell transplantation and nonmyeloablative allografting
Increased shear in the North Atlantic upper-level jet stream over the past four decades
Earth's equator-to-pole temperature gradient drives westerly mid-latitude jet streams through thermal wind balance. In the upper atmosphere, anthropogenic climate change is strengthening this meridional temperature gradient by cooling the polar lower stratosphere and warming the tropical upper troposphere, acting to strengthen the upper-level jet stream. In contrast, in the lower atmosphere, Arctic amplification of global warming is weakening the meridional temperature gradient, acting to weaken the upper-level jet stream. Therefore, trends in the speed of the upper-level jet stream represent a closely balanced tug-of-war between two competing effects at different altitudes. It is possible to isolate one of the competing effects by analysing the vertical shear (the change in wind speed with height) instead of the wind speed, but this approach has not previously been taken. Here we show that, although the zonal wind speed in the North Atlantic polar jet stream at 250 hectopascals has not changed since the start of the observational satellite era in 1979, the vertical shear has increased by 15 per cent (with a range of 11–17 per cent) according to three different reanalysis datasets. We further show that this trend is attributable to the thermal wind response to the enhanced upper-level meridional temperature gradient. Our results indicate that climate change may be having a larger impact on the North Atlantic jet stream than previously thought. The increased vertical shear is consistent with the intensification of shear-driven clear-air turbulence expected from climate change, which will affect aviation in the busy transatlantic flight corridor by creating a more turbulent flying environment for aircraft. We conclude that the effects of climate change and variability on the upper-level jet stream are being partly obscured by the traditional focus on wind speed rather than wind shear.
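For reference, the thermal wind balance invoked here takes the standard textbook form (quoted from general dynamical meteorology, not reproduced from the paper itself):

    \[
      \frac{\partial u}{\partial z} \;\approx\; -\,\frac{g}{f\,T}\,\frac{\partial T}{\partial y}
    \]

where u is the zonal wind, z height, g gravity, f the Coriolis parameter, T temperature, and ∂T/∂y the meridional temperature gradient. A strengthened upper-level temperature gradient therefore forces stronger vertical shear ∂u/∂z even where the wind speed u itself shows no trend, which is why the shear can carry a climate signal that the speed hides.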
Aviation turbulence: dynamics, forecasting, and response to climate change
Atmospheric turbulence is a major hazard in the aviation industry and can cause injuries to passengers and crew. Understanding the physical and dynamical generation mechanisms of turbulence aids the development of new forecasting algorithms and therefore reduces the impact that turbulence has on the aviation industry. The scope of this paper is to review the dynamics of aviation turbulence, its response to climate change, and current forecasting methods at the cruising altitude of aircraft. Aviation-affecting turbulence comes from three main sources: vertical wind shear instabilities, convection, and mountain waves. Understanding these features helps researchers to develop better turbulence diagnostics. Recent research suggests that turbulence will increase in frequency and strength with climate change, and turbulence forecasting may therefore become more important in the future. The current methods of forecasting are unable to predict every turbulence event, and research is ongoing to find the best solution to this problem by combining turbulence predictors and using ensemble forecasts to increase skill. The skill of operational turbulence forecasts has increased steadily over recent decades, mirroring improvements in our understanding. However, more work is needed, ideally in collaboration with the aviation industry, to improve observations and increase forecast skill, helping to maintain and enhance aviation safety standards in the future.