    Akubras to Hard Hats: Easing Skills Shortages through Labour Harmonisation Strategies

    This article examines skill and labour shortages within rural agricultural industries in Western Australia. It draws on primary and secondary data, including 600 survey respondents in the sector. The study finds that farm workers may be in short supply during the busy seasons yet unemployed during the low seasons. Consequently, it proposes that a human capability framework be used to encourage farm owners and/or workers to consider labour-harmonisation (LH) strategies that would allow workers to transition between working on the land during the busy seasons and in mining during the low seasons. The outcomes of the study are considered in relation to indicators of precarious work, illustrating that LH could ease labour shortages for both the farming and mining sectors while providing benefits for the respective workers, employers, and the region in general.

    Computing a rectilinear shortest path amid splinegons in plane

    We reduce the problem of computing a rectilinear shortest path between two given points s and t in the splinegonal domain \calS to the problem of computing a rectilinear shortest path between two points in a polygonal domain. As part of this, we define a polygonal domain \calP from \calS and transform a rectilinear shortest path computed in \calP to a path between s and t amid the splinegon obstacles in \calS. When \calS comprises h pairwise disjoint splinegons with a total of n vertices, excluding the time to compute a rectilinear shortest path amid polygons in \calP, our reduction algorithm takes O(n + h \lg{n}) time. For the special case of \calS comprising concave-in splinegons, we devise another algorithm in which the reduction procedure does not rely on the structures used in the algorithm for computing a rectilinear shortest path in a polygonal domain. As part of this work, we characterize a few properties of rectilinear shortest paths amid splinegons which could be of independent interest.
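
    The reduction itself is geometric and beyond a short listing, but a toy discrete analogue may help fix intuition for rectilinear (L1) shortest paths around obstacles: breadth-first search on a unit grid whose blocked cells stand in for rasterised obstacles. This is a simplified stand-in, not the paper's algorithm; the grid size, obstacle, and coordinates below are invented for illustration.

```python
from collections import deque

def rectilinear_shortest_path(blocked, start, goal, width, height):
    """BFS on a unit grid: every move is axis-parallel with unit cost, so the
    first time `goal` is reached the distance equals the length of a
    rectilinear (L1) shortest path around the blocked cells."""
    if start in blocked or goal in blocked:
        return None
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (x, y), dist = queue.popleft()
        if (x, y) == goal:
            return dist
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in blocked and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(((nx, ny), dist + 1))
    return None

# A 7x5 grid with a vertical wall at x = 3; the path must bend around it.
wall = {(3, y) for y in range(4)}
print(rectilinear_shortest_path(wall, (0, 0), (6, 0), 7, 5))  # prints 14
```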

    On the Inability of Markov Models to Capture Criticality in Human Mobility

    We examine the non-Markovian nature of human mobility by exposing the inability of Markov models to capture criticality in human mobility. In particular, the assumed Markovian nature of mobility was used to establish a theoretical upper bound on the predictability of human mobility (expressed as a minimum error probability limit), based on temporally correlated entropy. Since its inception, this bound has been widely used and empirically validated using Markov chains. We show that recurrent-neural architectures can achieve significantly higher predictability, surpassing this widely used upper bound. In order to explain this anomaly, we shed light on several underlying assumptions in previous research works that have resulted in this bias. By evaluating the mobility predictability on real-world datasets, we show that human mobility exhibits scale-invariant long-range correlations, bearing similarity to a power-law decay. This is in contrast to the initial assumption that human mobility follows an exponential decay. This assumption of exponential decay, coupled with Lempel-Ziv compression in computing Fano's inequality, has led to an inaccurate estimation of the predictability upper bound. We show that this approach inflates the entropy, consequently lowering the upper bound on human mobility predictability. We finally highlight that this approach tends to overlook long-range correlations in human mobility. This explains why recurrent-neural architectures that are designed to handle long-range structural correlations surpass the previously computed upper bound on mobility predictability.
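
    As context for the argument, the sketch below reproduces the standard estimation pipeline the abstract critiques: a Lempel-Ziv style estimate of the temporally correlated entropy plugged into Fano's inequality to obtain a predictability upper bound. The function names, the crude O(n^2) estimator, and the toy trajectory are ours and purely illustrative; this is not the authors' code or data.

```python
import math

def lz_entropy_rate(seq):
    """Lempel-Ziv style entropy-rate estimate (bits/symbol).

    Lambda_i is the length of the shortest substring starting at position i
    that never appears in seq[:i]; the estimate is n * log2(n) / sum(Lambda_i).
    `seq` is a string whose characters encode visited locations."""
    n = len(seq)
    total = 0
    for i in range(n):
        k = 1
        while i + k <= n and seq[i:i + k] in seq[:i]:
            k += 1
        total += k
    return n * math.log2(n) / total

def predictability_bound(entropy_bits, n_locations):
    """Solve Fano's inequality H(p) + (1 - p) * log2(N - 1) = S for the
    maximum predictability p; the left-hand side decreases in p, so bisect."""
    def fano(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(n_locations - 1)
    lo, hi = 1.0 / n_locations, 1.0 - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fano(mid) > entropy_bits else (lo, mid)
    return (lo + hi) / 2

# Toy trajectory over 3 locations, just to exercise the pipeline end to end.
trajectory = "ABABABCABABABCABABABC"
s_est = lz_entropy_rate(trajectory)
print(s_est, predictability_bound(s_est, n_locations=3))
```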

    Responsibility modelling for civil emergency planning

    This paper presents a new approach to analysing and understanding civil emergency planning based on the notion of responsibility modelling combined with HAZOPS-style analysis of information requirements. Our goal is to represent complex contingency plans so that they can be more readily understood, inconsistencies can be highlighted, and vulnerabilities discovered. In this paper, we outline the framework for contingency planning in the United Kingdom and introduce the notion of responsibility models as a means of representing the key features of contingency plans. Using a case study of a flooding emergency, we illustrate our approach to responsibility modelling and suggest how it adds value to current textual contingency plans.
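
    As a hedged illustration of what a machine-readable responsibility model might look like, the sketch below encodes responsibilities, the agents that hold them, and the information they require or provide, then flags two simple kinds of vulnerability: an unassigned responsibility and a required resource that nobody provides. The classes, agency names, and checks are hypothetical and are not the paper's notation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Responsibility:
    name: str
    agent: Optional[str]                           # None means nobody is assigned
    requires: list = field(default_factory=list)   # information/resources needed
    provides: list = field(default_factory=list)   # information/resources produced

def vulnerabilities(model):
    """Flag unassigned responsibilities and requirements nobody satisfies,
    a crude analogue of HAZOPS-style 'not provided' checks."""
    provided = {p for r in model for p in r.provides}
    issues = []
    for r in model:
        if r.agent is None:
            issues.append(f"'{r.name}' has no responsible agent")
        for need in r.requires:
            if need not in provided:
                issues.append(f"'{r.name}' needs '{need}', which nobody provides")
    return issues

# Hypothetical fragment of a flood-response plan.
plan = [
    Responsibility("Issue flood warning", "Environment Agency", provides=["warning"]),
    Responsibility("Evacuate residents", "Police", requires=["warning", "evacuation routes"]),
    Responsibility("Publish evacuation routes", None),
]
print(vulnerabilities(plan))
```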

    Mining, compressing and classifying with extensible motifs

    BACKGROUND: Motif patterns of maximal saturation emerged originally in contexts of pattern discovery in biomolecular sequences and have recently proven a valuable notion also in the design of data compression schemes. Informally, a motif is a string of intermittently solid and wild characters that recurs more or less frequently in an input sequence or family of sequences. Motif discovery techniques and tools tend to be computationally imposing; however, special classes of "rigid" motifs have been identified whose discovery is affordable in low polynomial time. RESULTS: In the present work, "extensible" motifs are considered, such that each sequence of gaps comes endowed with some elasticity, whereby the same pattern may be stretched to fit segments of the source that match all the solid characters but are otherwise of different lengths. A few applications of this notion are then described. In applications of data compression by textual substitution, extensible motifs are seen to bring savings on the size of the codebook, and hence to improve compression. In germane contexts, in which compressibility is used in its dual role as a basis for structural inference and classification, extensible motifs are seen to support unsupervised classification and phylogeny reconstruction. CONCLUSION: Off-line compression based on extensible motifs can be used advantageously to compress and classify biological sequences.
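
    The matching side of extensible motifs is straightforward to sketch: a motif is a sequence of solid blocks separated by gaps of bounded elasticity, so it can be translated into a regular expression whose quantifiers carry the gap bounds. The function name and toy motif below are ours, and the sketch covers matching only; the paper's contribution is the discovery, compression, and classification machinery built on top of such motifs.

```python
import re

def motif_to_regex(solids, gaps):
    """Translate an extensible motif into a regular expression.

    `solids` is a list of solid (exact) blocks, e.g. ["TA", "C"];
    `gaps` is a list of (min_len, max_len) pairs, one per gap between
    consecutive solid blocks, giving each gap its elasticity."""
    assert len(gaps) == len(solids) - 1
    parts = [re.escape(solids[0])]
    for (lo, hi), block in zip(gaps, solids[1:]):
        parts.append(".{%d,%d}" % (lo, hi))
        parts.append(re.escape(block))
    return "".join(parts)

# Solid blocks "TA" and "C" separated by a gap of 1-3 arbitrary characters:
# the same motif stretches to fit segments of different lengths.
pattern = re.compile(motif_to_regex(["TA", "C"], [(1, 3)]))
print([m.span() for m in pattern.finditer("GGTAGGCATTTACC")])  # [(2, 7), (9, 13)]
```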

    Novel Results on the Number of Runs of the Burrows-Wheeler-Transform

    The Burrows-Wheeler-Transform (BWT), a reversible string transformation, is one of the fundamental components of many current data structures in string processing. It is central in data compression, as well as in efficient query algorithms for sequence data, such as webpages, genomic and other biological sequences, or indeed any textual data. The BWT lends itself well to compression because its number of equal-letter-runs (usually referred to as r) is often considerably lower than that of the original string; in particular, it is well suited for strings with many repeated factors. In fact, much attention has been paid to the r parameter as a measure of repetitiveness, especially to evaluate the performance in terms of both space and time of compressed indexing data structures. In this paper, we investigate \rho(v), the ratio of r and the number of runs of the BWT of the reverse of v. Kempa and Kociumaka [FOCS 2020] gave the first non-trivial upper bound \rho(v) = O(\log^2 n), for any string v of length n. However, nothing is known about the tightness of this upper bound. We present infinite families of binary strings for which \rho(v) = \Theta(\log n) holds, thus giving the first non-trivial lower bound on \rho(n), the maximum over all strings of length n. Our results suggest that r is not an ideal measure of the repetitiveness of the string, since the number of repeated factors is invariant between the string and its reverse. We believe that there is a more intricate relationship between the number of runs of the BWT and the string's combinatorial properties.
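
    The quantities r and \rho(v) are easy to reproduce on toy inputs. The sketch below builds the BWT naively from sorted rotations (appending a sentinel, one common convention on which the exact run counts depend) and compares the number of equal-letter runs for a string and its reverse; the example string is ours, not one of the paper's constructed families.

```python
def bwt(s, sentinel="$"):
    """Burrows-Wheeler transform via sorted rotations (an O(n^2 log n) sketch)."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    """Number of maximal equal-letter runs, the parameter r of the abstract."""
    return sum(1 for i, c in enumerate(s) if i == 0 or c != s[i - 1])

v = "abaababaabaab"                 # toy binary string, not from the paper
r_fwd, r_rev = runs(bwt(v)), runs(bwt(v[::-1]))
print(r_fwd, r_rev, r_fwd / r_rev)  # the ratio rho(v) studied in the paper
```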

    Towards a better solution to the shortest common supersequence problem: the deposition and reduction algorithm

    BACKGROUND: The problem of finding a Shortest Common Supersequence (SCS) of a set of sequences is an important problem with applications in many areas. It is a key problem in biological sequence analysis. The SCS problem is well known to be NP-complete. Many heuristic algorithms have been proposed. Some heuristics work well on a few long sequences (as in sequence comparison applications); others work well on many short sequences (as in oligo-array synthesis). Unfortunately, most do not work well on large SCS instances where there are many long sequences. RESULTS: In this paper, we present a Deposition and Reduction (DR) algorithm for solving large SCS instances of biological sequences. There are two processes in our DR algorithm: a deposition process and a reduction process. The deposition process is responsible for generating a small set of common supersequences; the reduction process shortens these common supersequences by removing some characters while preserving the common supersequence property. Our evaluation on simulated data and real DNA and protein sequences shows that our algorithm consistently produces the best results compared to many well-known heuristic algorithms, especially on large instances. CONCLUSION: Our DR algorithm provides a partial answer to the open problem of designing an efficient heuristic algorithm for the SCS problem on many long sequences. Our algorithm has a bounded approximation ratio. The algorithm is efficient in both running time and space complexity, and our evaluation shows that it is practical even for SCS problems on many long sequences.
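
    A much-simplified stand-in for the two phases can be sketched as follows: a greedy, majority-merge style deposition that repeatedly emits the most frequent front character, followed by a reduction pass that deletes characters as long as the result remains a common supersequence. This is illustrative only; the paper's DR algorithm uses a more refined deposition and comes with a bounded approximation ratio, which this greedy sketch does not claim.

```python
from collections import Counter

def is_supersequence(sup, seqs):
    """True if every sequence in `seqs` is a subsequence of `sup`."""
    for s in seqs:
        it = iter(sup)
        if not all(c in it for c in s):   # `c in it` consumes the iterator
            return False
    return True

def deposit(seqs):
    """Majority-merge style deposition: emit the most common front character
    and advance every sequence that currently starts with it."""
    pos = [0] * len(seqs)
    sup = []
    while any(p < len(s) for p, s in zip(pos, seqs)):
        fronts = Counter(s[p] for p, s in zip(pos, seqs) if p < len(s))
        c, _ = fronts.most_common(1)[0]
        sup.append(c)
        pos = [p + 1 if p < len(s) and s[p] == c else p for p, s in zip(pos, seqs)]
    return "".join(sup)

def reduce_sup(sup, seqs):
    """Reduction: greedily drop characters while the common supersequence
    property is preserved."""
    i = 0
    while i < len(sup):
        cand = sup[:i] + sup[i + 1:]
        if is_supersequence(cand, seqs):
            sup = cand
        else:
            i += 1
    return sup

seqs = ["ACGT", "AGT", "CGTT"]
sup = reduce_sup(deposit(seqs), seqs)
print(sup, is_supersequence(sup, seqs))  # e.g. ACGTT True
```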