
    Refactoring Process Models in Large Process Repositories.

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time, the respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced and complexity is increased. If no continuous investment is made in keeping models simple, changes become increasingly costly and error-prone. Though refactoring techniques are widely used in software engineering to address related problems, this does not yet constitute state of the art in business process management. Process designers either have to refactor process models by hand or cannot apply respective techniques at all. This paper proposes a set of behaviour-preserving techniques for refactoring large process repositories. This enables process designers to effectively deal with model complexity by making process models easier to understand and maintain.
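
    As a rough illustration of the kind of redundancy-removing, behaviour-preserving refactoring the abstract describes, the following minimal sketch extracts a duplicated activity sequence from two purely sequential process models into a shared subprocess reference. The list-based model representation and the CALL: convention are assumptions made here for illustration; real PAIS refactorings operate on full process graphs and must verify behaviour preservation (e.g., trace equivalence).

        from difflib import SequenceMatcher

        def extract_shared_fragment(model_a, model_b, min_len=2):
            # Locate the longest common contiguous activity sequence.
            m = SequenceMatcher(None, model_a, model_b).find_longest_match(
                0, len(model_a), 0, len(model_b))
            if m.size < min_len:
                return model_a, model_b, None
            fragment = model_a[m.a:m.a + m.size]
            ref = ["CALL:shared_fragment"]  # reference to the extracted subprocess
            new_a = model_a[:m.a] + ref + model_a[m.a + m.size:]
            new_b = model_b[:m.b] + ref + model_b[m.b + m.size:]
            return new_a, new_b, fragment

        order = ["receive order", "check stock", "check credit", "ship", "invoice"]
        claim = ["register claim", "check stock", "check credit", "settle"]
        a, b, frag = extract_shared_fragment(order, claim)
        print(frag)  # ['check stock', 'check credit'] becomes a shared subprocess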

    Improving Classification When a Class Hierarchy is Available Using a Hierarchy-Based Prior

    We introduce a new method for building classification models when we have prior knowledge of how the classes can be arranged in a hierarchy, based on how easily they can be distinguished. The new method uses a Bayesian form of the multinomial logit (MNL, a.k.a. "softmax") model, with a prior that introduces correlations between the parameters for classes that are nearby in the tree. We compare the performance on simulated data of the new method, the ordinary MNL model, and a model that uses the hierarchy in a different way. We also test the new method on a document labelling problem, and find that it performs better than the other methods, particularly when the amount of training data is small.
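
    One way to realize such a tree-structured prior — a sketch consistent with the abstract, not necessarily the paper's exact parameterization — is to build each class's coefficient vector from effects attached to the nodes on its path from the root:

        P(y = k \mid x) = \frac{\exp(\alpha_k + \beta_k^{\top} x)}{\sum_j \exp(\alpha_j + \beta_j^{\top} x)},
        \qquad \beta_k = \sum_{n \in \mathrm{path}(k)} \phi_n,
        \qquad \phi_n \sim \mathcal{N}(0, \sigma_n^2 I)

    Two classes that share ancestors in the tree share the corresponding \phi_n terms, so their coefficient vectors are correlated a priori; the longer the shared path, the stronger the correlation between the hard-to-distinguish classes.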

    Compact routing on the Internet AS-graph

    Compact routing algorithms have been presented as candidates for scalable routing in the future Internet, achieving near-shortest path routing with considerably less forwarding state than the Border Gateway Protocol. Prior analyses have shown strong performance on power-law random graphs, but to better understand the applicability of compact routing algorithms in the context of the Internet, they must be evaluated against real-world data. To this end, we present the first systematic analysis of the behaviour of the Thorup-Zwick (TZ) and Brady-Cowen (BC) compact routing algorithms on snapshots of the Internet Autonomous System graph spanning a 14 year period. Both algorithms are shown to offer consistently strong performance on the AS graph, producing small forwarding tables with low stretch for all snapshots tested. We find that the average stretch for the TZ algorithm increases slightly as the AS graph has grown, while previous results on synthetic data suggested the opposite would be true. We also present new results to show which features of the algorithms contribute to their strong performance on these graphs.
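
    The flavour of such an evaluation can be sketched with a simplified landmark-based scheme in the spirit of Thorup-Zwick: sample roughly sqrt(n) landmarks, precompute distances from each landmark, and answer queries by routing via the landmark nearest to one endpoint. This sketch omits the bunches/clusters the real TZ scheme uses for exact short-range routes, so it only upper-bounds the stretch the full algorithm achieves; the graph model and sizes are illustrative assumptions, not the paper's AS snapshots.

        import random
        import networkx as nx

        rng = random.Random(0)
        G = nx.barabasi_albert_graph(1000, 2, seed=0)  # power-law-ish stand-in graph
        nodes = list(G.nodes)
        landmarks = rng.sample(nodes, int(len(nodes) ** 0.5))

        # BFS distances from each landmark to every node.
        dist = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}

        def estimate(u, v):
            # Route via the landmark nearest to u or to v, whichever is shorter.
            best = float("inf")
            for w in (u, v):
                lw = min(landmarks, key=lambda l: dist[l][w])
                best = min(best, dist[lw][u] + dist[lw][v])
            return best

        stretches = []
        for _ in range(200):
            u, v = rng.sample(nodes, 2)
            stretches.append(estimate(u, v) / nx.shortest_path_length(G, u, v))

        print("mean stretch:", sum(stretches) / len(stretches))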

    Keeping the Cost of Process Change Low through Refactoring

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time, the respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced and complexity is increased. If no continuous investment is made in keeping models simple, changes become increasingly costly and error-prone. Although refactoring techniques are widely used in software engineering to address related problems, this does not yet constitute state of the art in business process management. Consequently, process designers either have to refactor process models by hand or cannot apply respective techniques at all. In this paper we propose a set of behaviour-preserving techniques for refactoring large process repositories. The proposed refactorings enable process designers to effectively deal with model complexity by making process models easier to change, less error-prone, and easier to understand.

    Optimal Inference in Crowdsourced Classification via Belief Propagation

    Crowdsourcing systems are popular for solving large-scale labelling tasks with low-paid workers. We study the problem of recovering the true labels from the possibly erroneous crowdsourced labels under the popular Dawid-Skene model. To address this inference problem, several algorithms have recently been proposed, but the best known guarantee is still significantly larger than the fundamental limit. We close this gap by introducing a tighter lower bound on the fundamental limit and proving that Belief Propagation (BP) exactly matches this lower bound. The guaranteed optimality of BP is the strongest in the sense that it is information-theoretically impossible for any other algorithm to correctly label a larger fraction of the tasks. Experimental results suggest that BP is close to optimal for all regimes considered and improves upon competing state-of-the-art algorithms. (This article is partially based on preliminary results published in the proceedings of the 33rd International Conference on Machine Learning, ICML 2016.)
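
    For intuition, the one-coin Dawid-Skene model for binary tasks can be simulated and fitted in a few lines. The sketch below uses a simple EM estimator as a stand-in for the paper's Belief Propagation algorithm (the generative model is the same; the inference method is not), with all sizes and reliabilities chosen arbitrarily for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_tasks, n_workers = 500, 30
        true = rng.choice([-1, 1], size=n_tasks)         # latent true labels
        p = rng.uniform(0.55, 0.95, size=n_workers)      # worker reliabilities
        correct = rng.random((n_tasks, n_workers)) < p   # where workers answer right
        A = np.where(correct, true[:, None], -true[:, None])

        # EM for the one-coin Dawid-Skene model.
        q = np.full(n_workers, 0.7)  # initial reliability guess
        for _ in range(50):
            # E-step: posterior prob that each true label is +1 (uniform prior).
            logit = (A * (np.log(q) - np.log(1 - q))[None, :]).sum(axis=1)
            post = 1.0 / (1.0 + np.exp(-logit))
            # M-step: expected fraction of correct answers per worker.
            hits = post[:, None] * (A == 1) + (1 - post)[:, None] * (A == -1)
            q = np.clip(hits.mean(axis=0), 1e-3, 1 - 1e-3)

        est = np.where(post > 0.5, 1, -1)
        print("label recovery accuracy:", (est == true).mean())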

    Analysis of the retail survey of products that carry welfare-claims and of non-retailer led assurance schemes whose logos accompany welfare-claims.

    This report serves two aims. First, it contains an analysis of the retail audit (sub-deliverable 1.2.2.1) of welfare-friendly food products in the 6 study countries, giving the results of an emerging comparative analysis of the 'market' for welfare-friendly food products in those countries. Second, it outlines the 'non-retailer' led schemes whose products occurred in the study. In this way, an emerging picture of the actual product ranges that make claims about welfare-friendliness is drawn, based on fieldwork carried out from November 2004 until April 2005. The report also explores how the different legislative and voluntary standards on animal welfare compare across countries and how these actively advertise their welfare-friendlier component to consumers through food packaging.

    Generative Adversarial Networks for Financial Trading Strategies Fine-Tuning and Combination

    Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune the strategy, or discover how to combine weak signals in novel alpha-creating ways. Both aspects, namely fine-tuning and combination, have been extensively researched using several methods, but emerging techniques such as Generative Adversarial Networks can have an impact on such aspects. Therefore, our work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. To this purpose, we provide a full methodology for: (i) the training and selection of a cGAN for time series data; (ii) how each sample can be used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we have designed an experiment with multiple trading strategies, encompassing 579 assets. We compared cGAN with an ensemble scheme and model validation methods, both suited to time series. Our results suggest that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when the traditional techniques fail to generate any alpha.
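
    Step (ii) — using generated samples for strategy calibration — can be sketched as follows. The block-bootstrap sample_paths below is a crude stand-in for a trained cGAN generator (the paper trains an actual cGAN), and the moving-average strategy, placeholder return series, and parameter grid are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        hist = rng.normal(0.0003, 0.01, size=2000)  # placeholder daily returns

        def sample_paths(history, n_paths, horizon, block=20):
            # Block-bootstrap stand-in for drawing paths from a trained cGAN.
            paths = np.empty((n_paths, horizon))
            for i in range(n_paths):
                chunks, total = [], 0
                while total < horizon:
                    s = rng.integers(0, len(history) - block)
                    chunks.append(history[s:s + block])
                    total += block
                paths[i] = np.concatenate(chunks)[:horizon]
            return paths

        def strategy_pnl(rets, window):
            # Moving-average momentum: hold the sign of the trailing mean return.
            pos = np.sign(np.convolve(rets, np.ones(window) / window, mode="valid"))
            return pos[:-1] * rets[window:]

        def sharpe(x):
            return x.mean() / (x.std() + 1e-12) * np.sqrt(252)

        paths = sample_paths(hist, n_paths=200, horizon=250)
        scores = {w: np.mean([sharpe(strategy_pnl(p, w)) for p in paths])
                  for w in (5, 10, 20, 50)}
        best = max(scores, key=scores.get)
        print("calibrated window:", best, "avg Sharpe:", round(scores[best], 2))

    Picking the parameter with the best average Sharpe ratio across many synthetic paths, rather than on the single historical path, is the calibration idea; combining the signals of several per-sample winners would be the ensemble counterpart (step iii).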