
    Detecting the Effects of Changes on the Compliance of Cross-organizational Business Processes

    An emerging challenge for collaborating business partners is to properly define and evolve their cross-organizational processes with respect to imposed global compliance rules. Since compliance verification is known to be very costly, reducing the number of compliance rules to be rechecked in the context of process changes is crucial. As opposed to intra-organizational processes, however, change effects cannot be easily assessed in such distributed scenarios, where partners only provide restricted public views and assertions on their private processes. Even if local process changes are invisible to partners, they might affect the compliance of the cross-organizational process with these rules. This paper provides an approach for ensuring compliance when evolving a cross-organizational process. For this purpose, we construct qualified dependency graphs expressing relationships between process activities, process assertions, and compliance rules. Based on such graphs, we are able to determine the subset of compliance rules that might be affected by a particular change. Altogether, our approach increases the efficiency of compliance checking in cross-organizational settings.
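
    A minimal sketch of the change-impact idea described above, assuming a simple in-memory graph; the node names, edge semantics, and traversal are illustrative and not the paper's formalism:

```python
# Hypothetical qualified dependency graph linking activities, assertions, and rules.
from collections import defaultdict, deque

class DependencyGraph:
    def __init__(self):
        # adjacency: node -> set of nodes that depend on it
        self.dependents = defaultdict(set)
        self.rules = set()

    def add_rule(self, rule):
        self.rules.add(rule)

    def add_dependency(self, source, target):
        """Record that `target` (e.g. an assertion or rule) depends on `source`."""
        self.dependents[source].add(target)

    def affected_rules(self, changed_activity):
        """Return the subset of compliance rules reachable from the changed activity."""
        seen, queue = set(), deque([changed_activity])
        while queue:
            node = queue.popleft()
            for nxt in self.dependents[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen & self.rules

# Illustrative usage: only rules reachable from the changed activity are rechecked.
g = DependencyGraph()
g.add_rule("R1: ship only after payment")
g.add_dependency("activity: receive payment", "assertion: payment confirmed")
g.add_dependency("assertion: payment confirmed", "R1: ship only after payment")
print(g.affected_rules("activity: receive payment"))  # {'R1: ship only after payment'}
```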

    Active Learning and Proofreading for Delineation of Curvilinear Structures

    Many state-of-the-art delineation methods rely on supervised machine learning algorithms. As a result, they require manually annotated training data, which is tedious to obtain. Furthermore, even minor classification errors may significantly affect the topology of the final result. In this paper we propose a generic approach to addressing both of these problems by taking into account the influence of a potential misclassification on the resulting delineation. In an Active Learning context, we identify the parts of linear structures that should be annotated first in order to train a classifier effectively. In a proofreading context, we similarly find the regions of the resulting reconstruction that should be verified first to obtain a nearly perfect result. In both cases, by focusing the attention of the human expert on potential classification mistakes in the most critical parts of the delineation, we reduce the amount of required supervision. We demonstrate the effectiveness of our approach on microscopy images depicting blood vessels and neurons.
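
    A minimal sketch of how such a selection criterion could look, assuming each candidate path already carries a classifier probability and a precomputed estimate of how badly a wrong label would distort the topology; the scoring formula and names are illustrative only, not the paper's exact criterion:

```python
def selection_score(prob, topological_impact):
    """Combine classifier uncertainty (largest at prob=0.5) with the estimated
    effect a wrong label would have on the final delineation."""
    uncertainty = 1.0 - abs(prob - 0.5) * 2.0
    return uncertainty * topological_impact

def pick_queries(candidates, k):
    """candidates: list of (path_id, probability, topological_impact)."""
    ranked = sorted(candidates,
                    key=lambda c: selection_score(c[1], c[2]),
                    reverse=True)
    return [path_id for path_id, _, _ in ranked[:k]]

# Illustrative usage: annotate (or proofread) the highest-scoring paths first.
candidates = [("p1", 0.52, 3.0), ("p2", 0.95, 5.0), ("p3", 0.48, 0.5)]
print(pick_queries(candidates, k=2))  # ['p1', 'p2'] under this toy scoring
```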

    NetMets: software for quantifying and visualizing errors in biological network segmentation

    One of the major goals in biomedical image processing is accurate segmentation of networks embedded in volumetric data sets. Biological networks are composed of a meshwork of thin filaments that span large volumes of tissue. Examples of these structures include neurons and microvasculature, which can take the form of both hierarchical trees and fully connected networks, depending on the imaging modality and resolution. Network function depends on both the geometric structure and connectivity. Therefore, there is considerable demand for algorithms that segment biological networks embedded in three-dimensional data. While a large number of tracking and segmentation algorithms have been published, most of these do not generalize well across data sets. One of the major reasons for the lack of general-purpose algorithms is the limited availability of metrics that can be used to quantitatively compare their effectiveness against a pre-constructed ground truth. In this paper, we propose a robust metric for measuring and visualizing the differences between network models. Our algorithm takes into account both geometry and connectivity to measure network similarity. These metrics are then mapped back onto an explicit model for visualization.
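
    A minimal sketch of comparing two filament networks on both geometry and connectivity, assuming each network is given as node coordinates plus an edge list; the matching tolerance and error counts are illustrative and not the NetMets metric itself:

```python
import math

def nearest(point, nodes):
    """Return (node_id, distance) of the node closest to `point`."""
    return min(((nid, math.dist(point, p)) for nid, p in nodes.items()),
               key=lambda x: x[1])

def compare_networks(gt_nodes, gt_edges, test_nodes, test_edges, tol=2.0):
    """Count unmatched test nodes (geometric error) and test edges whose matched
    endpoints are not connected in the ground truth (connectivity error)."""
    match, geom_errors = {}, 0
    for nid, p in test_nodes.items():
        gt_id, d = nearest(p, gt_nodes)
        if d <= tol:
            match[nid] = gt_id
        else:
            geom_errors += 1
    gt_edge_set = {frozenset(e) for e in gt_edges}
    conn_errors = sum(
        1 for a, b in test_edges
        if a in match and b in match
        and frozenset((match[a], match[b])) not in gt_edge_set)
    return geom_errors, conn_errors

# Illustrative usage on tiny toy networks.
gt_nodes = {1: (0, 0, 0), 2: (10, 0, 0), 3: (10, 10, 0)}
gt_edges = [(1, 2), (2, 3)]
test_nodes = {"a": (0.5, 0, 0), "b": (9.5, 0.5, 0), "c": (9.5, 9.5, 0)}
test_edges = [("a", "b"), ("a", "c")]
print(compare_networks(gt_nodes, gt_edges, test_nodes, test_edges))  # (0, 1)
```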

    Process Mining for Six Sigma

    Process mining offers a set of techniques for gaining data-based insights into business processes from event logs. The literature acknowledges the potential benefits of using process mining techniques in Six Sigma-based process improvement initiatives. However, a guideline explicitly dedicated to how process mining can be systematically used in Six Sigma initiatives is lacking. To address this gap, the Process Mining for Six Sigma (PMSS) guideline has been developed to support organizations in systematically using process mining techniques aligned with the DMAIC (Define-Measure-Analyze-Improve-Control) model of Six Sigma. Following a design science research methodology, PMSS and its tool support have been developed iteratively in close collaboration with experts in Six Sigma and process mining, and evaluated by means of focus groups, demonstrations, and interviews with industry experts. The results of the evaluations indicate that PMSS is useful as a guideline to support Six Sigma-based process improvement activities. It offers a structured guideline for practitioners by extending the DMAIC-based standard operating procedure. PMSS can help increase the efficiency and effectiveness of Six Sigma-based process improvement efforts. This work extends the body of knowledge in the fields of process mining and Six Sigma, and helps close the gap between them. Hence, it contributes to the broad field of quality management.
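
    An illustrative sketch of how process-mining activities might be laid out along the DMAIC phases; this is a hedged reading of the general idea, not the PMSS guideline itself:

```python
# Hypothetical phase-to-activity mapping; the activity descriptions are generic
# process-mining tasks, not the steps prescribed by PMSS.
DMAIC_PROCESS_MINING = {
    "Define":  ["scope the process and extract an event log from the supporting systems"],
    "Measure": ["discover the as-is process model", "measure throughput times from the log"],
    "Analyze": ["check conformance against the reference model", "locate bottlenecks and rework loops"],
    "Improve": ["pilot the redesigned process", "compare event logs before and after the change"],
    "Control": ["monitor new event data against the improved model to detect drift"],
}

def checklist(phase):
    """Return the suggested process-mining activities for a DMAIC phase."""
    return DMAIC_PROCESS_MINING.get(phase, [])

print(checklist("Analyze"))
```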

    Multiple Object Tracking Using K-Shortest Paths Optimization


    Reconstructing Curvilinear Networks using Path Classifiers and Integer Programming


    Reconstructing Evolving Tree Structures in Time Lapse Sequences

    We propose an approach to reconstructing tree structures that evolve over time in 2D images and 3D image stacks, such as neuronal axons or plant branches. Instead of reconstructing structures in each image independently, we do so for all images simultaneously to take advantage of temporal-consistency constraints. We show that this problem can be formulated as a Quadratic Mixed Integer Program and solved efficiently. The outcome of our approach is a framework that provides substantial improvements in reconstructions over traditional single time-instance formulations. Furthermore, an added benefit of our approach is the ability to automatically detect places where significant changes have occurred over time, which is challenging when considering large amounts of data.
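
    A brute-force toy illustrating the joint-over-time idea, assuming per-branch image-evidence scores and a penalty for branches that appear or disappear between consecutive frames; the cost terms are illustrative, and the paper formulates this as a Quadratic Mixed Integer Program solved with a proper solver rather than enumeration:

```python
from itertools import product

def total_cost(selections, evidence, consistency_weight=1.0):
    """selections: list over time of 0/1 tuples, one decision per candidate branch."""
    cost = 0.0
    for t, sel in enumerate(selections):
        # unary term: reward (negative cost) for every branch kept at time t
        cost += sum(-evidence[t][i] for i, x in enumerate(sel) if x)
        # pairwise term: penalize branches toggling between consecutive frames
        if t > 0:
            cost += consistency_weight * sum(
                abs(x - y) for x, y in zip(sel, selections[t - 1]))
    return cost

def reconstruct(evidence):
    """Enumerate all joint selections (only feasible for toy-sized problems)."""
    n_time, n_branches = len(evidence), len(evidence[0])
    options = list(product([0, 1], repeat=n_branches))
    return min(product(options, repeat=n_time),
               key=lambda sel: total_cost(list(sel), evidence))

# Illustrative usage: two branches over three frames; branch 1 has weak,
# flickering evidence, so the temporal term decides whether to keep it throughout.
evidence = [(2.0, 0.4), (2.0, 1.2), (2.0, 0.3)]
print(reconstruct(evidence))
```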