178 research outputs found
Discovering duplicate tasks in transition systems for the simplification of process models
This work presents a set of methods to improve the understandability of process models. Traditionally, simplification methods trade off quality metrics such as fitness or precision. Conversely, the methods proposed in this paper produce simplified models while preserving or even increasing fidelity metrics. The first problem addressed in the paper is the discovery of duplicate tasks. A new method is proposed that avoids overfitting by working on the transition system generated from the log. The method is able to discover duplicate tasks even in the presence of concurrency and choice. The second problem is the structural simplification of the model by identifying optional and repetitive tasks. The tasks are substituted by annotated events that allow the removal of silent tasks and reduce the complexity of the model. An important feature of the proposed methods is that they are independent of the actual miner used for process discovery.
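To make the transition-system idea concrete, here is a minimal sketch, assuming a toy log and the common "set of activities seen so far" state abstraction; it illustrates the intuition only and is not the discovery method proposed in the paper:

```python
# Minimal sketch (not the paper's algorithm): build a transition system
# from an event log using the "set of activities seen so far" state
# abstraction, then flag labels that are reached from incomparable
# states as candidates for duplicate tasks.
from collections import defaultdict

log = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["a", "e", "d"],  # 'd' is also reached via 'e': a different context
]

transitions = set()  # (source_state, activity, target_state)
for trace in log:
    state = frozenset()
    for activity in trace:
        target = state | {activity}
        transitions.add((state, activity, target))
        state = target

# Group the source states of each label; a label whose source states are
# incomparable (neither is a subset of the other) occurs in genuinely
# different contexts and is a candidate duplicate task.
sources = defaultdict(set)
for src, act, _ in transitions:
    sources[act].add(src)

def incomparable(s, t):
    return not (s <= t or t <= s)

for act, states in sources.items():
    states = list(states)
    if any(incomparable(s, t)
           for i, s in enumerate(states) for t in states[i + 1:]):
        print(f"{act!r} is reached from incomparable states -> "
              f"candidate duplicate task")
```

On this toy log only 'd' is flagged: it is reached both after {a, b, c} and after {a, e}, two states with no subset relation, while 'b' and 'c' only appear in nested (comparable) contexts.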
Approximate computation of alignments of business processes through relaxation labelling
A fundamental problem in conformance checking is aligning event data with process models. Unfortunately, existing techniques for this task are either complex or applicable only to restricted classes of models, which in practice means that for large inputs they often fail to produce a result. In this paper we propose a method to approximate alignments for unconstrained process models, which relies on the use of relaxation labelling techniques on top of a partial order representation of the process model. The implementation of the proposed technique achieves a speed-up of several orders of magnitude with respect to the approaches in the literature (either optimal or approximate), often with a reasonable trade-off on the cost of the obtained alignment.
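The following is a minimal sketch of a generic relaxation-labelling update on a toy trace-to-model matching problem; the trace, model and compatibility function are illustrative assumptions, not the paper's partial-order encoding:

```python
# Minimal sketch of relaxation labelling, the general technique the paper
# builds on (this toy is not the paper's alignment method). Variables are
# trace events, candidate labels are model activities, and a compatibility
# function rewards same-name matches and order-preserving assignments.
import numpy as np

trace = ["a", "b", "d"]
model_activities = ["a", "b", "c", "d"]  # a toy, totally ordered model

n, m = len(trace), len(model_activities)
# Initial label probabilities, slightly biased toward same-name matches.
p = np.full((n, m), 1.0 / m)
for i, ev in enumerate(trace):
    for j, act in enumerate(model_activities):
        if ev == act:
            p[i, j] += 0.5
p /= p.sum(axis=1, keepdims=True)

def compat(i, j, i2, j2):
    """Pairwise compatibility: event order should agree with the order of
    the model positions they map to; exact name matches help too."""
    order_ok = (i2 - i) * (j2 - j) > 0
    name_ok = trace[i2] == model_activities[j2]
    return (1.0 if order_ok else 0.0) + (0.5 if name_ok else 0.0)

for _ in range(20):  # relaxation iterations
    support = np.zeros_like(p)
    for i in range(n):
        for j in range(m):
            for i2 in range(n):
                if i2 == i:
                    continue
                for j2 in range(m):
                    support[i, j] += compat(i, j, i2, j2) * p[i2, j2]
    p *= 1.0 + support                 # reinforce well-supported labels
    p /= p.sum(axis=1, keepdims=True)  # renormalise rows

for i, ev in enumerate(trace):
    print(ev, "->", model_activities[int(p[i].argmax())])
```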
Encoding conformance checking artefacts in SAT
Conformance checking strongly relies on the computation of artefacts, which enable reasoning on the relation between observed and modeled behavior. This paper shows how important conformance artefacts like alignments, anti-alignments or even multi-alignments, defined over the edit distance, can be computed by encoding the problem as a SAT instance. From a general perspective, the work advocates for a unified family of techniques that can compute conformance artefacts in the same way. The prototype implementation of the techniques presented in this paper shows that it can deal with some of the current benchmarks, and has potential for the near future once optimizations similar to the ones in the literature are incorporated.
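The edit-distance artefacts themselves are easy to state; the sketch below computes them by brute force over a hypothetical set of model runs, whereas the paper's contribution is encoding the same optimisation problems as SAT instances:

```python
# Minimal sketch of the three artefacts over the edit distance, using
# brute force on a toy set of model runs instead of a SAT encoding: an
# alignment minimises the distance to one log trace, a multi-alignment
# minimises the maximum distance to all traces, and an anti-alignment
# maximises the minimum distance to all traces.

def edit_distance(u, v):
    """Classic Levenshtein distance via dynamic programming."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(v) + 1)]
         for i in range(len(u) + 1)]
    for i in range(1, len(u) + 1):
        for j in range(1, len(v) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (u[i - 1] != v[j - 1]))
    return d[-1][-1]

model_runs = [list(r) for r in ("abcd", "abd", "acd", "aed")]  # toy model
log = [list(t) for t in ("abcd", "abd")]

trace = log[0]
alignment = min(model_runs, key=lambda r: edit_distance(trace, r))
multi = min(model_runs, key=lambda r: max(edit_distance(t, r) for t in log))
anti = max(model_runs, key=lambda r: min(edit_distance(t, r) for t in log))
print("alignment:", alignment, "multi:", multi, "anti:", anti)
```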
Conformance checking using activity and trace embeddings
Conformance checking describes process mining techniques used to compare an event log and a corresponding process model. In this paper, we propose an entirely new approach to conformance checking based on neural network-based embeddings. These embeddings are vector representations of every activity/task present in the model and log, obtained via act2vec, a Word2vec-based model. Our novel conformance checking approach applies the Word Mover’s Distance to the activity embeddings of traces in order to measure fitness and precision. In addition, we investigate a more efficiently calculated lower bound of the former metric, i.e. the Iterative Constrained Transfers measure. An alternative method using trace2vec, a Doc2vec-based model, to train and compare vector representations of the process instances themselves is also introduced. These methods are tested in different settings and compared to other conformance checking techniques, showing promising results.
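A minimal sketch of the act2vec idea, assuming gensim's Word2Vec as the embedding model and a hypothetical toy log; it compares traces via cosine similarity of averaged activity vectors, while the paper applies the Word Mover's Distance (exposed in gensim as wv.wmdistance, which requires an extra optimal-transport package):

```python
# Minimal sketch of act2vec using gensim's Word2Vec on a toy event log:
# activities play the role of words and traces the role of sentences.
# Traces are compared here by cosine similarity of averaged activity
# vectors; the paper uses the Word Mover's Distance instead.
import numpy as np
from gensim.models import Word2Vec

log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "approve", "notify"],
] * 50  # repeat the toy traces so the model has something to learn from

model = Word2Vec(sentences=log, vector_size=16, window=2,
                 min_count=1, epochs=50, seed=7)

def trace_vector(trace):
    """Embed a trace as the mean of its activity vectors."""
    return np.mean([model.wv[a] for a in trace], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

observed = ["register", "check", "approve", "notify"]
modelled = ["register", "approve", "check", "notify"]
print("similarity:", cosine(trace_vector(observed), trace_vector(modelled)))
```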
Efficacy of adjuvant chemotherapy according to hormone receptor status in young patients with breast cancer: a pooled analysis
Introduction: Breast cancer at a young age is associated with an unfavorable prognosis. Very young patients with breast cancer are therefore advised to undergo adjuvant chemotherapy irrespective of tumor stage or grade. However, chemotherapy alone may not be adequate in young patients with hormone receptor-positive breast cancer. Therefore, we studied the effect of adjuvant chemotherapy in young patients with breast cancer in relation to hormone receptor status.
Methods: Paraffin-embedded tumor material was collected from 480 early-stage breast cancer patients younger than 41 years who participated in one of four European Organization for Research and Treatment of Cancer trials. Using immunohistochemistry on the whole series of tumors, we assessed estrogen receptor (ER) status and progesterone receptor (PgR) status in a standardized way. Endpoints in this study were overall survival (OS) and distant metastasis-free survival (DMFS). The median follow-up period was 7.3 years.
Results: Overall, patients with ER-positive tumors had better OS rates (hazard ratio [HR] 0.63; P = 0.02) than those with ER-negative tumors. However, in the subgroup of patients who received chemotherapy, no significant difference in OS (HR 0.87; P = 0.63) or DMFS (HR 1.36; P = 0.23) was found between patients with ER-positive tumors and those with ER-negative tumors. These findings were similar for PgR status.
Conclusion: Young patients with hormone receptor-positive tumors benefit less from adjuvant systemic chemotherapy than patients with hormone receptor-negative tumors. These results confirm that chemotherapy alone cannot be considered optimal adjuvant systemic treatment in breast cancer patients 40 years old or younger with hormone receptor-positive tumors.
Verification of Logs - Revealing Faulty Processes of a Medical Laboratory
If there is a suspicion of Lyme disease, a blood sample of a patient is sent to a medical laboratory. The laboratory performs a number of different blood examinations testing for antibodies against the Lyme disease bacteria. The total number of different examinations depends on the intermediate results of the blood count. The cost of each examination is paid by the health insurance company of the patient. To control and restrict the number of performed examinations, the health insurance companies provide a charges regulation document. If a health insurance company disagrees with the charges of a laboratory, it is the job of the public prosecution service to validate the charges according to the regulation document. In this paper we present a case study showing a systematic approach to reveal faulty processes of a medical laboratory. First, files produced by the information system of the respective laboratory are analysed and consolidated in a database. An excerpt from this database is translated into an event log describing a sequential language of events performed by the information system. With the help of the regulation document this language can be split into two sets: the set of valid words and the set of faulty words. In a next step, we build a coloured Petri net model corresponding to the set of valid words, in the sense that only the valid words are executable in the Petri net model. In a last step, we translate the coloured Petri net into a PL/SQL program. This program can automatically reveal all faulty processes stored in the database.
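The log-splitting step can be illustrated in a few lines, assuming a hypothetical hard-coded set of sequences permitted by the regulation document; the paper realises this check with a coloured Petri net translated to PL/SQL, whereas the toy below uses plain set membership:

```python
# Minimal sketch of splitting an event log into valid and faulty words,
# given the set of examination sequences the regulation document permits
# (a hypothetical hard-coded set here, not the real regulation).
valid_words = {
    ("blood_count", "antibody_test"),
    ("blood_count", "antibody_test", "confirmation_test"),
}

event_log = [
    ("blood_count", "antibody_test"),
    ("blood_count", "antibody_test", "confirmation_test"),
    ("blood_count", "confirmation_test"),    # skips the antibody test
    ("antibody_test", "confirmation_test"),  # no blood count performed
]

valid = [w for w in event_log if w in valid_words]
faulty = [w for w in event_log if w not in valid_words]
print(f"{len(valid)} valid, {len(faulty)} faulty processes")
for w in faulty:
    print("faulty:", " -> ".join(w))
```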
Classifying breast cancer surgery: a novel, complexity-based system for oncological, oncoplastic and reconstructive procedures, and proof of principle by analysis of 1225 operations in 1166 patients
Background: One of the basic prerequisites for generating evidence-based data is the availability of classification systems. Attempts to date to classify breast cancer operations have focussed on specific problems, e.g. the avoidance of secondary corrective surgery for surgical defects, rather than taking a generic approach.
Methods: Starting from an existing, simpler empirical scheme based on the complexity of breast surgical procedures, which was used in-house primarily in operative report-writing, a novel classification of ablative and breast-conserving procedures initially needed to be developed and elaborated systematically. To obtain proof of principle, a prospectively planned analysis of patient records for all major breast cancer-related operations performed at our breast centre in 2005 and 2006 was conducted using the new classification. Data were analysed using basic descriptive statistics such as frequency tables.
Results: A novel two-type, six-tier classification system comprising 12 main categories, 13 subcategories and 39 sub-subcategories of oncological, oncoplastic and reconstructive breast cancer-related surgery was successfully developed. Our system permitted unequivocal classification, without exception, of all 1225 procedures performed in 1166 breast cancer patients in 2005 and 2006.
Conclusion: Breast cancer-related surgical procedures can be generically classified according to their surgical complexity. Analysis of all major procedures performed at our breast centre during the study period provides proof of principle for this novel classification system. We envisage various applications for this classification, including uses in randomised clinical trials, guideline development, specialist surgical training, continuing professional development as well as quality of care and public health research.
Product Lifecycle Management for Digital Transformation of Industries.
Currently, organizations tend to reuse their past knowledge to make good decisions quickly and effectively and thus to improve their business process performance in terms of time, quality, efficiency, etc. Process mining techniques allow organizations to achieve this objective through process discovery. This paper develops a semi-automated approach that supports decision making by discovering decision rules from past process executions. It identifies a ranking of the process patterns that satisfy the discovered decision rules and that are the most likely to be executed by a given user in a given context. The approach is applied to a supervision process of the gas network exploitation.
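A minimal sketch of decision-rule discovery from past executions, using scikit-learn's DecisionTreeClassifier on hypothetical context features (not the supervision process data from the paper):

```python
# Minimal sketch: learn decision rules from past process executions with
# a decision tree, then rank the possible next activities for a given
# context. Feature names and data are hypothetical toy values.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [alarm_level, network_pressure]; target: chosen activity
X = [[1, 40], [1, 55], [2, 60], [3, 80], [3, 90], [2, 45], [1, 70], [3, 50]]
y = ["monitor", "monitor", "inspect", "dispatch", "dispatch",
     "monitor", "inspect", "dispatch"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The tree's branches read directly as human-checkable decision rules.
print(export_text(clf, feature_names=["alarm_level", "network_pressure"]))

# Rank the activities most likely to be executed in a given context.
context = [[2, 65]]
proba = clf.predict_proba(context)[0]
ranking = sorted(zip(clf.classes_, proba), key=lambda p: -p[1])
print("ranking for context", context[0], ":", ranking)
```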