17 research outputs found

    All That Glitters Is Not Gold: Towards Process Discovery Techniques with Guarantees

    Get PDF
    The aim of a process discovery algorithm is to construct from event data a process model that describes the underlying, real-world process well. Intuitively, the better the quality of the event data, the better the quality of the model that is discovered. However, existing process discovery algorithms do not guarantee this relationship. We demonstrate this by using a range of quality measures for both event data and discovered process models. This paper is a call to the community of IS engineers to complement their process discovery algorithms with properties that relate qualities of their inputs to those of their outputs. To this end, we distinguish four incremental stages for the development of such algorithms, along with concrete guidelines for the formulation of relevant properties and experimental validation. We also use these stages to reflect on the state of the art, which shows the need to move forward in our thinking about algorithmic process discovery.
    Comment: 13 pages, 4 figures. Submitted to the International Conference on Advanced Information Systems Engineering, 202
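
    The property the paper calls for, namely that better event data should not yield a worse model, can be probed experimentally. The sketch below is a minimal, self-contained illustration: a toy directly-follows "discovery" algorithm and a Jaccard-style quality measure of our own choosing (not the paper's measures) are used to check whether model quality degrades as noise is injected into the log.

    # Minimal sketch: check that discovered-model quality tracks event-data
    # quality. The "discovery" algorithm (directly-follows pairs) and the
    # quality measure (Jaccard similarity) are illustrative choices only.
    import random

    def discover(traces):
        """Toy discovery: the set of directly-follows pairs in the log."""
        return {(a, b) for t in traces for a, b in zip(t, t[1:])}

    def model_quality(model, reference):
        """Toy model quality: overlap with the model of the clean log."""
        return len(model & reference) / len(model | reference)

    def degrade(traces, noise):
        """Data-quality knob: swap adjacent events in a fraction of traces."""
        out = []
        for t in traces:
            t = list(t)
            if random.random() < noise and len(t) > 1:
                i = random.randrange(len(t) - 1)
                t[i], t[i + 1] = t[i + 1], t[i]
            out.append(t)
        return out

    random.seed(1)
    clean_log = [list("abcde"), list("abced")] * 50
    reference = discover(clean_log)
    for noise in (0.0, 0.2, 0.5, 0.9):
        model = discover(degrade(clean_log, noise))
        print(f"noise={noise:.1f}  model quality={model_quality(model, reference):.2f}")

    A discovery algorithm with the guarantees the paper argues for would make these quality values non-increasing in the noise level for every seed, not merely on average.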

    Ganciclovir therapeutic drug monitoring in transplant recipients

    Get PDF
    BACKGROUND: The use of (val)ganciclovir is complicated by toxicity, slow response to treatment and acquired resistance. OBJECTIVES: To evaluate a routine therapeutic drug monitoring (TDM) programme for ganciclovir in a transplant patient population. METHODS: An observational study was performed in transplant recipients from June 2018 to February 2020. Dose adjustments were advised by the TDM pharmacist as part of clinical care. For prophylaxis, a trough concentration (Cmin) of 1-2 mg/L and an AUC24h of >50 mg·h/L were aimed for. For treatment, a Cmin of 2-4 mg/L and an AUC24h of 80-120 mg·h/L were aimed for. RESULTS: Ninety-five solid organ and stem cell transplant patients were enrolled. Overall, 450 serum concentrations were measured, with a median of 3 (IQR = 2-6) per patient. The median Cmin and AUC24h were 2.0 mg/L and 90 mg·h/L in the treatment group and 0.9 mg/L and 67 mg·h/L in the prophylaxis group. Significant intra- and inter-patient variability was observed. The majority of patients with an estimated glomerular filtration rate of more than 120 mL/min/1.73 m2 and patients on continuous veno-venous haemofiltration showed underexposure. The highest Cmin and AUC24h values were associated with increases in liver function markers and a decline in WBC count compared with baseline. CONCLUSIONS: This study revealed that a standard weight- and kidney function-based dosing regimen resulted in highly variable ganciclovir Cmin; under- and over-exposure were observed in patients on dialysis and in patients with increased renal function. Clearly, there is a need to explore the impact of concentration-guided dose adjustments in a prospective study.
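
    Since the abstract quotes explicit exposure targets, a worked check is easy to state. The sketch below classifies a measured Cmin and AUC24h against those windows; it is a toy illustration of the TDM decision rule, not clinical software, and the function and variable names are ours.

    # Toy check of measured exposure against the study's TDM target windows.
    # The targets come from the abstract; everything else is illustrative.
    TARGETS = {
        # indication: (Cmin low, Cmin high, AUC24h low, AUC24h high)
        # in mg/L for Cmin and mg·h/L for AUC24h; None means no upper bound
        "prophylaxis": (1.0, 2.0, 50.0, None),   # AUC24h target is ">50"
        "treatment":   (2.0, 4.0, 80.0, 120.0),
    }

    def classify(indication, cmin, auc24h):
        lo_c, hi_c, lo_a, hi_a = TARGETS[indication]

        def band(value, lo, hi):
            if value < lo:
                return "under"
            if hi is not None and value > hi:
                return "over"
            return "on target"

        return {"Cmin": band(cmin, lo_c, hi_c), "AUC24h": band(auc24h, lo_a, hi_a)}

    # The study's median treatment exposure (Cmin 2.0 mg/L, AUC24h 90 mg·h/L):
    print(classify("treatment", 2.0, 90.0))    # both on target
    # The study's median prophylaxis exposure (Cmin 0.9 mg/L, AUC24h 67 mg·h/L):
    print(classify("prophylaxis", 0.9, 67.0))  # Cmin under, AUC24h on target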

    Correctness Notions for Petri Nets with Identifiers

    Full text link
    A model of an information system describes its processes and how resources are involved in these processes to manipulate data objects. This paper presents an extension of the Petri net formalism suitable for describing information systems in which states refer to object instances of predefined types and resources are identified as instances of special object types. Several correctness criteria for resource- and object-aware information system models are proposed, supplemented with discussions of their decidability for interesting classes of systems. These new correctness criteria can be seen as generalizations of the classical soundness property of workflow models, which concerns the correctness of process control flow.
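
    For intuition: tokens in such a net carry identifiers of object instances, and a transition fires relative to a particular identifier, so the net tracks each object's progress separately. The sketch below is a heavily simplified reading of that idea, with class and method names of our own invention and none of the paper's typing or correctness machinery.

    # Minimal sketch of a Petri net whose tokens carry object identifiers.
    # A transition consumes identifier-bearing tokens from its input places
    # and produces them in its output places, per identifier.
    from collections import Counter

    class IdNet:
        def __init__(self):
            self.marking = Counter()  # (place, identifier) -> token count

        def add(self, place, ident, count=1):
            self.marking[(place, ident)] += count

        def enabled(self, inputs, ident):
            """True if every input place holds a token for `ident`."""
            return all(self.marking[(p, ident)] > 0 for p in inputs)

        def fire(self, inputs, outputs, ident):
            if not self.enabled(inputs, ident):
                raise ValueError(f"not enabled for identifier {ident!r}")
            for p in inputs:
                self.marking[(p, ident)] -= 1
            for p in outputs:
                self.marking[(p, ident)] += 1

    net = IdNet()
    net.add("received", "order1")   # two order instances share the net,
    net.add("received", "order2")   # but their tokens stay distinguishable
    net.fire(inputs=["received"], outputs=["handled"], ident="order1")
    print(net.marking)  # order1 is in 'handled'; order2 still in 'received'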

    Improving agile requirements: the Quality User Story framework and tool

    No full text
    User stories are a widely adopted requirements notation in agile development. Yet, user stories are too often poorly written in practice and exhibit inherent quality defects. Triggered by this observation, we propose the Quality User Story (QUS) framework, a set of 13 quality criteria that user story writers should strive to conform to. Based on QUS, we present the Automatic Quality User Story Artisan (AQUSA) software tool. Relying on natural language processing (NLP) techniques, AQUSA detects quality defects and suggests possible remedies. We describe the architecture of AQUSA and its implementation, and we report on an evaluation that analyzes 1023 user stories obtained from 18 software companies. Our tool does not yet reach the ambitious 100% recall that Daniel Berry and colleagues require of NLP tools for RE. However, we obtain promising results, and we identify improvements that can substantially increase recall and precision.
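
    To give a flavour of the kind of rule AQUSA automates, the sketch below checks two properties in the spirit of QUS criteria: well-formedness (the story follows the role-means-ends template) and atomicity (the means does not bundle multiple requests). The regex and defect messages are simplified stand-ins we made up, not AQUSA's actual rules.

    # Toy user-story linter in the spirit of the QUS criteria.
    # The template pattern and checks are illustrative, not AQUSA's rules.
    import re

    TEMPLATE = re.compile(
        r"^As an? (?P<role>.+?), I want (?P<means>.+?)(?:, so that (?P<ends>.+?))?\.?$",
        re.IGNORECASE,
    )

    def lint(story):
        defects = []
        m = TEMPLATE.match(story.strip())
        if not m:
            defects.append("well-formed: does not follow 'As a <role>, I want <means>'")
            return defects
        if re.search(r"\b(and|or)\b", m.group("means"), re.IGNORECASE):
            defects.append("atomic: the means seems to bundle more than one request")
        if not m.group("ends"):
            defects.append("complete: no 'so that' clause stating the desired end")
        return defects

    print(lint("As a visitor, I want to filter and sort results."))
    print(lint("As an editor, I want to pin a comment, so that readers see it first."))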

    Aggregate architecture simulation in event-sourcing applications using layered queuing networks

    No full text
    Workload intensity in terms of arrival rate and think-time can be used to accurately simulate system performance in traditional systems. Most such systems treat individual requests on a standalone basis, and resource demands typically do not vary significantly; in most cases, the variation can be addressed as a parametric dependency. New frameworks such as Command Query Responsibility Segregation (CQRS) and Event Sourcing change the paradigm: request processing is both parametrically dependent and dynamic, because the history of changes that have occurred is replayed to construct the current state of the system. This makes every request unique and difficult to simulate. While traditional systems have been studied extensively in the scientific community, the latter are still new and mainly used by practitioners. In this work, we study one such industrial application that uses the CQRS and Event Sourcing frameworks. We propose new workload patterns suited to defining the dynamic behavior of these systems, define various architectural patterns possible in such systems based on domain-driven design principles, and create analytical performance models to make predictions. We verify the models by making measurements on an actual application running similar workloads and comparing them with the predictions. Furthermore, we discuss the suitability of the architectural patterns for different usage scenarios and propose changes to the architecture in each case to improve performance.
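
    The dynamic, history-dependent service demand described above is the crux: an event-sourced aggregate rebuilds its state by replay, so per-request cost grows with the length of its event log. The sketch below makes that concrete with a toy aggregate and a snapshot as the usual mitigation; it illustrates the general CQRS/Event Sourcing replay pattern under names we invented, not the paper's layered queuing network models.

    # Toy event-sourced aggregate: handling a request means replaying history,
    # so per-request cost depends on the event count (the dynamic, parametric
    # dependency that performance models of such systems must capture).

    class Account:
        def __init__(self):
            self.balance = 0

        def apply(self, event):
            kind, amount = event
            self.balance += amount if kind == "deposit" else -amount

    def rebuild(events, snapshot=None):
        """Replay events (optionally from a snapshot) to get current state."""
        state = Account()
        start = 0
        if snapshot is not None:
            state.balance, start = snapshot  # (balance, index of next event)
        for event in events[start:]:
            state.apply(event)
        return state

    events = [("deposit", 10)] * 100_000 + [("withdraw", 5)] * 50_000

    # Without snapshots every request replays the full log: O(len(events)).
    full = rebuild(events)

    # With a snapshot taken at event 140_000, a request replays only the tail.
    snapshot = (rebuild(events[:140_000]).balance, 140_000)
    fast = rebuild(events, snapshot)

    assert full.balance == fast.balance
    print(full.balance)  # 750000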

    Extracting conceptual models from user stories with Visual Narrator

    No full text
    Extracting conceptual models from natural language requirements can help identify dependencies, redundancies, and conflicts between requirements via a holistic and easy-to-understand view that is generated from lengthy textual specifications. Unfortunately, existing approaches have never gained traction in practice, because they either require substantial human involvement or deliver accuracy that is too low. In this paper, we propose an automated approach called Visual Narrator, based on natural language processing, that extracts conceptual models from user story requirements. We choose this notation because of its popularity among (agile) practitioners and its focus on the essential components of a requirement: Who? What? Why? Coupled with a careful selection and tuning of heuristics, we show how Visual Narrator enables generating conceptual models from user stories with high accuracy. Visual Narrator is part of the holistic Grimm method for user story collaboration, which ranges from elicitation to the interactive visualization and analysis of requirements.
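
    As a rough sketch of heuristic extraction in this style, assuming spaCy and its small English model, one can pull a candidate role, actions, and concepts from a story. The dependency rules below are naive stand-ins of our own, not Visual Narrator's tuned heuristics.

    # Simplified concept extraction from user stories, in the spirit of
    # heuristic NLP pipelines like Visual Narrator. Requires:
    #   pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def extract(story):
        """Naive heuristics: first noun as the role, verb lemmas as actions,
        noun objects as candidate concepts for the conceptual model."""
        doc = nlp(story)
        role = next((t.text for t in doc if t.pos_ == "NOUN"), None)
        actions = [t.lemma_ for t in doc if t.pos_ == "VERB"]
        concepts = [t.text for t in doc if t.dep_ in ("dobj", "pobj") and t.pos_ == "NOUN"]
        return {"role": role, "actions": actions, "concepts": concepts}

    print(extract("As a moderator, I want to hide a comment, so that spam stays invisible."))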

    The hunt for the guzzler: Architecture-based energy profiling using stubs

    No full text
    Context: Software producing organizations have the ability to address the energy impact of their software products through their source code and software architecture. In spite of that, the focus often remains on hardware aspects, which limits the contribution of software towards energy efficient ICT solutions. Objective: No methods exist to provide software architects with information about the energy consumption of the different components in their software product. The objective of this paper is to put software producing organizations in control of this quality aspect of their software. Method: To achieve the objective, we developed the StEP Method to systematically investigate the energy effects of software units through the use of software stubs. To evaluate the proposed method, an experiment involving three different versions of a commercial software product was conducted: two versions were stubbed according to stakeholder concerns and stressed according to a test case, whilst energy consumption measurements were performed. The method provided guidance for the experiment, and all activities were documented for future purposes. Results: Comparing energy consumption differences across versions unraveled the energy consumption related to the product's core functionality. Using the energy profile, stakeholders could identify the major energy consuming elements and prioritize software engineering efforts to maximize impact. Conclusions: We introduce the StEP Method and demonstrate its applicability in an industrial setting. The method identified energy hotspots and thereby improved the control stakeholders have over the sustainability of a software product. Despite promising results, several concerns require further attention to improve the method. For instance, we recommend investigating software operation data to determine, and possibly automatically create, stubs.
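
    The measurement idea behind stub-based profiling can be sketched as an energy delta: run the same test case against the full product and a stubbed version, and attribute the difference to the stubbed unit. The snippet below reads Intel RAPL counters on Linux to compare two runs; the two run_* functions and the RAPL zone path are assumptions for illustration (the path varies per machine and reading it may require elevated privileges), and StEP prescribes the method, not this particular meter.

    # Sketch of stub-based energy profiling: run the same load against the
    # full and stubbed versions and attribute the energy delta to the stub.
    # Uses Linux Intel RAPL counters; the run_* workloads are hypothetical
    # stand-ins for the two product versions.
    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-level counter

    def read_energy_uj():
        with open(RAPL) as f:
            return int(f.read())

    def measure(workload):
        """Return (joules, seconds) consumed while running `workload`."""
        e0, t0 = read_energy_uj(), time.monotonic()
        workload()
        e1, t1 = read_energy_uj(), time.monotonic()
        return (e1 - e0) / 1e6, t1 - t0  # counter wrap-around ignored for brevity

    def run_full_product():      # hypothetical: full version under a fixed test case
        sum(i * i for i in range(10_000_000))

    def run_stubbed_product():   # hypothetical: same test case, one unit stubbed out
        sum(i for i in range(10_000_000))

    full_j, _ = measure(run_full_product)
    stub_j, _ = measure(run_stubbed_product)
    print(f"full: {full_j:.1f} J, stubbed: {stub_j:.1f} J, "
          f"attributed to stubbed unit: {full_j - stub_j:.1f} J")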
