
    The use of knowledge management systems and Event-B modelling in a lean enterprise

    This paper provides a case study describing an approach to improving the efficiency of an information system (IS) by supporting processes outside the IS, using an ontology-driven knowledge management system (KMS) as a mini-application in the area of the so-called lean enterprise. A lean enterprise focuses on creating maximal value for final customers while eliminating all kinds of waste and unnecessary costs, which significantly increases its competitiveness. It is about managerial decision-making, which can in some cases be contradictory (solving a local problem can cause a problem elsewhere). In this paper, we describe the KMS ATOM, which supports the innovation process in a lean enterprise. We show how the risk of wrong decisions due to contradictory effects can be eliminated by implementing a safety-critical system into the traditional IS. Our model is supported by Event-B modelling, a refinement-based formal modelling method that has been successfully used in important areas such as infrastructure, medicine, nuclear engineering and transportation (fire alarm systems, robotic surgery machines, braking systems in transportation, etc.). Nowadays, Event-B modelling is starting to be used for various management decision-making activities, and it is becoming a powerful competitiveness tool. This paper introduces a simple example of how Event-B modelling and its proof obligations can help improve and automate the decision-making process by eliminating potential threats of inefficient decisions. Funded by the RVO project "Modelling of effective production and administration processes parameters in industrial companies based on the concept Industry 4.0".
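
    To make the role of guards and proof obligations concrete, the following Java sketch (hypothetical names such as InnovationDecision and approveInnovation; it is not the ATOM model itself, which is expressed in Event-B rather than Java) mimics an Event-B event: the action fires only when its guard holds, and an assertion stands in for the invariant-preservation proof obligation.

        // Hypothetical Java mock-up of an Event-B-style guarded event; not the ATOM model.
        public class InnovationDecision {
            private int budget;     // state variable: available budget
            private int committed;  // state variable: budget already committed

            // Invariant I: 0 <= committed <= budget.
            private boolean invariant() {
                return committed >= 0 && committed <= budget;
            }

            public InnovationDecision(int budget) {
                this.budget = budget;
                this.committed = 0;
                assert invariant();  // initialisation obligation, checked at runtime here
            }

            // Event "approve": guard G must hold before the action updates the state.
            // In Event-B, a proof obligation shows the action preserves the invariant;
            // here an assertion plays that role.
            public boolean approveInnovation(int cost) {
                boolean guard = cost > 0 && committed + cost <= budget;
                if (!guard) {
                    return false;    // event not enabled: the risky decision is rejected
                }
                committed += cost;   // action
                assert invariant();
                return true;
            }
        }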

    Automated conflict resolution between multiple clinical pathways: A technology report

    Background: The number of people in the UK with three or more long-term conditions continues to grow, and the management of patients with co-morbidities is complex. In treating patients with multimorbidities, a fundamental problem is understanding and detecting points of conflict between different guidelines, which to date has relied on individual clinicians collating disparate information. Objective: We will develop a framework for modelling a diverse set of care pathways and investigate how conflicts can be detected and resolved automatically. We will use this knowledge to develop a software tool for use by clinicians that can map guidelines, highlight root causes of conflict between these guidelines and suggest ways they might be resolved. Method: Our work consists of three phases. First, we will accurately model clinical pathways for six of the most common chronic diseases; second, we will automatically identify sources of conflict across the pathways and determine how they might be resolved; third, we will present a case study to demonstrate the validity of our approach, using a team of clinicians to detect and resolve the conflicts in the treatment of a fictional patient with multiple common morbidities and comparing their findings and recommendations with those derived automatically using our novel software. Discussion: This paper describes the development of an important software-based method for identifying conflicts between clinical guidelines. Our findings will support clinicians treating patients with multimorbidity in both primary and secondary care settings.
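
    As a rough, hypothetical illustration of the kind of pairwise check such a tool could run (the Recommendation record, drug names, and contraindication data below are invented and not taken from the project), this Java sketch reports a conflict whenever one pathway recommends a drug that another pathway's recommendation marks as contraindicated.

        import java.util.*;

        // Hypothetical sketch of pairwise conflict detection between guideline
        // recommendations; data and names are invented, not the project's framework.
        public class PathwayConflictCheck {
            record Recommendation(String guideline, String drug, Set<String> contraindicatedWith) {}

            // Report a conflict when one pathway's recommended drug appears in another
            // recommendation's contraindication list.
            static List<String> findConflicts(List<Recommendation> recs) {
                List<String> conflicts = new ArrayList<>();
                for (Recommendation a : recs) {
                    for (Recommendation b : recs) {
                        if (a != b && a.contraindicatedWith().contains(b.drug())) {
                            conflicts.add(a.guideline() + " conflicts with " + b.guideline()
                                    + " over " + b.drug());
                        }
                    }
                }
                return conflicts;
            }

            public static void main(String[] args) {
                List<Recommendation> recs = List.of(
                        new Recommendation("chronic kidney disease pathway", "paracetamol", Set.of("ibuprofen")),
                        new Recommendation("osteoarthritis pathway", "ibuprofen", Set.of()));
                findConflicts(recs).forEach(System.out::println);  // one conflict reported
            }
        }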

    A design pattern for optimizations in data intensive applications using ABS and JAVA 8

    Cloud environments have become a standard way for enterprises to offer their applications by means of web services and data management systems, or simply to rent out computing resources. In our previous work, we presented how we can use a modeling language together with the new features of JAVA 8 to overcome certain drawbacks of data structures and synchronization mechanisms in parallel applications. We extend this solution into a design pattern that allows application-specific optimizations in a distributed setting. We validate this integration using our previous case study, the Prime Sieve of Eratosthenes, and illustrate the performance improvements in terms of speed-up and memory consumption.
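
    For orientation, here is a plain Java 8 take on the Prime Sieve of Eratosthenes case study, using a parallel stream to collect the surviving numbers; it illustrates the Java 8 features the paper builds on but omits the ABS modeling side of the proposed design pattern.

        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.IntStream;

        // Plain Java 8 illustration of the Prime Sieve of Eratosthenes case study;
        // the ABS side of the paper's design pattern is not reproduced here.
        public class PrimeSieve {
            static List<Integer> primesUpTo(int n) {
                boolean[] composite = new boolean[n + 1];
                for (int p = 2; (long) p * p <= n; p++) {
                    if (!composite[p]) {
                        for (int m = p * p; m <= n; m += p) {
                            composite[m] = true;   // mark multiples of p as composite
                        }
                    }
                }
                // Java 8 parallel stream collects the numbers that survived the sieve.
                return IntStream.rangeClosed(2, n)
                        .parallel()
                        .filter(i -> !composite[i])
                        .boxed()
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                System.out.println(primesUpTo(50));  // [2, 3, 5, 7, 11, ..., 47]
            }
        }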

    Generalized Property-Directed Reachability for Hybrid Systems

    Generalized property-directed reachability (GPDR) belongs to the family of model-checking techniques called IC3/PDR. It has been successfully applied to software verification; for example, it is the core of Spacer, a state-of-the-art Horn-clause solver bundled with Z3. However, it has yet to be applied to hybrid systems, which involve a continuous evolution of values over time. As the first step towards GPDR-based model checking for hybrid systems, this paper formalizes HGPDR, an adaptation of GPDR to hybrid systems, and proves its soundness. We also implemented a semi-automated proof-of-concept verifier, which allows a user to provide hints to guide verification steps. Comment: To appear in VMCAI 202
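
    As background intuition only, the toy Java check below verifies by enumeration that a hand-picked inductive invariant (initiation, consecution, safety) proves an unsafe state unreachable in a tiny finite-state system; PDR-style engines such as GPDR search for such invariants automatically and symbolically, and the states, transition, and invariant here are invented for illustration.

        import java.util.function.IntPredicate;
        import java.util.function.IntUnaryOperator;

        // Toy illustration of what a PDR-style engine ultimately produces: an inductive
        // invariant separating reachable states from an unsafe state. The system,
        // invariant, and unsafe state below are invented; real GPDR works symbolically.
        public class InductiveInvariantCheck {
            public static void main(String[] args) {
                int n = 16;                                  // finite state space 0..15
                IntPredicate init = s -> s == 0;             // initial state
                IntUnaryOperator trans = s -> (s + 2) % n;   // deterministic transition
                IntPredicate bad = s -> s == 7;              // unsafe state
                IntPredicate inv = s -> s % 2 == 0;          // candidate invariant: even states

                boolean initiation = true, consecution = true, safety = true;
                for (int s = 0; s < n; s++) {
                    if (init.test(s) && !inv.test(s)) initiation = false;                   // Init => Inv
                    if (inv.test(s) && !inv.test(trans.applyAsInt(s))) consecution = false; // Inv closed under trans
                    if (inv.test(s) && bad.test(s)) safety = false;                         // Inv excludes bad
                }
                // All three checks pass, so state 7 is provably unreachable from state 0.
                System.out.println("initiation=" + initiation
                        + " consecution=" + consecution + " safety=" + safety);
            }
        }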

    SkipAnalyzer: A Tool for Static Code Analysis with Large Language Models

    We introduce SkipAnalyzer, a large language model (LLM)-powered tool for static code analysis. SkipAnalyzer has three components: 1) an LLM-based static bug detector that scans source code and reports specific types of bugs, 2) an LLM-based false-positive filter that can identify false-positive bugs in the results of static bug detectors (e.g., the result of step 1) to improve detection accuracy, and 3) an LLM-based patch generator that can generate patches for the bugs detected above. As a proof of concept, SkipAnalyzer is built on ChatGPT, which has exhibited outstanding performance in various software engineering tasks. To evaluate SkipAnalyzer, we focus on two typical and critical bug types targeted by static bug detection, namely Null Dereference and Resource Leak. We employ Infer to aid in gathering instances of these two bug types from 10 open-source projects. Consequently, our experiment dataset contains 222 instances of Null Dereference bugs and 46 instances of Resource Leak bugs. Our study demonstrates that SkipAnalyzer achieves remarkable performance in the mentioned static analysis tasks, including bug detection, false-positive warning removal, and bug repair. In static bug detection, SkipAnalyzer achieves accuracy values of up to 68.37% for detecting Null Dereference bugs and 76.95% for detecting Resource Leak bugs, improving the precision of the current leading bug detector, Infer, by 12.86% and 43.13%, respectively. For removing false-positive warnings, SkipAnalyzer can reach a precision of up to 93.88% for Null Dereference bugs and 63.33% for Resource Leak bugs. Additionally, SkipAnalyzer surpasses state-of-the-art false-positive warning removal tools. Furthermore, in bug repair, SkipAnalyzer can generate syntactically correct patches to fix its detected bugs with a success rate of up to 97.30%.
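
    For readers unfamiliar with the bug classes, the following Java snippet (invented for illustration, not drawn from the paper's dataset) shows a Null Dereference pattern of the kind Infer-style detectors report, together with the sort of guarded patch an LLM-based repair step might propose.

        import java.util.Map;

        // Invented example of a Null Dereference of the kind Infer-style detectors flag,
        // plus a guarded variant resembling the patch an LLM-based repair step might emit.
        public class ConfigReader {
            // Buggy: Map.get may return null, which parseInt then dereferences.
            static int portBuggy(Map<String, String> config) {
                String value = config.get("port");
                return Integer.parseInt(value);   // NullPointerException if "port" is absent
            }

            // Patched: check the possibly-null value and fall back to a default.
            static int portPatched(Map<String, String> config) {
                String value = config.get("port");
                if (value == null) {
                    return 8080;                  // hypothetical default port
                }
                return Integer.parseInt(value);
            }
        }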

    Dynamic slicing of concurrent specification languages

    This is the author’s version of a work that was accepted for publication in Parallel Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Parallel Computing, 53, 1-22 (2016), DOI 10.1016/j.parco.2016.01.006. Dynamic slicing is a technique to extract the part of the program (called slice) that influences or is influenced, in a particular execution, by a given point of interest in the source code (called slicing criterion). Since a single execution is considered, the technique often uses a trace of this execution to analyze data and control dependencies. In this work we present the first formulation and implementation of dynamic slicing in the context of CSP. Most of the ideas presented can be directly applied to other concurrent specification languages such as Promela or CCS, but we center the discussion and the implementation on CSP. We base our technique on a new data structure to represent CSP computations called track. A track is a data structure that represents the sequence of expressions that have been evaluated during the computation and, moreover, is labeled with the locations of these expressions in the specification. The implementation of a dynamic slicer for CSP is useful for debugging, program comprehension, and program specialization, and it is also interesting from a theoretical perspective because CSP introduces difficulties such as heavy concurrency and non-determinism, synchronizations, frequent absence of data dependence, etc. © 2016 Elsevier B.V. All rights reserved. This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economia y Competitividad under Grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under Grant PROMETEOII/2015/013 (SmartLogic). Salvador Tamarit was partially supported by the Madrid regional project N-GREENS Software-CM (S2013/ICE-2731) and by the European Union project POLCA (STREP FP7-ICT-2013.3.4 610686). Llorens Agost, M. L.; Oliver Villarroya, J.; Silva, J.; Tamarit Muñoz, S. (2016). Dynamic slicing of concurrent specification languages. Parallel Computing, 53, 1-22. https://doi.org/10.1016/j.parco.2016.01.006
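
    As a loose sketch of the idea of a track (the Java names below are hypothetical, and the paper's CSP tracks additionally connect nodes with control and synchronization arcs that are omitted here), one can log each evaluated expression together with its position in the specification and later look up the nodes matching a slicing criterion.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.stream.Collectors;

        // Loose sketch of a track-like log of evaluated expressions labeled with their
        // positions in the specification; the paper's CSP tracks also carry control and
        // synchronization arcs, which this toy version omits.
        public class Track {
            record Node(int id, String expression, String location) {}

            private final List<Node> nodes = new ArrayList<>();

            // Record that an expression at the given specification position was evaluated.
            public Node append(String expression, String location) {
                Node node = new Node(nodes.size(), expression, location);
                nodes.add(node);
                return node;
            }

            // Locate occurrences of a slicing criterion (a source position); a real
            // dynamic slice would then follow dependency arcs backwards from these nodes.
            public List<Node> occurrencesAt(String criterionLocation) {
                return nodes.stream()
                        .filter(n -> n.location().equals(criterionLocation))
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                Track track = new Track();
                track.append("a -> STOP", "Main.csp:3");
                track.append("b -> SKIP", "Main.csp:5");
                System.out.println(track.occurrencesAt("Main.csp:3"));
            }
        }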