
    Planning with Global Constraints for Computing Infrastructure Reconfiguration

    This paper presents a prototype system called SFplanner, which uses an automated planning technique to generate workflows for reconfiguring a computing infrastructure. The system allows an administrator to specify a configuration task consisting of a current state, a desired state and global constraints. This task is compiled to a grounded finite-domain representation as the input for the standard (unmodified) Fast Downward planner in order to automatically generate a workflow. Executing the workflow brings the system into the desired state while preserving the global constraints at every stage of the workflow.
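
    A minimal sketch of the idea in Python, assuming a toy two-host model: the predicate names, domain file and planner invocation below are illustrative stand-ins, not SFplanner's actual encoding, which compiles directly to Fast Downward's grounded finite-domain input.

        import subprocess

        # Toy configuration task: current and desired state of two hosts.
        current = {"web1": "stopped", "db1": "running"}
        desired = {"web1": "running", "db1": "running"}

        def fact(host, state):
            return f"(state-{state} {host})"

        # Emit an ordinary PDDL problem for brevity; SFplanner itself targets
        # Fast Downward's grounded finite-domain representation directly.
        problem = f"""(define (problem reconfig) (:domain infra)
          (:objects {' '.join(current)} - host)
          (:init {' '.join(fact(h, s) for h, s in current.items())})
          (:goal (and {' '.join(fact(h, s) for h, s in desired.items())})))"""

        with open("problem.pddl", "w") as f:
            f.write(problem)

        # Global constraints are compiled into the domain model so that every
        # intermediate state of the returned workflow preserves them.
        subprocess.run(["./fast-downward.py", "infra-domain.pddl", "problem.pddl",
                        "--search", "astar(lmcut())"])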

    Business Process Redesign in the Perioperative Process: A Case Perspective for Digital Transformation

    This case study investigates business process redesign within the perioperative process as a method to achieve digital transformation. Specific perioperative sub-processes are targeted for redesign and digitalization, yielding improvement. Based on a 184-month longitudinal study of a large academic medical center with 1,157 registered beds, the observed effects are viewed through a lens of information technology (IT) impact on core capabilities and core strategy to yield a digital transformation framework that supports patient-centric improvement across perioperative sub-processes. This research identifies existing limitations, potential capabilities, and the contextual understanding needed to minimize perioperative process complexity, target opportunities for improvement, and ultimately yield improved capabilities. Dynamic technological activities of analysis, evaluation, and synthesis, applied to perioperative patient-centric data collected within integrated hospital information systems, yield the organizational resource for process management and control. Conclusions include theoretical and practical implications as well as study limitations.

    Using Workflows to Explore and Optimise Named Entity Recognition for Chemistry

    Chemistry text mining tools should be interoperable and adaptable regardless of system-level implementation, installation or programming issues. We aim to abstract the functionality of these tools from the underlying implementation via reconfigurable workflows for automatically identifying chemical names. To achieve this, we refactored an established named entity recogniser in the chemistry domain, OSCAR, and studied the impact of each component on overall performance. We developed two reconfigurable workflows from OSCAR using an interoperable text mining framework, U-Compare. These workflows can be altered using the drag-and-drop mechanism of U-Compare's graphical user interface, and they provide a platform to study the relationship between text mining components such as tokenisation and named entity recognition (using maximum entropy Markov model (MEMM) and pattern recognition based classifiers). Results indicate that, for chemistry in particular, eliminating the noise generated by tokenisation leads to slightly better named entity recognition (NER) accuracy. Poor tokenisation translates into poorer input for the classifier components, which in turn increases Type I and Type II errors and lowers overall performance. On the Sciborg corpus, the workflow-based system, which uses a new tokeniser while retaining the same MEMM component, increases the F-score from 82.35% to 84.44%. On the PubMed corpus, it records an F-score of 84.84%, against 84.23% for OSCAR.
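
    As a minimal illustration of the metric behind these comparisons, the Python sketch below computes entity-level F-score on toy span data and shows how a coarser tokenisation depresses it; the spans are invented, not taken from Sciborg or PubMed.

        # Entity-level F-score over character-offset spans (start, end).
        def f_score(gold, predicted):
            gold, predicted = set(gold), set(predicted)
            tp = len(gold & predicted)                      # exact-span matches
            precision = tp / len(predicted) if predicted else 0.0
            recall = tp / len(gold) if gold else 0.0
            return (2 * precision * recall / (precision + recall)
                    if precision + recall else 0.0)

        gold = [(0, 7), (12, 21), (30, 38)]                 # true chemical names
        coarse_tokeniser = [(0, 7), (12, 25)]               # bad split: one wrong span, one miss
        better_tokeniser = [(0, 7), (12, 21), (30, 38)]     # clean split: all recovered

        print(f_score(gold, coarse_tokeniser))   # 0.4: Type I and Type II errors
        print(f_score(gold, better_tokeniser))   # 1.0 on this toy example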

    An LTL Semantics of Business Workflows with Recovery

    We describe a business workflow case study with abnormal behavior management (i.e. recovery) and demonstrate how temporal logics and model checking can provide a methodology to iteratively revise the design and obtain a correct-by-construction system. To do so, we define a formal semantics by giving a compilation of generic workflow patterns into LTL, and we use the bounded model checker Zot to prove the validity of specific properties and requirements. The working assumption is that such a lightweight approach would easily fit into processes that are already in place without requiring a radical change of procedures, tools and people's attitudes. The complexity of formalisms and the invasiveness of methods have been shown to be among the major drawbacks and obstacles to deploying formal engineering techniques in mundane projects.
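
    The flavour of such a compilation can be sketched as follows; the Python below emits generic textbook LTL strings for a sequence pattern and a recovery pattern, not Zot's concrete input syntax.

        # Compile two generic workflow patterns into LTL formula strings.
        def seq(a, b):
            # a is always eventually followed by b
            return f"G(done_{a} -> F done_{b})"

        def recovery(task, handler):
            # every abnormal termination of task triggers its recovery handler
            return f"G(failed_{task} -> F done_{handler})"

        spec = " & ".join([
            seq("receive_order", "check_stock"),
            seq("check_stock", "ship"),
            recovery("ship", "notify_and_refund"),
        ])
        print(spec)  # conjunction handed to a bounded model checker such as Zot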

    High Speed Simulation Analytics

    Simulation, especially Discrete-event simulation (DES) and Agent-based simulation (ABS), is widely used in industry to support decision making. It is used to create predictive models, or Digital Twins, of systems, to analyse what-if scenarios, perform sensitivity analyses on data and decisions, and even optimise the impact of decisions. Simulation-based Analytics, or just Simulation Analytics, therefore has a major role to play in Industry 4.0. However, a major issue in Simulation Analytics is speed. The extensive, continuous experimentation demanded by Industry 4.0 can take significant time, especially if many replications are required, and this is compounded by detailed models, which can take a long time to simulate. Distributed Simulation (DS) techniques use multiple computers either to speed up the simulation of a single model by splitting it across the computers or to speed up experimentation by running experiments across multiple computers in parallel. This chapter discusses how DS and Simulation Analytics, along with concepts from contemporary e-Science, can be combined to address the speed problem through a new approach called High Speed Simulation Analytics. We present a vision of High Speed Simulation Analytics and show how it might be integrated with the future of Industry 4.0.
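
    A minimal sketch of the experimentation side of Distributed Simulation, assuming independent replications of a stub model; in this Python sketch, spreading replications across cores stands in for spreading experiments across computers.

        import random
        from multiprocessing import Pool

        def replicate(seed):
            rng = random.Random(seed)
            # Stand-in for one run of a DES/ABS model: return a summary statistic.
            return sum(rng.random() for _ in range(100_000))

        if __name__ == "__main__":
            with Pool() as pool:                          # one worker per core
                results = pool.map(replicate, range(20))  # 20 replications in parallel
            print(sum(results) / len(results))            # aggregate across replications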

    Collaborative Ontology Engineering Methodologies for the Development of Decision Support Systems: Case Studies in the Healthcare Domain

    New models and technological advances are driving the digital transformation of healthcare systems. Ontologies and the Semantic Web have been recognized among the most valuable solutions for managing the massive, varied, and complex healthcare data deriving from different sources, thus acting as backbones for ontology-based Decision Support Systems (DSSs). Several contributions in the literature propose ontology engineering methodologies (OEMs) to assist the formalization and development of ontologies by providing guidelines on tasks, activities, and stakeholders' participation. Nevertheless, existing OEMs differ widely in their approach and often lack sufficient detail to support ontology engineers. This paper performs a meta-review of the main criteria adopted for assessing OEMs and of the major issues and shortcomings identified in existing methodologies. The key issues requiring specific attention (i.e., the delivery of a feasibility study, the introduction of project management processes, the support for reuse, and the involvement of stakeholders) are then explored in three use cases of semantic-based DSSs in health-related fields. Results contribute to the literature on OEMs by providing insights on specific tools and approaches to be used when tackling these issues in the development of collaborative OEMs supporting DSSs.
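
    As a minimal illustration of how an ontology can act as the backbone of a DSS, the Python sketch below encodes one decision rule as a SPARQL query with rdflib; the class and property names are invented for illustration and are not drawn from the paper's case studies.

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/health#")
        g = Graph()
        # A tiny ontology-backed fact base: one patient with a lab value.
        g.add((EX.patient42, RDF.type, EX.Patient))
        g.add((EX.patient42, EX.hasHbA1c, Literal(8.1)))

        # A decision rule expressed as a SPARQL query over the graph.
        flagged = g.query("""
            PREFIX ex: <http://example.org/health#>
            SELECT ?p WHERE { ?p a ex:Patient ; ex:hasHbA1c ?v . FILTER(?v > 7.0) }
        """)
        for row in flagged:
            print(row.p)  # patients the DSS would flag for review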

    XSRL: An XML web-services request language

    One of the most serious challenges that web-service-enabled e-marketplaces face is the lack of formal support for expressing service requests against UDDI-resident web services in order to solve a complex business problem. In this paper we present a web-service request language (XSRL) developed on the basis of AI planning and the XML database query language XQuery. The framework is designed to handle and execute XSRL requests and is capable of planning under uncertainty, refining and revising plans as new service-related information is accumulated (via interaction with the user or UDDI) and as execution circumstances necessitate change.
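
    A minimal sketch, in Python, of the interleaved plan/refine/execute loop such a framework implies; the discovery and execution stubs stand in for UDDI lookups and service invocations, and none of the names below come from XSRL itself.

        def discover(goal):
            # Stand-in for a UDDI lookup of candidate services for one sub-goal.
            return [f"{goal}-svc-a", f"{goal}-svc-b"]

        def execute(service):
            # Stand-in for invoking a web service; failure forces plan revision.
            return service.endswith("-a")

        def fulfil(request):
            plan = list(request)                  # initial plan: one step per goal
            while plan:
                goal = plan.pop(0)
                for candidate in discover(goal):  # refine as information arrives
                    if execute(candidate):
                        break
                else:
                    raise RuntimeError(f"no service satisfies {goal!r}")

        fulfil(["book-flight", "reserve-hotel"])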

    The CloudSME Simulation Platform and its Applications: A Generic Multi-cloud Platform for Developing and Executing Commercial Cloud-based Simulations

    Simulation is used in industry to study a large variety of problems, ranging from increasing the productivity of a manufacturing system to optimizing the design of a wind turbine. However, some simulation models can be computationally demanding, and some simulation projects require time-consuming experimentation. High performance computing infrastructures such as clusters can be used to speed up the execution of large models or multiple experiments, but at a cost that is often prohibitive for Small and Medium-sized Enterprises (SMEs). Cloud computing presents an attractive, lower-cost alternative. However, developing a cloud-based simulation application can again be costly for an SME due to training and development needs, especially if software vendors need to use the resources of different heterogeneous clouds to avoid being locked into one particular cloud provider. In an attempt to reduce the cost of developing commercial cloud-based simulations, the CloudSME Simulation Platform (CSSP) has been developed as a generic approach that combines an AppCenter with the workflow of the WS-PGRADE/gUSE science gateway framework and the multi-cloud capabilities of the CloudBroker Platform. The paper presents the CSSP and two representative case studies from distinctly different areas that illustrate how commercial multi-cloud-based simulations can be created.
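
    A minimal sketch of the multi-cloud abstraction such a platform provides, with one submit() interface over interchangeable providers; the class and method names are illustrative, not the CloudBroker Platform's actual API.

        from dataclasses import dataclass

        @dataclass
        class SimulationJob:
            model: str
            experiments: int

        class CloudBackend:
            def submit(self, job: SimulationJob) -> str:
                raise NotImplementedError

        class AwsBackend(CloudBackend):
            def submit(self, job):
                return f"aws:{job.model}:{job.experiments}"

        class OpenStackBackend(CloudBackend):
            def submit(self, job):
                return f"openstack:{job.model}:{job.experiments}"

        def run_everywhere(job, backends):
            # The platform picks (or spreads over) clouds without code changes,
            # which is what avoids lock-in to a single provider.
            return [b.submit(job) for b in backends]

        print(run_everywhere(SimulationJob("wind-turbine", 50),
                             [AwsBackend(), OpenStackBackend()]))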