
    Calendar.help: Designing a Workflow-Based Scheduling Agent with Humans in the Loop

Although information workers may complain about meetings, they are an essential part of their work life. Consequently, busy people spend a significant amount of time scheduling meetings. We present Calendar.help, a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant. Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise. Unusual scenarios fall back to a trained human assistant who executes them as unstructured macrotasks. We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments. Our findings provide insight into how complex information tasks can be broken down into repeatable components that can be executed efficiently to improve productivity. Comment: 10 pages
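A minimal sketch of the microtask-with-fallback pattern the abstract describes: try an automated step, and hand the microtask to a human when automation is unsure. All names here (`Microtask`, `extract_times`, `ask_assistant`) are hypothetical illustrations, not Calendar.help's actual API.

```python
# Sketch: a microtask runs automated logic first and falls back to a human.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Microtask:
    name: str
    automated: Callable[[str], Optional[str]]  # returns None when unsure
    human: Callable[[str], str]                # human fallback executor

    def run(self, email_body: str) -> str:
        result = self.automated(email_body)
        return result if result is not None else self.human(email_body)

def extract_times(body: str) -> Optional[str]:
    # Toy "automation": succeeds only on an easy, unambiguous phrase.
    return "Tue 3pm" if "tuesday at 3" in body.lower() else None

def ask_assistant(body: str) -> str:
    # Stand-in for routing the microtask to a human worker.
    return input(f"[human microtask] Proposed time for {body!r}? ")

if __name__ == "__main__":
    task = Microtask("extract-candidate-times", extract_times, ask_assistant)
    print(task.run("Can we meet Tuesday at 3?"))  # takes the automated path
```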

    Implementing Problem Resolution Models in Remedy

This paper defines the concept of a Problem Resolution Model (PRM) and describes the current implementation made by the User Support unit at CERN. One of the main challenges of User Support services in any High Energy Physics institute/organization is to address the solving of the computing-related problems faced by their researchers. The User Support group at CERN is the IT unit in charge of modeling the operations of the Help Desk and acts as a second-level support to some of the support lines whose problems are received at the Help Desk. The motivation behind the use of a PRM is to provide well-defined procedures and methods to react in an efficient way to a request for solving a problem, providing advice, information, etc. A PRM is materialized as a workflow with a set of defined states in which a problem can be. Problems move from one state to another according to actions decided by the person who is handling them. A PRM can be implemented by a computer application, generally referred to as a Problem Reporting Management System (PRMS). Through this application, problems can be effectively guided through the states of the workflow by applying actions to them. This automatic handling improves problem resolution times and provides flexible incorporation of problems into the workflow (whether by email, the helpdesk operator, etc.). It also provides registration and accounting of problems, including the creation of a knowledge base, reporting, performance measurement, etc. For this implementation we have used Remedy, the current choice of the IT Division at CERN for a PRMS. Remedy is a specialized development system for creating PRM applications. We have developed a complete Remedy application to implement the User Support PRM, and we have created complementary tools for reporting, statistics, backups, etc. The aim of this paper is to explain all these concepts and the main issues behind their implementation.
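As a rough illustration of the workflow idea, a PRM can be modeled as a state machine: defined states, allowed actions, and transitions, with undefined transitions rejected. The states, actions, and transitions below are invented for illustration and do not reflect CERN's actual Remedy workflow.

```python
# Sketch: a PRM workflow as a (state, action) -> next-state transition table.
TRANSITIONS = {
    ("registered", "assign"):    "in_progress",
    ("in_progress", "solve"):    "solved",
    ("in_progress", "escalate"): "registered",   # returned for reassignment
    ("solved", "confirm"):       "closed",
    ("solved", "reopen"):        "in_progress",
}

def apply_action(state: str, action: str) -> str:
    """Move a problem to its next state; reject undefined transitions."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

# Example: a problem guided through the workflow by its handler's actions.
state = "registered"
for action in ("assign", "solve", "confirm"):
    state = apply_action(state, action)
print(state)  # closed
```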

    System Design for a Data-driven and Explainable Customer Sentiment Monitor

The most important goal of customer service is to keep the customer satisfied. However, service resources are always limited and must be prioritized. It is therefore important to identify customers who may become dissatisfied and whose issues might lead to escalations. Today this prioritization of customers is often done manually. Data science on IoT data (especially log data) for machine health monitoring, as well as analytics on enterprise data for customer relationship management (CRM), have mainly been researched and applied independently. In this paper, we present a framework for a data-driven decision support system which combines IoT and enterprise data to model customer sentiment. Such decision support systems can help to prioritize customers and service resources to effectively troubleshoot problems or even avoid them. The framework is applied in a real-world case study with a major medical device manufacturer. This includes a fully automated and interpretable machine learning pipeline designed to meet the requirements defined with domain experts and end users. The overall framework is currently deployed, and it learns and evaluates predictive models from terabytes of IoT and enterprise data to actively monitor customer sentiment for a fleet of thousands of high-end medical devices. Furthermore, we provide an anonymized industrial benchmark dataset for the research community.
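To make the core pattern concrete, here is a hedged sketch, assuming scikit-learn is available: per-device IoT signals are joined with CRM features and fed to an interpretable classifier whose coefficients can be inspected. The feature names and data are invented, not the paper's actual pipeline.

```python
# Sketch: joint IoT + CRM features into an interpretable risk classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per customer/device: [error_rate, downtime_hours, open_tickets, nps]
X = np.array([
    [0.01,  1.0, 0, 9],
    [0.20, 30.0, 4, 3],
    [0.05,  5.0, 1, 7],
    [0.30, 48.0, 6, 2],
])
y = np.array([0, 1, 0, 1])  # 1 = an escalation occurred later

model = LogisticRegression().fit(X, y)

# Interpretability: coefficients show how each signal moves the risk score.
features = ["error_rate", "downtime_hours", "open_tickets", "nps"]
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
print("risk for new customer:", model.predict_proba([[0.15, 20.0, 3, 4]])[0, 1])
```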

    Unknown Exception Handling Tool Using Humans as Agents

In a typical workflow process, exceptions are the norm. Exceptions are defined as deviations from the normal sequence of activities and events. Exceptions can be divided into two broad categories: known exceptions (i.e., expected and predefined deviations) and unknown exceptions (i.e., unexpected and undefined deviations). Business Process Execution Language (BPEL) has become the de facto standard for executing business workflows with the use of web services. BPEL includes exception handling methods that are sufficient for known exception scenarios. Depending on the exception and the specifics of the exception handling tools, processes may either halt or move to completion. Instances of processes that are halted or left incomplete due to unhandled exceptions degrade the performance of the workflow process, as they increase resource utilization and process completion time. However, designing efficient process handlers to avoid the issue of unhandled exceptions is not a simple task. This thesis provides a tool that handles unknown exceptions by involving humans in exception handling, using the BPEL4PEOPLE specification. BPEL4PEOPLE, an extension of BPEL, offers the ability to specify human activities within BPEL processes. The approach considered in this thesis involves humans in exception handling by providing an alternate sub-process within a given business process. A prototype application has been developed implementing the tool that handles unknown exceptions. The prototype application monitors the progress of an automated workflow process and permits human involvement to reroute the course of a workflow process when an unknown exception occurs. We demonstrate the utility of the prototype and the tool using Scenario Walkthrough and Inspection Methods (SWIMs): we walk through loan application process scenarios with example instances containing known and unknown exceptions, and present a claims analysis of the process instance results.
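A hedged Python sketch (not actual BPEL/BPEL4PEOPLE XML) of the routing idea: known exceptions take their predefined handlers, while an unknown exception is rerouted to a human task that decides how the instance proceeds instead of halting it. All names are hypothetical.

```python
# Sketch: known exceptions use predefined handlers; unknown ones go to a human.
class KnownException(Exception): ...
class CreditCheckTimeout(KnownException): ...  # example of a predefined deviation

def human_task(instance_id: str, exc: Exception) -> str:
    # Stand-in for a BPEL4PEOPLE human activity: a person inspects the
    # failed instance and picks a continuation ("retry", "skip", "abort").
    print(f"[human] instance {instance_id} failed with {exc!r}")
    return "retry"

def run_activity(instance_id: str, activity):
    try:
        return activity()
    except KnownException:
        return "fallback-service"      # predefined handler path
    except Exception as exc:           # unknown: reroute instead of halting
        if human_task(instance_id, exc) == "retry":
            return activity()
        raise                          # human chose to abort the instance

# Example: an unknown error is routed to a human, then the activity succeeds.
attempts = iter([RuntimeError("unmapped fault"), "approved"])
def flaky_credit_check():
    step = next(attempts)
    if isinstance(step, Exception):
        raise step
    return step

print(run_activity("loan-42", flaky_credit_check))  # prints [human] ..., then approved
```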

    Improving Healthcare in Remote Environments Via a New Integrated, Online Communication Platform

International SOS delivers integrated medical solutions to remote and extreme-remote onshore and offshore projects worldwide, using highly trained and experienced medics with a robust system of protocols, procedures and clear escalation criteria to our Topside support centres. To enable improved medical escalations from the remote sites, we have developed a customised, online communication tool called the Digital Topside Platform (DTP). The system was designed to allow simplified telemedical interaction, improve data security, and enable integrated patient care from initial presentation to return to work, with a focus on managing the case on-site and on patient confidentiality. The design process included a focus on user experience and workflow optimisation, and an iterative development methodology. The system includes multiple components: management dashboards, interoperability with existing case management systems, messaging, video and file transfer features, and case data covering medical and contextual information via a mix of auto-populated and manually entered data points. The solution was designed for non-urgent cases, which represent >80% of case escalation volume. Initial deployment of the solution to offshore oil rigs in West Africa and the Intl.SOS Johannesburg Response Centre has demonstrated four key improvements over baseline data. A preliminary analysis of our data shows that 70-80% of case escalations use the system, demonstrating a high rate of user adoption. The improvements were: (1) reduced need for urgent medical evacuation for non-life-threatening conditions; (2) increased adherence to evidence-based guidelines via more efficient clinical governance processes; (3) faster escalation processes via profile management and system integration; and (4) early escalation of work-related injury cases, resulting in reduced time off work. Based on evidence to date, the system facilitates improved healthcare management for patients in the remote offshore environment. The system continues to be deployed to additional sites and regions globally, which will generate a statistically significant dataset for further evaluation.

    Secure management of logs in internet of things

Ever since the advent of computing, managing data has been of extreme importance. With innumerable devices being added to network infrastructure, there has been a proportionate increase in the data that needs to be stored. With the advent of the Internet of Things (IoT), it is anticipated that billions of devices will be part of the internet within another decade. Since these devices will communicate with each other regularly with little or no human intervention, a plethora of real-time data will be generated quickly, resulting in a large number of log files. Apart from the complexity of storage, it will be mandatory to maintain the confidentiality and integrity of these logs on IoT-enabled devices. This paper provides a brief overview of how logs can be efficiently and securely stored on IoT devices. Comment: 6 pages, 1 table
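One standard way to obtain the integrity property the paper calls for is a hash chain with a per-record HMAC: each log record binds the authenticator of its predecessor, so any modification, deletion, or reordering breaks verification. The sketch below uses only Python's standard library and is a generic illustration, not the paper's specific scheme.

```python
# Sketch: tamper-evident logging via an HMAC-linked hash chain.
import hashlib, hmac, json

KEY = b"device-secret"  # in practice: a per-device key from secure storage

def append(log: list, message: str) -> None:
    prev = log[-1]["mac"] if log else "genesis"
    body = json.dumps({"msg": message, "prev": prev})
    mac = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"msg": message, "prev": prev, "mac": mac})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        body = json.dumps({"msg": entry["msg"], "prev": prev})
        mac = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(mac, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
append(log, "sensor online")
append(log, "temp=22.4")
print(verify(log))            # True
log[0]["msg"] = "tampered"    # any modification breaks the chain
print(verify(log))            # False
```

Confidentiality would be layered on separately, e.g., by encrypting each record body before it is chained.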

    Optimized Time Management for Declarative Workflows

Declarative process models are increasingly used since they fit better with the nature of flexible process-aware information systems and the requirements of the stakeholders involved. When managing business processes, support for representing time and reasoning about it also becomes crucial. Given a declarative process model, users may choose among different ways to execute it, i.e., there exist numerous possible enactment plans, each presenting specific values for the given objective functions (e.g., overall completion time). This paper suggests a method for generating optimized enactment plans (e.g., plans minimizing overall completion time) from declarative process models with explicit temporal constraints; these constraints cover a number of well-known workflow time patterns. The generated plans can be used for different purposes, such as providing personal schedules to users, facilitating early detection of critical situations, or predicting execution times for process activities. The proposed approach is applied to a range of test models of varying complexity. Although the optimization of process execution is a highly constrained problem, results indicate that our approach produces a satisfactory number of suitable solutions, i.e., solutions that are optimal in many cases.
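As a toy illustration of the approach, the sketch below enumerates the enactment plans permitted by two Declare-style constraints and keeps the one minimizing a time objective (total completion time on a single resource here). The activities, durations, and constraints are invented, and a real implementation would use constraint-based search rather than brute-force enumeration.

```python
# Sketch: pick the best constraint-satisfying enactment plan by enumeration.
from itertools import permutations

durations = {"A": 3, "B": 1, "C": 4, "D": 2}

def allowed(plan) -> bool:
    # Declare-style temporal constraints: precedence(A, C) and precedence(D, B).
    return plan.index("A") < plan.index("C") and plan.index("D") < plan.index("B")

def total_completion_time(plan) -> int:
    t, total = 0, 0
    for activity in plan:
        t += durations[activity]   # activity finishes at time t
        total += t
    return total

feasible = [p for p in permutations(durations) if allowed(p)]
best = min(feasible, key=total_completion_time)
print("best plan:", best, "objective:", total_completion_time(best))
```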

    Leveraging OpenStack and Ceph for a Controlled-Access Data Cloud

While traditional HPC has satisfied, and continues to satisfy, most workflows, a new generation of researchers has emerged looking for sophisticated, scalable, on-demand, and self-service control of compute infrastructure in a cloud-like environment. Many also seek safe harbors to operate on or store sensitive and/or controlled-access data in a high-capacity environment. To cater to these modern users, the Minnesota Supercomputing Institute designed and deployed Stratus, a locally hosted cloud environment powered by the OpenStack platform and backed by Ceph storage. The subscription-based service complements existing HPC systems by satisfying the following unmet needs of our users: a) on-demand availability of compute resources, b) long-running jobs (i.e., >30 days), c) container-based computing with Docker, and d) adequate security controls to comply with controlled-access data requirements. This document provides an in-depth look at the design of Stratus with respect to security and compliance with the NIH's controlled-access data policy. Emphasis is placed on lessons learned while integrating OpenStack and Ceph features into a so-called "walled garden", and how those technologies influenced the security design. Many features of Stratus, including tiered secure storage with the introduction of a controlled-access data "cache", fault-tolerant live migrations, and fully integrated two-factor authentication, depend on recent OpenStack and Ceph features. Comment: 7 pages, 5 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, USA
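Purely as an illustration (not OpenStack or Ceph code), the sketch below shows the kind of policy gate a controlled-access data "cache" implies: data is staged into the fast tier only for a user who holds an approved dataset authorization and has passed two-factor authentication. All names, including the dataset identifier, are hypothetical.

```python
# Sketch: admission check before staging controlled-access data into a cache tier.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    passed_2fa: bool
    approved_datasets: set = field(default_factory=set)

def may_stage(user: User, dataset: str) -> bool:
    """Admit a dataset into the user's cache tier only if policy allows."""
    return user.passed_2fa and dataset in user.approved_datasets

alice = User("alice", passed_2fa=True, approved_datasets={"dbGaP-phs000123"})
print(may_stage(alice, "dbGaP-phs000123"))  # True
print(may_stage(alice, "dbGaP-phs000999"))  # False: no approval on file
```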

    Successfully initiating an escalation of care in acute ward settings—A qualitative observational study

Aims: To address knowledge gaps by (i) developing a theoretical understanding of escalation and (ii) identifying escalation success factors. Design: Non-participant observations were used to examine deteriorating-patient escalation events. Methods: Escalation event data were collected by a researcher who shadowed clinical staff between February 16th 2021 and March 17th 2022 at two National Health Service Trusts. Events were analysed using Framework Analysis. Escalation tasks were mapped using a Hierarchical Task Analysis diagram, and data are presented as percentages, frequencies and 95% CIs. Results: A total of 38 observation sessions were conducted, totalling 105 hours, during which 151 escalation events were captured. Half of these were not early warning score-initiated and resulted from bleeding, infection, or chest pain. Four communication phenotypes were observed in the escalation events. The most common was Outcome Focused Escalation, where the referrer expected specific outcomes such as blood cultures or antibiotic prescriptions. Informative Escalations, the second most frequent type, were often used when a triggering patient's condition was of low clinical concern. General Concern Escalations occurred when the referrer did not have predetermined expectations. Spontaneous Interaction Escalations were the least frequently observed, occurring opportunistically in communal workspaces. Conclusion: Half of the events were non-triggering escalations, and understanding these can inform the design of systems that better support staff in undertaking them. Escalation is not homogeneous, and differing escalation communication phenotypes exist. Informative Escalations represent an organizational requirement to report triggering warning scores, and a targeted reduction of these may be organizationally advantageous. Increasing the frequency of Spontaneous Escalations, through hospital design, may also be beneficial. Impact Statement: Our work highlights that a significant proportion of escalation workload occurs without a triggering early warning score, and there is scope to better support these escalations with purpose-designed systems. Further examination of reducing Informative and increasing Spontaneous Escalations is also warranted. Patient and Public Contribution: Extensive patient and public involvement and engagement (PPIE) was completed throughout the lifecycle of this study. PPIE members validated the research questions and overarching aims of the overall study, contributed to the design of the study, reviewed documents, and reviewed the final data generated.

    Information brokering in globally distributed work: a workarounds perspective

Past studies have taken an interest in two important roles intermediaries play to broker information effectively: first, connecting information between multiple users; second, protecting the information being transmitted. Common to these two streams is the assumption that efficient brokering takes place when information is visible. In practice, however, information exchanges bypass the intermediary for various reasons. Despite this, existing research has paid little attention to how intermediaries broker effectively when information is not visible. Drawing on a qualitative case study in a globally distributed finance function, we explore how intermediaries broker in a complex, distributed setting that creates conditions for information to be distorted and hidden. We contribute to the brokering literature by offering a new, third role: regulating information. Our research also provides insights for intermediary management by illuminating the normative complexity of information workarounds, which aid problem-solving but lead to information hiding.