
    An approach to safety analysis of clinical workflows

    A clinical workflow comprises the information and processes involved in providing a clinical service. Such workflows are safety critical: even minor faults can propagate and ultimately harm a patient, or even cost a patient's life. Failures of this kind have a destructive impact on all parties involved. Because the delivery of a clinical service involves a large number of processes and tasks, it can be difficult to determine which individuals or processes are responsible for adverse events; such an analysis is typically complex and slow to perform manually. Automated analysis tools can help determine the root causes of potential adverse events and consequently help avoid preventable errors, either by altering existing workflows or by designing new ones. This paper describes a technical approach to the safety analysis of clinical workflows, utilising a safety analysis tool, Hierarchically-Performed Hazard Origin and Propagation Studies (HiP-HOPS), that is already in use in the field of mechanical systems. The paper then demonstrates the applicability of the approach to clinical workflows by applying it to analyse the workflow of a radiology department. We conclude that the approach is applicable to this area of healthcare and provides a mechanism both for the systematic identification of adverse events and for the introduction of possible safeguards into clinical workflows.
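The core idea of hazard origin and propagation analysis can be sketched very simply: model the workflow as a directed graph of processes and ask which root faults can reach a safety-critical output. The following is a minimal illustrative sketch, not HiP-HOPS itself; the radiology process names are assumptions made for the example.

```python
# Hypothetical sketch of failure-propagation analysis over a workflow graph.
# A fault at any process is assumed to propagate to every downstream process,
# so we can ask which root faults can reach a safety-critical output.
from collections import defaultdict, deque

def downstream_effects(edges, fault_origin):
    """Return all processes reachable from fault_origin (affected by its failure)."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    seen, queue = set(), deque([fault_origin])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Simplified (invented) radiology workflow: order -> scan -> archive -> report -> diagnosis
edges = [("order_entry", "scan"), ("scan", "image_archive"),
         ("image_archive", "report"), ("report", "diagnosis")]

# Which root faults can propagate all the way to the diagnosis?
root_causes = [p for p in ("order_entry", "scan", "image_archive", "report")
               if "diagnosis" in downstream_effects(edges, p)]
```

In a real analysis each process would also carry local failure modes and mitigations; the graph traversal above only captures the propagation step of such a study.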

    Performance Analysis of Open Source Machine Learning Frameworks for Various Parameters in Single-Threaded and Multi-Threaded Modes

    The basic features of some of the most versatile and popular open source frameworks for machine learning (TensorFlow, Deep Learning4j, and H2O) are considered and compared. A comparative analysis was performed and conclusions were drawn as to the advantages and disadvantages of these platforms. Performance tests on the de facto standard MNIST data set were carried out on the H2O framework for deep learning algorithms designed for CPU and GPU platforms, in single-threaded and multi-threaded modes of operation. We also present the results of testing neural network architectures on the H2O platform for various activation functions, stopping metrics, and other parameters of the machine learning algorithm. For the use case of the MNIST database of handwritten digits in single-threaded mode, it was demonstrated that blind selection of these parameters can hugely increase the runtime (by 2-3 orders of magnitude) without a significant increase in precision. This result can have a crucial influence on the optimization of available and new machine learning methods, especially for image recognition problems.
    Comment: 15 pages, 11 figures, 4 tables; this paper summarizes activities that were started recently and described briefly in the previous conference presentations arXiv:1706.02248 and arXiv:1707.04940; it is accepted for the Springer book series "Advances in Intelligent Systems and Computing"
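A back-of-the-envelope calculation shows why blind hyperparameter choices can inflate runtime by orders of magnitude: training cost grows with the product of layer widths, sample count, and epochs. The sketch below is illustrative arithmetic, not the paper's code; the layer sizes and epoch counts are assumptions chosen for the example.

```python
# Illustrative: rough multiply-accumulate count for training a fully
# connected network on MNIST-sized data. All configurations are invented
# to show how parameter choices multiply the work.
def fc_macs(layers, n_samples, epochs):
    """Approximate MAC count for forward passes over the training set."""
    dims = [784] + list(layers) + [10]  # MNIST input (28x28) and 10 classes
    per_sample = sum(a * b for a, b in zip(dims, dims[1:]))
    return per_sample * n_samples * epochs

modest = fc_macs([64], 60_000, 10)            # small net, few epochs
blind  = fc_macs([2048, 2048], 60_000, 1000)  # oversized net, long run
ratio = blind / modest                        # several orders of magnitude more work
```

Since accuracy on MNIST saturates quickly for such architectures, the extra work buys little precision, which matches the paper's observation.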

    Harnessing the Power of Many: Extensible Toolkit for Scalable Ensemble Applications

    Many scientific problems require multiple distinct computational tasks to be executed in order to achieve a desired solution. We introduce the Ensemble Toolkit (EnTK) to address the challenges of scale, diversity, and reliability that such ensembles pose. We describe the design and implementation of EnTK, characterize its performance, and integrate it with two distinct exemplar use cases: seismic inversion and adaptive analog ensembles. We perform nine experiments, characterizing EnTK's overheads, strong and weak scalability, and the performance of the two use case implementations, at scale and on production infrastructures. We show how EnTK meets the following general requirements: (i) dedicated abstractions to support the description and execution of ensemble applications; (ii) support for execution on heterogeneous computing infrastructures; (iii) efficient scalability up to O(10^4) tasks; and (iv) fault tolerance. We discuss the novel computational capabilities that EnTK enables and the scientific advantages that arise from them. We propose EnTK as an important addition to the suite of tools supporting production scientific computing.
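The ensemble pattern the toolkit abstracts can be sketched in a few lines: stages of independent tasks run concurrently, and each stage completes before the next begins. This is a minimal generic sketch, not EnTK's actual API (EnTK builds on its own Pipeline/Stage/Task abstractions and remote execution); the task contents here are invented.

```python
# Minimal sketch of staged ensemble execution: tasks within a stage are
# independent and run in parallel; stages run in order (a barrier between them).
from concurrent.futures import ThreadPoolExecutor

def run_ensemble(stages):
    """stages: list of lists of zero-argument callables (tasks)."""
    results = []
    with ThreadPoolExecutor() as pool:
        for stage in stages:
            # pool.map preserves task order within the stage
            results.append(list(pool.map(lambda task: task(), stage)))
    return results

# Toy two-stage ensemble: four "simulations", then one aggregating "analysis"
stages = [
    [lambda i=i: i * i for i in range(4)],
    [lambda: "analysis done"],
]
out = run_ensemble(stages)
```

A production toolkit like EnTK additionally handles heterogeneous resources, task failure and retry, and scaling far beyond what a thread pool supports; the barrier-between-stages structure is the shared core idea.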

    Taylorism, targets and the pursuit of quantity and quality by call centre management

    The paper locates the rise of the call centre within the context of the development of Taylorist methods and technological change in office work in general. Managerial use of targets to impose and measure employees' quantitative and qualitative performance is analysed in four case-study organisations. The paper concludes that call centre work reflects a paradigmatic re-configuration of customer servicing operations, and that the continuing application of Taylorist methods appears likely.

    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new, challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation) has prompted a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the "physical" real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined to deal with the exceptions that can ensue. In this chapter, we tackle this issue and propose a general approach, a concrete framework, and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in the case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design-time.
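The monitor-and-repair loop described above can be sketched as: execute a task, compare the expected state with the observed state, and inject a recovery task when they diverge. This is a hedged toy sketch of that control loop, not SmartPM's implementation; the task names and the recovery rule are invented for illustration (SmartPM uses declarative specifications and an automated planner to synthesize recovery).

```python
# Toy sketch of an adaptive process loop: on an expected/observed mismatch,
# a recovery task is inserted and the failed task is retried.
def run_process(tasks, execute, recover):
    """tasks: list of (name, expected_state); execute/recover: callables."""
    trace = []
    queue = list(tasks)
    while queue:
        name, expected = queue.pop(0)
        observed = execute(name)
        trace.append((name, observed))
        if observed != expected:
            # Deviation detected at run-time: retry after a recovery task.
            queue.insert(0, (name, expected))
            queue.insert(0, recover(name))
    return trace

# Example: "move_robot" is blocked once by an exogenous event, then succeeds
failures = {"move_robot": 1}

def execute(name):
    if failures.get(name, 0) > 0:
        failures[name] -= 1
        return "blocked"
    return "ok"

def recover(name):
    # A real system would plan this step; here it is a fixed illustrative task.
    return (f"clear_path_for_{name}", "ok")

trace = run_process([("move_robot", "ok"), ("deliver", "ok")], execute, recover)
```

The key property mirrored from the chapter is that no recovery behaviour is hard-coded per exception: the loop reacts to whatever mismatch the monitor reports.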