
    A Review of Data-driven Robotic Process Automation Exploiting Process Mining

    Purpose: Process mining aims to construct, from event logs, process maps that can help discover, automate, improve, and monitor organizational processes. Robotic process automation (RPA) uses software robots to perform tasks usually executed by humans. It is often difficult to determine which processes and steps to automate, especially with RPA, and process mining is seen as one way to address this difficulty. This paper assesses the applicability of process mining algorithms in accelerating and improving the implementation of RPA, along with the challenges encountered throughout project lifecycles. Methodology: A systematic literature review was conducted to examine approaches in which process mining techniques were used to understand the as-is processes that can be automated with software robots. Eight databases were used to identify papers on this topic. Findings: A total of 19 papers, all published since 2018, were selected from 158 unique candidate papers and then analyzed; the number of publications in this domain is increasing. Originality: The literature currently lacks a systematic review covering the intersection of process mining and robotic process automation. Existing work focuses mainly on methods to record the events that occur at the level of user interactions with the application, and on the preprocessing methods needed to discover routines whose steps can be automated. Several challenges arise in preprocessing such event logs, and many lifecycle steps of automation projects are weakly supported by existing approaches. Comment: 29 pages, 5 figures, 5 tables
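
    As a minimal sketch of the kind of routine discovery the surveyed approaches perform, the snippet below counts directly-follows relations in a toy user-interaction event log; the activity names and the log itself are invented for illustration and do not come from the paper.

```python
# Minimal sketch, not from the paper: count directly-follows relations in a
# toy user-interaction event log. Frequent, repetitive paths hint at routines
# that might be candidates for RPA.
from collections import Counter, defaultdict

# Hypothetical event log: (case_id, activity), already ordered by timestamp.
events = [
    ("case1", "open_invoice"), ("case1", "copy_amount"), ("case1", "paste_to_erp"),
    ("case2", "open_invoice"), ("case2", "copy_amount"), ("case2", "paste_to_erp"),
    ("case3", "open_invoice"), ("case3", "reject_invoice"),
]

traces = defaultdict(list)
for case_id, activity in events:
    traces[case_id].append(activity)

# Directly-follows counts: how often activity a is immediately followed by b.
dfg = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), count in dfg.most_common():
    print(f"{a} -> {b}: {count}")
```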

    Learning from accidents: machine learning for safety at railway stations

    In railway systems, station safety is a critical aspect of the overall structure, and yet, accidents at stations still occur. It is time to learn from these errors and improve conventional methods by utilizing the latest technology, such as machine learning (ML), to analyse accidents and enhance safety systems. ML has been employed in many fields, including engineering systems, and it interacts with us throughout our daily lives. Thus, we must consider the available technology in general and ML in particular in the context of safety in the railway industry. This paper explores the employment of the decision tree (DT) method in safety classification and the analysis of accidents at railway stations to predict the traits of passengers affected by accidents. The critical contribution of this study is the presentation of ML and an explanation of how this technique is applied for ensuring safety, utilizing automated processes, and gaining benefits from this powerful technology. To apply and explore this method, a case study has been selected that focuses on the fatalities caused by accidents at railway stations. An analysis of some of these fatal accidents as reported by the Rail Safety and Standards Board (RSSB) is performed and presented in this paper to provide a broader summary of the application of supervised ML for improving safety at railway stations. Finally, this research shows the vast potential of the innovative application of ML in safety analysis for the railway industry
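
    As a rough illustration of the supervised decision-tree classification described above, the sketch below trains scikit-learn's DecisionTreeClassifier on a handful of invented accident records; the feature encoding and labels are assumptions made for illustration, not the RSSB data or the paper's actual feature set.

```python
# Illustrative sketch only: a decision-tree classifier on invented
# accident-record features, standing in for the kind of RSSB-based
# analysis the paper describes.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical encoded features: [age_band, time_of_day, platform_crowding, intoxication_flag]
X = [
    [2, 0, 1, 0], [3, 2, 2, 1], [1, 1, 0, 0], [4, 2, 2, 1],
    [2, 1, 1, 0], [3, 0, 2, 1], [1, 2, 0, 0], [4, 1, 2, 1],
]
y = ["minor", "fatal", "minor", "fatal", "minor", "fatal", "minor", "fatal"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```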

    Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems

    Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions require separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under real and controllable traffic and attack scenarios. In this paper, we propose a methodology that combines network and security assessment techniques with the use of cloud technologies to build an emulation environment with an adjustable degree of affinity with respect to actual reference networks or planned systems. As a byproduct, starting from a specific case study, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available

    Fall Prediction and Prevention Systems: Recent Trends, Challenges, and Future Research Directions.

    Fall prediction is a multifaceted problem that involves complex interactions between physiological, behavioral, and environmental factors. Existing fall detection and prediction systems mainly focus on physiological factors such as gait, vision, and cognition, and do not address the multifactorial nature of falls. In addition, these systems lack efficient user interfaces and feedback for preventing future falls. Recent advances in internet of things (IoT) and mobile technologies offer ample opportunities for integrating contextual information about patient behavior and environment along with physiological health data for predicting falls. This article reviews the state-of-the-art in fall detection and prediction systems. It also describes the challenges, limitations, and future directions in the design and implementation of effective fall prediction and prevention systems

    Automated Measurement of Adherence to Traumatic Brain Injury (TBI) Guidelines using Neurological ICU Data

    Using a combination of physiological and treatment information from neurological ICU data-sets, adherence to traumatic brain injury (TBI) guidelines on hypotension, intracranial pressure (ICP) and cerebral perfusion pressure (CPP) is calculated automatically. The ICU output is evaluated to capture pressure events and actions taken by clinical staff for patient management, and these are then re-expressed as simplified process models. The official TBI guidelines from the Brain Trauma Foundation (BTF) are similarly evaluated, so the two structures can be compared and a quantifiable distance between them calculated (the measure of adherence). The methods used include: the compilation of physiological and treatment information into event logs and subsequently process models; the expression of the BTF guidelines as process models within the real-time context of the ICU; and a calculation of distance between the two processes using two algorithms (“Direct” and “Weighted”) building on work conducted in the business process domain. Results are presented across two categories, each with clinical utility (minute-by-minute and single patient stays), using a real ICU data-set. Results for two sample patients using the weighted algorithm show non-adherence levels of 6.25% for 42 minutes and 56.25% for 708 minutes for one patient, and 18.75% for 17 minutes and 56.25% for 483 minutes for the other. Expressed as two combinatorial metrics (duration/non-adherence (A) and duration * non-adherence (B)), which together indicate the clinical importance of the non-adherence, one patient has means of A=4.63 and B=10014.16 and the other means of A=0.43 and B=500.0
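
    The abstract does not spell out the “Direct” and “Weighted” algorithms, so the sketch below is only one plausible reading of the minute-by-minute comparison: observed ICU actions are checked against a toy guideline rule, and the resulting non-adherence is combined with episode duration in the spirit of the A and B metrics. The ICP threshold, action names, and log are all invented.

```python
# Hedged sketch (one interpretation, not the paper's algorithms): compare a
# minute-by-minute log of observed ICU actions against guideline-recommended
# actions, then combine non-adherence with episode duration in the spirit of
# the duration/non-adherence (A) and duration * non-adherence (B) metrics.

# Each tuple: (minute, icp_mmHg, observed_action). Data is invented.
log = [
    (0, 18, "monitor"), (1, 24, "monitor"), (2, 26, "drain_csf"),
    (3, 27, "drain_csf"), (4, 19, "monitor"), (5, 25, "monitor"),
]

def recommended_action(icp):
    # Toy stand-in for a guideline rule: treat ICP above 22 mmHg.
    return "drain_csf" if icp > 22 else "monitor"

mismatches = sum(1 for _, icp, act in log if act != recommended_action(icp))
duration = len(log)                      # minutes covered by the episode
non_adherence = 100.0 * mismatches / duration

metric_a = duration / non_adherence if non_adherence else float("inf")
metric_b = duration * non_adherence

print(f"non-adherence: {non_adherence:.1f}% over {duration} min")
print(f"A = duration / non-adherence = {metric_a:.2f}")
print(f"B = duration * non-adherence = {metric_b:.2f}")
```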

    Methodologies of Legacy Clinical Decision Support System -A Review

    Information technology plays a prominent role in the medical field through the incorporation of the Clinical Decision Support System (CDSS) into routine practice. A CDSS is a computer-based interactive program that assists the physician in making the right decision at the right time. Clinical decision support is now a dynamic research area in computing, but a lack of understanding of how such systems work and function makes their adoption by physicians and patients slow. The literature review in this paper focuses on an overview of legacy CDSS and on the kinds of methodologies and classifiers employed to build such decision support systems, presented in a non-technical manner for physicians and strategy-makers. This study provides the scope for understanding clinical decision support, along with a gateway for physicians and policy-makers to develop and deploy decision support systems as a healthcare service for making quick, agile, and correct decisions. Future directions for handling the uncertainties and challenges of clinical decision support systems are also highlighted in this study

    Ontology-based annotation using naive Bayes and decision trees

    The Cognitive Paradigm Ontology (CogPO) defines an ontological relationship between academic terms and experiments in the field of neuroscience. BrainMap (www.brainmap.org) is a database of literature describing these experiments, which are annotated by human experts based on the ontological framework defined in CogPO. We present a stochastic approach to automate this process. We begin with a gold-standard corpus of abstracts annotated by experts, model the annotations with a group of naive Bayes classifiers, and then explore the inherent relationships among different components defined by the ontology using a probabilistic decision tree model. Our solution outperforms conventional text mining approaches by taking advantage of the ontology. We consider five essential ontological components in CogPO (Stimulus Modality, Stimulus Type, Response Modality, Response Type, and Instructions) and evaluate the probability of successfully categorizing a research paper on each component by training a basic multi-label naive Bayes classifier on a set of examples taken from the BrainMap database that have already been manually annotated by human experts. According to the performance of these classifiers, we create a decision tree that labels the components sequentially at different levels. Each node of the decision tree is associated with a naive Bayes classifier built in a different subspace of the input universe. We first make decisions on those components whose labels are comparatively easy to predict, and then use these predetermined conditions to narrow down the input space along all tree paths, thereby boosting the performance of the naive Bayes classification on components whose labels are difficult to predict. For annotating a new instance, we use the classifiers associated with the nodes to find labels for each component, starting from the root and then tracking down the tree, possibly along multiple paths. The annotation is completed when the bottom level is reached, where all labels produced along the paths are collected
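
    A compact sketch of the general scheme is shown below: one naive Bayes classifier per component, components ordered by how easy they are to predict, and earlier predictions fed back as extra features for later ones. The toy corpus, the two components, and their labels are made up; this is a linear chain rather than the paper's branching tree and is not the authors' implementation or the BrainMap data.

```python
# Illustrative sketch of the general idea, not the authors' implementation:
# one naive Bayes classifier per ontology component, components ordered from
# easiest to hardest to predict, with earlier predictions appended as extra
# features for later components. Corpus and labels are toy data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

abstracts = [
    "visual checkerboard stimulus with button press response",
    "auditory tone discrimination with verbal response",
    "visual word reading task with covert verbal response",
    "auditory oddball detection with button press response",
]
labels = {  # hypothetical component labels per abstract
    "StimulusModality": ["visual", "auditory", "visual", "auditory"],
    "ResponseModality": ["manual", "verbal", "verbal", "manual"],
}

X = CountVectorizer().fit_transform(abstracts).toarray()

# Rank components by how well text alone predicts them (easiest first).
order = sorted(labels, key=lambda c: -cross_val_score(
    MultinomialNB(), X, labels[c], cv=2).mean())

models = {}
for comp in order:
    clf = MultinomialNB().fit(X, labels[comp])
    models[comp] = clf
    # One-hot encode this component's predictions and append them as
    # features for the components that come later in the ordering.
    pred = clf.predict(X)
    onehot = np.array([[float(p == c) for c in clf.classes_] for p in pred])
    X = np.hstack([X, onehot])

print("component order (easiest first):", order)
```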