
    Online failure prediction in air traffic control systems

    This thesis introduces a novel approach to online failure prediction for mission-critical distributed systems whose distinctive features are that it is black-box, non-intrusive, and online. The approach combines Complex Event Processing (CEP) and Hidden Markov Models (HMM) to analyze symptoms of failures that may appear as anomalous conditions of performance metrics identified for this purpose. The thesis presents an architecture named CASPER, based on CEP and HMM, that relies only on information sniffed from the communication network of a mission-critical system to predict anomalies that can lead to software failures. An instance of CASPER has been implemented, trained, and tuned to monitor a real Air Traffic Control (ATC) system developed by Selex ES, a Finmeccanica company. An extensive experimental evaluation of CASPER is presented. The results show (i) a very low percentage of false positives under both normal and stress conditions, and (ii) a failure prediction time long enough for the system to apply appropriate recovery procedures.
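
    The core mechanism in this abstract, CEP-derived symptoms filtered through an HMM, can be illustrated with a minimal sketch. This is not Selex ES's CASPER implementation: the two hidden states, the symptom alphabet, and all probabilities below are illustrative assumptions.

```python
# Minimal HMM-based failure prediction over discrete symptoms.
# A CEP-like stage is assumed to turn sniffed network events into the
# symbols "ok"/"slow"/"drop"; a two-state HMM (safe vs. pre-failure)
# is filtered online with the forward recursion, and an alarm fires
# when P(pre-failure) crosses a threshold. All values are hypothetical.
import numpy as np

STATES = ["safe", "pre_failure"]              # hidden system states
SYMPTOMS = {"ok": 0, "slow": 1, "drop": 2}    # CEP-derived observation symbols

A = np.array([[0.98, 0.02],                   # state transition probabilities
              [0.10, 0.90]])
B = np.array([[0.90, 0.08, 0.02],             # P(symptom | state)
              [0.30, 0.40, 0.30]])
pi = np.array([0.99, 0.01])                   # initial state distribution

def filter_online(symptom_stream, alarm_threshold=0.8):
    """Yield P(pre_failure) after each symptom via the HMM forward recursion."""
    belief = pi.copy()
    for symptom in symptom_stream:
        o = SYMPTOMS[symptom]
        belief = (belief @ A) * B[:, o]       # predict, then weight by evidence
        belief /= belief.sum()                # normalize to a distribution
        yield belief[1], belief[1] >= alarm_threshold

# Symptoms as a CEP stage might emit them from sniffed network traffic:
stream = ["ok", "ok", "slow", "slow", "drop", "drop"]
for p, alarm in filter_online(stream):
    print(f"P(pre_failure)={p:.3f} alarm={alarm}")
```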

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logistical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users to improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields. It will also be of value to established professionals as a text for self-study or for reference.
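
    The "knowledge base plus inference rules" pattern described in this abstract can be sketched as a small forward-chaining engine. The facts, rules, and clinical flavour below are illustrative assumptions, not examples taken from the book.

```python
# Toy forward-chaining inference: a rule fires when all of its premises
# are known facts, deriving a new fact; this repeats until a fixed point.
# Derived facts prefixed with "suggest:" act as the DSS's recommendations.
RULES = [
    ({"fever", "cough"}, "suspect_respiratory_infection"),
    ({"suspect_respiratory_infection", "low_spo2"}, "suggest: order chest X-ray"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived; return all facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "low_spo2"}, RULES)
print(sorted(f for f in derived if f.startswith("suggest:")))
# -> ['suggest: order chest X-ray']
```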

    On the Road to a Unified Market for Energy Efficiency: The Contribution of White Certificates Schemes

    White certificates schemes mandate competing energy companies to promote energy efficiency through flexibility mechanisms, including the trading of energy savings. So far, stylized facts are lacking and outcomes are mainly country-specific. By comparing the results of the British, Italian and French experiences, we attempt to identify the core determinants of their performance. We show that (i) white certificates schemes are depicted in theoretical works as mandatory subsidies on energy efficiency goods recovered by an end-use energy tax, whereby white certificates exchanges are not a central feature; (ii) at their current stage, existing schemes are cost-effective and economically efficient, though with large discrepancies; (iii) the hybrid subsidy-tax mechanism seems valid but is conditional on cost pass-through permission; otherwise, obliged energy companies merely promote information on the “downstream” side (i.e. at the consumer level); (iv) although white certificates exchange between the different types of actors involved can be important, as in Italy, trade among obliged companies is negligible; instead, flexibility sustains vertical relationships between obliged parties and “upstream” partners (i.e. installers, energy service companies). In this respect, we support the view that white certificates schemes are a policy instrument of a multi-functional nature (subsidisation, information, technology diffusion), whose static and dynamic efficiency depends upon the consistency between a proper definition of long-term energy savings, appropriate cost-recovery permission and fine coordination with other instruments. We finally propose a four-stage deployment pattern along which fragmented markets for energy efficient technologies converge towards a unified market delivering energy efficiency as a homogeneous good.

    Keywords: White Certificates Schemes, Static Efficiency, Dynamic Efficiency, Vertical Organisation, Policy Coordination

    Efficient Decision Support Systems

    This series is directed to diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects addressed in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. This book series is composed of three volumes: Volume 1 consists of general concepts and methodology of DSSs; Volume 2 consists of applications of DSSs in the biomedical domain; Volume 3 consists of hybrid applications of DSSs in multidisciplinary domains. The book is shaped around decision support strategies in the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.

    Improving the Relevance of Cyber Incident Notification for Mission Assurance

    Military organizations have embedded Information and Communication Technology (ICT) into their core mission processes as a means to increase operational efficiency, improve decision-making quality, and shorten the kill chain. This dependence can place the mission at risk when the loss, corruption, or degradation of the confidentiality, integrity, and/or availability of a critical information resource occurs. Since the accuracy, conciseness, and timeliness of the information used in decision-making processes dramatically impact the quality of command decisions, and hence the operational mission outcome, the recognition, quantification, and documentation of critical mission-information resource dependencies is essential for the organization to gain a true appreciation of its operational risk. This research identifies existing decision support systems and evaluates their capabilities as a means for capturing, maintaining, and communicating mission-to-information resource dependency information in a timely and relevant manner to assure mission operations. This thesis answers the following research question: which decision support technology is the best candidate for use in a cyber incident notification system to overcome the limitations identified in the existing United States Air Force cyber incident notification process?
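
    One way to picture the capability the thesis evaluates candidate technologies for is a dependency map from missions to information resources that can be queried when a cyber incident occurs. This is a minimal sketch under assumed data: the mission names, resources, and criticality weights below are hypothetical.

```python
from collections import defaultdict

# mission -> {information resource: criticality weight in [0, 1]}
# (hypothetical entries for illustration only)
DEPENDS = {
    "air_tasking":    {"ops_db": 0.9, "chat_server": 0.4},
    "isr_collection": {"ops_db": 0.6, "sensor_feed": 1.0},
}

def missions_at_risk(compromised_resources, depends=DEPENDS):
    """Rank missions touching any compromised resource, worst criticality first."""
    hits = defaultdict(float)
    for mission, resources in depends.items():
        for resource, weight in resources.items():
            if resource in compromised_resources:
                hits[mission] = max(hits[mission], weight)
    return sorted(hits.items(), key=lambda kv: -kv[1])

# A notification about "ops_db" immediately shows which missions are at
# risk and how badly, rather than reporting the incident in isolation.
print(missions_at_risk({"ops_db"}))  # [('air_tasking', 0.9), ('isr_collection', 0.6)]
```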

    Optimising outcomes for potentially resectable pancreatic cancer through personalised predictive medicine : the application of complexity theory to probabilistic statistical modeling

    Survival outcomes for pancreatic cancer remain poor. Surgical resection with adjuvant therapy is the only potentially curative treatment, but for many people surgery is of limited benefit. Neoadjuvant therapy has emerged as an alternative treatment pathway; however, the evidence base surrounding the treatment of potentially resectable pancreatic cancer is highly heterogeneous and fraught with uncertainty and controversy. This research engages with conjunctive theorising, avoiding simplification and abstraction, and draws on different kinds of data from multiple sources to move research towards a theory that can build a rich picture of pancreatic cancer management pathways as a complex system. The overall aim is to move research towards personalised realistic medicine by using personalised predictive modelling to facilitate better decision making and optimise outcomes. This research is theory driven and empirically focused from a complexity perspective. Combining operational and healthcare research methodology, and drawing on the complementary paradigms of critical realism and systems theory, enhanced by Cilliers’ complexity theory ‘lean ontology’, an open-world ontology is held and both epistemic reality and judgmental relativity are accepted. The use of imperfect data within statistical simulation models is explored to expand our capabilities for handling emergence and uncertainty and to find other ways of relating to complexity within the field of pancreatic cancer research. Markov and discrete-event simulation modelling uncovered new insights and added a further dimension to the current debate by demonstrating that superior treatment pathway selection depended on individual patient and tumour factors. A Bayesian Belief Network was developed that models the dynamic nature of this complex system to make personalised prognostic predictions across competing treatment pathways throughout the patient journey, facilitating better shared clinical decision making with an accuracy exceeding that of existing predictive models.
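
    The Markov pathway comparison the abstract mentions can be sketched with a toy cohort model: one transition matrix per treatment pathway, iterated over monthly cycles to compare survival. The states, probabilities, and cycle count below are illustrative assumptions, not the thesis's fitted values.

```python
import numpy as np

STATES = ["resectable", "progressed", "dead"]  # "dead" is absorbing

# One transition matrix per pathway; each row sums to 1 (hypothetical values).
PATHWAYS = {
    "surgery_first": np.array([[0.80, 0.12, 0.08],
                               [0.00, 0.75, 0.25],
                               [0.00, 0.00, 1.00]]),
    "neoadjuvant":   np.array([[0.85, 0.10, 0.05],
                               [0.00, 0.70, 0.30],
                               [0.00, 0.00, 1.00]]),
}

def survival_curve(P, cycles=24):
    """P(alive) after each monthly cycle, starting in 'resectable'."""
    dist = np.array([1.0, 0.0, 0.0])   # whole cohort starts resectable
    curve = []
    for _ in range(cycles):
        dist = dist @ P                # advance the cohort one cycle
        curve.append(1.0 - dist[2])    # alive = not in the "dead" state
    return curve

for name, P in PATHWAYS.items():
    print(name, round(survival_curve(P)[-1], 3))   # 2-year survival
```

    In the thesis's framing, the point of such models is that which pathway dominates depends on patient- and tumour-specific inputs, which is what motivates moving on to a personalised Bayesian Belief Network.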

    Monitoring and analysis system for performance troubleshooting in data centers

    On Christmas Eve 2012, a war of troubleshooting began in Amazon's data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which went unnoticed at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. Within about six minutes it had evolved into a critical one in which EC2 customers were significantly affected: Netflix, for example, which was using hundreds of Amazon ELB services, experienced an extensive streaming service outage, leaving many customers unable to watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention.

    As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time-consuming. To address this challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations that data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed, and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope; by running them, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue.

    VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with levels of perturbation over 400% lower than those of brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces; the experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
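
    A flavour of the anomaly detection that VScope's operations run over monitoring streams can be given with a rolling z-score detector that flags metric samples far from their recent history. This is a generic sketch, not VScope's actual algorithms; the window size, warm-up length, and threshold are assumptions.

```python
from collections import deque
import math

class RollingZScore:
    """Flag a sample as anomalous if it is far from the recent window's mean."""
    def __init__(self, window=60, threshold=3.0, warmup=10):
        self.buf = deque(maxlen=window)
        self.threshold = threshold
        self.warmup = warmup

    def observe(self, x):
        anomalous = False
        if len(self.buf) >= self.warmup:          # score only after warm-up
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9          # guard against zero variance
            anomalous = abs(x - mean) / std > self.threshold
        self.buf.append(x)                        # the sample joins the history
        return anomalous

det = RollingZScore()
latencies = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 10, 95]   # ms, per sample
flags = [det.observe(x) for x in latencies]
print(flags[-1])  # True: the 95 ms spike stands out from the recent window
```

    In a VScope-like setting, a detector of this kind would run as one of the analysis functions deployed on the overlay, with an alert feeding a guidance step such as VFocus to narrow down which components are involved.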