
    Seer: a lightweight online failure prediction approach

    Online failure prediction aims to predict the manifestation of failures at runtime, before the failures actually occur. Existing online failure prediction approaches typically operate on data that is either directly reported by the system under test or directly observable from outside system executions. These approaches generally refrain from collecting internal execution data that could further improve prediction quality, largely because of the runtime overhead incurred by the measurement instruments required to collect such data. In this work, we conjecture that large reductions in the cost of collecting internal execution data for online failure prediction can be obtained by reducing the cost of the measurement instruments, while still supporting acceptable levels of prediction quality. To evaluate this conjecture, we present a lightweight online failure prediction approach called Seer. Seer uses fast hardware performance counters to perform most of the data collection; this data is augmented with data collected by a minimal amount of software instrumentation added to the system's software. We refer to the data collected in this manner as hybrid spectra. We applied the proposed approach to three widely used open-source subject applications and evaluated it by comparing and contrasting three types of hybrid spectra and two types of traditional software spectra. At the lowest level of runtime overhead attained in the experiments, the hybrid spectra predicted failures about halfway through the executions with an F-measure of 0.77 and a runtime overhead of 1.98%, on average. Comparing hybrid spectra to software spectra, we observed that, for comparable runtime overhead levels, the hybrid spectra provided significantly better prediction accuracies and earlier warnings for failures than the software spectra. Alternatively, for comparable accuracy levels, the hybrid spectra incurred significantly lower runtime overheads and provided earlier warnings.
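
The idea of scoring a "hybrid spectrum" against a baseline learned from passing runs can be sketched as follows. This is an illustrative assumption of how such a predictor might look, not Seer's actual method: the feature layout, the z-score rule, and the threshold are all hypothetical.

```python
# Hypothetical sketch of a hybrid-spectrum failure predictor: hardware-counter
# samples are concatenated with software-probe counts and scored against a
# per-feature baseline built from known-passing executions. All names and the
# scoring rule are illustrative assumptions, not the paper's method.
from statistics import mean, stdev

def build_spectrum(hw_counters, sw_probes):
    """Concatenate hardware counter readings with software probe counts."""
    return list(hw_counters) + list(sw_probes)

def fit_baseline(passing_spectra):
    """Per-feature (mean, stddev) from spectra of known-passing executions."""
    cols = list(zip(*passing_spectra))
    return [(mean(c), stdev(c)) for c in cols]

def failure_score(spectrum, baseline):
    """Mean absolute z-score of the spectrum against the passing baseline."""
    zs = [abs(x - m) / s if s > 0 else 0.0
          for x, (m, s) in zip(spectrum, baseline)]
    return sum(zs) / len(zs)

def predict_failure(spectrum, baseline, threshold=3.0):
    return failure_score(spectrum, baseline) > threshold

# Toy usage: two passing runs define the baseline; a run with inflated
# counter and probe values is flagged, an in-range run is not.
passing = [build_spectrum([100, 10], [5]), build_spectrum([110, 12], [6])]
baseline = fit_baseline(passing)
print(predict_failure(build_spectrum([105, 11], [5]), baseline))   # False
print(predict_failure(build_spectrum([400, 90], [40]), baseline))  # True
```

In practice the hardware features would come from counters such as instructions retired or branch misses sampled at low cost, which is what keeps the overhead small.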

    Proactive cloud management for highly heterogeneous multi-cloud infrastructures

    Various studies in the literature have demonstrated that the cloud computing paradigm can help to improve the availability and performance of applications subject to software anomalies. Indeed, the cloud resource provisioning model enables users to rapidly access new processing resources, even distributed over different geographical regions, which can be promptly used in the case of, e.g., crashes or hangs of running machines, as well as to balance the load in the case of overloaded machines. Nevertheless, managing a complex, geographically distributed cloud deployment can be a complex and time-consuming task. The Autonomic Cloud Manager (ACM) Framework is an autonomic framework for supporting proactive management of applications deployed over multiple cloud regions. It uses machine learning models to predict failures of virtual machines and to proactively redirect the load to healthy machines/cloud regions. In this paper, we study different policies to perform efficient proactive load balancing across cloud regions in order to mitigate the effect of software anomalies. These policies use predictions of the mean time to failure of virtual machines. We consider the case of heterogeneous cloud regions, i.e., regions with different amounts of resources, and we provide an experimental assessment of these policies in the context of the ACM Framework.
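
One plausible shape for such an MTTF-driven policy can be sketched as below. This is not the ACM Framework's code: the weighting rule (predicted MTTF times capacity) and the redirect threshold are assumptions chosen for illustration.

```python
# Illustrative proactive balancing policy (assumed, not the ACM Framework's
# implementation): traffic shares are weighted by each region's predicted
# mean time to failure (MTTF) and its capacity; regions whose predicted MTTF
# falls below a floor receive no new load.

def balance_load(regions, mttf_floor=60.0):
    """regions: {name: (predicted_mttf_seconds, capacity_units)} ->
    {name: fraction of incoming load}."""
    weights = {name: mttf * cap
               for name, (mttf, cap) in regions.items()
               if mttf >= mttf_floor}
    total = sum(weights.values())
    if total == 0:
        # No healthy region left: spread evenly as a last resort.
        return {name: 1.0 / len(regions) for name in regions}
    shares = {name: w / total for name, w in weights.items()}
    shares.update({name: 0.0 for name in regions if name not in shares})
    return shares

# A region predicted to fail soon (MTTF 30 s) is drained; the larger of the
# two healthy, heterogeneous regions absorbs proportionally more load.
shares = balance_load({
    "eu-west": (3600.0, 8),   # healthy, larger region
    "us-east": (3600.0, 4),   # healthy, smaller region
    "ap-south": (30.0, 8),    # predicted to fail soon
})
print(shares)  # eu-west ~0.667, us-east ~0.333, ap-south 0.0
```

Capacity-aware weighting is what makes the policy sensible for heterogeneous regions: two equally healthy regions of different sizes should not receive equal shares.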

    An AI-Layered with Multi-Agent Systems Architecture for Prognostics Health Management of Smart Transformers: A Novel Approach for Smart Grid-Ready Energy Management Systems

    After the massive integration of distributed energy resources, energy storage systems, and charging stations for electric vehicles, it has become very difficult to implement an efficient grid energy management system, given the unmanageable behavior of the power flow within the grid, which can cause many critical problems at different grid stages, typically in the substations, such as failures, blackouts, and power transformer explosions. However, the current digital transition toward Energy 4.0 in Smart Grids allows the integration of smart solutions into substations by integrating smart sensors and implementing new control and monitoring techniques. This paper proposes a hybrid artificial intelligence multilayer architecture for power transformers, integrating different diagnostic algorithms, a Health Index, and life-loss estimation approaches. After gathering different datasets, the paper presents an exhaustive comparative study of algorithms to select the best-fit models. The developed architecture for prognostics and health management (PHM) is a hybrid interaction between evolutionary support vector machine, random forest, k-nearest neighbor, and linear regression-based models connected to an online monitoring system of the power transformer; these interactions calculate the key performance indicators that drive alarms and a smart energy management system that makes decisions on load management, power factor control, and maintenance schedule planning.
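
The Health Index idea the abstract describes, several diagnostic model outputs fused into one score that drives alarms and maintenance decisions, can be sketched as follows. The indicator names, weights, and thresholds here are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch of a transformer Health Index (HI): outputs of several
# diagnostic models (each normalized to [0, 1], where 1 = healthy) are
# combined into a weighted index that drives alarm and maintenance logic.
# Model names, weights, and thresholds are hypothetical.

def health_index(scores, weights):
    """Weighted average of per-model health scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

def decide(hi, alarm_at=0.4, maintain_at=0.6):
    if hi < alarm_at:
        return "ALARM: schedule immediate inspection"
    if hi < maintain_at:
        return "plan maintenance window"
    return "normal operation"

# Toy usage with four assumed diagnostic models.
scores = {"dga_svm": 0.9, "thermal_rf": 0.8, "load_knn": 0.7, "aging_lr": 0.85}
weights = {"dga_svm": 0.4, "thermal_rf": 0.3, "load_knn": 0.2, "aging_lr": 0.1}
hi = health_index(scores, weights)
print(round(hi, 3), "->", decide(hi))  # 0.825 -> normal operation
```

The weights encode how much each diagnostic (e.g., dissolved-gas analysis versus thermal modeling) is trusted, which in a real deployment would be tuned against historical failure data.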

    Proactive Scalability and Management of Resources in Hybrid Clouds via Machine Learning

    In this paper, we present a novel framework for supporting the management and optimization of applications subject to software anomalies and deployed on large-scale cloud architectures composed of different geographically distributed cloud regions. The framework uses machine learning models for predicting failures caused by the accumulation of anomalies. It introduces a novel workload balancing approach and a proactive system scale-up/scale-down technique. We developed a prototype of the framework and present some experiments validating the applicability of the proposed approach.
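
A proactive scale-up/scale-down rule of the kind described can be sketched as a simple planning function. The function names, the lead-time parameter, and the retirement rule are assumptions for illustration, not the framework's API.

```python
# Minimal sketch (assumed names, not the framework's implementation) of
# proactive scaling: when a model predicts a VM will fail within the
# provisioning lead time, a replacement is started before the failure;
# healthy VMs beyond the target count are retired.

def plan_scaling(vms, target_healthy, lead_time_s=300.0):
    """vms: {vm_id: predicted_time_to_failure_seconds}.
    Returns (number_of_vms_to_start, list_of_vms_to_retire)."""
    failing = [vm for vm, ttf in vms.items() if ttf <= lead_time_s]
    healthy = [vm for vm in vms if vm not in failing]
    to_start = max(0, target_healthy - len(healthy))
    to_retire = healthy[target_healthy:]  # retire surplus beyond the target
    return to_start, to_retire

# Two of four VMs are predicted to fail within the 5-minute lead time,
# so two replacements are launched to keep four healthy instances.
starts, retires = plan_scaling(
    {"vm-1": 9000.0, "vm-2": 120.0, "vm-3": 8000.0, "vm-4": 200.0},
    target_healthy=4,
)
print(starts, retires)  # 2 []
```

The lead time would be set to cover VM provisioning latency, so the replacement is ready before the predicted failure manifests.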

    A 3D Framework for Characterizing Microstructure Evolution of Li-Ion Batteries

    Lithium-ion batteries are commonly found in many modern consumer devices, ranging from portable computers and mobile phones to hybrid- and fully-electric vehicles. While improving efficiencies and increasing reliabilities are of critical importance for increasing market adoption of the technology, research on these topics is, to date, largely restricted to empirical observations and computational simulations. In the present study, it is proposed to use the modern technique of X-ray microscopy to characterize a sample of commercial 18650 cylindrical Li-ion batteries in both their pristine and aged states. By coupling this approach with 3D and 4D data analysis techniques, the present study aimed to create a research framework for characterizing the microstructure evolution leading to capacity fade in a commercial battery. The results indicated the unique capabilities of the microscopy technique to observe the evolution of these batteries under aging conditions, successfully developing a workflow for future research studies.

    Learning Motion Predictors for Smart Wheelchair using Autoregressive Sparse Gaussian Process

    Constructing a smart wheelchair on a commercially available powered wheelchair (PWC) platform avoids a host of seating, mechanical design, and reliability issues, but requires methods of predicting and controlling the motion of a device never intended for robotics. Analog joystick inputs are subject to black-box transformations which may produce intuitive and adaptable motion control for human operators, but complicate robotic control approaches; furthermore, installation of standard axle-mounted odometers on a commercial PWC is difficult. In this work, we present an integrated hardware and software system for predicting the motion of a commercial PWC platform that does not require any physical or electronic modification of the chair beyond plugging into an industry-standard auxiliary input port. This system uses an RGB-D camera and an Arduino interface board to capture motion data, including visual odometry and joystick signals, via ROS communication. Future motion is predicted using an autoregressive sparse Gaussian process model. We evaluate the proposed system on real-world short-term path prediction experiments. Experimental results demonstrate the system's efficacy when compared to a baseline neural network model. Comment: Accepted to the International Conference on Robotics and Automation (ICRA 2018).
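
The autoregressive part of this pipeline can be sketched without the Gaussian process machinery: a one-step model maps recent motion plus the joystick input to the next velocity, and multi-step prediction feeds its own outputs back in. The nearest-neighbour regressor below is a deliberately simple stand-in for the sparse GP; all names and the toy data are illustrative assumptions.

```python
# Sketch of autoregressive motion rollout. A 1-nearest-neighbour one-step
# model stands in for the paper's sparse Gaussian process; the structure
# (window of past velocities + joystick input -> next velocity, fed back
# for multi-step prediction) is the point being illustrated.

def nn_predict(train_x, train_y, x):
    """1-nearest-neighbour one-step model (stand-in for a sparse GP)."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, tx)) for tx in train_x]
    return train_y[dists.index(min(dists))]

def rollout(train_x, train_y, window, joystick_seq):
    """Autoregressive multi-step prediction: each predicted velocity is
    appended to the window that forms the next model input."""
    preds = []
    for u in joystick_seq:
        x = window[-2:] + [u]          # last two velocities + joystick input
        v = nn_predict(train_x, train_y, x)
        preds.append(v)
        window = window + [v]
    return preds

# Toy training data: with the stick held forward (u = 1.0), velocity ramps
# up and settles at 1.0.
train_x = [[0.0, 0.0, 1.0], [0.0, 0.5, 1.0], [0.5, 1.0, 1.0], [1.0, 1.0, 1.0]]
train_y = [0.5, 1.0, 1.0, 1.0]
print(rollout(train_x, train_y, window=[0.0, 0.0], joystick_seq=[1.0, 1.0, 1.0]))
# [0.5, 1.0, 1.0]
```

A sparse GP in the same role would additionally return predictive variance, which is useful for deciding how far ahead the prediction can be trusted.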

    Evaluating Software Architectures: Development Stability and Evolution

    We survey seminal work on software architecture evaluation methods. We then look at an emerging class of methods that explicates evaluating software architectures for stability and evolution. We define architectural stability and formulate the problem of evaluating software architectures for stability and evolution. We draw attention to the use of Architecture Description Languages (ADLs) for supporting the evaluation of software architectures in general and for architectural stability in particular.

    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective on OR and AI. An extensive survey of applications of AI techniques for operations management, covering over 1,200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as a source; hence, it may not cover all relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic, and hybrid techniques. Trends over the last decade are identified, discussed with respect to expected trends, and directions for future work are suggested.