
    Anomaly detection prototype for log-based predictive maintenance at INFN-CNAF tier-1

    The evolution of HEP can no longer be separated from that of the computational resources needed to perform its analyses. Each year the LHC produces dozens of petabytes of data (e.g. collision data, particle simulations, metadata) that require orchestrated computing resources for storage, computational power, and high-throughput networks to connect the centres. As a consequence of the LHC upgrade, the luminosity of the experiment will increase by a factor of 10 over its original design value, posing a non-negligible technical challenge for computing centres: the amount of data produced and processed by the experiment is expected to rise accordingly. With this in mind, the HEP Software Foundation took action and released a roadmap document describing the actions needed to prepare the computational infrastructure to support the upgrade. As part of this collective effort, involving all computing centres of the Grid, INFN-CNAF has started a preliminary study towards the development of an AI-driven maintenance paradigm. As a contribution to this preparatory study, this master's thesis presents an original software prototype developed to identify critical activity time windows of a specific service (StoRM). Moreover, the prototype explores the viability of content extraction via text-processing techniques, applying such strategies to messages belonging to anomalous time windows.
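    The two steps the abstract describes, flagging critical time windows in a log stream and extracting characteristic content from the messages inside them, can be illustrated with a minimal sketch. This is not the thesis prototype: the fixed window size, the simple z-score rule on message counts, and the TF-IDF term ranking are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def anomalous_windows(timestamps, window_s=300, z_thresh=3.0):
    """Indices of fixed-size time windows whose log-message count is a z-score outlier."""
    t = np.asarray(timestamps, dtype=float)
    bins = np.floor((t - t.min()) / window_s).astype(int)   # assign each message to a window
    counts = np.bincount(bins)                               # messages per window
    z = (counts - counts.mean()) / (counts.std() + 1e-9)
    return np.where(z > z_thresh)[0]

def characteristic_terms(messages, n=10):
    """Rank the terms of messages from anomalous windows by mean TF-IDF weight."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(messages)
    scores = np.asarray(X.mean(axis=0)).ravel()
    terms = np.array(vec.get_feature_names_out())
    return list(terms[np.argsort(scores)[::-1][:n]])
```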

    AutoML for Log File Analysis (ALFA) in a Production Line System of Systems pointed towards Predictive Maintenance

    Automated machine learning and predictive maintenance have both become prominent terms in recent years. Combining these two fields of research by conducting log analysis with automated machine learning techniques to fuel predictive maintenance algorithms holds multiple advantages, especially when applied in a production line setting. This approach can be used in many industrial applications, e.g. in the semiconductor, automotive, and metal industries, to improve maintenance and production costs and quality. In this paper, we investigate the possibility of creating a neural-network-based predictive maintenance framework that uses only easily available log data. We outline the advantages of the ALFA (AutoML for Log File Analysis) approach, which include, among others, high efficiency combined with a low barrier to entry for novices. In a production line setting, one would also be able to cope with concept drift, and even with data of a new quality, in a gradual manner. In the presented production line context, we also show that, in practice, multiple neural networks outperform a single comprehensive neural network. The proposed software architecture not only allows automated adaptation to concept drift, and even to data of a new quality, but also gives access to the current performance of the neural networks in use.
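    A hedged sketch of the "multiple small networks instead of one comprehensive network" idea follows. It is not the ALFA implementation: the per-component split, the MLP architecture, and the held-out accuracy bookkeeping (which mirrors the "access to current performance" point above) are illustrative assumptions.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def train_per_component(features_by_component):
    """features_by_component: dict mapping a component name to (X, y) built from parsed logs.

    Trains one small network per component and records its current held-out accuracy,
    rather than training a single comprehensive network over all components.
    """
    models = {}
    for component, (X, y) in features_by_component.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        models[component] = (clf, clf.score(X_te, y_te))   # keep current performance visible
    return models
```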

    A Data Scientific Approach Towards Predictive Maintenance Application in Manufacturing Industry

    Most industries have recently started to harness the power of data to assess their performance and improve their production systems for future competitiveness and sustainability. The use of data to obtain insights through data-driven approaches is therefore spreading into every domain of industrial application. Predictive maintenance (PdM) is one of the most heavily impacted industrial use cases of data-driven applications, owing to its ability to predict machine failures using machine learning algorithms. This study proposes a systematic data-scientific approach that provides valuable insights by analysing industrial alarm and event log data, which can further be used to investigate root causes and plan necessary maintenance activities. To do so, the Cross-Industry Standard Process for Data Mining (CRISP-DM) is followed as a reference model. The results are presented by first examining, through exploratory data analysis (EDA), the relationship between alarms and the product types processed on the selected machines; the behaviour of problematic alarms is also identified. Afterwards, a predictive analysis formulated as a multi-class classification problem is performed using various Machine Learning (ML) models to predict the category of an alarm and to generate rules for further investigation in maintenance planning. The performance of the developed models is evaluated on different metrics, and the decision tree model is selected as it achieves the highest accuracy score among them. As a theoretical contribution, this study presents an implementation of predictive modelling in a structured way, using a systematic data-scientific approach based on industrial alarm and event log data. As a practical contribution, it provides a set of decision rules that can act as decision support for further exploration of possible in-depth root causes through other contextual data, and hence gives an initial foundation for a PdM application in the case company.
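    The predictive step described above can be sketched as a multi-class decision tree whose learned rules are exported for maintenance planning. This is not the study's code: the file name and the feature columns (machine, product type, shift) and target column are hypothetical placeholders for an alarm/event-log export.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

log = pd.read_csv("alarm_event_log.csv")                    # hypothetical alarm/event-log export
X = pd.get_dummies(log[["machine", "product_type", "shift"]])
y = log["alarm_category"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=list(X.columns)))     # readable decision rules for root-cause review
```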

    Reliability-centered maintenance: analyzing failure in harvest sugarcane machine using some generalizations of the Weibull distribution

    In this study we considered five generalizations of the standard Weibull distribution to describe the lifetime of two important components of sugarcane harvesting machines. The harvesters considered in the analysis harvest an average of 20 tons of sugarcane per hour, and their malfunction may lead to major losses; an effective maintenance approach is therefore of primary interest for cost savings. For the considered distributions, the mathematical background is presented. Maximum likelihood is used for parameter estimation. Further, different discrimination procedures were used to obtain the best fit for each component. Finally, we propose a maintenance schedule for the components of the harvesters using predictive analysis.
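    The maximum-likelihood fitting and model-discrimination steps can be illustrated with a short sketch that fits the standard Weibull and one generalization (the exponentiated Weibull) and compares them by AIC. It is not the paper's analysis: the failure-time data are invented, AIC is only one of several possible discrimination criteria, and fixing the location parameter at zero is an assumption.

```python
import numpy as np
from scipy import stats

times = np.array([120., 340., 85., 410., 230., 560., 150., 300.])  # hypothetical hours to failure

def aic(dist, data):
    """AIC of a scipy.stats distribution fitted to data by maximum likelihood (location fixed at 0)."""
    params = dist.fit(data, floc=0)
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                     # the fixed location parameter is not estimated
    return 2 * k - 2 * loglik

print("Weibull AIC:              ", aic(stats.weibull_min, times))
print("Exponentiated Weibull AIC:", aic(stats.exponweib, times))   # lower AIC -> preferred model
```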

    The Co-Evolution of Test Maintenance and Code Maintenance through the lens of Fine-Grained Semantic Changes

    Automatic testing is a widely adopted technique for improving software quality. Software developers add, remove, and update test methods and test classes as part of the software development process, as well as during the evolution phase following the initial release. In this work we conduct a large-scale study of 61 popular open source projects and report the relationships we have established between test maintenance, production code maintenance, and semantic changes (e.g., statement added, method removed) performed in developers' commits. We build predictive models and show that the number of tests in a software project can be well predicted from code maintenance profiles (i.e., how many commits were performed in each of the maintenance activities: corrective, perfective, adaptive). Our findings also reveal that, more often than not, developers perform code fixes without performing complementary test maintenance in the same commit (e.g., updating an existing test or adding a new one). When developers do perform test maintenance, it is likely to be affected by the semantic changes they perform as part of their commit. Our work is based on studying 61 popular open source projects, comprising over 240,000 commits with over 16,000,000 semantic change type instances, performed by over 4,000 software engineers.
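    A minimal sketch of the prediction idea follows: regress a project's test count on its maintenance profile, i.e. the number of commits in each maintenance activity. It is not the authors' model; the sample profiles and test counts are invented, and a plain linear regression stands in for whatever predictive models the study actually used.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: corrective, perfective, adaptive commit counts per project (hypothetical values).
profiles = np.array([[120,  80,  40],
                     [300, 150,  90],
                     [ 60,  40,  20],
                     [500, 280, 160]])
test_counts = np.array([210, 640, 130, 1020])        # hypothetical number of tests per project

model = LinearRegression().fit(profiles, test_counts)
print(model.predict([[200, 100, 50]]))                # predicted test count for a new maintenance profile
```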

    Alarm-Based Prescriptive Process Monitoring

    Predictive process monitoring is concerned with the analysis of events produced during the execution of a process in order to predict the future state of ongoing cases. Existing techniques in this field are able to predict, at each step of a case, the likelihood that the case will end up in an undesired outcome. These techniques, however, do not take into account what process workers may do with the generated predictions in order to decrease the likelihood of undesired outcomes. This paper proposes a framework for prescriptive process monitoring, which extends predictive process monitoring approaches with the concepts of alarms, interventions, compensations, and mitigation effects. The framework incorporates a parameterized cost model to assess the cost-benefit trade-offs of applying prescriptive process monitoring in a given setting. The paper also outlines an approach to optimize the generation of alarms given a dataset and a set of cost model parameters. The proposed approach is empirically evaluated using a range of real-life event logs.
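    The cost-benefit idea behind alarm generation can be illustrated with a simplified sketch: raise an alarm when the expected cost of intervening is lower than the expected cost of doing nothing, given the predicted probability of an undesired outcome. This is a simplified reading of the framework, not its actual cost model, and all parameter values (outcome cost, intervention cost, mitigation effect) are assumptions.

```python
def expected_cost_no_alarm(p_undesired, cost_undesired):
    # Do nothing: pay the outcome cost with the predicted probability.
    return p_undesired * cost_undesired

def expected_cost_alarm(p_undesired, cost_undesired, cost_intervention, mitigation):
    # Intervene: the intervention cost is always paid; mitigation reduces the outcome risk.
    return cost_intervention + (1 - mitigation) * p_undesired * cost_undesired

def should_alarm(p_undesired, cost_undesired=1000.0, cost_intervention=50.0, mitigation=0.8):
    return (expected_cost_alarm(p_undesired, cost_undesired, cost_intervention, mitigation)
            < expected_cost_no_alarm(p_undesired, cost_undesired))

print(should_alarm(0.02), should_alarm(0.30))   # alarm only when the predicted risk is high enough
```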