
    IMPROVE - Innovative Modelling Approaches for Production Systems to Raise Validatable Efficiency

    This open access work presents selected results from the European research and innovation project IMPROVE, which yielded novel data-based solutions to enhance machine reliability and efficiency in the fields of simulation and optimization, condition monitoring, alarm management, and quality prediction.

    Summarizing Industrial Log Data with Latent Dirichlet Allocation

    Industrial systems and equipment produce large log files recording their activities and possible problems. This data is often used for troubleshooting and root cause analysis, but raw log data is poorly suited to direct human analysis. Existing approaches based on data mining and machine learning likewise focus on troubleshooting and root cause analysis. However, if a good summary of industrial log files were available, the files could be used to monitor equipment and industrial processes and to act more proactively on problems. This contribution shows how a topic modeling approach based on Latent Dirichlet Allocation (LDA) helps to understand, organize, and summarize industrial log files. The approach was tested on a real-world industrial dataset and evaluated quantitatively by direct annotation.
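    The core of LDA can be illustrated with a minimal collapsed Gibbs sampler over pre-tokenised log messages. This is only a sketch under assumptions: the abstract does not specify the paper's actual pipeline, vocabulary handling, or hyper-parameters.

```python
import random

def lda_topics(docs, n_topics=2, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA; returns the top 3 words per topic."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    w2i = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    # Random initial topic assignment for every token.
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]        # document-topic counts
    nkw = [[0] * V for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                         # tokens per topic
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]
            ndk[d][k] += 1; nkw[k][w2i[w]] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k, wi = z[d][n], w2i[w]
                # Remove the token, resample its topic, put it back.
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][n] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    return [[vocab[i] for i in sorted(range(V), key=lambda i: -nkw[t][i])[:3]]
            for t in range(n_topics)]
```

    On log lines pre-processed into message templates (numbers and identifiers masked out), the top words per topic give the kind of human-readable summary the contribution describes.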

    Decision Support System for Improved Operations, Maintenance, and Safety: a Data-Driven Approach

    With Industry 4.0, a new era of the industrial revolution is emerging, with a focus on automation, interconnectivity, machine learning, and real-time data collection and analysis. Smart digital technology, which includes smart sensors, data acquisition, processing, and control based on big data, machine learning, and Artificial Intelligence (AI), provides boundless opportunities for end-users to operate their plants under more optimized, reliable, and safer conditions. During an abnormal event in an industrial facility, operators are inundated with information they must interpret and act on. Hence, there is a critical need to develop solutions that assist operators during such critical events. Also, because of the obsolescence challenges of typical industrial control systems, a new paradigm of Open Process Automation (OPA) is emerging. OPA requires real-time Operational Technology (OT) services that analyze the data generated by sensors and control loops to assist process plant operations, by developing applications for advanced computing platforms on open source software platforms. The aim of this research is to highlight the potential applications of big data analytics, machine learning, and AI methods, and to develop solutions for plant operation, maintenance, process safety, and risk management for real industry problems. This research work includes: 1. an alarm management framework integrated with data-driven Key Performance Indicator (KPI) benchmarking and a visualization tool, developed to address alarm management challenges; 2. a deep learning-based, data-driven process fault detection and diagnosis method on cloud computing to identify abnormal process conditions; and 3. applications such as predictive maintenance, dynamic risk mapping, incident database analysis, application of Natural Language Processing (NLP) for text classification, and barrier assessment for dynamic risk mapping. A unified workflow approach is used to define the data sources and applicable domains and to develop the proposed applications. This work integrates data generated by field instrumentation and expert knowledge with data analytics and AI techniques to guide the operator or engineer in taking proactive decisions through “action-boards”. The robustness of the developed methods and algorithms is validated using real and simulated data sets. The proposed methods and results provide a future road map for any organization to deal with data integration through such applications, leading to productive, safer, and more reliable operations.
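    An alarm-management KPI of the kind described above can be sketched as a simple windowed alarm-rate check. The 10-minute window and the target of at most one alarm per window follow commonly cited EEMUA 191 guidance, which is an assumption here, not necessarily the benchmark used in this work.

```python
from collections import Counter
from datetime import datetime

def alarm_rate_kpi(alarm_times, window_min=10, target_per_window=1):
    """Bucket alarm timestamps into fixed windows and flag overloaded ones."""
    def bucket(ts):
        # Truncate a timestamp down to the start of its window.
        return ts.replace(minute=ts.minute - ts.minute % window_min,
                          second=0, microsecond=0)
    counts = Counter(bucket(ts) for ts in alarm_times)
    flagged = sorted(w for w, n in counts.items() if n > target_per_window)
    return counts, flagged
```

    Flagged windows could then be surfaced on an "action-board" so the operator sees alarm-flood periods at a glance.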

    A Comparison Between Functional and Traditional Interface Displays in Support of Console Operator Performance and Workload

    In the petrochemical industry, schematic interfaces have traditionally been used as the main interface for console operators to monitor activities. There is limited research in this industry investigating alternative interface types to better support console operators’ decisions during alarm management. Furthermore, even less of that research includes eye-tracking as a measure of console operator situation awareness (SA). This research aimed to investigate an alternative interface, called a functional interface, in its level of support of console operator situation awareness, accuracy, subjective workload, and average response time. Additionally, eye-tracking was incorporated to explore its value as a measure of situation awareness on interfaces in petrochemical control rooms. This research used a 2x3 factorial design to explore the effects of interface type (schematic vs. functional) and complexity level (easy, medium, and hard) in engineering students at Louisiana State University (LSU). The experiment involved three 30-minute simulations on either the schematic or the functional interface design of a main overview display that is typically seen in a refinery. The dependent variables included SA, subjective workload, accuracy, average response time, and eye fixation percentages for certain areas of interest (AOIs). The mixed model analyses showed no significant differences between interface types for any dependent variable except the eye fixations in non-AOIs during non-alarm times: participants spent significantly less time looking at non-AOIs during non-alarm times with the functional interface than with the schematic. For complexity levels, there were no significant differences except for average response times, which were significantly higher for the medium level than for the easy or hard levels. 
    Also, the eye-tracking results showed that participants spent significantly less time in the intended AOIs and non-intended areas on the easy complexity level than on the medium or hard levels. There was a significant positive correlation between the fixation percentages of the intended AOI during alarm times and SA1, indicating that eye-tracking was able to capture participants noticing process deviations during the simulation. Eye-tracking therefore appears to be a good measure of SA1 among console operators. Overall, this research does not provide evidence that functional interfaces better support console operator SA, workload, or performance.
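    The fixation-percentage metric at the centre of this analysis can be sketched as follows. Rectangular AOIs and per-fixation durations are assumptions for illustration; the study's actual AOI definitions are not given in the abstract.

```python
def fixation_percentages(fixations, aois):
    """Share of total fixation time spent in each AOI.

    fixations: iterable of (x, y, duration) tuples.
    aois: dict mapping AOI name -> (x0, y0, x1, y1) rectangle.
    Time falling outside every AOI is reported under 'non-AOI'."""
    total = sum(dur for _, _, dur in fixations) or 1
    shares = {name: 0.0 for name in aois}
    shares["non-AOI"] = 0.0
    for x, y, dur in fixations:
        # First AOI whose rectangle contains the fixation wins.
        hit = next((name for name, (x0, y0, x1, y1) in aois.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), "non-AOI")
        shares[hit] += dur / total
    return shares
```

    Comparing the resulting percentages across alarm and non-alarm periods gives exactly the kind of AOI/non-AOI contrast reported above.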

    Cyber security of smart building ecosystems

    Building automation systems are used to create energy-efficient and customisable commercial and residential buildings. During the last two decades, these systems have become increasingly interconnected, to reduce expenses and expand their capabilities by allowing vendors to perform maintenance and letting building users control the machines remotely. This interconnectivity has brought new opportunities for how building data can be collected and put to use, but it has also increased the attack surface of smart buildings by introducing security challenges that need to be addressed. Traditional building automation systems, with their proprietary communication protocols and interfaces, are giving way to interoperable systems utilising open technologies. This interoperability is an important aspect of streamlining the data collection process, ensuring that different components of the environment are able to exchange information and operate in a coordinated manner. Turning these opportunities into actual products and platforms requires multi-sector collaboration and joint research projects, so that the buildings of tomorrow can become reality with as few compromises as possible. This work examines one of these experimental project platforms, the KEKO ecosystem, focusing on assessing the cyber security challenges faced by the platform using the well-recognised MITRE ATT&CK knowledge base of adversary tactics and techniques. The assessment provides a detailed categorisation of the identified challenges and recommendations on how they should be addressed. This work also presents one possible solution for improving the detection of offensive techniques targeting building automation: implementing a monitoring pipeline within the experimental platform, and a security event API that can be integrated with a remote SIEM system to increase visibility into the platform’s data processing operations.
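    A minimal shape for the kind of event such a security event API might forward to a SIEM is sketched below. The field names and severity levels are illustrative assumptions, not the platform's actual schema; the ATT&CK technique ID is a placeholder.

```python
import json
from datetime import datetime, timezone

def build_security_event(source, technique_id, description, severity="medium"):
    """Assemble a JSON security event tagged with a MITRE ATT&CK technique ID."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,                  # hypothetical device/controller ID
        "attack_technique": technique_id,  # MITRE ATT&CK technique ID (placeholder)
        "description": description,
        "severity": severity,
    }
    return json.dumps(event)
```

    Tagging each event with an ATT&CK technique ID is what lets a remote SIEM correlate building-automation detections with the same adversary-behaviour taxonomy used in the assessment.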

    Developing an agent-based evacuation simulation model based on the study of human behaviour in fire investigation reports

    Fire disasters happen every day all over the world. These hazardous events threaten people's lives and force the immediate movement of people wanting to escape from a dangerous area. Evacuation drills are held to encourage people to practise evacuation skills and to ensure they are familiar with the environment. However, these drills cannot accurately represent real emergency situations and, in some cases, people may be injured during practice. Therefore, modelling pedestrian motion and crowd dynamics in evacuation situations has important implications for human safety, building design, and evacuation processes. This thesis focuses on indoor pedestrian evacuation in fire disasters. To understand how humans behave in emergency situations, and to simulate more realistic human behaviour, this thesis studies human behaviour from fire investigation reports, which provide a variety of details about the building, the fire circumstances, and human behaviour, compiled by professional fire investigation teams. A generic agent-based evacuation model is developed based on common human behaviours identified in the fire investigation reports studied. A number of human evacuation behaviours are selected and then used to design different types of agents, each assigned various characteristics. In addition, the interactions between the various agents and an evacuation timeline are modelled to simulate human behaviour and evacuation phenomena during evacuation. The application developed is validated using three specific real fire cases to evaluate how closely the simulation results reflect reality. The model provides information on the number of casualties, high-risk areas, egress selections, and evacuation time. In addition, changes to the building configuration, the number of occupants, and the location of the fire origin are tested in order to predict potential risk areas, building capacity, and evacuation time for different situations. 
    Consequently, the application can be used to inform building designs, evacuation plans, and priority rescue processes.
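    One update step of an agent-based evacuation model can be sketched as agents greedily moving toward their nearest exit on a grid while avoiding fire cells. This is a toy illustration only; the thesis's agent types, characteristics, interactions, and evacuation timeline are far richer.

```python
def step(agents, exits, fire, width, height):
    """Move each agent one cell toward its nearest exit, avoiding fire cells."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    new_positions = []
    for pos in agents:
        goal = min(exits, key=lambda e: dist(pos, e))
        # Candidate moves: the four neighbours plus staying put,
        # restricted to in-bounds cells that are not on fire.
        candidates = [(pos[0] + dx, pos[1] + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0))]
        candidates = [c for c in candidates
                      if 0 <= c[0] < width and 0 <= c[1] < height and c not in fire]
        new_positions.append(min(candidates, key=lambda c: dist(c, goal)))
    return new_positions
```

    A purely greedy rule like this can trap agents behind obstacles, which is precisely why behaviour observed in fire investigation reports (rerouting, following others, hesitation) is needed to make the simulated evacuation realistic.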

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and of the integration strategies of Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions; and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project and, although it has been written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
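    The retrieval step at the heart of CBR can be sketched as weighted nearest-neighbour matching over a case base. This is a generic illustration of the technique, not RIMSAT's actual engine; the feature names are hypothetical.

```python
def retrieve(case_base, query, weights):
    """Return the stored case most similar to the query.

    Each case carries a dict of normalised numeric features in [0, 1];
    similarity is 1 minus the weighted mean absolute feature difference."""
    def similarity(case):
        num = sum(w * (1 - abs(case["features"][f] - query[f]))
                  for f, w in weights.items())
        return num / sum(weights.values())
    return max(case_base, key=similarity)
```

    In a full CBR cycle this retrieval would be followed by reuse, revision, and retention of the adapted case; MBR integration typically supplies the model-based adaptation and consistency checks that pure retrieval lacks.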

    A dependability framework for WSN-based aquatic monitoring systems

    Wireless Sensor Networks (WSNs) are being used progressively in several application areas, particularly to collect data and monitor physical processes. Sensor nodes used in environmental monitoring applications, such as aquatic sensor networks, are often subject to harsh environmental conditions while monitoring complex phenomena. Non-functional requirements, like reliability, security, or availability, are increasingly important and must be accounted for in application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provides a good basis for understanding how to satisfy these non-functional requirements of WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is particularly important to ensure that data is reliable or, more generically, that it has the necessary quality. The problem of ensuring the desired quality of data for dependable monitoring using WSNs is studied herein. From a dependability-oriented perspective, the possible impairments to dependability and the prominent existing solutions to remove or mitigate these impairments are reviewed. Despite the variety of components that may form a WSN-based monitoring system, particular attention is given to understanding which faults can affect sensors, how they affect the quality of the information, and how this quality can be improved and quantified. Open research issues for the specific case of aquatic monitoring applications are also discussed. One of the challenges in achieving dependable system behavior is to overcome the external disturbances affecting sensor measurements and to detect failure patterns in sensor data. This is a particular problem in environmental monitoring, owing to the difficulty of distinguishing a faulty behavior from the representation of a natural phenomenon. 
    Existing solutions for failure detection assume that physical processes can be accurately modeled, or that there are large deviations that may be detected using coarse techniques, or, more commonly, that the network is a high-density sensor network with value-redundant sensors. This thesis defines a new methodology for dependable data quality in environmental monitoring systems, aiming to detect faulty measurements and increase the quality of the sensors' data. The methodology is built around a generically applicable design that can be employed with any environmental sensor network dataset. It is evaluated on various datasets from different WSNs, using machine learning to model each sensor's behavior and exploiting the correlated data provided by neighbor sensors. Data fusion strategies are explored in order to effectively detect potential failures at each sensor and, simultaneously, distinguish truly abnormal measurements from deviations due to natural phenomena. This is accomplished with the successful application of the methodology to detect and correct outlier, offset, and drift failures in real monitoring network datasets. In the future, the methodology can be applied to optimize the data quality control processes of new and already operating monitoring networks, and to assist in network maintenance operations.
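    The core idea, modelling each sensor from correlated neighbor sensors and flagging measurements whose residual is abnormally large, can be sketched with a single-neighbor least-squares model. The thesis uses richer machine-learning models and data fusion across several neighbors; this is only the skeleton of the residual test.

```python
import statistics

def detect_faults(target, neighbor, k=3.0):
    """Flag indices where the target sensor deviates from what a
    least-squares fit on a correlated neighbor sensor predicts."""
    mx, my = statistics.mean(neighbor), statistics.mean(target)
    # Ordinary least squares: target ~= a * neighbor + b
    sxx = sum((x - mx) ** 2 for x in neighbor)
    sxy = sum((x - mx) * (y - my) for x, y in zip(neighbor, target))
    a = sxy / sxx
    b = my - a * mx
    residuals = [y - (a * x + b) for x, y in zip(neighbor, target)]
    sd = statistics.stdev(residuals)
    # A residual beyond k standard deviations is treated as a faulty reading.
    return [i for i, r in enumerate(residuals) if abs(r) > k * sd]
```

    The same residual logic extends to offset and drift detection by examining whether flagged residuals are isolated spikes, constant shifts, or slowly growing trends.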

    Performance Analysis Of Data-Driven Algorithms In Detecting Intrusions On Smart Grid

    The traditional power grid is no longer a practical solution for power delivery due to several shortcomings, including chronic blackouts, energy storage issues, the high cost of assets, and high carbon emissions. Therefore, there is a serious need for better, cheaper, and cleaner power grid technology that addresses the limitations of traditional power grids. A smart grid is a holistic solution to these issues that consists of a variety of operational and energy measures. This technology can deliver energy to end-users through a two-way flow of communication. It is expected to generate reliable, efficient, and clean power by integrating multiple technologies, and it promises reliability, improved functionality, and economical means of power transmission and distribution. This technology also decreases greenhouse emissions by transferring clean, affordable, and efficient energy to users. The smart grid provides several benefits, such as increased grid resilience, self-healing, and improved system performance. Despite these benefits, this network has been the target of a number of cyber-attacks that violate the availability, integrity, confidentiality, and accountability of the network. For instance, in 2021, a cyber-attack targeted a U.S. power system and shut down the power grid, leaving approximately 100,000 people without power. Another threat to U.S. smart grids, in March 2018, targeted multiple nuclear power plants and water equipment. These instances are obvious reasons why high-level security approaches are needed in smart grids to detect and mitigate sophisticated cyber-attacks. For this purpose, the US National Electric Sector Cybersecurity Organization and the Department of Energy have joined their efforts with other federal agencies, including the Cybersecurity for Energy Delivery Systems program and the Federal Energy Regulatory Commission, to investigate the security risks of smart grid networks. 
    Their investigation shows that the smart grid requires reliable solutions to defend against and prevent cyber-attacks and vulnerability issues. It also shows that with emerging technologies, including 5G and 6G, the smart grid may become more vulnerable to multistage cyber-attacks. A number of studies have been done to identify, detect, and investigate the vulnerabilities of smart grid networks. However, the existing techniques have fundamental limitations, such as low detection rates, high rates of false positives, high rates of misdetection, data poisoning, data quality and processing issues, lack of scalability, and difficulties handling huge volumes of data. Therefore, these techniques cannot ensure safe, efficient, and dependable communication for smart grid networks. The goal of this dissertation is hence to investigate the efficiency of machine learning in detecting cyber-attacks on smart grids. The proposed methods are based on supervised and unsupervised machine and deep learning, reinforcement learning, and online learning models. These models have to be trained, tested, and validated using a reliable dataset; in this dissertation, CICDDoS 2019 was used to train, test, and validate the efficiency of the proposed models. The results show that, among supervised machine learning models, the ensemble models outperform the other traditional models. Among the deep learning models, the dense neural network family provides satisfactory results for detecting and classifying intrusions on the smart grid. Among unsupervised models, the variational auto-encoder provides the highest performance. In reinforcement learning, the proposed Capsule Q-learning provides higher detection and lower misdetection rates compared to the other models in the literature. 
    In online learning, the Online Sequential Euclidean Distance Routing Capsule Network model provides significantly better results in detecting intrusion attacks on the smart grid, compared to the other deep online models.
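    The ensemble idea behind the best-performing supervised models can be sketched as majority voting over weak detectors on simple flow features. This is purely illustrative: the detectors, thresholds, and feature names below are invented for the sketch, and the dissertation's models and the CICDDoS 2019 feature set are far richer.

```python
def majority_vote(detectors, flow):
    """Label a flow 'attack' or 'benign' by majority vote of the detectors."""
    votes = [d(flow) for d in detectors]
    return max(set(votes), key=votes.count)

# Hypothetical threshold detectors over toy flow features.
detectors = [
    lambda f: "attack" if f["pkts_per_s"] > 1000 else "benign",
    lambda f: "attack" if f["bytes_per_pkt"] < 64 else "benign",
    lambda f: "attack" if f["syn_ratio"] > 0.8 else "benign",
]
```

    Real ensembles (bagging, boosting, random forests) replace the hand-set thresholds with learned base models, but the aggregation principle that drives their robustness is the same vote.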