6,878 research outputs found

    Events Recognition System for Water Treatment Works

    The supply of drinking water in sufficient quantity and of the required quality is a challenging task for water companies. Tackling this task successfully depends largely on ensuring a continuously high quality of water treatment at Water Treatment Works (WTW). Processes at WTWs are therefore highly automated and controlled. Reliable and rapid detection of faulty sensor data and of failure events in WTW processes is of prime importance for efficient and effective operation. Consequently, the vast majority of WTWs operated in the UK use event detection systems that automatically generate alarms after detecting abnormal behaviour on observed signals, to ensure early detection of process failures. The event detection systems usually deployed at WTWs apply thresholds to the monitored signals to recognise faulty processes. The research work described in this thesis investigates new methods for near real-time event detection at WTWs, implementing statistical process control and machine learning techniques for the automated near real-time recognition of failure events in WTW processes. The resulting novel Hybrid CUSUM Event Recognition System (HC-ERS) makes use of new online sensor data validation and pre-processing techniques and utilises two distinct detection methodologies: the first for fault detection on individual signals, and the second for the recognition of faulty processes and events at WTWs. The fault detection methodology automatically detects abnormal behaviour of observed water quality parameters in near real-time, using the corresponding sensor data that is validated and pre-processed online. It utilises CUSUM control charts to predict the presence of faults by tracking the variation of each signal individually and identifying abnormal shifts in its mean. The basic CUSUM methodology was refined by investigating optimised, interdependent parameters for each signal individually. The combined predictions of CUSUM fault detection on individual signals serve as the basis for the second methodology, which automatically identifies faulty processes and failure events at WTWs in near real-time from the faults previously detected on individual signals. This method applies Random Forest classifiers to predict the presence of an event in WTW processes. All methods were developed to be generic and to generalise well across different drinking water treatment processes. HC-ERS proved effective in detecting failure events at WTWs, as demonstrated by its application to real water quality signal data with historical events from a UK WTW. The methodology achieved a peak F1 value of 0.84 while generating 0.3 false alarms per week. These results demonstrate the ability of the method to automatically and reliably detect failure events in WTW processes in near real-time, and show promise for practical application of HC-ERS in industry. The combination of both methodologies presents a unique contribution to the field of near real-time event detection at WTWs.
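    The abstract describes the approach only at a high level. As a rough, hedged illustration of the two-sided tabular CUSUM scheme that the per-signal fault detection builds on, the sketch below flags shifts in a signal's mean; the slack k, decision limit h, and the toy turbidity-like signal are illustrative assumptions, not the thesis's tuned, signal-specific parameters.

```python
import numpy as np

def cusum_flags(signal, target=None, k=0.5, h=5.0):
    """Two-sided tabular CUSUM: flag samples whose cumulative deviation
    from the target mean exceeds the decision limit h.
    k (slack) and h are expressed in units of the signal's standard deviation."""
    x = np.asarray(signal, dtype=float)
    mu = np.nanmean(x) if target is None else target
    sigma = np.nanstd(x) or 1.0
    z = (x - mu) / sigma
    s_hi = s_lo = 0.0
    flags = np.zeros(len(x), dtype=bool)
    for i, zi in enumerate(z):
        s_hi = max(0.0, s_hi + zi - k)   # accumulator for upward shifts
        s_lo = max(0.0, s_lo - zi - k)   # accumulator for downward shifts
        flags[i] = s_hi > h or s_lo > h
    return flags

# Toy example: a turbidity-like signal with a mean shift halfway through.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.2, 0.02, 200),
                         rng.normal(0.3, 0.02, 200)])
print(np.argmax(cusum_flags(signal)))   # index of the first flagged sample
```

    Per-signal fault indications of this kind are, according to the abstract, then combined and passed to a Random Forest classifier in the second stage to decide whether a process-level failure event is present.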

    A framework for the classification of accounts manipulations

    Accounts manipulations have been a matter of research, discussion and even controversy in several countries such as the United States, Canada, the United Kingdom, Australia and France. The objective of this paper is to elaborate a general framework for classifying accounts manipulations through a thorough review of the literature. This framework is based on the desire to influence market participants' perception of the risk associated with the firm, a risk materialised through the earnings per share and the debt/equity ratio. Although the literature on this topic is already very rich, we have identified a series of areas in need of further research.
    Keywords: accounts manipulations; earnings management; income smoothing; big bath accounting; creative accounting

    The Austrian Insurance Industry: A Structure, Conduct and Performance Analysis

    There exists a vast number of studies on the banking industry; the insurance industry, however, remains relatively unexplored. Austrian insurance institutions are becoming increasingly important as financial intermediaries in the domestic market and, based on proximity advantage, also in the Central and Eastern European markets. This paper applies the structure, conduct and performance (SCP) approach to a sample of 52 Austrian insurance firms. The main finding is that the standard SCP hypothesis, namely that highly concentrated markets create incentives to engage in collusive behaviour, which in turn leads to higher industry profit rates, cannot be supported for the Austrian insurance industry.
    Keywords: Insurance industry, Market structure, Conduct and performance, Industrial organisation

    A Hybrid Context-aware Middleware for Relevant Information Delivery in Multi-Role and Multi-User Monitoring Systems: An Application to the Building Management Domain

    Recent advances in information and communications technology (ICT) have greatly extended the capabilities and functionality of control and monitoring systems, including Building Management Systems (BMS). Specifically, it is now possible to integrate a diverse set of devices and information systems providing heterogeneous data. This data, in turn, is now available at the higher levels of the system architecture, providing more information on the matter at hand and enabling, in principle, better-informed decisions. Furthermore, the diversity and availability of information have made control and monitoring systems more attractive to new user groups, who now have the opportunity to find information that was not available before. Thus, modern control and monitoring systems are well-equipped, multi-functional systems which incorporate a great number and variety of data sources and are used by multiple users, each with their own tasks and information needs.

    In theory, the diversity and availability of new data should lead to better-informed users and better decisions. In practice, it overwhelms users' capacity to perceive all available information and leads to situations where important data is obscured and lost, complicating understanding of the ongoing status. There is therefore a need for new solutions which reduce the unnecessary information burden on the users of the system while keeping them well informed with respect to their personal needs and responsibilities.

    This dissertation proposes a middleware for relevant information delivery in multi-role and multi-user BMS, capable of analysing ongoing situations in the environment and delivering information personalised to specific user needs. The middleware implementation is based on a novel hybrid approach, which involves semantic modelling of the contextual information and fusion of this information with runtime device data by means of Complex Event Processing (CEP). The context model is actively used at the configuration stages of the middleware, which enables flexible redirection of information flows, simplified (re)configuration of the solution, and consideration of additional information at the runtime phases. The CEP component utilises contextual information and provides temporal reasoning in combination with runtime analysis capabilities, processing ongoing data from devices and delivering personalised information flows. In addition, the work proposes classification and combination principles for ongoing system notifications, which further specialise information flows according to user needs and environment status.

    The middleware and corresponding principles (e.g. knowledge modelling, classification and combination of ongoing notifications) have been designed with the building management (BM) domain in mind. A set of experiments on real data from a rehabilitation facility demonstrates the applicability of the approach with respect to the delivered information and performance considerations. It is expected that, with minor modifications, the approach can be adopted for control and monitoring systems in the discrete manufacturing domain.
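    The dissertation abstract gives no implementation details. The following minimal sketch (all sensor names, zones, roles, thresholds and window sizes are invented for illustration) shows the general idea of fusing a static context model with a windowed rule over runtime device readings, so that a derived event is routed only to the user roles it is relevant for:

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Reading:
    sensor: str
    value: float
    ts: datetime

# Static context model: which zone a sensor belongs to and which user roles care about it.
CONTEXT = {
    "co2-07": {"zone": "ward-A", "roles": ["facility_manager", "nurse"]},
}

class WindowRule:
    """Emit a derived (complex) event when a sensor exceeds a threshold
    at least `count` times within a sliding time `window`."""
    def __init__(self, sensor, threshold, count=3, window=timedelta(minutes=5)):
        self.sensor, self.threshold = sensor, threshold
        self.count, self.window = count, window
        self.hits = deque()

    def feed(self, r: Reading):
        if r.sensor != self.sensor or r.value <= self.threshold:
            return None
        self.hits.append(r.ts)
        while r.ts - self.hits[0] > self.window:   # drop hits outside the window
            self.hits.popleft()
        if len(self.hits) >= self.count:
            ctx = CONTEXT[r.sensor]               # enrich the event with context
            return {"event": f"sustained high {r.sensor}",
                    "zone": ctx["zone"], "notify": ctx["roles"]}
        return None

# Usage: feed runtime readings; only the third above-threshold reading fires an event.
rule = WindowRule("co2-07", threshold=1000.0)
t0 = datetime(2024, 1, 1, 9, 0)
for minute, ppm in [(0, 1200), (1, 1300), (2, 1250)]:
    event = rule.feed(Reading("co2-07", ppm, t0 + timedelta(minutes=minute)))
    if event:
        print(event)
```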

    FLAGS: a methodology for adaptive anomaly detection and root cause analysis on sensor data streams by fusing expert knowledge with machine learning

    Anomalies and faults can be detected, and their causes verified, using both data-driven and knowledge-driven techniques. Data-driven techniques can adapt their internal functioning based on the raw input data but fail to explain the manifestation of any detection. Knowledge-driven techniques inherently deliver the cause of the faults that were detected but require too much human effort to set up. In this paper, we introduce FLAGS, the Fused-AI interpretabLe Anomaly Generation System, which combines both techniques in one methodology to overcome their limitations and optimizes them based on limited user feedback. Semantic knowledge is incorporated into a machine learning technique to enhance expressivity. At the same time, feedback about the faults and anomalies that occurred is provided as input to increase adaptiveness, using semantic rule mining methods. This new methodology is evaluated on a predictive maintenance case for trains. We show that our method reduces train downtime and provides more insight into frequently occurring problems.
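    As a loose illustration of the fusion idea (not the FLAGS implementation itself, which relies on semantic knowledge and rule mining), the sketch below pairs an off-the-shelf data-driven detector with a small hand-written knowledge base that attaches a human-readable cause to each detected anomaly; the sensor names, thresholds and rules are all assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Knowledge-driven part: human-readable rules mapping symptom patterns to causes.
RULES = [
    (lambda row: row["axle_temp"] > 80 and row["speed"] < 5,
     "possible brake drag while (nearly) stationary"),
    (lambda row: row["axle_temp"] > 80,
     "bearing overheating"),
]

def explain(row):
    for condition, cause in RULES:
        if condition(row):
            return cause
    return "unknown cause (candidate for a new rule from user feedback)"

# Data-driven part: unsupervised anomaly scores on raw sensor vectors.
rng = np.random.default_rng(1)
normal = rng.normal([40, 60], [5, 10], size=(500, 2))   # columns: axle_temp, speed
faulty = np.array([[95, 3], [90, 55]])
X = np.vstack([normal, faulty])
labels = IsolationForest(random_state=0).fit(X).predict(X)   # -1 marks an anomaly

for i in np.where(labels == -1)[0]:
    row = {"axle_temp": X[i, 0], "speed": X[i, 1]}
    print(i, explain(row))
```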

    Advanced Process Monitoring for Industry 4.0

    This book reports recent advances in Process Monitoring (PM) to cope with the many challenges raised by the new production systems, sensors and “extreme data” conditions that emerged with Industry 4.0. Concepts such as digital twins and deep learning are brought to the PM arena, pushing forward the capabilities of existing methodologies to handle more complex scenarios. The evolution of classical paradigms such as latent variable modelling, Six Sigma and FMEA is also covered. Applications span a wide range of domains such as microelectronics, semiconductors, chemicals, materials and agriculture, as well as the monitoring of rotating equipment, combustion systems and membrane separation processes.

    Road conditions monitoring using semantic segmentation of smartphone motion sensor data

    Many studies and publications have addressed the use of moving object analysis to locate a specific item or to recover a lost object in video sequences. With semantic analysis it can be challenging to pin down the meaning of each element and to follow the movement of moving objects. Several machine learning algorithms therefore rely on a correct interpretation of photos or video recordings, converting visual patterns and features into a visual description by means of dense and sparse optical flow algorithms. This paper proposes a redesigned U-Net architecture with integrated bidirectional Long Short-Term Memory layers to semantically segment smartphone motion sensor data for video categorization. Experiments show that the proposed technique outperforms several existing semantic segmentation algorithms using z-axis accelerometer and z-axis gyroscope features. The numerous moving elements of the video sequence are synchronised with one another to follow the scenario. A further objective of this work is to assess the proposed model on roadways and other moving objects using five datasets (a self-made dataset and the pothole600 dataset). After inspecting the map or tracking an object, the results are reported together with the diagnosis of the moving object and its synchronization with the video clips. The goals of the suggested model were pursued using a machine learning method that combines the validity of the results with the precision of finding the required moving parts. The project was implemented in Python 3.7, chosen for its user-friendliness and efficiency.
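    The abstract gives no architectural details beyond a U-Net with bidirectional LSTM layers. The sketch below is a deliberately minimal per-timestep segmentation model over two motion channels, intended only to make the idea of semantic segmentation of a sensor stream concrete; the window length, layer sizes, class set and random toy data are all assumptions, not the paper's architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, CHANNELS, NUM_CLASSES = 256, 2, 3   # z-accel + z-gyro; e.g. smooth / bump / pothole

def build_model():
    inputs = layers.Input(shape=(TIMESTEPS, CHANNELS))
    x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)   # one class label per timestep
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Toy data with random per-timestep labels, only to demonstrate the shapes involved.
X = np.random.randn(8, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(8, TIMESTEPS))
build_model().fit(X, y, epochs=1, verbose=0)
```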

    MĂłdulo de GestĂŁo de alarmes na IndĂșstria 4.0

    With constant technological evolution, particularly in industry, monitoring methods have had to evolve in order to guarantee the safety and proper operation of all the domains involved. To contribute to the monitoring of companies with a high degree of technological involvement, GECAD started the development of a project, Alarm Management Model for 4.0 Industry, which consists of developing a system capable of managing alarms from several different sources and of making information about them accessible in a clear and objective way. The main focus of the project is to define, based on the detected alarms, priorities among them, both individually and in combination, and to give the user a simple and effective experience when using the developed system.
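    The abstract stays at the level of goals. Purely as an illustration of what individual and combined alarm priorities could look like (the severity scale, field names and combination rule are invented, not GECAD's model), a minimal sketch:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Alarm:
    source: str          # e.g. "PLC-3", "building-HVAC"
    kind: str            # e.g. "overtemperature"
    severity: int        # individual priority: 1 (low) .. 5 (critical)
    ts: datetime = field(default_factory=datetime.now)

def combined_priority(alarms: List[Alarm]) -> int:
    """Illustrative combination rule: the worst individual severity,
    bumped by one level when several sources raise alarms of the same kind."""
    if not alarms:
        return 0
    worst = max(a.severity for a in alarms)
    sources_per_kind = {}
    for a in alarms:
        sources_per_kind.setdefault(a.kind, set()).add(a.source)
    correlated = any(len(s) > 1 for s in sources_per_kind.values())
    return min(5, worst + 1) if correlated else worst

alarms = [Alarm("PLC-3", "overtemperature", 3),
          Alarm("HVAC", "overtemperature", 2)]
print(combined_priority(alarms))   # 4: the same kind reported by two sources
```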

    Movement Analytics: Current Status, Application to Manufacturing, and Future Prospects from an AI Perspective

    Data-driven decision making is becoming an integral part of manufacturing companies. Data is collected and commonly used to improve efficiency and to produce high-quality items for customers. IoT-based and other forms of object tracking are an emerging tool for collecting movement data of objects and entities (e.g. human workers, moving vehicles, trolleys) over space and time. Movement data can provide valuable insights, such as process bottlenecks, resource utilization and effective working time, that can be used for decision making and for improving efficiency. Turning movement data into valuable information for industrial management and decision making requires analysis methods; we refer to this process as movement analytics. The purpose of this document is to review the current state of work on movement analytics, both in manufacturing and more broadly. We survey relevant work from a theoretical and an application perspective. From the theoretical perspective, we emphasise useful methods from two research areas, machine learning and logic-based knowledge representation, review their combinations in view of movement analytics, and discuss promising areas for future development and application; we also touch on constraint optimization. From the application perspective, we review applications of these methods to movement analytics in a general sense and across various industries. We also describe currently available commercial off-the-shelf products for tracking in manufacturing and give an overview of the main concepts of digital twins and their applications.
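    None of the surveyed methods are reproduced here. As a small, hedged example of the kind of insight the survey has in mind (zone dwell times as a crude bottleneck indicator), the sketch below runs on an invented toy movement log; the zone layout and column names are chosen purely for illustration.

```python
import pandas as pd

# Toy movement log: recorded positions of one trolley on a 2-D shop floor.
log = pd.DataFrame({
    "object": ["trolley-1"] * 6,
    "ts": pd.date_range("2024-01-01 08:00", periods=6, freq="min"),
    "x": [1.0, 1.2, 5.0, 5.1, 5.2, 9.0],
    "y": [1.0, 1.1, 4.0, 4.0, 4.1, 8.0],
})

def zone(x):
    """Map an x-position to a named zone (assumed fixed layout)."""
    if x < 3:
        return "loading"
    if x < 7:
        return "assembly"
    return "dispatch"

log["zone"] = log["x"].map(zone)
# Dwell time per record: time until the next recorded position of the same object.
log["dwell_s"] = log.groupby("object")["ts"].diff().shift(-1).dt.total_seconds()
print(log.groupby("zone")["dwell_s"].sum())   # rough zone utilization / bottleneck hint
```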