
    Determining appropriate data analytics for transformer health monitoring

    Transformers are vital assets for the safe, reliable and cost-effective operation of nuclear power plants. The unexpected failure of a transformer can lead to consequences ranging from a lack of export capability, with the corresponding economic penalties, to catastrophic failure, with the associated health, safety and economic effects. Condition monitoring techniques examine the health of the transformer periodically, with the aim of identifying early indicators of anomalies. However, many transformer failures occur because diagnostic and monitoring models do not identify degraded conditions in time. Health monitoring is therefore an essential component of transformer lifecycle management. Existing tools for transformer health monitoring use traditional diagnostic techniques based on dissolved gas analysis. With the advance of prognostics and health management (PHM) applications, we can enhance traditional transformer health monitoring techniques with PHM analytics. The design of an appropriate data analytics system requires a multi-stage design process: (i) specification of engineering requirements; (ii) characterization of existing data sources and analytics to identify complementary techniques; (iii) development of the functional specification of the analytics suite to formalize its behavior; and finally (iv) deployment, validation, and verification of the functional requirements in the final platform. Accordingly, in this paper we propose a transformer analytics suite which incorporates anomaly detection, diagnostics, and prognostics modules to complement existing tools for transformer health monitoring.
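    To make the anomaly detection module concrete, the sketch below flags an unusual dissolved gas analysis (DGA) sample against a history of healthy readings. This is a minimal illustration, not the paper's method: the gas list, concentrations, and the choice of an isolation forest are all assumptions.

        # Minimal sketch: flag an anomalous DGA sample with an isolation
        # forest. Gas list, values and contamination rate are illustrative.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Hypothetical DGA history: rows = samples, columns = gas ppm
        # (H2, CH4, C2H2, C2H4, C2H6, CO).
        rng = np.random.default_rng(0)
        healthy = rng.normal([50, 30, 1, 20, 15, 200], 5, size=(200, 6))
        latest = np.array([[180, 90, 12, 140, 40, 600]])  # suspicious sample

        model = IsolationForest(contamination=0.05, random_state=0).fit(healthy)
        flag = model.predict(latest)  # +1 = normal, -1 = anomaly
        print("anomaly" if flag[0] == -1 else "normal")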

    Big Data Analytics for QoS Prediction Through Probabilistic Model Checking

    As competitiveness increases, being able to guarantee the QoS of delivered services is key for business success. The ability to continuously monitor the workflow providing a service and to promptly recognize breaches in the agreed QoS level is thus of paramount importance. The ideal condition would be the possibility to anticipate, i.e. predict, a breach and act to avoid it, or at least to mitigate its effects. In this paper we propose a model-checking-based approach to predict the QoS of a formally described process. Continuous model checking is enabled by the use of a parametrized model of the monitored system, whose parameter values are continuously evaluated and updated by means of big data tools. The paper also describes a prototype implementation of the approach and shows its usage in a case study.
    Comment: EDCC-2014, BIG4CIP-2014. Keywords: big data analytics, QoS prediction, model checking, SLA compliance monitoring.
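    As a minimal sketch of this idea, one can re-estimate a failure parameter from fresh monitoring data and then check the probability of reaching an SLA-breach state. The three-state workflow model, the breach horizon, and the risk threshold below are illustrative assumptions, not taken from the paper.

        # Minimal sketch: a parametrized Markov model of the monitored
        # workflow; the transition probability p is re-estimated from fresh
        # monitoring data before checking the probability of an SLA breach.
        import numpy as np
        from numpy.linalg import matrix_power

        def breach_probability(p, horizon=10):
            """P(reach 'breach' within `horizon` steps), starting from 'ok'."""
            # States: 0 = ok, 1 = degraded, 2 = breach (absorbing).
            P = np.array([[1 - p, p,   0.0],
                          [0.4,   0.4, 0.2],
                          [0.0,   0.0, 1.0]])
            return matrix_power(P, horizon)[0, 2]

        # Parameter p continuously re-estimated from monitored events,
        # e.g. the fraction of slow responses in the latest window.
        recent_events = [0, 0, 1, 0, 1, 0, 0, 0]   # 1 = degradation observed
        p_hat = sum(recent_events) / len(recent_events)

        if breach_probability(p_hat, horizon=10) > 0.2:  # assumed SLA risk threshold
            print("predicted QoS breach: trigger mitigation")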

    Decentralization in the EU Emissions Trading Scheme and Lessons for Global Policy

    In 2005, the European Union introduced the largest and most ambitious emissions trading program in the world to meet its Kyoto commitments for the containment of global climate change. The EU Emissions Trading Scheme (EU ETS) has some distinctive features that differentiate it from the more standard model of emissions trading. In particular, it has a relatively decentralized structure that gives individual member states responsibility for setting targets, allocating permits, determining verification and enforcement, and making some choices about flexibility. It is also a “cap-within-a-cap,” seeking to achieve the Kyoto targets while only covering about half of EU emissions. Finally, it is a program that many hope will link with other greenhouse gas trading programs in the future—something we have not seen among existing trading systems. Examining these features coupled with recent EU ETS experience offers lessons about how cost effectiveness, equity, flexibility, and compliance fare in a multi-jurisdictional trading program, and highlights the challenges facing a global emissions trading regime.
    Keywords: emissions trading, Kyoto Protocol, European Union, linking, climate change.

    Semantic Support for Log Analysis of Safety-Critical Embedded Systems

    Testing is a key activity in the development life-cycle of safety-critical embedded systems. In particular, much effort is spent on the analysis and classification of test logs from SCADA subsystems, especially when failures occur. Human expertise is needed to understand the reasons for failures, to trace errors back, and to understand which requirements are affected by errors and which ones would be affected by possible changes in the system design. Semantic techniques and full-text search are used to support human experts in the analysis and classification of test logs, in order to speed up and improve the diagnosis phase. Moreover, retrieval of tests and requirements that may be related to the current failure is supported, to allow the discovery of available alternatives and solutions for a better and faster investigation of the problem.
    Comment: EDCC-2014, BIG4CIP-2014. Keywords: embedded systems, testing, semantic discovery, ontology, big data.
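    The retrieval step can be pictured as a ranked full-text search over previously classified logs and requirements. The sketch below uses plain TF-IDF similarity; the corpus, query and scoring are illustrative assumptions, and the paper's approach additionally layers ontology-based semantic techniques on top of this.

        # Minimal sketch: given a failure message, rank previously classified
        # logs/requirements by textual similarity so an expert can find
        # related cases. Corpus and query are illustrative.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [
            "REQ-12: interlock shall close valve within 200 ms",
            "log 0421: timeout waiting for valve-close acknowledgement",
            "log 0388: CRC error on SCADA telemetry frame",
        ]
        query = ["test failed: valve close acknowledgement not received in time"]

        vec = TfidfVectorizer()
        doc_matrix = vec.fit_transform(corpus)
        scores = cosine_similarity(vec.transform(query), doc_matrix)[0]

        # Print candidate matches, best first.
        for score, doc in sorted(zip(scores, corpus), reverse=True):
            print(f"{score:.2f}  {doc}")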

    Building Information Modelling [BIM] for energy efficiency in housing refurbishments

    Building Information Modelling offers potential process and delivery improvements throughout the lifecycle of built assets. However, there is limited research on the use of BIM for energy efficiency in housing refurbishments. The UK has over 300,000 solid wall homes with very poor energy efficiency. A BIM-based solution for the retrofit of solid wall housing, using lean and collaborative improvement techniques, will offer a cost-effective, comprehensive solution that is less disruptive, reduces waste and increases accuracy, leading to high-quality outcomes. The aim of this research is to develop a BIM-based protocol supporting the development of 'what if' scenarios for high-efficiency thermal improvements in housing retrofits, reducing costs and disruption for users. The paper presents a literature review on the topic and discusses the research method for the S-IMPLER research project.
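    At its simplest, a 'what if' thermal scenario of this kind reduces to comparing steady-state fabric heat loss, Q = U x A x dT, across alternative wall build-ups taken from the model. The sketch below is illustrative only; the U-values, wall area and temperature difference are assumed figures, not project data.

        # Minimal sketch of a retrofit 'what if' comparison: fabric heat loss
        # Q = U * A * dT for alternative wall build-ups. All values assumed.
        WALL_AREA_M2 = 85.0   # external solid-wall area (assumed)
        DELTA_T = 20.0        # indoor-outdoor design temperature difference, K

        scenarios = {         # U-values in W/m^2K (typical, assumed)
            "uninsulated solid wall": 2.1,
            "internal wall insulation": 0.30,
            "external wall insulation": 0.28,
        }

        baseline = scenarios["uninsulated solid wall"] * WALL_AREA_M2 * DELTA_T
        for name, u in scenarios.items():
            q = u * WALL_AREA_M2 * DELTA_T  # heat loss through the wall, watts
            print(f"{name:26s} Q = {q:7.0f} W  ({q / baseline:4.0%} of baseline)")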

    Use of supervised machine learning for GNSS signal spoofing detection with validation on real-world meaconing and spoofing data : part I

    The vulnerability of Global Navigation Satellite System (GNSS) open service signals to spoofing and meaconing poses a risk to users of safety-of-life applications. This risk consists of manipulated GNSS data being used to generate a position-velocity-timing solution without the user's system being aware, resulting in hazardous misleading information being presented and signal integrity deteriorating without an alarm being triggered. Among the many spoofing detection and mitigation techniques proposed for different stages of the signal processing chain, we present a method based on cross-correlation monitoring of multiple, statistically significant GNSS observables and measurements, which serve as input to supervised machine learning detection of potentially spoofed or meaconed GNSS signals. The results of two experiments are presented: laboratory-generated spoofing signals are used for training and internal verification, while two different real-world spoofing and meaconing datasets are used to validate the supervised machine learning algorithms for the detection of GNSS spoofing and meaconing.
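    The supervised detection stage can be sketched as training a classifier on labelled feature vectors derived from GNSS observables and then applying it to unseen data. The feature names, their distributions and the choice of a random forest below are illustrative assumptions; the paper derives its features from cross-correlation monitoring of real observables.

        # Minimal sketch: train a supervised spoofing detector on labelled
        # feature vectors (0 = clean, 1 = spoofed) and evaluate on held-out
        # data. Features and distributions are synthetic and illustrative.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(1)
        # Columns (assumed): C/N0 variance, clock-drift residual, corr-peak ratio.
        clean = rng.normal([1.0, 0.0, 1.0], 0.1, size=(300, 3))
        spoofed = rng.normal([2.5, 0.8, 1.6], 0.3, size=(300, 3))
        X = np.vstack([clean, spoofed])
        y = np.array([0] * 300 + [1] * 300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
        clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))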