48 research outputs found

    Toward Biologically-Inspired Self-Healing, Resilient Architectures for Digital Instrumentation and Control Systems and Embedded Devices

    Get PDF
    Digital Instrumentation and Control (I&C) systems in safety-related applications of next-generation industrial automation require high levels of resilience against different fault classes. One of the more essential concepts for achieving this goal is the notion of resilient and survivable digital I&C systems. In recent years, self-healing concepts based on biological physiology have received attention for the design of robust digital systems. However, many of these approaches have not been architected from the outset with safety in mind, nor have they been targeted at the automation community, where a significant need exists. This dissertation presents a new self-healing digital I&C architecture called BioSymPLe, inspired by the way nature responds, defends, and heals: the stem cells in the immune system of living organisms, the life cycle of the living cell, and the pathway from deoxyribonucleic acid (DNA) to protein. The BioSymPLe architecture integrates biological concepts, fault tolerance techniques, and operational schematics of the international standard IEC 61131-3 to facilitate adoption in the automation industry. BioSymPLe is organized into three hierarchical levels: the local function migration layer at the top, the critical service layer in the middle, and the global function migration layer at the bottom. The local layer monitors the correct execution of functions at the cellular level and activates healing mechanisms at the critical service level. The critical layer allocates a group of functional B cells, the building blocks that execute the intended functionality of the critical application based on the expression of DNA genetic codes stored inside each cell. The global layer uses the concept of embryonic stem cells, differentiating them to repair faulty T cells and supervising all repair mechanisms. Finally, two industrial applications have been mapped onto the proposed architecture; both are capable of tolerating a significant number of faults (transient, permanent, and hardware common cause failures (CCFs)) that can stem from environmental disturbances. We believe the nexus of these concepts can positively impact the next generation of critical systems in the automation industry.
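
    To make the layering concrete, here is a minimal sketch of the idea, assuming a Python-style pseudomodel (the names BCell, StemCellPool, heal, and the checksum-based health test are illustrative, not the dissertation's implementation): each functional B cell carries a stored "genetic code" describing its expected configuration, the local layer checks the cell against that code, and the global layer differentiates a spare stem cell to replace a faulty one.

```python
import hashlib

def make_code(fn) -> bytes:
    """The "DNA": a digest of a function's golden configuration."""
    return hashlib.sha256(fn.__code__.co_code).digest()

class BCell:
    """Functional cell executing one IEC 61131-3 style function block."""
    def __init__(self, genetic_code: bytes, function):
        self.genetic_code = genetic_code   # expected configuration ("DNA")
        self.function = function           # intended critical functionality

    def express(self, x):
        return self.function(x)

    def healthy(self) -> bool:
        # Local-layer check: does the cell still match its stored DNA?
        return make_code(self.function) == self.genetic_code

class StemCellPool:
    """Global layer: spares differentiated from golden cell definitions."""
    def __init__(self, golden_functions: dict, spares: int):
        self.golden = golden_functions     # genetic code -> golden function
        self.spares = spares

    def differentiate(self, genetic_code: bytes) -> BCell:
        assert self.spares > 0, "no embryonic stem cells left"
        self.spares -= 1
        return BCell(genetic_code, self.golden[genetic_code])

def heal(cell: BCell, pool: StemCellPool) -> BCell:
    # Critical-service layer: on a failed health check, regrow the cell.
    return cell if cell.healthy() else pool.differentiate(cell.genetic_code)
```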

    Selection of a new hardware and software platform for railway interlocking

    Get PDF
    The interlocking system is one of the main actors in safe railway transportation. In most cases, the whole system is supplied by a single vendor. Recent regulations from the European Union call for an "open" architecture that invites new market entrants and reduces life-cycle costs. The objective of this thesis is to propose an alternative platform that could replace a legacy interlocking system. In the thesis, various commercial off-the-shelf hardware and software products are studied that could be assembled into an alternative interlocking platform. The platform must be open enough to adapt to changes in its constituent elements and to abide by the proposed baselines of new standardization initiatives such as ERTMS, EULYNX, and RCA. A comparative study is performed between these products based on hardware capacity, architecture, communication protocols, programming tools, security, railway certifications, life-cycle issues, and other criteria.

    An Algebraic Framework for the Real-Time Solution of Inverse Problems on Embedded Systems

    Full text link
    This article presents a new approach to the real-time solution of inverse problems on embedded systems. The class of problems addressed corresponds to ordinary differential equations (ODEs) with generalized linear constraints, whereby the data from an array of sensors forms the forcing function. The solution of the equation is formulated as a least squares (LS) problem with linear constraints. The LS approach makes the method suitable for the explicit solution of inverse problems where the forcing function is perturbed by noise. The algebraic computation is partitioned into an initial preparatory step, which precomputes the matrices required for the run-time computation, and a cyclic run-time computation, which is repeated with each acquisition of sensor data. The cyclic computation consists of a single matrix-vector multiplication; in this manner the computational complexity is known a priori, fulfilling the definition of a real-time computation. Numerical testing of the new method is presented on perturbed as well as unperturbed problems; the results are compared with known analytic solutions and with solutions acquired from state-of-the-art implicit solvers. The solution is implemented with model-based design and uses only fundamental linear algebra; consequently, this approach supports automatic code generation for deployment on embedded systems. The targeting concept was tested via software- and processor-in-the-loop verification on two systems with different processor architectures. Finally, the method was tested on a laboratory prototype with real measurement data for the monitoring of flexible structures. The problem solved is the real-time, overconstrained reconstruction of a curve from measured gradients. Such systems are commonly encountered in the monitoring of structures and/or ground subsidence.
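
    The two-phase scheme lends itself to a very small sketch. The following is an assumed illustration (the shapes and names are ours, not the article's code) for an equality-constrained least squares problem min ||Ax - b|| subject to Cx = d: the preparatory step folds the KKT system into a single solution operator S, so each sensor acquisition costs exactly one matrix-vector product, which is what makes the run-time complexity known a priori.

```python
import numpy as np

def prepare(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Preparatory step: precompute S so that x = S @ [b; d].

    Solves min ||A x - b|| s.t. C x = d via the KKT system
        [A^T A  C^T] [x     ]   [A^T b]
        [C      0  ] [lambda] = [d    ].
    """
    m, n = A.shape
    p = C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((p, p))]])
    T = np.block([[A.T, np.zeros((n, p))],
                  [np.zeros((p, m)), np.eye(p)]])
    return (np.linalg.inv(K) @ T)[:n, :]   # keep x rows, drop multipliers

def cycle(S: np.ndarray, b: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Run-time step: one matrix-vector product per sensor acquisition."""
    return S @ np.concatenate([b, d])
```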

    Systematic Model-based Design Assurance and Property-based Fault Injection for Safety Critical Digital Systems

    Get PDF
    With advances in sensing, wireless communications, computing, control, and automation technologies, we are witnessing the rapid uptake of cyber-physical systems (CPSs) across many applications, including connected vehicles, healthcare, energy, manufacturing, and smart homes. Many of these applications are safety-critical in nature, and they depend on the correct and safe execution of software and hardware that are intrinsically subject to faults. These faults can be design faults (software faults, specification faults, etc.) or physically occurring faults (hardware failures, single-event upsets, etc.). Both types of faults must be addressed during the design and development of these critical systems. Several safety-critical industries have widely adopted model-based engineering paradigms to manage the design assurance processes of these complex CPSs. This thesis studies the application of an IEC 61508 compliant model-based design assurance methodology to a representative safety-critical digital architecture targeted at nuclear power generation facilities. The study presents detailed experiences and results that demonstrate the benefits of model testing in finding design flaws and its relevance to subsequent verification steps in the workflow. Additionally, to study the impact of physical faults on the digital architecture, we develop a novel property-based fault injection method that overcomes several deficiencies of traditional fault injection methods. The model-based fault injection approach presented here guarantees high efficiency and near-exhaustive input/state/fault space coverage by utilizing formal model checking principles to identify fault activation conditions and prove the fault tolerance features. The fault injection framework facilitates automated integration of fault saboteurs throughout the model to enable exhaustive fault location coverage.
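
    As a toy illustration of how saboteurs and exhaustive space enumeration fit together (a deliberately tiny stand-in for the formal model checking used in the thesis; the voter, the stuck-at saboteur, and the masking property below are our own assumptions), the sketch inserts a saboteur on each channel of a 2-out-of-3 majority voter and enumerates the complete input/location/fault space to prove the single-fault masking property:

```python
# Tiny stand-in for property-based fault injection: a stuck-at "saboteur"
# is inserted on each channel of a 2-out-of-3 voter and the complete
# input/location/fault space is enumerated, the way a model checker would,
# to prove any single fault is masked. (Illustrative, not the thesis' tool.)

def voter(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)     # 2-out-of-3 majority

def saboteur(value: int, fault) -> int:
    # fault is None (healthy), 0 (stuck-at-0), or 1 (stuck-at-1)
    return value if fault is None else fault

def single_fault_masked() -> bool:
    for v in (0, 1):                       # replicated input value (TMR)
        for loc in range(3):               # fault location
            for stuck in (0, 1):           # fault type
                ch = [v, v, v]
                ch[loc] = saboteur(ch[loc], stuck)
                if voter(*ch) != v:
                    return False           # counterexample: fault propagated
    return True                            # property holds over whole space

assert single_fault_masked()
```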

    OPTIMIZATION OF PRODUCTION LINES USING ADVANCED CNC INTERPOLATION METHODS AND DISTRIBUTION OF CONTROL LOGIC

    Get PDF
    Information technology makes a real difference in today's manufacturing industry. High-performance computers make it possible to realize control algorithms of increasing complexity, and high-speed, reliable computer networks allow communication between different devices and the realization of advanced distributed control applications. In this thesis, we focus on the optimization of production lines using two different approaches: first we focus on the improvement of a single workstation of the production line, then on the improvement of the interactions between the various stations of the line. A typical workstation found in a production line is the machine tool for manufacturing workpieces. Advances in manufacturing technologies make it possible to increase quality and efficiency in production lines, but they also impose new and growing requirements on motion planning and control systems. The increase in CPU processing power has permitted, in traditional CNC systems, the introduction of NURBS interpolation capabilities, determining a further increase in machining quality and efficiency. This has posed new and still unsolved issues, such as the need to satisfy multiple competing constraints, for example limiting chord error, acceleration, and jerk while offering real-time guarantees. In addition, the ability to privilege production throughput by relaxing one or more of the previous constraints in a simple way has emerged as another requirement of modern manufacturing plants. Nevertheless, none of the existing NURBS interpolators have these characteristics. In this thesis, we propose a NURBS interpolator that satisfies all of these manufacturing technology requirements and, thanks to its bounded computational complexity, respects the real-time constraints of position control. The interpolator is easily reconfigurable, i.e., it can relax some of the constraints and can be adapted to include constraints that were not originally considered. The performance of the proposed algorithm has been evaluated both by simulations and by real milling experiments. However, improvements in the productivity of a single machine tool can be neutralized if the various workstations of the production line are not properly synchronized. Distributed control improves the coordination of different workstations, but its design is challenging. The IEC 61499 standard has been developed to ease the modeling and design of distributed control systems, bringing advanced software engineering concepts (such as abstraction, encapsulation, and reuse) to the world of control engineering. The introduction of this standard into existing control environments poses challenges, since the widespread IEC 61131-3 programming standard is not compatible with the new standard. To solve this problem, this thesis presents an architecture that integrates modules of the two standards, allowing the benefits of both to be exploited. The proposed architecture is based on the coexistence of control logic from both standards; each standard interacts with particular interfaces that encapsulate the information and functionality to be exchanged with the other. A methodology for integrating IEC 61131-3 modules into an IEC 61499 distributed solution based on this architecture is also developed, and it is described via a case study to prove its feasibility and benefits.
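
    The constraint-relaxation idea can be sketched in a few lines. Below is a hedged illustration (the chord-error and ramp bounds are textbook formulas, not the thesis' interpolator, and all names are ours): each interpolation period, the commanded feedrate is clipped by the chord-error, acceleration, and jerk limits, so privileging throughput amounts to dropping a term from the min().

```python
import math

def limited_feedrate(v_cmd, v_prev, rho, T, chord_tol, a_max, j_max):
    """Feedrate for the next interpolation period T (SI units).

    rho       -- local radius of curvature of the NURBS path
    chord_tol -- maximum admissible chord error
    """
    # Chord-error bound: a chord v*T on an arc of radius rho deviates by at
    # most chord_tol when v <= (2/T) * sqrt(2*rho*chord_tol - chord_tol^2).
    v_chord = (2.0 / T) * math.sqrt(max(2.0 * rho * chord_tol - chord_tol**2, 0.0))
    v_acc = v_prev + a_max * T              # acceleration-limited ramp
    v_jerk = v_prev + 0.5 * j_max * T**2    # jerk-limited ramp (accel from 0)
    # Relaxing a constraint to favour throughput = dropping its term here.
    return min(v_cmd, v_chord, v_acc, v_jerk)

def next_parameter(u, v, T, duds):
    # First-order Taylor step du = v*T*(du/ds): bounded cost per cycle.
    return u + v * T * duds
```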

    A review of architectures and concepts for intelligence in future electric energy system

    Get PDF
    Renewable energy sources are one key enabler for decreasing greenhouse gas emissions and coping with anthropogenic climate change. Their intermittent behavior and limited storage capabilities present a new challenge to power system operators in maintaining power quality and reliability. Additional technical complexity arises from the large number of small distributed generation units and their allocation within the power system. Market liberalization and a changing regulatory framework lead to additional organizational complexity. As a result, the design and operation of the future electric energy system have to be redefined. Sophisticated information and communication architectures, automation concepts, and control approaches are necessary in order to manage the higher complexity of so-called smart grids. This paper provides an overview of the state of the art and recent developments enabling higher intelligence in future smart grids. The integration of renewable sources and storage systems into the power grids is analyzed. Energy management and demand response methods and important automation paradigms and domain standards are also reviewed.

    Adaptive-Filter PMU Hardware Validation to IEEE C37.118.1a Requirements : Strathclyde ENG52 REG D6 Report

    Get PDF
    This report documents the implementation and testing of a hardware Phasor Measurement Unit (PMU) prototype, using a Beckhoff-based hardware platform. This platform offers several convenient features for PMU development, such as hardware modularity, support for integrating C++ and Simulink models, IEEE 1588 support, and scalability to multiple measurement locations. The Strathclyde M-class PMU algorithm can be deployed on this platform using less than 8% of the CPU time of a single CPU core, with 10 kHz analogue sampling. A closed-loop testing procedure, using RTDS hardware and software, has been used to quantify the performance of the Strathclyde PMU algorithm. With proper calibration of the analogue system, as would be the case for a PMU deployed in the field, the PMU can achieve relatively low error metrics against the Synchrophasor standard requirements. For example, in the "static" PMU tests, Total Vector Error (TVE) values as low as 0.01% can be achieved (where the Synchrophasor standard requires a maximum TVE of 1%). Additional tests with multiple disturbances and with emulation of a power system fault have been conducted to demonstrate that PMU algorithms require resilience under realistic worst-case scenarios, and to make a case for testing all PMUs in this way. A new method has been devised for accurately and conveniently characterising the reporting latency of PMUs. This method can also be used to measure the end-to-end performance of transmitting PMU data over wide-area communications networks, thereby providing more accurate knowledge of the actual latency of the measurement systems used to implement novel power system control and protection schemes. The algorithm will be integrated within Synaptec's passive and distributed optical sensing platform for wide-area synchrophasor-based monitoring, protection, and control.
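
    For reference, TVE is defined in IEEE C37.118 as the magnitude of the complex difference between the estimated and reference phasors, normalised by the reference magnitude. A minimal computation looks like the sketch below (the formula follows the standard; the surrounding example values are ours, not the report's measurement data):

```python
import numpy as np

def tve(estimated: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Total Vector Error per reporting instant, as a fraction (0.0001 = 0.01 %)."""
    return np.abs(estimated - reference) / np.abs(reference)

# Example: an estimate off by 1e-4 of the reference magnitude gives a
# 0.01 % TVE, well inside the standard's 1 % limit.
ref = np.array([np.exp(1j * np.deg2rad(30.0))])   # 1.0 p.u. at 30 degrees
est = ref * (1 + 1e-4)
assert np.all(tve(est, ref) < 0.01)               # 1 % limit
```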

    The DS-Pnet modeling formalism for cyber-physical system development

    Get PDF
    This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems; it combines the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and to graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models. Component-based model composition enables the reuse of existing components, the creation of libraries of previously tested components, and the hierarchical decomposition of complex systems into smaller subsystems. A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes and providing an abstraction for the interface between reactive controllers and input and output signals, including analog sensors and actuators. The new formalism is supported by the IOPT-Flow Web-based tool framework, which offers tools to design and edit models, simulate model execution in the Web browser, and perform model checking, plus automatic software/hardware code generation tools to implement controllers running on embedded devices (C, VHDL, and JavaScript). A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and can import remote components into new models, helping simplify the creation of distributed cyber-physical applications in which the communication between distributed components is specified just by drawing arcs. Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.
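
    To give a flavour of the combined execution semantics described above, here is a deliberately simplified sketch (our own abstraction, not the IOPT-Flow implementation): each execution step first propagates input signals through the dataflow operations, then fires the enabled Petri net transitions whose guards hold, so the reactive and data processing parts advance together.

```python
def step(marking, transitions, dataflow_ops, inputs):
    """One DS-Pnet-style execution step (simplified).

    dataflow_ops -- list of (name, op, arg_names), topologically ordered
    transitions  -- list of (pre_places, post_places, guard(signals))
    """
    # 1. Dataflow phase: propagate input signals through the operations.
    signals = dict(inputs)
    for name, op, args in dataflow_ops:
        signals[name] = op(*[signals[a] for a in args])

    # 2. Petri net phase: fire enabled transitions whose guards hold
    #    (fired sequentially here; a real semantics resolves conflicts).
    new_marking = dict(marking)
    for pre, post, guard in transitions:
        if all(new_marking.get(p, 0) > 0 for p in pre) and guard(signals):
            for p in pre:
                new_marking[p] -= 1
            for p in post:
                new_marking[p] = new_marking.get(p, 0) + 1
    return new_marking, signals
```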