
    Process Mining of Programmable Logic Controllers: Input/Output Event Logs

    This paper presents an approach to model an unknown Ladder Logic-based Programmable Logic Controller (PLC) program, consisting of Boolean logic and counters, using Process Mining techniques. First, we tap the inputs and outputs of a PLC to create a data flow log. Second, we propose a method to translate the obtained data flow log into an event log suitable for Process Mining. In a third step, we propose a hybrid Petri net (PN) and neural network approach to approximate the logic of the actual underlying PLC program. We demonstrate the applicability of the proposed approach in a case study with three simulated scenarios.
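    The translation step described above, from a sampled I/O data-flow log to an event log, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the signal names (`I0`, `Q0`) and the snapshot format are assumptions.

    ```python
    # Sketch (assumed, not the paper's code): emit one event per observed
    # signal change, turning sampled PLC I/O snapshots into an event log.

    def dataflow_to_events(samples):
        """samples: list of (timestamp, {signal_name: value}) snapshots.
        Returns a list of (timestamp, signal, old, new) change events."""
        events = []
        prev = {}
        for ts, snapshot in samples:
            for sig, val in snapshot.items():
                if sig in prev and prev[sig] != val:
                    events.append((ts, sig, prev[sig], val))
                prev[sig] = val
        return events

    log = [
        (0, {"I0": 0, "Q0": 0}),
        (1, {"I0": 1, "Q0": 0}),  # input I0 rises
        (2, {"I0": 1, "Q0": 1}),  # output Q0 follows one sample later
    ]
    events = dataflow_to_events(log)
    ```

    Each event records which signal changed, when, and how, which is the granularity a Process Mining tool needs to discover ordering relations between inputs and outputs.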

    Development and implementation of a LabVIEW based SCADA system for a meshed multi-terminal VSC-HVDC grid scaled platform

    This project is oriented to the development of Supervisory Control and Data Acquisition (SCADA) software to control and supervise electrical variables of a scaled platform that represents a meshed HVDC grid, using National Instruments hardware and the LabVIEW environment. The objective is real-time visualization of DC and AC electrical variables and lossless data stream acquisition. The acquisition system hardware has been configured, tested and installed on the grid platform. The system is composed of three chassis with integrated Field-Programmable Gate Arrays (FPGAs), each inside a VSC terminal cabinet; one is connected via PCI bus to a local processor, and the others via Ethernet through a switch. Analog acquisition modules, where A/D conversion takes place, are inserted into the chassis. A personal computer is used as host, screen terminal and storage space. There are two main access modes to the FPGAs through the real-time system: a Scan Mode VI has been implemented to monitor all the grid DC signals, and a faster FPGA access mode VI monitors the AC and DC values of one converter. The FPGA application consists of two tasks running at different rates, and a FIFO has been implemented to communicate between them without data loss. Multiple structures have been tested on the grid platform and evaluated, ensuring compliance with previously established specifications such as sampling and scanning rate, screen refresh rate and possible data loss. Additionally, a turbine emulator was implemented and tested in LabVIEW for further testing.
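    The two-rate FIFO pattern described above can be illustrated outside LabVIEW. The sketch below is an assumed Python analogue, not the project's VI code: a fast acquisition loop pushes samples into a FIFO and a slower loop drains it in batches, so no sample is lost despite the rate mismatch.

    ```python
    from collections import deque

    # Minimal sketch (assumed analogue of the LabVIEW FIFO): a fast task
    # produces one sample per tick; a slow task drains the FIFO every
    # fourth tick. The FIFO decouples the rates without losing data.

    fifo = deque()
    produced, consumed = [], []

    for tick in range(10):
        sample = tick * 0.5        # fast acquisition task
        fifo.append(sample)
        produced.append(sample)
        if tick % 4 == 3:          # slow logging task runs at 1/4 the rate
            while fifo:
                consumed.append(fifo.popleft())

    while fifo:                    # final drain when acquisition stops
        consumed.append(fifo.popleft())

    assert consumed == produced    # lossless transfer between the rates
    ```

    In the real system the FIFO additionally buffers against jitter: as long as its depth covers the worst-case gap between drains, the slow side never forces the fast side to drop a sample.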

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.'
    Comment: (in press), Ethics of Information Technology, 201

    Semantic Support for Log Analysis of Safety-Critical Embedded Systems

    Testing is a relevant activity in the development life-cycle of safety-critical embedded systems. In particular, much effort is spent on the analysis and classification of test logs from SCADA subsystems, especially when failures occur. Human expertise is needed to understand the reasons for failures, to trace back the errors, and to understand which requirements are affected by errors and which ones will be affected by possible changes in the system design. Semantic techniques and full-text search are used to support human experts in the analysis and classification of test logs, in order to speed up and improve the diagnosis phase. Moreover, retrieval of tests and requirements that can be related to the current failure is supported, in order to allow the discovery of available alternatives and solutions for a better and faster investigation of the problem.
    Comment: EDCC-2014, BIG4CIP-2014, Embedded systems, testing, semantic discovery, ontology, big data
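    The full-text retrieval side of this workflow can be sketched with a tiny inverted index. This is an illustrative assumption, not the paper's tooling; the log ids and messages are invented.

    ```python
    from collections import defaultdict

    # Sketch (assumed, not the paper's system): index test-log lines by
    # term, then retrieve the logs matching every term of a failure query,
    # the basic mechanism behind relating a failure to earlier tests.

    def build_index(logs):
        """logs: {log_id: text}. Returns {term: set of log_ids}."""
        index = defaultdict(set)
        for log_id, text in logs.items():
            for term in text.lower().split():
                index[term].add(log_id)
        return index

    def search(index, query):
        """Return the ids of logs containing every query term (AND)."""
        sets = [index.get(t.lower(), set()) for t in query.split()]
        return set.intersection(*sets) if sets else set()

    logs = {
        "T1": "watchdog timeout on SCADA link",
        "T2": "sensor calibration passed",
        "T3": "timeout while polling SCADA RTU",
    }
    idx = build_index(logs)
    hits = search(idx, "SCADA timeout")   # logs sharing both failure terms
    ```

    The semantic layer the paper adds would sit on top of this, e.g. expanding query terms with ontology synonyms before the intersection, so that a failure phrased differently from the original test still matches.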

    Remote Cell Growth Sensing Using Self-Sustained Bio-Oscillations

    A smart sensor system for real-time supervision of cell cultures is proposed, allowing a significant reduction in the human effort applied to this type of assay. The approach converts the cell culture under test into a suitable “biological” oscillator. The system enables remote acquisition and management of the “biological” oscillation signals through a secure web interface. The indirectly observed biological properties are cell growth and cell number, which are straightforwardly related to the measured bio-oscillation signal parameters, i.e., frequency and amplitude. The sensor extracts this information without complex acquisition and measurement circuitry, taking advantage of the microcontroller features. A discrete prototype for sensing and remote monitoring is presented along with the experimental results obtained from the performed measurements, achieving the expected performance and outcomes.
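    Extracting the two reported parameters, frequency and amplitude, from a sampled oscillation is simple enough to sketch. The code below is a hedged illustration (not the authors' firmware), using rising zero crossings for frequency and the peak sample for amplitude; the test signal is a synthetic 50 Hz tone.

    ```python
    import math

    # Sketch (assumed, not the sensor's actual firmware): estimate the
    # oscillation frequency from rising zero crossings and the amplitude
    # from the largest absolute sample.

    def analyze(signal, sample_rate):
        crossings = [
            i for i in range(1, len(signal))
            if signal[i - 1] < 0 <= signal[i]   # rising zero crossing
        ]
        amp = max(abs(s) for s in signal)
        if len(crossings) < 2:
            return None, amp                    # not enough cycles seen
        # average spacing between crossings = one period, in samples
        period = (crossings[-1] - crossings[0]) / (len(crossings) - 1) / sample_rate
        return 1.0 / period, amp

    fs = 1000  # Hz, assumed sampling rate
    sig = [math.sin(2 * math.pi * 50 * n / fs + 0.1) for n in range(200)]
    freq, amp = analyze(sig, fs)                # ~50 Hz, amplitude ~1
    ```

    A microcontroller version of this needs only a comparator and a timer for the crossings plus a peak detector, which matches the abstract's claim that no complex acquisition circuitry is required.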

    New Long-term Historical Data Recording and Failure Analysis System for the CERN Cryoplants

    CERN uses several liquid helium cryoplants (21 in total) for cooling a large variety of superconducting devices, namely accelerating cavities and magnets for accelerators and particle detectors. The cryoplants are remotely operated from several control rooms using industry-standard supervision systems, which allow the instant display of all plant data and of trends, over several days, for the most important signals. Monitoring cryoplant performance during transient conditions and normal operation over several months calls for long-term recording of all plant parameters. A historical data recording system has been developed, which collects data from all cryoplants, stores them in a centralized database over a period of one year and allows user-friendly graphical visualization. In particular, a novel tool was developed for debugging the causes of plant failures by comparing selected reference data with the simultaneous evolution of all plant data. The paper describes the new system, already in operation with 11 cryoplants.
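    The idea of comparing a failure run against stored reference data can be sketched as ranking signals by divergence. This is a hypothetical illustration, not the CERN tool; the signal names and values are invented, and both runs are assumed to be sampled on the same time grid.

    ```python
    # Sketch (assumed, not the CERN tool): rank plant signals by how far
    # a failure run diverges from a stored reference run, worst first,
    # to point an operator toward the likely cause of a failure.

    def rank_divergence(reference, failure):
        """reference, failure: {signal: [values]} on the same time grid.
        Returns signal names sorted by mean absolute deviation, worst first."""
        scores = {}
        for sig, ref_vals in reference.items():
            fail_vals = failure[sig]
            dev = sum(abs(r - f) for r, f in zip(ref_vals, fail_vals))
            scores[sig] = dev / len(ref_vals)
        return sorted(scores, key=scores.get, reverse=True)

    reference = {"He_pressure": [1.0, 1.0, 1.0], "turbine_rpm": [500, 500, 500]}
    failure   = {"He_pressure": [1.0, 0.9, 0.4], "turbine_rpm": [500, 500, 500]}
    worst = rank_divergence(reference, failure)   # He_pressure diverges most
    ```

    In practice the deviations would need per-signal normalization (signals have different units and scales) before the ranking is meaningful across the whole plant.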

    Digital twins configurator for HIL simulations
