
    GRL: A Specification Language for Globally Asynchronous Locally Synchronous Systems

    A GALS (Globally Asynchronous, Locally Synchronous) system consists of several synchronous subsystems that evolve concurrently and interact with each other asynchronously. Most formalisms and design tools support either the synchronous paradigm or the asynchronous paradigm but rarely combine both, which makes the modeling of GALS systems intricate. In this paper, we present a new language, called GRL (GALS Representation Language), designed to model GALS systems in an abstract and versatile manner for the purpose of formal verification. GRL has a formal semantics combining the synchronous reactive model underlying dataflow languages and the asynchronous concurrent model underlying process algebras. We present the basic concepts and the main constructs of the language, together with an illustrative example.
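
    The GALS execution model described above, synchronous blocks that step atomically and are composed through asynchronous media, can be illustrated with a small sketch. The following Python fragment is not GRL code; it is a hypothetical toy showing two locally synchronous blocks whose steps are interleaved nondeterministically and that communicate only through a FIFO channel.

        import random
        from collections import deque

        def producer_step(state, _inputs):
            # Synchronous step: update local state and emit an output atomically.
            state["n"] += 1
            return state, {"out": state["n"]}

        def consumer_step(state, inputs):
            # Synchronous step: consume whatever arrived on the input, if anything.
            if inputs.get("in") is not None:
                state["sum"] += inputs["in"]
            return state, {}

        channel = deque()                 # asynchronous medium (unbounded FIFO)
        producer, consumer = {"n": 0}, {"sum": 0}

        for _ in range(20):
            # Asynchronous composition: at each global step, one block fires atomically.
            if random.random() < 0.5:
                producer, out = producer_step(producer, {})
                channel.append(out["out"])
            else:
                msg = channel.popleft() if channel else None
                consumer, _ = consumer_step(consumer, {"in": msg})

        print("produced", producer["n"], "values; consumed sum", consumer["sum"])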

    Adapting Distributed Embedded Applications During Operation

    The availability of third-party apps is among the key success factors for software ecosystems: users benefit from more features and faster innovation, while third-party solution vendors can leverage the platform to create successful offerings. However, this requires a certain decoupling of the engineering activities of the different parties, which has not yet been achieved for distributed control systems. While late and dynamic integration of third-party components would be required, the resulting control systems must provide high reliability with respect to real-time requirements, which leads to integration complexity. Closing this gap would particularly contribute to the vision of software-defined manufacturing, where an ecosystem of modern IT-based control system components could lead to faster innovation due to their higher level of abstraction and the availability of various frameworks. Therefore, this thesis addresses the research question: How can we use modern IT technologies to enable independent evolution and easy third-party integration of software components in distributed control systems where deterministic end-to-end reactivity is required, and, in particular, how can we apply distributed changes to such systems consistently and reactively during operation? This thesis describes the challenges and related approaches in detail and points out that existing approaches do not fully address our research question. To tackle this gap, a formal specification of a runtime platform concept is presented in conjunction with a model-based engineering approach. The engineering approach decouples the engineering steps of component definition, integration, and deployment. The runtime platform supports this approach by isolating the components while still offering predictable end-to-end real-time behavior. Independent evolution of software components is supported through a concept for synchronous reconfiguration during full operation, i.e., dynamic orchestration of components. Time-critical state transfer is also supported and leads, at most, to bounded quality degradation. Reconfiguration planning is supported by analysis concepts, including simulation of a formally specified system and its reconfiguration, and analysis of potential quality degradation with the evolving dataflow graph (EDFG) method. A platform-specific realization of the concepts, the real-time container architecture, is described as a reference implementation. The model and the prototype are evaluated in two case studies regarding the feasibility and applicability of the concepts. The first case study is a minimalistic distributed control system used in different setups with different component variants and reconfiguration plans to compare the model and the prototype and to gather runtime statistics. The second case study is a smart factory showcase system with more challenging application components and interface technologies. The conclusion is that the concepts are feasible and applicable, even though both the concepts and the prototype still need further work, for example, to reach shorter cycle times.
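
    As a rough illustration of synchronous reconfiguration during full operation, the hypothetical Python sketch below swaps one controller component for another at a cycle boundary and hands over its state, so every cycle still executes exactly one component. It is a toy under assumed component names (controller_v1, controller_v2), not the thesis's runtime platform or its EDFG analysis.

        def controller_v1(state, sensor):
            # Original component: simple proportional control (illustrative only).
            state["last_error"] = 10.0 - sensor
            return state, 0.5 * state["last_error"]

        def controller_v2(state, sensor):
            # Replacement component: adds a crude integral term.
            error = 10.0 - sensor
            state["integral"] = state.get("integral", 0.0) + error
            state["last_error"] = error
            return state, 0.5 * error + 0.1 * state["integral"]

        active, state, sensor = controller_v1, {"last_error": 0.0}, 0.0
        for cycle in range(10):
            # Reconfiguration is applied only at a cycle boundary, so every cycle
            # still runs exactly one (old or new) component and no deadline is missed.
            if cycle == 5:
                # State transfer: the new component inherits the old component's
                # state before its first activation.
                active = controller_v2
            state, actuator = active(state, sensor)
            sensor += 0.2 * actuator  # toy plant model

        print("final sensor value:", round(sensor, 3))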

    A Capacity Planning Simulation Model for Reconfigurable Manufacturing Systems

    Important objectives and challenges in today's manufacturing environment include the introduction of new products and the design and development of reconfigurable manufacturing systems. The objective of this research is to investigate and support the reconfigurability of a manufacturing system in terms of scalability by applying a discrete-event simulation modelling technique integrated with flexible capacity control functions and communication rules for the re-scaling process. Moreover, the possible extension of integrating the discrete-event simulation with an agent-based model is presented as a framework. The benefits of this framework are collaborative decision making using agents for flexible reaction to system changes and improved system performance. The AnyLogic multi-method simulation modelling platform is utilized to design and create different simulation modelling scenarios. The results of the developed capacity planning simulation model are demonstrated in a case study using the configurable assembly Learning Factory (iFactory) in the Intelligent Manufacturing Systems (IMS) Center at the University of Windsor. The main benefit of the developed capacity planning simulation, compared with traditional discrete-event simulation, is that a single simulation run determines the recommended capacity for the manufacturing system, instead of running several discrete-event simulation models to find the needed capacity.
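
    The core idea that a single simulation run can itself adjust capacity can be sketched in a few lines. The following Python fragment is a minimal event-driven queueing simulation with a made-up scale-up rule (add a machine whenever the backlog exceeds a threshold); it stands in for the AnyLogic model only conceptually, and all parameter values are illustrative assumptions.

        import heapq, random

        random.seed(1)
        ARRIVAL_MEAN, SERVICE_MEAN, SCALE_UP_QUEUE = 1.0, 2.5, 5

        events = [(random.expovariate(1 / ARRIVAL_MEAN), "arrival")]
        queue_len, busy, machines, t_end = 0, 0, 1, 1000.0

        while events:
            t, kind = heapq.heappop(events)
            if t > t_end:
                break
            if kind == "arrival":
                queue_len += 1
                heapq.heappush(events, (t + random.expovariate(1 / ARRIVAL_MEAN), "arrival"))
                # Capacity control rule: scale up when the backlog grows too large.
                if queue_len > SCALE_UP_QUEUE:
                    machines += 1
            else:  # departure: a machine becomes free
                busy -= 1
            # Start service for waiting jobs on any idle machines.
            while queue_len and busy < machines:
                queue_len -= 1
                busy += 1
                heapq.heappush(events, (t + random.expovariate(1 / SERVICE_MEAN), "departure"))

        print("recommended number of machines:", machines)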

    The DS-Pnet modeling formalism for cyber-physical system development

    This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems, which combines the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and to graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models. Component-based model composition enables reusing existing components, creating libraries of previously tested components, and hierarchically decomposing complex systems into smaller subsystems. A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes, providing an abstraction to define the interface between reactive controllers and input and output signals, including analog sensors and actuators. The new formalism is supported by the IOPT-Flow Web-based tool framework, which offers tools to design and edit models, simulate model execution in the Web browser, plus model-checking and software/hardware automatic code generation tools to implement controllers running on embedded devices (C, VHDL, and JavaScript). A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and may import remote components into new models, which helps simplify the creation of distributed cyber-physical applications, where the communication between distributed components is specified just by drawing arcs. Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.
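
    To make the combined execution idea concrete, here is a hypothetical Python toy, not the IOPT-Flow semantics itself: one execution step first fires the enabled Petri net transitions (the reactive part) and then evaluates a dataflow operation over the current signals and marking (the data processing part). All names and the gain formula are invented for illustration.

        # Toy model: two Petri net transitions gated by an input signal, feeding a
        # dataflow operation that scales an analog reading before it reaches an output.
        places = {"idle": 1, "running": 0}     # Petri net marking
        transitions = {
            # name: (input places, output places, guard over external input signals)
            "start": (["idle"], ["running"], lambda sig: sig["start_button"]),
            "stop":  (["running"], ["idle"], lambda sig: not sig["start_button"]),
        }

        def dataflow(signals, marking):
            # Dataflow node: pure function of the current signals and marking.
            gain = 2.0 if marking["running"] else 0.0
            return {"actuator": gain * signals["sensor"]}

        def execution_step(signals):
            # One execution step: fire every enabled transition once, then
            # recompute the dataflow outputs (reactive part before data part).
            for name, (ins, outs, guard) in transitions.items():
                if all(places[p] > 0 for p in ins) and guard(signals):
                    for p in ins: places[p] -= 1
                    for p in outs: places[p] += 1
            return dataflow(signals, places)

        print(execution_step({"start_button": True, "sensor": 3.0}))   # running, actuator 6.0
        print(execution_step({"start_button": False, "sensor": 3.0}))  # stopped, actuator 0.0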

    A Model-based Approach for Designing Cyber-Physical Production Systems

    The most recent development trend related to manufacturing is called "Industry 4.0". It proposes to transition from "blind" mechatronic systems to Cyber-Physical Production Systems (CPPSs). Such systems are capable of communicating with each other and of acquiring and transmitting real-time production data. Their management and control require a structured software architecture, which is typically referred to as the "Automation Pyramid". The design of both the software architecture and the components (i.e., the CPPSs) is a complex task, where the complexity is induced by the heterogeneity of the required functionalities. In this context, the target of this thesis is to propose a model-based framework for the analysis and design of production lines compliant with the Industry 4.0 paradigm. In particular, this framework exploits the Systems Modeling Language (SysML) as a unified representation for the different viewpoints of a manufacturing system. At the component level, the structural and behavioral diagrams provided by SysML are used to produce a set of logical propositions about the system and the components under design. Such an approach is specifically tailored towards constructing Assume-Guarantee contracts. By exploiting reactive synthesis techniques, contracts are used to prototype portions of components' behaviors and to verify whether implementations are consistent with the requirements. At the software level, the framework proposes a particular architecture based on the concept of "service". Such an architecture facilitates the reconfiguration of components and integrates an advanced scheduling technique that takes advantage of the production recipe SysML model. The proposed framework has been built alongside the construction of the ICE Laboratory, a research facility consisting of a full-fledged production line. The approach has been adopted to construct models of the laboratory, to virtually prototype parts of the system, and to manage the physical system through the proposed software architecture.
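
    The Assume-Guarantee idea can be sketched compactly. The Python toy below is not the reactive synthesis flow used in the thesis; it only shows a contract as a pair of predicates over a component's ports and an exhaustive check, on a small Boolean input space, that a candidate implementation meets the guarantee whenever the assumption holds. The conveyor example and its port names are invented.

        from itertools import product

        # An Assume-Guarantee contract over Boolean ports, as a pair of predicates.
        conveyor_contract = {
            "assume":    lambda io: io["part_present"] or not io["start"],
            "guarantee": lambda io: (not io["start"]) or io["motor_on"],
        }

        def implementation(io):
            # Candidate behaviour of the component (hypothetical logic).
            io["motor_on"] = io["start"] and io["part_present"]
            return io

        def satisfies(contract, impl):
            # Exhaustive check on the finite input space: whenever the assumption
            # holds, the implementation must meet the guarantee.
            for start, part in product([False, True], repeat=2):
                io = impl({"start": start, "part_present": part, "motor_on": False})
                if contract["assume"](io) and not contract["guarantee"](io):
                    return False
            return True

        print("implementation satisfies contract:", satisfies(conveyor_contract, implementation))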

    Acquisition and Declarative Analytical Processing of Spatio-Temporal Observation Data

    A generic framework for spatio-temporal observation data acquisition and declarative analytical processing has been designed and implemented in this thesis. The main contributions of this thesis may be summarized as follows: 1) generalization of a data acquisition and dissemination server, with great applicability in many scientific and industrial domains, providing flexibility in the incorporation of different technologies for data acquisition, data persistence, and data dissemination; 2) definition of a new hybrid logical-functional paradigm to formalize a novel data model for the integrated management of entity and sampled data; 3) definition of a novel spatio-temporal declarative data analysis language for the previous data model; 4) definition of a data warehouse data model supporting observation data semantics, including the application of the above language to the declarative definition of observation processes executed during observation data load; and 5) a column-oriented parallel and distributed implementation of the spatial analysis declarative language. The huge amount of data to be processed forces the exploitation of current multi-core hardware architectures and multi-node cluster infrastructures.
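
    As a rough illustration of the column-oriented processing mentioned in contribution 5, the plain-Python sketch below stores observations as parallel columns and computes a mean per sensor and hour with a single group-by style aggregation, the kind of operation a declarative analysis query would ultimately be compiled into. Column names and values are invented.

        # Columnar layout: one array per attribute, as in a column-oriented engine.
        sensor_id = ["s1", "s1", "s2", "s2", "s1"]
        hour      = [0, 0, 0, 1, 1]
        temp      = [20.1, 20.5, 18.9, 19.2, 21.0]

        def aggregate(keys, values):
            # Group-by over parallel columns: mean value per composite key.
            sums, counts = {}, {}
            for k, v in zip(keys, values):
                sums[k] = sums.get(k, 0.0) + v
                counts[k] = counts.get(k, 0) + 1
            return {k: sums[k] / counts[k] for k in sums}

        # "Mean temperature per sensor and hour" expressed as one aggregation call.
        print(aggregate(list(zip(sensor_id, hour)), temp))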

    Contribution to the Safe Control of Discrete Event Systems

    These research activities fall within the scope of section 61 of the CNU and concern the automatic control of Discrete Event Systems (DES). They are conducted with the aim of increasing the dependability of automated systems such as those found in manufacturing, energy production, or transportation. A large part of this research has concerned the safe design of control systems based on Programmable Logic Controllers (PLCs), and more specifically the following topics: the formal verification of control programs, the algebraic synthesis of control programs from informal specifications, and the conformance testing of a logic controller against its specification. Other research has addressed the formalization of the tools used for safety analysis in the predictive risk analysis of industrial equipment or installations. This formalization was carried out by examining, from a DES point of view, problems that were not originally framed that way. The topics studied were: the algebraic modelling of dynamic fault trees, qualitative predictive risk analysis of repairable systems based on Boolean logic Driven Markov Processes (BDMPs), and quantitative predictive risk analysis of repairable systems using Markov chains. In general, these research activities aim to provide formal or methodological contributions to modelling tools that generally originate from industry, while addressing industrial needs that are already present or about to emerge.
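
    For the quantitative risk analysis of repairable systems with Markov chains, the simplest case is a two-state chain for a single repairable component. The Python lines below compute its steady-state availability from a failure rate and a repair rate; the numeric rates are illustrative assumptions, not values from the work.

        # Two-state continuous-time Markov chain of a repairable component:
        # failure rate lam (up -> down), repair rate mu (down -> up).
        lam, mu = 1e-4, 1e-2   # per hour (illustrative values)

        # Steady-state availability solves pi_up * lam = pi_down * mu
        # with pi_up + pi_down = 1, giving:
        availability = mu / (lam + mu)
        mean_up_time, mean_down_time = 1 / lam, 1 / mu

        print(f"MTTF = {mean_up_time:.0f} h, MTTR = {mean_down_time:.0f} h")
        print(f"steady-state availability = {availability:.4f}")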

    Data Analytics and Machine Learning to Enhance the Operational Visibility and Situation Awareness of Smart Grid High Penetration Photovoltaic Systems

    Electric utilities have limited operational visibility and situation awareness over grid-tied distributed photovoltaic (PV) systems. This poses a risk to grid stability when the PV penetration into a given feeder exceeds 60% of its peak or minimum daytime load. Third-party service providers offer only real-time monitoring, not accurate insights into system performance or prediction of production. PV systems also increase the attack surface of distribution networks, since they are not under the direct supervision and control of the utility's security analysts. Six key objectives were successfully achieved to enhance PV operational visibility and situation awareness: (1) conceptual cybersecurity frameworks for PV situation awareness at the device, communications, applications, and cognitive levels; (2) a unique combinatorial approach using LASSO-Elastic Net regularization and a multilayer perceptron for PV generation forecasting; (3) applying a fixed-point primal-dual log-barrier interior point method to expedite AC optimal power flow convergence; (4) adapting big data standards and capability maturity models to PV systems; (5) using K-nearest neighbors and random forests to impute missing values in PV big data; and (6) a hybrid data-model method that uses PV system derating factors and historical data to estimate generation and evaluate system performance using advanced metrics. These objectives were validated in three real-world case studies comprising grid-tied commercial PV systems. The results show that the proposed imputation approach improved accuracy by 91%, the estimation method performed better by 75% and 10% for two PV systems, and the proposed forecasting model improved generalization performance and reduced the likelihood of overfitting. The primal-dual log-barrier interior point method improved the convergence of AC optimal power flow by 0.7 and 0.6 times relative to the currently used deterministic models. Through the use of advanced performance metrics, it is shown how PV systems of different nameplate capacities installed at different geographical locations can be directly evaluated and compared over both instantaneous and extended periods of time. The results of this dissertation will be of particular use to multiple stakeholders of the PV domain including, but not limited to, utility network and security operation centers, standards working groups, utility equipment and service providers, data consultants, system integrators, regulators and public service commissions, government bodies, and end-consumers.
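
    One of the steps above, imputing missing PV telemetry with K-nearest neighbors (objective 5), can be sketched with scikit-learn's KNNImputer, assuming that library is available; the column layout, values, and neighbor count below are invented for illustration and do not reproduce the dissertation's datasets or tuning.

        import numpy as np
        from sklearn.impute import KNNImputer

        # Hourly PV telemetry with gaps (columns: irradiance, module temp, AC power).
        readings = np.array([
            [810.0, 41.2, 52.3],
            [795.0, np.nan, 51.1],
            [np.nan, 40.8, 50.2],
            [770.0, 40.1, np.nan],
            [760.0, 39.7, 48.0],
        ])

        # K-nearest-neighbours imputation: each missing value is replaced using the
        # corresponding column values of the k most similar rows.
        filled = KNNImputer(n_neighbors=2).fit_transform(readings)
        print(np.round(filled, 1))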

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: 2023 The 6th International Conference on Information and Computer Technologies, Raleigh, HI, United States, March 24-26, 2023, pages 529-542 (https://doi.org/10.1007/978-981-99-3236-). We provide a model for the systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time to inform supply chains and resource allocation in anticipation of a surge in COVID-19 cases.