
    Modeling and Simulation Methodologies for Digital Twin in Industry 4.0

    The concept of Industry 4.0 represents an innovative vision of what the factory of the future will be. The principles of this new paradigm are based on interoperability and data exchange between different industrial equipment. In this context, Cyber-Physical Systems (CPSs) play one of the main roles in this revolution. The combination of models and the integration of real data coming from the field makes it possible to obtain a virtual copy of the real plant, also called the Digital Twin. The entire factory can be seen as a set of CPSs, and the resulting system is also called a Cyber-Physical Production System (CPPS). This CPPS represents the Digital Twin of the factory, with which the real factory can be analyzed. The interoperability between the real industrial equipment and the Digital Twin makes it possible to make predictions concerning the quality of the products. More in detail, these analyses concern the variability of production quality, the prediction of the maintenance cycle, the accurate estimation of energy consumption, and other extra-functional properties of the system. Several tools [2] allow modeling a production line, considering different aspects of the factory (e.g., geometrical properties, information flows, etc.). However, these simulators do not natively provide any solution for the design and integration of CPSs, making it impossible to perform precise analyses concerning the real factory. Furthermore, to the best of our knowledge, there is no solution for a clear integration of data coming from real equipment into the CPS models that compose the entire production line. In this context, the goal of this thesis is to define a unified methodology to design and simulate the Digital Twin of a plant, integrating data coming from real equipment. In detail, the presented methodologies focus mainly on: the integration of heterogeneous models in production line simulators; the integration of heterogeneous models with ad-hoc simulation strategies; a multi-level simulation approach for CPSs; and the integration of real data coming from sensors into models. All the presented contributions produce an environment that allows performing simulations of the plant based not only on synthetic data, but also on real data coming from the equipment
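    To make the data-integration idea concrete, the following is a minimal, hypothetical C++ sketch (the names MachineModel and SensorSample, and the energy figures, are illustrative and not taken from the thesis): a behavioural model of one machine is re-calibrated from real sensor readings before the Digital Twin is used to predict an extra-functional property such as energy consumption.

```cpp
#include <vector>
#include <iostream>

// Illustrative only: a behavioural model of one machine in the line.
// Its energy use per part is re-estimated from real sensor data.
struct SensorSample { double power_w; double cycle_time_s; };

class MachineModel {
    double energy_per_part_j_ = 500.0;   // initial synthetic estimate
public:
    // Calibrate the model from measured samples (the "real data" path).
    void calibrate(const std::vector<SensorSample>& samples) {
        if (samples.empty()) return;
        double total = 0.0;
        for (const auto& s : samples) total += s.power_w * s.cycle_time_s;
        energy_per_part_j_ = total / samples.size();
    }
    // One simulation step: predict the energy needed for a batch of parts.
    double simulate_batch(int parts) const { return parts * energy_per_part_j_; }
};

int main() {
    MachineModel m;
    m.calibrate({{1200.0, 0.42}, {1180.0, 0.45}, {1250.0, 0.40}});
    std::cout << "Predicted energy for 100 parts: "
              << m.simulate_batch(100) << " J\n";
}
```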

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that have emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems and revolutionary reversible logic gates and circuits, but they can only be realized and have lasting effect if firm conceptual and theoretical foundations are established first
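    As a concrete illustration of what "running backwards as easily as forwards" means, here is a small, hypothetical C++ sketch that is not taken from the surveyed work: every forward operation is paired with an exact inverse, so a recorded execution history can be undone step by step.

```cpp
#include <vector>
#include <iostream>

// Illustrative reversible operations on a single integer register:
// each forward step has an exact inverse, so the history can be replayed backwards.
enum class Op { AddFive, XorMask };

int forward(int x, Op op) {
    switch (op) {
        case Op::AddFive: return x + 5;
        case Op::XorMask: return x ^ 0x3C;   // XOR with a constant is its own inverse
    }
    return x;
}

int backward(int x, Op op) {
    switch (op) {
        case Op::AddFive: return x - 5;      // exact inverse of +5
        case Op::XorMask: return x ^ 0x3C;
    }
    return x;
}

int main() {
    std::vector<Op> trace = {Op::AddFive, Op::XorMask, Op::AddFive};
    int x = 7;
    for (Op op : trace) x = forward(x, op);             // run forwards
    std::cout << "forward result: " << x << '\n';
    for (auto it = trace.rbegin(); it != trace.rend(); ++it)
        x = backward(x, *it);                           // run backwards
    std::cout << "restored value: " << x << '\n';       // prints 7 again
}
```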

    Efficient Modelling and Simulation Methodology for the Design of Heterogeneous Mixed-Signal Systems on Chip

    Systems on Chip (SoCs) and Systems in Package (SiPs) are key parts of a continuously broadening range of products, from chip cards and mobile phones to cars. Besides an increasing amount of digital hardware and software for data processing and storage, they integrate more and more analogue/RF circuits, sensors, and actuators to interact with their (analogue) environment. This trend towards more complex and heterogeneous systems with more intertwined functionalities is made possible by the continuous advances in manufacturing technologies and is pushed by market demand for new products and product variants. Therefore, the reuse and retargeting of existing component designs become more and more important. However, all these factors make the design process increasingly complex and multidisciplinary. Nowadays, the design of the individual components is usually well understood and optimised through the use of a diversity of CAD/EDA tools, design languages, and data formats. These are based on applying specific modelling/abstraction concepts, description formalisms (also called Models of Computation (MoCs)), and analysis/simulation methods. The designer has to bridge the gaps between tools and methodologies using manual conversion of models and proprietary tool couplings/integrations, which is error-prone and time-consuming. A common design methodology and platform to manage, exchange, and collaboratively develop models of different formats and at different levels of abstraction is missing. The verification of the overall system is a major problem, as it requires the availability of compatible models for each component at the right level of abstraction to achieve satisfactory results with respect to system functionality and test coverage while keeping acceptable simulation performance in terms of accuracy and speed. Thus, the big challenge is the parallel integration of these very different part design processes. Therefore, designers need a common design and simulation platform to create and refine an executable specification of the overall system (a virtual prototype) at a high level of abstraction, supporting different MoCs. This makes possible the exploration of different architecture options, the estimation of performance, the validation of re-used parts, the verification of the interfaces between heterogeneous components and of the interoperability with other systems, as well as the assessment of the impact of the future working environment and of the manufacturing technologies used to realise the system. For embedded Analogue and Mixed-Signal (AMS) systems, the C++-based SystemC with its AMS extensions, to whose recent standardisation the author contributed, is currently establishing itself as such a platform. This thesis describes the author's contributions to solving the modelling and simulation challenges mentioned above in three thematic phases. In the first phase, the prototype of a web-based platform to collect models from different domains and levels of abstraction, together with their associated structural and semantic meta-information, has been developed; it is called ModelLib. This work included the implementation of a hierarchical access control mechanism, which is able to protect the Intellectual Property (IP) constituted by the model at different levels of detail.
The use cases developed for this tool show how it can support the AMS SoC design process by fostering the reuse and collaborative development of models for tasks like architecture exploration, system validation, and the creation of increasingly elaborate models of the system. The experiences from the ModelLib development delivered insight into which aspects need to be especially addressed throughout the development of models to make them reusable: mainly flexibility, documentation, and validation. This was the starting point for the development, in the second phase, of an efficient modelling methodology for the top-down design and bottom-up verification of RF systems based on the systematic use of behavioural models. One outcome is a library of well-documented, parameterisable, and pin-accurate VHDL-AMS models of typical analogue/digital/RF components of a transceiver. The models offer the designer two sets of parameters: one based on the performance specifications and one based on the device parameters back-annotated from the transistor-level implementation. The abstraction level used for the description of the respective analogue/digital/RF component behaviour has been chosen to achieve a good trade-off between accuracy, fidelity, and simulation performance. The pin-accurate model interfaces facilitate the integration of transistor-level models for the validation of the behavioural models or the verification of a component implementation in the system context. These properties make the models suitable for different design tasks such as architecture exploration or overall system validation. This is demonstrated on a model of a binary Frequency-Shift Keying (FSK) transmitter parameterised to meet very different target specifications. This project also showed the limits, in terms of abstraction and simulation performance, of the "classical" AMS Hardware Description Languages (HDLs). Therefore, the third and last phase was dedicated to further raising the abstraction level for the description of complex and heterogeneous AMS SoCs and thus enabling their efficient simulation using different synchronised MoCs. This work uses the C++-based simulation framework SystemC with its AMS extensions. New modelling capabilities going beyond the standardised SystemC AMS extensions have been introduced to describe energy-conserving multi-domain systems in a formal and consistent way at a high level of abstraction. To this end, all constants, variables, and parameters of the system model that represent a physical quantity can now declare their dimension and associated system of units as an intrinsic part of their data type. Assignments to them need to contain, besides the value, the correct measurement unit. This allows a much more precise but still compact definition of the models' interfaces and equations. Thus, the C++ compiler can check the correct assembly of the components and the coherency of the equations by means of dimensional analysis. The implementation is based on the Boost.Units library, which employs template metaprogramming techniques. A dedicated filter for the measurement unit data types has been implemented to simplify the compiler messages and thus facilitate the localisation of unit errors. To ensure the reusability of models despite precisely defined interfaces, their interfaces and behaviours need to be parametrisable in a well-defined manner.
The enabling implementation techniques for this have been demonstrated with a library of generic block diagram component models developed for the Timed Data Flow (TDF) MoC of the SystemC AMS extensions. These techniques are also the key to integrating a new MoC based on the bond graph formalism into the SystemC AMS extensions. Bond graphs facilitate the unified description of the energy-conserving parts of heterogeneous systems with the help of a small set of modelling primitives parametrisable to the physical domain. The resulting models have a simulation performance comparable to that of an equivalent signal-flow model
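    The dimensional-analysis approach described above can be illustrated with plain Boost.Units, independently of the thesis's SystemC AMS extensions. The following minimal sketch (not taken from the thesis) shows how physical quantities carry their unit in the type, so that the compiler rejects dimensionally inconsistent expressions.

```cpp
#include <boost/units/systems/si.hpp>
#include <boost/units/io.hpp>
#include <iostream>

int main() {
    using namespace boost::units;
    using namespace boost::units::si;

    quantity<electric_potential> v = 3.3 * volts;   // the value carries its unit
    quantity<resistance>         r = 10.0 * ohms;
    quantity<current>            i = v / r;         // dimensions checked at compile time

    // quantity<current> wrong = v + r;   // would not compile: incompatible dimensions

    std::cout << i << std::endl;          // prints the value together with its SI unit
}
```

    Declaring the result as quantity<current> forces the right-hand side to reduce to the current dimension; mixing incompatible quantities, as in the commented-out line, fails at compile time, which is the kind of coherency check mentioned above.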

    Quantum Europe, Quantum Poland

    The promises of Quantum Information Technologies (QIT) are very serious, going far beyond purely technical and market considerations. The development of QIT in Europe, treated as the building of a new infrastructural level of civilisation, requires a broader view of coordination, funding, and priority-setting policy. Simple measures, adequate for new technologies that do not create a significant ecosystem, are insufficient in this case. Quantum technologies are poised to create a new information layer of the knowledge-based society. In this essay, the author subjectively addresses issues such as: what we already know and what we do not yet know, and what efforts are being made in Europe

    A Model-based Approach for Designing Cyber-Physical Production Systems

    The most recent development trend in manufacturing is called "Industry 4.0". It proposes the transition from "blind" mechatronic systems to Cyber-Physical Production Systems (CPPSs). Such systems are capable of communicating with each other, acquiring and transmitting real-time production data. Their management and control require a structured software architecture, which is typically referred to as the "Automation Pyramid". The design of both the software architecture and the components (i.e., the CPPSs) is a complex task, where the complexity is induced by the heterogeneity of the required functionalities. In such a context, the target of this thesis is to propose a model-based framework for the analysis and the design of production lines, compliant with the Industry 4.0 paradigm. In particular, this framework exploits the Systems Modeling Language (SysML) as a unified representation for the different viewpoints of a manufacturing system. At the component level, the structural and behavioral diagrams provided by SysML are used to produce a set of logical propositions about the system and the components under design. Such an approach is specifically tailored towards constructing Assume-Guarantee contracts. By exploiting reactive synthesis techniques, contracts are used to prototype portions of the components' behaviors and to verify whether implementations are consistent with the requirements. At the software level, the framework proposes a particular architecture based on the concept of "service". Such an architecture facilitates the reconfiguration of components and integrates an advanced scheduling technique that takes advantage of the production recipe SysML model. The proposed framework has been built alongside the construction of the ICE Laboratory, a research facility consisting of a full-fledged production line. The approach has been adopted to construct models of the laboratory, to virtually prototype parts of the system, and to manage the physical system through the proposed software architecture
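    The Assume-Guarantee idea mentioned above can be sketched in a few lines: a contract pairs an assumption about the environment with a guarantee on the component, and an implementation trace is acceptable only if the guarantee holds whenever the assumption does. The following C++ fragment is purely illustrative (state-wise invariants only) and does not reproduce the thesis's SysML-based flow or its reactive-synthesis machinery.

```cpp
#include <functional>
#include <vector>
#include <iostream>

// Illustrative state of a component at one step (e.g., a conveyor cell).
struct State { bool part_present; bool motor_on; };

// An assume-guarantee contract over single states:
// whenever the assumption holds, the guarantee must hold as well.
struct Contract {
    std::function<bool(const State&)> assume;
    std::function<bool(const State&)> guarantee;
    bool satisfied_by(const std::vector<State>& trace) const {
        for (const auto& s : trace)
            if (assume(s) && !guarantee(s)) return false;
        return true;
    }
};

int main() {
    // "Whenever a part is present, the motor must be on."
    Contract c{
        [](const State& s) { return s.part_present; },
        [](const State& s) { return s.motor_on; }};

    std::vector<State> trace = {{false, false}, {true, true}, {true, true}};
    std::cout << (c.satisfied_by(trace) ? "trace satisfies contract"
                                        : "trace violates contract") << '\n';
}
```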

    Fault-based Analysis of Industrial Cyber-Physical Systems

    The fourth industrial revolution, called Industry 4.0, tries to bridge the gap between traditional Electronic Design Automation (EDA) technologies and the necessity of innovating in many industrial fields, e.g., automotive, avionics, and manufacturing. This complex digitalization process involves every industrial facility and comprises the transformation of methodologies, techniques, and tools to improve the efficiency of every industrial process. The enhancement of functional safety in Industry 4.0 applications needs to exploit studies related to model-based and data-driven analyses of the deployed Industrial Cyber-Physical System (ICPS). Modeling an ICPS is possible at different abstraction levels, depending on the physical details included in the model and necessary to describe specific system behaviors. However, it is extremely complicated because an ICPS is composed of heterogeneous components belonging to different physical domains, e.g., digital, electrical, and mechanical. In addition, it is necessary to consider not only nominal behaviors but also faulty behaviors to perform more specific analyses, e.g., predictive maintenance of specific assets. Nevertheless, these faulty data are usually not present or not available directly from the industrial machinery. To overcome these limitations, constructing a virtual model of an ICPS extended with different classes of faults enables the characterization of the faulty behaviors of the system under the influence of different faults. In the literature, these topics are addressed with non-uniform approaches and in the absence of standardized and automatic methodologies for describing and simulating faults in the different domains composing an ICPS. This thesis attempts to overcome these state-of-the-art gaps by proposing novel methodologies, techniques, and tools to: model and simulate analog and multi-domain systems; abstract low-level models to higher-level behavioral models; and monitor industrial systems based on the Industrial Internet of Things (IIoT) paradigm. Specifically, the proposed contributions involve the extension of state-of-the-art fault injection practices to improve the safety of ICPSs, the development of frameworks for automating safety operations, and the definition of a monitoring framework for ICPSs. Overall, fault injection in analog and digital models is the state of the practice to ensure functional safety, as mentioned in the ISO 26262 standard specific to the automotive field. Starting from state-of-the-art defects defined for analog descriptions, new defects are proposed to enhance the IEEE P2427 draft standard for analog defect modeling and coverage. Moreover, different techniques to abstract a transistor-level model into a behavioral model are proposed to speed up the simulation of faulty circuits. In contrast to the electrical domain, fault injection techniques are not extensively used in the mechanical one. Thus, extending fault injection to the mechanical and thermal fields supports the definition and evaluation of more reliable safety mechanisms. Hence, a taxonomy of mechanical faults is derived from the electrical domain by exploiting physical analogies. Furthermore, specific tools are built for automatically instrumenting different descriptions with multi-domain faults. The entire work is proposed as a basis for supporting the creation of increasingly resilient and secure ICPSs that need to preserve functional safety in any operating context
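    To make the fault-injection idea concrete, here is a minimal, hypothetical C++ sketch (not the thesis's tooling): a behavioral component model is instrumented so that a parametric fault (a drifted gain) or a stuck-at-zero output can be activated, and the faulty response is compared against the nominal one by a simple detection check.

```cpp
#include <iostream>

// Illustrative behavioral model of an analog amplifier stage.
// Faults are injected by perturbing a parameter or forcing the output.
enum class Fault { None, GainDrift, StuckAtZero };

class Amplifier {
    double gain_;
    Fault fault_;
public:
    Amplifier(double gain, Fault fault = Fault::None) : gain_(gain), fault_(fault) {}
    double out(double in) const {
        switch (fault_) {
            case Fault::GainDrift:   return (gain_ * 0.7) * in;  // parametric deviation
            case Fault::StuckAtZero: return 0.0;                 // output stuck at 0
            default:                 return gain_ * in;          // nominal behavior
        }
    }
};

int main() {
    const double stimulus = 0.5;
    Amplifier nominal(10.0);
    Amplifier faulty(10.0, Fault::GainDrift);
    // A toy "safety mechanism": flag outputs deviating too far from nominal.
    const double deviation = nominal.out(stimulus) - faulty.out(stimulus);
    std::cout << "deviation under GainDrift: " << deviation << '\n';
    std::cout << (deviation > 1.0 ? "fault detected\n" : "fault not detected\n");
}
```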

    An Approach to Automatically Distribute and Access Knowledge within Networked Embedded Systems in Factory Automation

    This thesis presents a novel approach for automatically distributing and accessing knowledge within factory automation systems built from networked embedded systems. Developments in information, communication, and computational technologies are making it possible to distribute tasks among different control resources, which are networked and work towards a common objective while optimizing desired parameters. A fundamental step towards introducing autonomy into these systems is the ability to represent knowledge distributed within the automation network and to ensure its access by providing access mechanisms. This research work focuses on the processes for automatically distributing and accessing that knowledge. Recently, the industrial world has embraced the Service-Oriented Architecture (SOA) pattern to reduce the software integration costs of factory automation systems. This pattern defines service providers, which offer a particular functionality, and service requesters, which are entities looking to get their needs satisfied. Currently, a few technologies allow implementing an SOA solution; among those, Web technologies are gaining special attention because of their solid presence in other application fields. Providers and requesters using Web technologies to express their skills and needs are called Web Services. One of the main advantages of services is that the service requester does not need to know how the service provider accomplishes the functionality or where the execution of the service takes place. This benefit has recently been reinforced by the emergence of Cloud Computing, which allows certain processes to be executed by cloud resources. The capture of human knowledge and its representation in a machine-interpretable manner has been an active research topic for decades. A well-established mechanism for the representation of knowledge is the use of ontologies. This mechanism allows machines to access that knowledge and to use reasoning engines over it, effectively creating reasoning machines. The presence of a knowledge base also allows a clearer identification of the Web Services, achievable by adding semantic annotations to the service descriptors. The resulting services are called Semantic Web Services. With the latest advances in computational resources, systems can be built from a large number of constrained yet easily connected devices, forming a network of computational nodes dedicated to executing control and communication tasks. These tasks are commanded by high-level systems, such as Manufacturing Execution Systems (MES) and Enterprise Resource Planning (ERP) modules. The aforementioned technologies allow a vertical approach for communicating commands from MES and ERP modules directly to the control nodes. This scenario makes it possible to break down monolithic MES systems into small distributed functionalities; if these functionalities use Web standards for interaction and a knowledge base as the main source of information, we arrive at the concept of Open Knowledge-Driven MES (OKD-MES). The automatic distribution of the knowledge base in an OKD-MES and the accomplishment of the reasoning process in a distributed manner are the main objectives of this research. Thus, this work describes the decentralization and management of the knowledge descriptions currently handled by the Representation Layer (RPL) of the OKD-MES framework. This is achieved through the encapsulation of ontology modules, which may be integrated by a distributed reasoning process on incoming requests. Furthermore, this dissertation presents the concept, principles, and architecture for implementing Private Local Automation Clouds (PLACs) built from CPSs. The thesis is an article thesis, composed of 9 original refereed articles and supported by 7 other articles presented by the author
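    A hypothetical C++ sketch of the service matchmaking described above: providers advertise their capabilities as concepts from a shared knowledge base, and a requester's need is matched against them. The flat string concepts and the registry are illustrative simplifications; the OKD-MES work relies on ontologies and Semantic Web Service descriptions rather than plain strings.

```cpp
#include <set>
#include <string>
#include <vector>
#include <iostream>

// Illustrative only: a "knowledge base" entry mapping a service
// to the ontology concepts (capabilities) it provides.
using Concept = std::string;

struct ServiceDescription {
    std::string name;
    std::set<Concept> provides;   // semantic annotations of the provider
};

// A requester looks for any service whose annotations cover its needs.
std::vector<std::string> match(const std::vector<ServiceDescription>& registry,
                               const std::set<Concept>& needs) {
    std::vector<std::string> result;
    for (const auto& s : registry) {
        bool covers = true;
        for (const auto& n : needs)
            if (!s.provides.count(n)) { covers = false; break; }
        if (covers) result.push_back(s.name);
    }
    return result;
}

int main() {
    std::vector<ServiceDescription> registry = {
        {"drilling-cell", {"Drilling", "PartHandling"}},
        {"conveyor-ctrl", {"Transport"}}};
    for (const auto& name : match(registry, {"Drilling"}))
        std::cout << "matching service: " << name << '\n';
}
```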

    Context Aware Computing for The Internet of Things: A Survey

    As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increase of the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT. Comment: IEEE Communications Surveys & Tutorials Journal, 201

    A Holistic Approach to Functional Safety for Networked Cyber-Physical Systems

    Functional safety is a significant concern in today's networked cyber-physical systems, such as connected machines, autonomous vehicles, and intelligent environments. Simulation is a well-known methodology for the assessment of functional safety. Simulation models of networked cyber-physical systems are very heterogeneous, relying on the digital hardware, analog hardware, and network domains. Current functional safety assessment is mainly focused on digital hardware failures, while little attention is devoted to analog hardware and none at all to the interconnecting network. In this work we argue that, in networked cyber-physical systems, dependability must be verified not only for the nodes in isolation but also by taking into account their interaction through the communication channel. For this reason, this work proposes a holistic methodology for simulation-based safety assessment in which safety mechanisms are tested in a simulation environment reproducing the high-level behavior of digital hardware, analog hardware, and network communication. The methodology relies on three main automatic processes: 1) abstraction of analog models to transform them into system-level descriptions, 2) synthesis of network infrastructures to combine multiple cyber-physical systems, and 3) multi-domain fault injection in the digital, analog, and network domains. Ultimately, the flow produces a homogeneous optimized description written in C++ for fast and reliable simulation, which can serve many applications. The focus of this thesis is performing extensive fault simulation and evaluating different functional safety metrics, e.g., the fault and diagnostic coverage of all the safety mechanisms
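    As an illustration of the safety metrics mentioned above, the following hedged C++ sketch computes fault coverage and diagnostic coverage from the outcome of a fault-simulation campaign; the fault classification and the numbers are invented, and the calculation is deliberately simplified with respect to standard definitions.

```cpp
#include <vector>
#include <iostream>

// Illustrative outcome of simulating one injected fault.
struct FaultResult {
    bool affects_safety;   // the fault would violate the safety goal
    bool detected;         // a safety mechanism flagged it during simulation
};

int main() {
    // Invented results of a small fault-simulation campaign.
    std::vector<FaultResult> results = {
        {true, true}, {true, false}, {false, true}, {true, true}, {false, false}};

    int total = results.size(), detected = 0, dangerous = 0, dangerous_detected = 0;
    for (const auto& r : results) {
        if (r.detected) ++detected;
        if (r.affects_safety) {
            ++dangerous;
            if (r.detected) ++dangerous_detected;
        }
    }
    // Fault coverage: detected faults over all injected faults.
    std::cout << "fault coverage: " << 100.0 * detected / total << " %\n";
    // Diagnostic coverage: dangerous faults caught by the safety mechanisms.
    std::cout << "diagnostic coverage: "
              << 100.0 * dangerous_detected / dangerous << " %\n";
}
```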