77 research outputs found

    A Multicomponent Distributed Framework for Smart Production System Modeling and Simulation

    In order to control manufacturing systems, managers need risk and performance evaluation methods and simulation tools. However, these simulation techniques must evolve into multi-performance, multi-actor, and multi-simulation tools, which requires interoperability between distributed components. This paper presents an integrated platform that brings interoperability to several simulation components. This work extends the process modeling tool Papyrus so that it can communicate with external components through both distributed simulation and co-simulation standards. The distributed modeling and simulation framework (DMSF) platform takes its environment into consideration in order to evaluate the sustainability of the system while integrating external heterogeneous components; for instance, a DMSF connection with external IoT devices has been implemented. Moreover, the orchestration of different smart manufacturing components and services is achieved through configurable business models. As a result, an automotive industry case study has been successfully tested, demonstrating the sustainability of smart supply chains and manufacturing factories and allowing better connectivity with their real environments.

    Virtual Communication Stack: Towards Building Integrated Simulator of Mobile Ad Hoc Network-based Infrastructure for Disaster Response Scenarios

    Responding to disastrous events is a challenging problem because communication infrastructures may be damaged; after a natural disaster, for instance, infrastructures might be entirely destroyed. Different network paradigms have been proposed in the literature to deploy ad hoc networks and deal with the lack of communication. However, all these solutions focus only on the performance of the network itself, without taking into account the specificities and heterogeneity of the components that use it. This stems from the difficulty of integrating models with different levels of abstraction. Consequently, verification and validation of ad hoc protocols cannot guarantee that the different systems will work as expected in operational conditions. The DEVS theory, however, provides mechanisms that allow the integration of models of different natures. This paper proposes an integrated simulation architecture based on DEVS which improves the accuracy of ad hoc infrastructure simulators in the case of disaster response scenarios. (Preprint, unpublished.)
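The DEVS formalism referenced in this abstract defines an atomic model by its time-advance, output, and internal/external transition functions. The sketch below is a minimal, illustrative Python rendering of that structure; the `QueueNode` model and its fixed service delay are assumptions made for the example, not the paper's implementation:

```python
import math

class QueueNode:
    """Minimal DEVS-style atomic model: a node that buffers packets
    and emits one after a fixed service delay (illustrative sketch)."""

    def __init__(self, delay=0.5):
        self.delay = delay
        self.queue = []          # buffered packets (model state)

    def ta(self):
        """Time advance: time until the next internal event."""
        return self.delay if self.queue else math.inf

    def delta_ext(self, elapsed, packet):
        """External transition: a packet arrives from the environment."""
        self.queue.append(packet)

    def output(self):
        """Output function (lambda): packet about to be emitted."""
        return self.queue[0]

    def delta_int(self):
        """Internal transition: discard the packet just emitted."""
        self.queue.pop(0)

# Hand-rolled simulation loop for a single atomic model
node = QueueNode(delay=0.5)
node.delta_ext(0.0, "msg-A")     # two packets arrive at t = 0
node.delta_ext(0.0, "msg-B")
t, emitted = 0.0, []
while node.ta() != math.inf:
    t += node.ta()
    emitted.append((t, node.output()))
    node.delta_int()
print(emitted)   # [(0.5, 'msg-A'), (1.0, 'msg-B')]
```

A full DEVS simulator would also couple several such models through ports and a coordinator; the loop here only drives one model to show the four functions interacting.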

    A SystemC/Simulink Co-simulation Environment of the JPEG Algorithm

    In the past decades, both the functionality of embedded systems and the time-to-market pressure have been continuously increasing. Simulating an entire system, including both hardware and software, from the early design stages is one of the effective approaches to improving design productivity. A large number of research efforts on hardware/software (HW/SW) co-simulation have been made so far. Real-time operating systems have become one of the important components of embedded systems; to validate the function of the entire system, they must be simulated together with the application software and hardware. Indeed, traditional methods of verification have proven to be insufficient for complex digital systems: register-transfer-level test benches have become too complex to manage and too slow to execute. New methods and verification techniques have emerged over the past few years. High-level test benches, assertion-based verification, formal methods, and hardware verification languages are just a few examples of the intense research activities driving the verification domain.

    Co-simulation of a Low-Voltage Utility Grid Controlled over IEC 61850 protocol

    This paper presents a co-simulation model using MATLAB® toolboxes to illustrate the interaction between the communication system and the energy grid, coherent with the concept of a smart grid that employs the IEC 61850 communication standard. The MMS (Manufacturing Message Specification) protocol supported by IEC 61850, based on TCP/IP, is used for the vertical communication between the Supervisory Control and Data Acquisition (SCADA) system and the Intelligent Electronic Devices (IEDs) embedding the local control of different parts of the smart grid. In this paper, an IED supporting the power control of a photovoltaic (PV) plant connected to a low-voltage (LV) utility grid is considered. The communication system, consisting of the transport layer and a router placed on the network layer, is modeled as an event-driven system using the SimEvents® toolbox, and the energy grid is modeled as a time-driven system using the SimPowerSystems® toolbox. Co-simulation results are obtained by combining different communication scenarios and time-varying irradiance scenarios for the PV plant when it is required to provide a certain power in response to a power reference received from SCADA over the communication network. The analysis aims at illustrating the impact that stochastic behavior and delays due to network communication have on the global system behavior.
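The event-driven/time-driven coupling described in this abstract can be sketched in miniature: a time-stepped first-order plant receives power references that arrive with a stochastic network delay. All numbers, the delay model, and the plant model below are illustrative assumptions for the sketch, not the paper's MATLAB/SimEvents/SimPowerSystems models:

```python
import random

random.seed(1)

# --- Event-driven side (toy model of the communication network) ---
def network_delay():
    """Random transport delay for a SCADA setpoint message (assumed)."""
    return 0.2 + random.random() * 0.3   # seconds

# SCADA sends power references at t=0 and t=5; each arrives delayed
sent = [(0.0, 100.0), (5.0, 60.0)]                  # (send time, kW reference)
arrivals = sorted((t + network_delay(), p) for t, p in sent)

# --- Time-driven side (toy first-order model of the PV plant) ---
dt, tau = 0.1, 1.0        # simulation step and plant time constant (s)
p, ref = 0.0, 0.0         # plant output and active reference (kW)
trace = []
t = 0.0
while t < 10.0:
    # deliver any reference whose delayed arrival time has passed
    while arrivals and arrivals[0][0] <= t:
        ref = arrivals.pop(0)[1]
    p += dt / tau * (ref - p)             # first-order response toward ref
    trace.append((round(t, 1), round(p, 2)))
    t += dt

print(trace[-1])   # plant output has settled close to the 60 kW reference
```

The point the sketch makes is the same one the abstract makes: the plant's trajectory depends on when the delayed setpoints actually arrive, so communication delay shapes the global system behavior.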

    Survey of scientific programming techniques for the management of data-intensive engineering environments

    The present paper introduces and reviews existing technology and research in the field of scientific programming methods and techniques in data-intensive engineering environments. More specifically, this survey collects the relevant approaches that have faced the challenge of delivering more advanced and intelligent methods by taking advantage of existing large datasets. Although existing tools and techniques have demonstrated their ability to manage complex engineering processes for the development and operation of safety-critical systems, there is an emerging need to know how existing computational science methods will behave when managing large amounts of data. That is why the authors review both the existing open issues in the context of engineering, with special focus on scientific programming techniques, and hybrid approaches. A total of 1193 journal papers were found to be representative of these areas; 935 were screened, and 122 were fully reviewed. Afterwards, a comprehensive mapping between techniques and engineering and non-engineering domains was conducted to classify and perform a meta-analysis of the current state of the art. As the main result of this work, a set of 10 challenges for future data-intensive engineering environments has been outlined. The current work has been partially supported by the Research Agreement between the RTVE (the Spanish Radio and Television Corporation) and the UC3M to boost research in the fields of Big Data, Linked Data, Complex Network Analysis, and Natural Language. It has also received the support of the Tecnologico Nacional de Mexico (TECNM), the National Council of Science and Technology (CONACYT), and the Public Education Secretary (SEP) through PRODEP.

    Modelling and Co-simulation of Multi-Energy Systems: Distributed Software Methods and Platforms

    The abstract is in the attachment.

    New architecture for heterogeneous real-time simulation

    This thesis investigates a new architecture for modeling and simulating complex distributed real-time systems. Adequately modeling a large distributed real-time system may involve, due to its complexity, several different theoretical vehicles such as queuing theory, finite state machines, and others. Currently, no software tools offer the combination of such heterogeneous features in a single comprehensive simulation environment. This study integrates three tools: SES/workbench, an offline simulator using queuing theory as its modeling discipline; ObjecTime, a real-time simulator based on finite state machines; and the VxWorks real-time kernel, used for free modeling in the VMEbus environment. We developed an architecture that connects all three simulators into an integrated system in which parameters and simulation results can be freely exchanged between tools. In addition, the system is enhanced by a web-based interface, which can be used to provide input to and obtain output from the entire system and helps distribute the simulation over the Internet. The new architecture was extensively tested and applied to a large-scale distributed embedded simulation in a military environment.

    Towards a method to quantitatively measure toolchain interoperability in the engineering lifecycle: A case study of digital hardware design

    The engineering lifecycle of cyber-physical systems is becoming more challenging than ever. Multiple engineering disciplines must be orchestrated to produce both a virtual and physical version of the system. Each engineering discipline makes use of their own methods and tools generating different types of work products that must be consistently linked together and reused throughout the lifecycle. Requirements, logical/descriptive and physical/analytical models, 3D designs, test case descriptions, product lines, ontologies, evidence argumentations, and many other work products are continuously being produced and integrated to implement the technical engineering and technical management processes established in standards such as the ISO/IEC/IEEE 15288:2015 "Systems and software engineering-System life cycle processes". Toolchains are then created as a set of collaborative tools to provide an executable version of the required technical processes. In this engineering environment, there is a need for technical interoperability enabling tools to easily exchange data and invoke operations among them under different protocols, formats, and schemas. However, this automation of tasks and lifecycle processes does not come free of charge. Although enterprise integration patterns, shared and standardized data schemas and business process management tools are being used to implement toolchains, the reality shows that in many cases, the integration of tools within a toolchain is implemented through point-to-point connectors or applying some architectural style such as a communication bus to ease data exchange and to invoke operations. In this context, the ability to measure the current and expected degree of interoperability becomes relevant: 1) to understand the implications of defining a toolchain (need of different protocols, formats, schemas and tool interconnections) and 2) to measure the effort to implement the desired toolchain. 
To improve the management of the engineering lifecycle, a method is defined: 1) to measure the degree of interoperability within a technical engineering process implemented with a toolchain and 2) to estimate the effort to transition from an existing toolchain to another. A case study in the field of digital hardware design comprising 6 different technical engineering processes and 7 domain engineering tools is conducted to demonstrate and validate the proposed method. The work leading to these results has received funding from the H2020-ECSEL Joint Undertaking (JU) under grant agreement No 826452, "Arrowhead Tools for Engineering of Digitalisation Solutions", and from specific national programs and/or funding authorities. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
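As one hedged illustration of what a "degree of interoperability" for a toolchain could look like (a toy metric invented for this sketch, not the method proposed in the paper), the following code scores a toolchain by the fraction of tool pairs that can exchange data through declared connectors, directly or transitively:

```python
from itertools import combinations

def interoperability_degree(tools, connectors):
    """Toy metric: fraction of tool pairs that can exchange data,
    directly or transitively, through declared connectors."""
    # build an undirected adjacency map from the declared connectors
    adj = {t: set() for t in tools}
    for a, b in connectors:
        adj[a].add(b)
        adj[b].add(a)

    def reachable(src):
        # depth-first search over the connector graph
        seen, stack = {src}, [src]
        while stack:
            for n in adj[stack.pop()]:
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        return seen

    pairs = list(combinations(tools, 2))
    linked = sum(1 for a, b in pairs if b in reachable(a))
    return linked / len(pairs)

# Hypothetical 4-tool toolchain with two point-to-point connectors
tools = ["requirements", "modeling", "simulation", "verification"]
p2p = [("requirements", "modeling"), ("simulation", "verification")]
print(interoperability_degree(tools, p2p))   # 2 of 6 pairs linked -> 1/3
```

Under a metric of this shape, replacing point-to-point connectors with a shared bus (every tool connected to one hub) drives the score toward 1, which mirrors the abstract's point that architectural style changes the interoperability and the transition effort of a toolchain.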

    WG1N5315 - Response to Call for AIC evaluation methodologies and compression technologies for medical images: LAR Codec

    This document presents the LAR image codec as the IETR response to the Call for AIC evaluation methodologies and compression technologies for medical images. The philosophy behind our coder is not to outperform JPEG2000 in compression; our goal is to propose an open-source, royalty-free alternative image coder with integrated services. While keeping compression performance in the same range as JPEG2000 but with lower complexity, our coder also provides services such as scalability, cryptography, data hiding, lossy-to-lossless compression, region of interest, and free region representation and coding.