20,395 research outputs found

    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. Among them are optimized approaches for state restore, as well as techniques for load balancing or for (dynamically) controlling the degree of speculation, the latter being specifically targeted at reducing the incidence of causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which may prevent a prompt reaction to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, raise the incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, which includes timer-interrupt based support for bringing control back to platform mode for possible CPU reassignment at very fine-grain periods. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open-source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models shows how our proposal effectively reduces the incidence of causality errors, as compared to traditional Time Warp, especially when running at higher degrees of parallelism.
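
    The core idea, preempting an in-flight event when a lower-timestamp one is injected, can be sketched in a few lines. The following Python fragment is only an illustration of that mechanism: the names (Event, Worker, PREEMPT_CHECK_PERIOD) are invented, the periodic check stands in for the timer interrupt the actual module implements, and a real optimistic engine would checkpoint or roll back the partially processed event rather than simply restarting it.

    import heapq
    import time

    PREEMPT_CHECK_PERIOD = 0.0005  # seconds; stands in for the timer interrupt

    class Event:
        def __init__(self, timestamp, work_steps):
            self.timestamp = timestamp
            self.work_steps = work_steps  # granularity of the event's computation
        def __lt__(self, other):
            return self.timestamp < other.timestamp

    class Worker:
        def __init__(self):
            self.queue = []  # min-heap ordered by event timestamp

        def inject(self, event):
            heapq.heappush(self.queue, event)

        def run_one(self):
            """Process the lowest-timestamp event, yielding periodically so a
            newly injected lower-timestamp event can preempt it."""
            current = heapq.heappop(self.queue)
            last_check = time.monotonic()
            for _ in range(current.work_steps):
                # ... one slice of the event's computation would go here ...
                now = time.monotonic()
                if now - last_check >= PREEMPT_CHECK_PERIOD:
                    last_check = now
                    if self.queue and self.queue[0].timestamp < current.timestamp:
                        # Higher-priority work arrived: requeue the current event
                        # (a real system would checkpoint or roll back instead).
                        heapq.heappush(self.queue, current)
                        return

    In the system described above, this check is not polled by the application: a fine-grain timer interrupt returns control to platform mode, which then decides whether to reassign the CPU.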

    Kompics: a message-passing component model for building distributed systems

    The Kompics component model and programming framework was designed to simplify the development of increasingly complex distributed systems. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic debugging and reproducible performance evaluation of unmodified Kompics distributed systems. We describe the component model and show how to program and compose event-based distributed systems. We present the architectural patterns and abstractions that Kompics facilitates, and we highlight a case study of a complex distributed middleware that we have built with Kompics. We show how our approach enables systematic development and evaluation of large-scale and dynamic distributed systems.
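
    Kompics itself is implemented in Java; the Python sketch below only mirrors the flavor of its model, components that react to typed events delivered by a single-threaded scheduler, which is also what makes simulated executions deterministic. All class names here are invented, not the Kompics API.

    from collections import deque

    class Ping: pass
    class Pong: pass

    class Bus:
        """Toy single-threaded event bus: FIFO, one event at a time."""
        def __init__(self):
            self.components, self.queue = [], deque()
        def publish(self, event, sender):
            self.queue.append((event, sender))
        def run(self):
            while self.queue:
                event, sender = self.queue.popleft()
                for comp in self.components:
                    if comp is not sender:
                        handler = comp.handlers.get(type(event))
                        if handler:
                            handler(event)

    class Component:
        def __init__(self, bus):
            self.bus, self.handlers = bus, {}
        def subscribe(self, event_type, handler):
            self.handlers[event_type] = handler
        def trigger(self, event):
            self.bus.publish(event, sender=self)

    class Pinger(Component):
        def __init__(self, bus):
            super().__init__(bus)
            self.count = 0
            self.subscribe(Pong, self.on_pong)
        def on_pong(self, event):
            self.count += 1
            if self.count < 3:        # stop after three round trips
                self.trigger(Ping())

    class Ponger(Component):
        def __init__(self, bus):
            super().__init__(bus)
            self.subscribe(Ping, self.on_ping)
        def on_ping(self, event):
            self.trigger(Pong())

    bus = Bus()
    bus.components = [Pinger(bus), Ponger(bus)]
    bus.components[0].trigger(Ping())
    bus.run()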

    Simulation of Mixed Critical In-vehicular Networks

    Future automotive applications, ranging from advanced driver assistance to autonomous driving, will largely increase the demands on in-vehicular networks. Data flows with high bandwidth or low latency requirements, and in particular the many additional communication relations, will introduce a new level of complexity to the in-car communication system. It is expected that future communication backbones, which interconnect sensors and actuators with ECUs in cars, will be built on Ethernet technologies. However, signalling from different application domains demands network services with tailored attributes, including the real-time transmission protocols defined in the TSN Ethernet extensions. These QoS constraints will increase network complexity even further. Event-based simulation is a key technology for mastering the challenges of in-car network design. This chapter introduces the domain-specific aspects and simulation models for in-vehicular networks and presents an overview of the car-centric network design process. Starting from a domain-specific description language, we cover the corresponding simulation models with their workflows and apply our approach to a case study of an in-car network for a premium car.
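
    The kind of mechanism such simulations capture can be pictured with a toy discrete-event loop for one Ethernet egress port using strict-priority transmission selection, one of the TSN scheduling options. The link speed, frame sizes, and traffic below are invented illustration values, not taken from the chapter's case study.

    import heapq

    LINK_BPS = 100e6  # 100 Mbit/s link

    def tx_time(frame_bytes):
        return frame_bytes * 8 / LINK_BPS

    # (arrival_time_s, priority, size_bytes); lower priority value = more urgent
    pending = sorted([(0.0, 7, 1500), (0.0, 0, 128), (0.00005, 0, 128)])
    queue, t, latencies = [], 0.0, []

    while pending or queue:
        while pending and pending[0][0] <= t:   # admit frames arrived by t
            arr, prio, size = pending.pop(0)
            heapq.heappush(queue, (prio, arr, size))
        if not queue:                           # idle until the next arrival
            t = pending[0][0]
            continue
        prio, arr, size = heapq.heappop(queue)  # highest priority first
        t = max(t, arr) + tx_time(size)         # serialize the frame
        latencies.append((prio, t - arr))

    print(latencies)  # per-frame (priority, latency) pairs

    Even this toy run exposes a real design issue: without frame preemption, a high-priority frame can still be blocked behind a large low-priority frame already in transmission.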

    Statistical Model Checking of e-Motions Domain-Specific Modeling Languages

    Domain experts may use novel tools that allow them to design and model their systems in a notation very close to the domain problem. However, the use of tools for the statistical analysis of stochastic systems requires software engineers to carefully specify such systems in low-level, tool-specific languages. In this work we bring both scenarios together: domain-specific modeling and statistical analysis. Specifically, we have extended the e-Motions system, a framework to develop real-time domain-specific languages where behavior is specified in a natural way by in-place transformation rules, to support the statistical analysis of systems defined with it. We discuss how restricted e-Motions systems are used to produce corresponding Maude specifications, via a model transformation from e-Motions to Maude, which comply with the restrictions of the VeStA tool and can therefore be used to perform statistical analysis on the stochastic systems thus generated. We illustrate our approach with a very simple distributed messaging system.
    Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. Research Project TIN2014-52034-R
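
    The statistical-analysis side can be understood as Monte Carlo estimation over runs of the generated specification: sample many stochastic executions, check a property on each, and stop once enough samples guarantee the desired precision. The Python sketch below shows this principle on an invented lossy-messaging model with a Hoeffding-style sample bound; it does not reproduce the actual e-Motions-to-Maude transformation or the VeStA query language.

    import math
    import random

    def run_model(seed, loss_prob=0.1, retries=3):
        """One stochastic run: is the message delivered within `retries` tries?"""
        rng = random.Random(seed)
        return any(rng.random() > loss_prob for _ in range(retries))

    def estimate(eps=0.01, delta=0.05):
        # Hoeffding bound: n samples put the estimate within eps of the true
        # probability with confidence 1 - delta.
        n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        hits = sum(run_model(seed) for seed in range(n))
        return hits / n, n

    p_hat, n = estimate()
    print(f"P(delivered) ~= {p_hat:.3f} from {n} runs")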

    Real-time and fault tolerance in distributed control software

    Closed-loop control systems typically contain a multitude of spatially distributed sensors and actuators operated simultaneously, so such systems are parallel and distributed in their essence. Mapping this parallelism onto a given distributed hardware architecture, however, brings in additional requirements: safe multithreading, optimal process allocation, and real-time scheduling of bus and network resources. Nowadays, fault-tolerance methods and fast, even online, reconfiguration are becoming increasingly important. All these often conflicting requirements make the design and implementation of real-time distributed control systems an extremely difficult task, one that requires substantial knowledge in several areas of control and computer science. Although many design methods have been proposed so far, none of them has succeeded in covering all important aspects of the problem at hand [1]. The continuous increase of production in the embedded market makes a simple and natural design methodology for real-time systems needed more than ever.

    Methodology for object-oriented real-time systems analysis and design: Software engineering

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation but the original specification, and perhaps the high-level design, is non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects, where the operations or methods of the objects correspond to processes in the data flow diagrams, and then designing in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the object-oriented real-time systems analysis logical models through the physical architectural models to the high-level design stages. The methodology is appropriate to the overall specification, including hardware and software modules. In the software modules, the systems-analysis objects are transformed into software objects.
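
    The 'real-time systems-analysis object', an entity whose time behavior is a set of states plus state-transition rules, maps naturally onto a small state-machine abstraction. The following Python sketch is only one reading of that idea; the class, the thermostat example, and all names are invented rather than taken from the methodology.

    class AnalysisObject:
        def __init__(self, name, initial_state, transitions):
            self.name = name
            self.state = initial_state
            self.transitions = transitions  # {(state, event): next_state}

        def on_event(self, event):
            """Apply a state-transition rule if one matches, else stay put."""
            self.state = self.transitions.get((self.state, event), self.state)
            return self.state

    thermostat = AnalysisObject(
        "thermostat", "idle",
        {("idle", "too_cold"): "heating",
         ("heating", "at_setpoint"): "idle"})
    assert thermostat.on_event("too_cold") == "heating"
    assert thermostat.on_event("at_setpoint") == "idle"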

    Enhancing the Supply Chain Performance by Integrating Simulated and Physical Agents into Organizational Information Systems

    As the business environment gets more complicated, organizations must be able to respond to business changes and adjust themselves quickly to maintain their competitive advantage. This study proposes an integrated agent system, called SPA, which coordinates simulated and physical agents to provide an efficient way for organizations to meet the challenges of managing supply chains. In the integrated framework, physical agents coordinate with the physical agents of other organizations to form workable business processes and detect variations occurring in the outside world, whereas simulated agents model and analyze what-if scenarios to support the physical agents in making decisions. This study uses a supply chain that produces digital still cameras as an example to demonstrate how the SPA works. In this example, the individual information systems of the involved companies are equipped with the SPA, and the entire supply chain is modeled as a hierarchical object-oriented Petri net. The SPA applies a modified AGNES data-clustering technique and the moving-average approach to help each firm generalize customers' past demand patterns and forecast their future demands. The amplitude of forecasting errors caused by bullwhip effects is used as a metric to evaluate the degree to which the SPA affects supply chain performance. The experimental results show that the SPA benefits the entire supply chain by reducing bullwhip effects and forecasting errors in a dynamic environment.
    Keywords: Supply Chain Performance Enhancement; Bullwhip Effects; Simulated Agents; Physical Agents; Dynamic Customer Demand Pattern Discovery
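
    Two of the techniques named above lend themselves to a compact illustration: the moving-average forecast and a forecast-error amplitude usable as a bullwhip-style metric (AGNES, the third ingredient, is agglomerative hierarchical clustering and is omitted here). The window size and demand series below are invented, and the functions sketch the general techniques, not the SPA's actual implementation.

    def moving_average_forecast(demand, window=3):
        """Forecast each period as the mean of the previous `window` demands."""
        return [sum(demand[t - window:t]) / window
                for t in range(window, len(demand))]

    def error_amplitude(actual, forecast):
        """Peak absolute forecast error; larger swings indicate amplified,
        bullwhip-like variability propagating upstream."""
        return max(abs(a - f) for a, f in zip(actual, forecast))

    demand = [100, 104, 98, 120, 95, 130, 90]
    forecast = moving_average_forecast(demand)    # predicts periods 3..6
    print(error_amplitude(demand[3:], forecast))  # peak error over the horizon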