
    Bridging MoCs in SystemC specifications of heterogeneous systems

    In order to obtain an efficient specification and simulation of a heterogeneous system, the choice of an appropriate model of computation (MoC) for each system part is essential. The choice depends on the design domain (e.g., analogue or digital) and the abstraction level best suited to specifying and analysing the aspects considered important in each system part. In practice, the MoC choice is made implicitly by selecting a suitable language and simulation tool for each system part. This approach requires connecting different languages and simulation tools when the specification and simulation of the system are considered as a whole. SystemC can support a more unified specification methodology and simulation environment for heterogeneous systems, since it is extensible by libraries that support additional MoCs. A major requirement of these libraries is to provide means to connect system parts that are specified using different MoCs. However, these connection means usually do not provide enough flexibility to select and tune the right conversion semantics in a mixed-level specification, simulation, and refinement process. This article presents converter channels, a flexible approach for MoC connection within a SystemC environment consisting of three extensions, namely SystemC-AMS, HetSC, and OSSS+R. This work is supported by the FP6-2005-IST-5 European project.
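    The core idea of a converter channel, adapting data exchanged between two MoCs, can be illustrated with a minimal sketch. The example below is plain Python, not the SystemC API: a hypothetical channel that bridges a discrete-event (DE) writer to an untimed, regularly sampled dataflow reader via sample-and-hold conversion. All names are illustrative.

    ```python
    class ConverterChannel:
        """Illustrative converter channel: bridges a discrete-event (DE)
        writer and a sampled-dataflow reader by sample-and-hold.
        This is a conceptual sketch, not the SystemC-AMS/HetSC API."""

        def __init__(self, initial=0.0):
            self._initial = initial   # value before any DE write occurs
            self._events = []         # (timestamp, value) pairs, in order

        def de_write(self, timestamp, value):
            # DE side: sparse, timestamped updates
            self._events.append((timestamp, value))

        def sdf_read(self, timestamps):
            # Dataflow side: dense sample stream; hold the last DE value
            out, i, held = [], 0, self._initial
            for t in timestamps:
                while i < len(self._events) and self._events[i][0] <= t:
                    held = self._events[i][1]
                    i += 1
                out.append(held)
            return out
    ```

    The conversion semantics (here, sample-and-hold) is exactly the part the abstract argues should be selectable and tunable rather than fixed by the connection mechanism.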

    Novel Simulink blockset for image processing codesign

    In this paper we present a novel Simulink blockset which is conceived especially for image processing hybrid systems. The blockset is composed of several libraries with different abstraction levels (high-level, hardware and software), allowing the co-design and simulation of an entire hardware/software system. This background setting is complemented by a set of tools that automates the transition through the different descriptions of the system and helps to make the critical decisions in the partition phase. In addition, a co-design flow is proposed to take advantage of the blockset features. All these items shape a "co-design environment" that allows an easy and quick exploration of the design space to obtain optimised solutions which satisfy the system restrictions.

    Consistent and efficient output-streams management in optimistic simulation platforms

    Optimistic synchronization is considered an effective means for supporting Parallel Discrete Event Simulations. It relies on a speculative approach, where concurrent processes execute simulation events regardless of their safety, and consistency is ensured via proper rollback mechanisms upon the a-posteriori detection of causal inconsistencies along the events' execution path. Interactions with the outside world (e.g., generation of output streams) are a well-known problem for rollback-based systems, since the outside world may have no notion of rollback. In this context, approaches for allowing the simulation modeler to generate consistent output rely either on ad-hoc APIs (which must be provided by the underlying simulation kernel) or on temporary suspension of processing activities in order to wait for the final outcome (commit/rollback) associated with speculatively-produced output. In this paper we present design indications and a reference implementation for an output-stream management subsystem which allows the simulation-model writer to rely on standard output-generation libraries (e.g., stdio) within code blocks associated with event processing. Further, the subsystem ensures that the produced output is consistent, namely associated with events that are eventually committed, and system-wide ordered along the simulation time axis. These features jointly provide the illusion of a classical (and simple to deal with) sequential programming model, which spares the developer from having to be aware that the simulation program runs concurrently and speculatively. We also show, via an experimental study, how the design/development optimizations we present lead to limited overhead, such that the simulation run is carried out with near-zero or reduced output-management cost. At the same time, the delay for materializing the output stream (making it available for any type of audit activity) is shown to be fairly limited and constant, especially for good mixtures of I/O-bound vs CPU-bound behaviors at the application level. Further, the whole output-stream management subsystem has been designed to provide scalability for I/O management on clusters. © 2013 ACM
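    The buffer-until-commit scheme the abstract describes can be sketched in a few lines. This is a hypothetical Python illustration of the general pattern (buffer per event, discard on rollback, flush in simulation-time order up to the global virtual time), not the paper's actual implementation or API.

    ```python
    import sys

    class OutputManager:
        """Sketch of consistent output handling for optimistic simulation:
        output produced while processing an event is buffered and only
        released, in simulation-time order, once the event is committed."""

        def __init__(self, sink=sys.stdout):
            self._pending = {}   # event timestamp -> buffered output lines
            self._sink = sink

        def write(self, event_ts, line):
            # called from event-processing code instead of printing directly
            self._pending.setdefault(event_ts, []).append(line)

        def rollback(self, event_ts):
            # speculative output of a rolled-back event is simply discarded
            self._pending.pop(event_ts, None)

        def commit_up_to(self, gvt):
            # flush output of all events at or before GVT, ordered by sim time
            for ts in sorted(t for t in self._pending if t <= gvt):
                for line in self._pending.pop(ts):
                    self._sink.write(line)
    ```

    The simulation kernel would invoke `rollback` and `commit_up_to` as part of its normal rollback/GVT machinery, which is what makes the scheme transparent to the model writer.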

    Nmag micromagnetic simulation tool - software engineering lessons learned

    We review design and development decisions and their impact for the open source code Nmag from a software-engineering-in-computational-science point of view. We summarise lessons learned and recommendations for future computational science projects. Key lessons include that encapsulating the simulation functionality in a library of a general-purpose language, here Python, provides great flexibility in using the software. The choice of Python for the top-level user interface was very well received by users from the science and engineering community. The from-source installation, in which required external libraries and dependencies are compiled from a tarball, was remarkably robust. In places, the code is a lot more ambitious than necessary, which introduces unnecessary complexity and reduces maintainability. Tests distributed with the package are useful, although more unit tests and continuous integration would have been desirable. The detailed documentation, together with a tutorial for the usage of the system, was perceived as one of its main strengths by the community. Comment: 7 pages, 5 figures, Software Engineering for Science, ICSE201

    Virtual Platform for Embedded System Power Estimation

    In this paper, we propose a virtual platform for power estimation of processor-based embedded systems. Our platform combines Functional Level Power Analysis (FLPA) for power modeling with fast system prototyping for transactional simulation. In this proposal, we aim at estimating power automatically with the help of power models and SystemC IP libraries. These libraries are enriched with different power models and the various hardware components required by embedded platforms. This allows our proposed virtual platform to port various hardware systems and applications in the same environment, in order to satisfy the requirements of reliable and efficient design space exploration. Our experiments performed on this virtual embedded platform show that the obtained power estimates deviate by less than 3% on average, for all processors, from real board measurements.
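    FLPA characterizes power as an analytical function of a few functional and architectural parameters, calibrated against board measurements. The sketch below shows the general shape of such a model; the parameters, coefficients, and functional form are invented for illustration and are not the paper's actual model.

    ```python
    def flpa_power(f_mhz, ipc, a=0.12, b=0.6, c=45.0):
        """Hypothetical FLPA-style power model: power (mW) as an analytical
        function of clock frequency and achieved IPC. The coefficients
        a, b, c would be calibrated against real board measurements."""
        return a * f_mhz * ipc + b * f_mhz ** 0.5 + c

    def estimate_energy(trace, model=flpa_power):
        """Integrate the power model over an activity trace: a list of
        (duration_s, f_mhz, ipc) phases reported by the simulation."""
        return sum(dur * model(f, ipc) for dur, f, ipc in trace)
    ```

    In the virtual platform described above, the simulation would supply the per-phase activity parameters, so the power estimate falls out of the transactional simulation automatically.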

    Performance analysis and optimization of in-situ integration of simulation with data analysis: zipping applications up

    This paper targets an important class of applications that requires combining HPC simulations with data analysis for online or real-time scientific discovery. We use state-of-the-art parallel-I/O and data-staging libraries to build simulation-time data analysis workflows, and conduct performance analysis with real-world applications: computational fluid dynamics (CFD) simulations and molecular dynamics (MD) simulations. Driven by in-depth analysis of the performance inefficiencies, we design an end-to-end application-level approach to eliminating the interlocks and synchronizations present in existing methods. Our new approach employs both task parallelism and pipeline parallelism to reduce synchronizations effectively. In addition, we design a fully asynchronous, fine-grained, pipelining runtime system named Zipper. Zipper is a multi-threaded distributed runtime system that executes in a layer below the simulation and analysis applications. To further reduce the simulation application's stall time and enhance data transfer performance, we design a concurrent data transfer optimization that uses both the HPC network and the parallel file system for improved bandwidth. The scalability of the Zipper system has been verified by a performance model and various empirical large-scale experiments. The experimental results on an Intel multicore cluster as well as a Knights Landing HPC system demonstrate that the Zipper-based approach can outperform the fastest state-of-the-art I/O transport library by up to 220% using 13,056 processor cores.
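    The pipeline-parallel decoupling of simulation and analysis can be sketched with a bounded queue between two threads: the producer (simulation) only stalls when the staging buffer is full, instead of synchronizing with the consumer (analysis) at every step. This is a minimal illustration of the pattern, not Zipper's actual runtime or API.

    ```python
    import threading
    import queue

    def run_pipeline(steps, analyze, capacity=4):
        """Sketch of pipeline parallelism between a simulation stage and an
        analysis stage, decoupled by a bounded staging queue. Names and the
        two-stage layout are illustrative only."""
        q = queue.Queue(maxsize=capacity)   # bounded staging buffer
        results = []

        def analysis_worker():
            while True:
                item = q.get()
                if item is None:            # sentinel: simulation finished
                    break
                results.append(analyze(item))

        worker = threading.Thread(target=analysis_worker)
        worker.start()
        for data in steps:                  # "simulation" stage
            q.put(data)                     # blocks only if analysis lags
        q.put(None)
        worker.join()
        return results
    ```

    Shrinking `capacity` trades memory for tighter coupling; the fully asynchronous design described in the abstract pushes this decoupling further by also overlapping the data transfers themselves.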

    Multi-stakeholder Interactive Simulation for Federated Satellite Systems

    Federated satellite systems (FSS) are a new class of space-based systems which emphasize a distributed architecture. New information-exchanging functions among FSS members enable data transportation, storage, and processing as on-orbit services. As a system-of-systems, however, an FSS faces significant technical and social barriers to design. To mitigate these challenges, this paper develops a multi-stakeholder interactive simulation for use in future design activities. An FSS simulation interface is defined using the High Level Architecture to include orbital and surface assets and associated transmitters, receivers, and signals for communication. Sample simulators (federates) using the World Wind and Orekit open source libraries are applied in a prototype simulation (federation). The application case studies a conceptual FSS using the International Space Station (ISS) as a service platform to serve Earth-observing customers in sun-synchronous orbits (SSO). Results identify emergent effects between FSS members, including favorable ISS power conditions and potential service bottlenecks in serving SSO customers.
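    The federate interaction style the abstract relies on, federates publishing attribute updates through a runtime infrastructure that reflects them to subscribers, can be mocked in a few lines. This toy bus only illustrates the publish/subscribe pattern; it is not the HLA RTI API, and the object-class and attribute names are invented.

    ```python
    class Federation:
        """Toy publish/subscribe bus standing in for an HLA-style RTI:
        federates publish attribute updates for an object class, and
        subscribed federates receive reflections. Illustrative only."""

        def __init__(self):
            self._subs = {}   # object class name -> list of callbacks

        def subscribe(self, object_class, callback):
            self._subs.setdefault(object_class, []).append(callback)

        def update_attributes(self, object_class, attributes):
            # reflect the update to every subscribed federate
            for cb in self._subs.get(object_class, []):
                cb(dict(attributes))
    ```

    A real HLA federation adds time management and object discovery on top of this pattern, which is what lets independently developed simulators (e.g., the World Wind and Orekit federates above) interoperate.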