6 research outputs found

    MGSim - Simulation tools for multi-core processor architectures

    MGSim is an open-source discrete-event simulator for on-chip hardware components, developed at the University of Amsterdam. It is intended as a research and teaching vehicle for studying fine-grained hardware/software interactions on many-core and hardware-multithreaded processors. It includes support for core models with different instruction sets, a configurable multi-core interconnect, multiple configurable cache and memory models, a dedicated I/O subsystem, and comprehensive monitoring and interaction facilities. The default model configuration shipped with MGSim implements Microgrids, a many-core architecture with hardware concurrency management. MGSim is written mostly in C++ and uses object classes to represent chip components. It is optimized for architecture models that can be described as process networks. Comment: 33 pages, 22 figures, 4 listings, 2 tables

    Developing a reference implementation for microgrid of

    No full text

    The Network is in the Computer

    Rudolf J. Strijkers, Universiteit van Amsterdam

    No full text
    The conceptual model of the current Internet, the Internet model, is over twenty years old. It was designed to provide end-to-end connectivity even when many routers and links fail. As a result, end-users get a generic “best-effort” network service. In practice, users and distributed systems assume that the Internet works just like a telephone network: reliable, constant bit-rate, and with predictable QoS. The Internet works because IP networks are over-dimensioned, keeping congestion and latency reasonably low. The success of the WWW and e-mail, which have become basic services just like telephony and TV, leads us to believe that IP could take over all other communication forms. Alas, VoIP and broadcasting services demand QoS guarantees, which the state-of-the-art routing and networking technologies of IP still cannot deliver. The current solution is more over-dimensioning, resulting in even lower utilisation of network capacity. When thousands of wireless sensor devices form a network, monitoring the environments we live in, it is impractical to over-dimension the network. There may be hardly enough resources available to communicate sensor data. Large sensor networks are already a reality: London alone has a network of more than 150,000 surveillance cameras. In the case an even