    MGSim - Simulation tools for multi-core processor architectures

    MGSim is an open-source discrete-event simulator for on-chip hardware components, developed at the University of Amsterdam. It is intended as a research and teaching vehicle for studying fine-grained hardware/software interactions on many-core and hardware-multithreaded processors. It includes support for core models with different instruction sets, a configurable multi-core interconnect, multiple configurable cache and memory models, a dedicated I/O subsystem, and comprehensive monitoring and interaction facilities. The default model configuration shipped with MGSim implements Microgrids, a many-core architecture with hardware concurrency management. MGSim is written mostly in C++ and uses object classes to represent chip components. It is optimized for architecture models that can be described as process networks. Comment: 33 pages, 22 figures, 4 listings, 2 tables
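    The abstract above names the general technique (chip components as C++ object classes driven by a discrete-event kernel and composed as a process network) but shows no code. The sketch below is only a minimal illustration of that idea under those assumptions; the names Simulator, Event and Cache are hypothetical and are not the MGSim API.

```cpp
// Minimal discrete-event sketch (hypothetical names; not the MGSim API).
#include <cstdint>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

struct Event {
    uint64_t time;                  // simulated cycle at which the event fires
    std::function<void()> action;   // work performed by the target component
    bool operator>(const Event& o) const { return time > o.time; }
};

class Simulator {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
    uint64_t now_ = 0;
public:
    uint64_t now() const { return now_; }
    void schedule(uint64_t delay, std::function<void()> action) {
        queue_.push({now_ + delay, std::move(action)});
    }
    void run() {
        while (!queue_.empty()) {
            Event e = queue_.top();
            queue_.pop();
            now_ = e.time;
            e.action();
        }
    }
};

// A chip component modelled as an object class: here, a cache-like component
// that serves requests after an assumed fixed latency.
class Cache {
    Simulator& sim_;
public:
    explicit Cache(Simulator& sim) : sim_(sim) {}
    void request(uint64_t addr) {
        sim_.schedule(3, [this, addr] {   // assume a 3-cycle hit latency
            std::cout << "cycle " << sim_.now() << ": cache served 0x"
                      << std::hex << addr << std::dec << "\n";
        });
    }
};

int main() {
    Simulator sim;
    Cache cache(sim);
    sim.schedule(1, [&] { cache.request(0x1000); });  // a core issues a load
    sim.schedule(2, [&] { cache.request(0x2000); });
    sim.run();
}
```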

    Improving Microcontroller and Computer Architecture Education through Software Simulation

    In this thesis, we aim to improve the outcomes of students learning Computer Architecture and Embedded Systems topics within Software and Computer Engineering programs. We develop a simulation of processors that attempts to improve the visibility of hardware within the simulation environment and to replace existing solutions in use within the classroom. We define a series of requirements for a successful simulation suite based on current state-of-the-art simulations in the literature. Given these requirements, we build a quantitative rating of the same set of simulations, and we also rate our previously implemented tool, hc12sim, against current solutions. Using the gaps in implementations identified by our state-of-the-art survey, we develop two solutions. First, we develop a web-based solution using the Scala.js compiler for Scala with an event-driven simulation engine built on Akka; this Scala model implements a VHDL-like DSL for instruction control definition. Next, we propose tools for developing cross-platform native applications through a project-based build system within CMake and a continuous integration pipeline using Vagrant, Oracle VirtualBox and Jenkins. Lastly, we propose a configuration-driven processor simulation built from the original hc12sim project that uses a Lua-based scripting interface for processor configuration. While we considered other high-level languages, Lua best fit our requirements, allowing students to use a modern high-level programming language for processor configuration. Instruction controls are defined through Lua functions using high-level constructs that implicitly trigger low-level simulation events. We conclude with suggestions for building a new solution that would better meet the requirements set forth in our research question, building on successful aspects of this work.
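    The configuration-driven design described above defines instruction controls as high-level functions whose register and memory accesses implicitly trigger low-level simulation events. The thesis does this with a Lua scripting interface; the sketch below illustrates the same pattern in C++ only for concreteness, and the names Cpu and defineInstruction, the event log, and the opcode value are hypothetical, not the hc12sim interface.

```cpp
// Sketch of configuration-driven instruction controls (hypothetical API,
// not the hc12sim interface): each high-level register access made inside
// an instruction handler implicitly logs a low-level event.
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

class Cpu {
    std::map<std::string, uint8_t> regs_;
    std::vector<std::string> events_;   // low-level events triggered implicitly
    std::map<uint8_t, std::function<void(Cpu&)>> instructions_;
public:
    uint8_t read(const std::string& r) {
        events_.push_back("read " + r);
        return regs_[r];
    }
    void write(const std::string& r, uint8_t v) {
        events_.push_back("write " + r);
        regs_[r] = v;
    }
    // High-level instruction definition, analogous to a scripted config entry.
    void defineInstruction(uint8_t opcode, std::function<void(Cpu&)> handler) {
        instructions_[opcode] = std::move(handler);
    }
    void execute(uint8_t opcode) { instructions_.at(opcode)(*this); }
    void dumpEvents() const {
        for (const auto& e : events_) std::cout << e << "\n";
    }
};

int main() {
    Cpu cpu;
    // A hypothetical "add accumulator B to accumulator A" instruction,
    // written as a plain handler; the opcode value is made up.
    cpu.defineInstruction(0x1B, [](Cpu& c) {
        c.write("A", static_cast<uint8_t>(c.read("A") + c.read("B")));
    });
    cpu.write("A", 2);
    cpu.write("B", 3);
    cpu.execute(0x1B);
    std::cout << "A = " << static_cast<int>(cpu.read("A")) << "\n";
    cpu.dumpEvents();
}
```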

    Tap-and-2-split switch design based on integrated optics for light-tree routing in WDM networks

    This paper presents a novel cost-effective multicast-capable optical cross-connect (MC-OXC) node architecture that features both tap-and-continue and tap-and-binary-split functionality. This architecture provides an interesting balance between simplicity, power efficiency and overall wavelength consumption with respect to models based on TaC (Tap-and-Continue) or SaD (Split-and-Delivery). The main component of this node is a novel Tap-and-2-Split switch (Ta2S). In this paper, we propose and analyse an implementation of this switch based on integrated optics (namely, MMI taps and MZI switches), and we characterize and compare it with other alternatives implemented with the same technology. The study shows that, thanks to the presented Ta2S design, the 2-Split Tap Continue (2STC) node scales better in terms of number of components than the other alternatives. Moreover, it is more power efficient than the SaD design and requires fewer wavelengths than TaC thanks to its binary split capability. On the other hand, simulation results reveal that the 2-split condition does not add significant additional wavelength consumption in usual network topologies with respect to SaD.

    Leveraging HTC for UK eScience with very large Condor pools: demand for transforming untapped power into results

    We provide an insight into the demand from the UK eScience community for very large High-Throughput Computing resources and provide an example of such a resource in current production use: the 930-node eMinerals Condor pool at UCL. We demonstrate the significant benefits this resource has provided to UK eScientists by quickly and easily realising results throughout a range of problem areas. We demonstrate the value added by the pool to UCL I.S. infrastructure and provide a case for the expansion of very large Condor resources within the UK eScience Grid infrastructure. We provide examples of the technical and administrative difficulties faced when scaling up to institutional Condor pools, and propose the introduction of a UK Condor/HTC working group to co-ordinate the mid- to long-term UK eScience Condor development, deployment and support requirements, starting with the inaugural UK Condor Week in October 2004.

    A platform to support object database research

    Databases play a key role in an increasingly diverse range of applications and settings. New requirements are continually emerging and may differ substantially from one domain to another, sometimes even to the point of conflict. To address these challenges, database systems are evolving to cater for new application domains. Yet little attention has been given to the process of researching and developing database concepts in response to new requirements. We present a platform designed to support database research through experimentation with different aspects of database systems, ranging from the data model to the distribution architecture. Our platform is based on the notion of metamodel extension modules, inspired by proposals for adaptive and configurable database management systems. However, rather than building a tailored system from existing components, we focus on the process of designing new components. To qualitatively evaluate our platform, we present a series of case studies where our approach was used successfully to experiment with concepts designed to support a variety of novel application domains.
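    The abstract above mentions metamodel extension modules but does not describe their interface, so the following is only a speculative sketch of the general idea: modules that reshape the metamodel itself (here, a versioning module adding bookkeeping attributes to every type). All names (Metamodel, MetamodelExtension, VersioningExtension) are hypothetical and are not the platform's API.

```cpp
// Sketch of the metamodel-extension-module idea (hypothetical interface).
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A very small metamodel: named types with named attributes.
struct TypeDef {
    std::string name;
    std::vector<std::string> attributes;
};

class Metamodel {
    std::map<std::string, TypeDef> types_;
public:
    void addType(const TypeDef& t) { types_[t.name] = t; }
    void addAttribute(const std::string& type, const std::string& attr) {
        types_.at(type).attributes.push_back(attr);
    }
    std::vector<std::string> typeNames() const {
        std::vector<std::string> names;
        for (const auto& entry : types_) names.push_back(entry.first);
        return names;
    }
    void dump() const {
        for (const auto& entry : types_) {
            std::cout << entry.first << ":";
            for (const auto& a : entry.second.attributes) std::cout << " " << a;
            std::cout << "\n";
        }
    }
};

// An extension module reshapes the metamodel rather than the stored data.
class MetamodelExtension {
public:
    virtual ~MetamodelExtension() = default;
    virtual void extend(Metamodel& m) = 0;
};

// Example: a versioning extension adds bookkeeping attributes to every type.
class VersioningExtension : public MetamodelExtension {
public:
    void extend(Metamodel& m) override {
        for (const auto& name : m.typeNames()) {
            m.addAttribute(name, "version");
            m.addAttribute(name, "validFrom");
        }
    }
};

int main() {
    Metamodel model;
    model.addType({"Person", {"name", "email"}});

    std::vector<std::unique_ptr<MetamodelExtension>> modules;
    modules.push_back(std::make_unique<VersioningExtension>());
    for (auto& ext : modules) ext->extend(model);

    model.dump();
}
```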

    Methods and design issues for next generation network-aware applications

    Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods of designing distributed applications for high-speed networks to improve application scalability, performance and capabilities. As the amount of data generated by scientific applications continues to grow, applications should be designed to use parallel, distributed resources and high-speed networks to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture in which applications can use specialized low-level interfaces. The main focus of this research is on interactive, collaborative visualization of large datasets. This work describes how a visualization application can be improved by using distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution, collaborative visualization and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O). Motivated by the distributed visualization application, this work also researches remote data access systems. Because wide-area networks may have a high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are analyzed, and the results show that an architecture that combines bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, also supporting high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system. Transport protocols are compared to understand which protocol can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An HD-based remote teaching application experiment is conducted, illustrating the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution and deployment scenarios.
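    The abstract above reports that an architecture combining bulk and pipeline processing best hides wide-area latency. The sketch below illustrates only the pipelining half of that idea (overlapping the fetch of the next block with processing of the current one); the functions remoteRead and process, and the latencies used, are assumptions for illustration and not the thesis's remote I/O system.

```cpp
// Sketch of latency hiding by pipelining remote reads (illustrative only).
#include <chrono>
#include <cstdint>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

using namespace std::chrono;

// Stand-in for a high-latency remote read of one block.
std::vector<uint8_t> remoteRead(int block) {
    std::this_thread::sleep_for(milliseconds(50));   // assumed WAN round trip
    return std::vector<uint8_t>(1 << 20, static_cast<uint8_t>(block));
}

// Stand-in for local processing (e.g. rendering a slice of a dataset).
void process(const std::vector<uint8_t>& data) {
    std::this_thread::sleep_for(milliseconds(40));
    (void)data;
}

int main() {
    const int blocks = 8;

    // Sequential: each remote read waits until the previous block is processed.
    auto t0 = steady_clock::now();
    for (int b = 0; b < blocks; ++b) process(remoteRead(b));
    auto seq = duration_cast<milliseconds>(steady_clock::now() - t0).count();

    // Pipelined: request block b+1 asynchronously while processing block b.
    t0 = steady_clock::now();
    std::future<std::vector<uint8_t>> next =
        std::async(std::launch::async, remoteRead, 0);
    for (int b = 0; b < blocks; ++b) {
        std::vector<uint8_t> data = next.get();
        if (b + 1 < blocks)
            next = std::async(std::launch::async, remoteRead, b + 1);
        process(data);
    }
    auto pipe = duration_cast<milliseconds>(steady_clock::now() - t0).count();

    std::cout << "sequential: " << seq << " ms, pipelined: " << pipe << " ms\n";
}
```

    With the assumed latencies, the pipelined loop pays roughly the cost of the slower of the two stages per block rather than their sum, which is the effect the combined bulk-and-pipeline architecture exploits.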

    Developing sustainability pathways for social simulation tools and services

    The use of cloud technologies to teach agent-based modelling and simulation (ABMS) is an interesting application of a nascent technological paradigm that has received very little attention in the literature. This report fills that gap and aims to help instructors, teachers and demonstrators understand why and how cloud services are appropriate solutions to common problems they face in delivering their study programmes, as well as outlining the many cloud options available. The report first introduces social simulation and considers how social simulation is taught. Following this, factors affecting the implementation of agent-based models are explored, with attention focused primarily on the modelling and execution platforms currently available, the challenges associated with implementing agent-based models, and the technical architectures that can be used to support the modelling, simulation and teaching process. This sets the context for an extended discussion on cloud computing, including service and deployment models, accessing cloud resources, the financial implications of adopting the cloud, and an introduction to the evaluation of cloud services within the context of developing, executing and teaching agent-based models.