
    MGSim - Simulation tools for multi-core processor architectures

    MGSim is an open-source discrete-event simulator for on-chip hardware components, developed at the University of Amsterdam. It is intended as a research and teaching vehicle for studying fine-grained hardware/software interactions on many-core and hardware-multithreaded processors. It includes support for core models with different instruction sets, a configurable multi-core interconnect, multiple configurable cache and memory models, a dedicated I/O subsystem, and comprehensive monitoring and interaction facilities. The default model configuration shipped with MGSim implements Microgrids, a many-core architecture with hardware concurrency management. MGSim is written mostly in C++ and uses object classes to represent chip components. It is optimized for architecture models that can be described as process networks.
    Comment: 33 pages, 22 figures, 4 listings, 2 tables
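
    A rough feel for the "components as objects in a process network" organisation can be given with a small sketch. The Python below is purely illustrative (MGSim itself is written in C++ and its real component API differs): two components exchange tokens through a bounded channel, advancing one simulated cycle at a time.

```python
# Illustrative sketch of a process-network style component model, in the spirit
# of MGSim's component organisation. Not MGSim's actual API.
from collections import deque

class Channel:
    """Bounded FIFO connecting two components."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.fifo = deque()

    def can_push(self):
        return len(self.fifo) < self.capacity

    def push(self, value):
        assert self.can_push()
        self.fifo.append(value)

    def pop(self):
        return self.fifo.popleft() if self.fifo else None

class Producer:
    """Emits an incrementing token every cycle while the channel has room."""
    def __init__(self, out):
        self.out = out
        self.counter = 0

    def cycle(self):
        if self.out.can_push():
            self.out.push(self.counter)
            self.counter += 1

class Consumer:
    """Consumes at most one token per cycle and records it."""
    def __init__(self, inp):
        self.inp = inp
        self.received = []

    def cycle(self):
        token = self.inp.pop()
        if token is not None:
            self.received.append(token)

if __name__ == "__main__":
    link = Channel(capacity=2)
    components = [Producer(link), Consumer(link)]
    for _ in range(10):          # advance the model by 10 cycles
        for c in components:
            c.cycle()
    print(components[1].received)  # tokens observed by the consumer
```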

    Design for Implementation of Image Processing Algorithms

    Color image processing algorithms are first developed using a high-level mathematical modeling language. Current integrated development environments offer libraries of intrinsic functions which, on the one hand, enable faster development but, on the other hand, hide the use of fundamental operations. The latter have to be detailed for an efficient hardware and/or software physical implementation. Based on the experience accumulated while implementing a segmentation algorithm, this thesis outlines a design-for-implementation methodology comprising a development flow and associated guidelines. The methodology enables algorithm developers to iteratively optimize their algorithms while maintaining the level of image integrity required by their application. Furthermore, it does not require algorithm developers to change their current development process. Rather, the design-for-implementation methodology is best suited to optimizing a functionally correct algorithm, thus appending to an algorithm developer's design process of choice. Applying this methodology to four segmentation algorithm steps produced measured results with 2-D correlation coefficients (CORR2) better than 0.99, peak signal-to-noise ratio (PSNR) better than 70 dB, and structural similarity index (SSIM) better than 0.98 for a majority of test cases.
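
    As an illustration of the quality criteria used above, the following minimal Python/NumPy sketch computes CORR2 and PSNR for an optimized output against a reference output, using the 0.99 and 70 dB figures from the abstract as pass thresholds. The images are synthetic stand-ins, and SSIM is omitted since it is normally taken from a library such as scikit-image.

```python
# Sketch: compare an optimized algorithm's output against the reference output
# using the 2-D correlation coefficient (CORR2) and PSNR. Thresholds follow the
# figures quoted in the abstract; the images here are synthetic placeholders.
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient between two same-sized images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(256, 256))        # stand-in reference output
    optimized = reference + rng.normal(0, 0.02, (256, 256))  # stand-in optimized output
    print(f"CORR2 = {corr2(reference, optimized):.4f}  (pass if > 0.99)")
    print(f"PSNR  = {psnr(reference, optimized):.1f} dB (pass if > 70 dB)")
```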

    Pre-validation of SoC via hardware and software co-simulation

    System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. Firstly, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Secondly, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
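
    The essence of such a co-simulation link is forwarding memory-mapped register accesses from the emulated CPU to the RTL simulator and returning read data once the access has completed in the cycle-accurate model. The sketch below illustrates this with a simple socket-based bridge; the port, message layout, and register addresses are assumptions for illustration and do not reflect QEMU's or any particular simulator's actual interfaces.

```python
# Sketch of the CPU-side bridge in a QEMU/RTL co-simulation: register accesses
# that hit the device's address window are serialised and sent to the RTL
# simulator process over TCP, which replies once the access has been played out
# against the cycle-accurate model. Protocol, port, and addresses are assumed.
import socket
import struct

RTL_SIM_ADDR = ("localhost", 5555)   # where the RTL simulator listens (assumed)
MSG = struct.Struct("<BIQ")          # op (0=read, 1=write), address, data

class RtlBridge:
    def __init__(self):
        self.sock = socket.create_connection(RTL_SIM_ADDR)

    def _recv_exact(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("RTL simulator closed the connection")
            buf += chunk
        return buf

    def write32(self, address, value):
        self.sock.sendall(MSG.pack(1, address, value))
        self._recv_exact(MSG.size)                   # wait for completion ack

    def read32(self, address):
        self.sock.sendall(MSG.pack(0, address, 0))
        _, _, data = MSG.unpack(self._recv_exact(MSG.size))
        return data

if __name__ == "__main__":
    # Driver-level unit test: poke a control register, poll a status register.
    dev = RtlBridge()
    dev.write32(0x4000_0000, 0x1)         # hypothetical CTRL register
    assert dev.read32(0x4000_0004) & 0x1  # hypothetical STATUS.ready bit
```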

    MaxHadoop: An Efficient Scalable Emulation Tool to Test SDN Protocols in Emulated Hadoop Environments

    This paper presents MaxHadoop, a flexible and scalable emulation tool that allows efficient and accurate emulation of Hadoop environments over Software Defined Networks (SDNs). Hadoop is designed to manage continuous data streams over networks, making it a natural candidate to support the new class of Big Data network services, and its development has been contemporary with the evolution of networks towards Software Defined architectures. To create our emulation environment, tailored to SDNs, we employ MaxiNet, given its capability of emulating large-scale SDNs. We make it possible to emulate realistic Hadoop scenarios on large-scale SDNs using low-cost commodity hardware by resolving a few key limitations of MaxiNet through appropriate configuration settings. We validate the MaxHadoop emulator by executing two benchmarks, WordCount and TeraSort, to evaluate a set of Key Performance Indicators. The test results show that MaxHadoop outperforms other existing emulation tools running over commodity hardware. Finally, we demonstrate the potential of MaxHadoop by using it to compare SDN-based network protocols.
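
    Of the two benchmarks, WordCount is the simpler one to reproduce on such an emulated cluster; the sketch below shows it as a Hadoop Streaming job in Python. The submission command, file names, and paths are assumptions that depend on the local Hadoop installation and are not part of MaxHadoop itself.

```python
# Sketch of the WordCount benchmark as a Hadoop Streaming job: the mapper emits
# "<word>\t1" per word, the reducer sums counts per word (Streaming delivers
# keys to the reducer grouped and sorted). Submission details depend on the
# local Hadoop install, e.g. something like:
#   hadoop jar hadoop-streaming.jar -files wordcount.py \
#       -mapper "python3 wordcount.py map" -reducer "python3 wordcount.py reduce" \
#       -input in/ -output out/
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            print(f"{word}\t1")

def reducer(lines):
    parsed = (line.rstrip("\n").split("\t", 1) for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        print(f"{word}\t{total}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```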

    Agent oriented AmI engineering


    Methodology for generating multi-FPGA-based prototyping platforms

    Multi-FPGA-based prototyping is no longer optional for hardware/software integration. Multi-FPGA prototyping platforms can be classified into three categories: off-the-shelf (OTS), custom, and cabling. The cabling platform is semi off-the-shelf and semi custom. Nevertheless, crafting a custom or a cabling platform is today a manual, time-consuming process, and the performance and cost of the platform depend on the engineers' FPGA expertise and their knowledge of the SoC design under test (DUT). Compared to OTS platforms, the added value, in terms of performance, of cabling or custom platforms can be heavily impaired by an inefficient board design. Moreover, FPGA I/Os are becoming a scarce resource, worsening the inter-FPGA bandwidth generation after generation. It therefore becomes more and more difficult to prototype an SoC/ASIC design at adequate performance. The contributions of this manuscript are: (1) an automatic implementation flow for an OTS platform; (2) an automatic design flow for creating a custom platform, thus increasing productivity, enabling board exploration, and optimizing cost and performance; (3) a cabling platform in which each board is composed of one FPGA and several connectors, together with an algorithm that automatically finds a solution for the cable distribution; (4) a comparison of the three multi-FPGA platform types, made possible by the developed automatic tools. The custom platform always achieves better performance and lower deployment cost, but requires 3-5 months before it becomes available. If performance or deployment cost is not a strict constraint, the cabling platform offers an attractive alternative to the others.
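
    The cable-distribution problem mentioned in contribution (3) can be pictured as assigning a limited number of cables, each occupying one connector on two boards, to the FPGA pairs with the highest inter-FPGA signal demand. The greedy sketch below only illustrates the shape of the problem; the abstract does not describe the thesis's actual algorithm, and all names and numbers are made up.

```python
# Sketch: greedy assignment of cables between single-FPGA boards. Each board
# exposes a fixed number of connectors; each cable joins one free connector on
# each of two boards. Cables are granted to the FPGA pairs with the highest
# remaining signal demand first. Purely illustrative; not the thesis algorithm.

def assign_cables(demand, connectors_per_fpga, signals_per_cable):
    """demand: {(fpga_a, fpga_b): number_of_inter_fpga_signals}."""
    free = {}
    for a, b in demand:
        free.setdefault(a, connectors_per_fpga)
        free.setdefault(b, connectors_per_fpga)
    remaining = dict(demand)
    cables = []
    while True:
        # Pick the pair with the largest unserved demand that still has a free
        # connector on both of its boards.
        candidates = [(pair, load) for pair, load in remaining.items()
                      if load > 0 and free[pair[0]] > 0 and free[pair[1]] > 0]
        if not candidates:
            break
        pair, _ = max(candidates, key=lambda item: item[1])
        cables.append(pair)
        free[pair[0]] -= 1
        free[pair[1]] -= 1
        remaining[pair] -= signals_per_cable
    return cables, remaining

if __name__ == "__main__":
    demand = {("F0", "F1"): 300, ("F1", "F2"): 120, ("F0", "F2"): 80}
    cables, leftover = assign_cables(demand, connectors_per_fpga=4,
                                     signals_per_cable=100)
    print(cables)    # list of (fpga, fpga) pairs, one entry per allocated cable
    print(leftover)  # demand that could not be served by dedicated cables
```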