
    Transformation-based implementation and optimization of programs exploiting the basic Andorra model

    The characteristics of CC and CLP systems are in principle very different. However, a recent trend towards convergence in the implementation techniques for these systems can be observed: while CLP and Prolog systems have been incorporating capabilities to deal with user-defined suspension and coroutining, CC compilers have been trying to coalesce fine-grained tasks into coarser-grained sequential threads. This convergence of techniques opens up the possibility of having a general-purpose kernel language and abstract machine to serve as a compilation target for a variety of user-level languages. We propose a transformation technique directed towards such an objective. In particular, we report on techniques to support the Andorra computational model, essentially emulating the Andorra-I system via program transformation into a sequential language with delay primitives. The system is automatic, comprising an optional program analyzer and a basic transformer to the kernel language. It turns out that a simple parallel CLP or Prolog system with dynamic scheduling is sufficient as a kernel language for this purpose. The preliminary results are quite encouraging: performance of the resulting system is comparable to the current Andorra-I implementation

    Visualization of Read-Copy-Update synchronization contexts in C code

    The Read-Copy-Update (RCU) mechanism is a way of synchronizing concurrent access to variables that prioritizes read performance over strict consistency guarantees. The main idea behind the mechanism is that RCU avoids lock primitives while multiple threads read and update elements concurrently; the elements are linked together through pointers in a shared data structure. RCU is used in the Linux kernel, but there are user-space libraries which implement the technique as well, one of which is liburcu, a C library. Earlier, we defined a code comprehension framework for easing the development of RCU solutions. In this paper, we present our visualization techniques for Microsoft's Monaco Editor
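
    To make the read-side discipline concrete, the following is a minimal C sketch of a liburcu-style reader and updater. The struct layout, names, and update policy are illustrative assumptions for this summary, not code from the paper, and a real program must also register each reader thread with the library.

        #include <stdlib.h>
        #include <urcu.h>       /* liburcu, classic flavor; link with -lurcu */

        struct config { int threshold; };

        static struct config *shared_cfg;   /* read concurrently, updated rarely */

        /* Reader: lock-free read-side critical section. The calling
         * thread must have called rcu_register_thread() beforehand. */
        int read_threshold(void)
        {
            int value;
            rcu_read_lock();                          /* mark read-side section */
            struct config *c = rcu_dereference(shared_cfg);
            value = c ? c->threshold : 0;
            rcu_read_unlock();
            return value;
        }

        /* Updater: publish a new version, wait for all pre-existing
         * readers to finish, then reclaim the old version. (Concurrent
         * updaters would additionally need a mutex.) */
        void set_threshold(int t)
        {
            struct config *new_cfg = malloc(sizeof(*new_cfg));
            struct config *old = shared_cfg;
            if (!new_cfg)
                return;
            new_cfg->threshold = t;
            rcu_assign_pointer(shared_cfg, new_cfg);  /* atomic publish */
            synchronize_rcu();                        /* grace period */
            free(old);                                /* now safe to free */
        }

    The pairing of rcu_read_lock/rcu_read_unlock regions with the pointers dereferenced inside them is exactly the kind of synchronization context such a visualization needs to surface.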

    Next generation of Exascale-class systems: ExaNeSt project and the status of its interconnect and storage development

    The ExaNeSt project started in December 2015 and is funded by the EU H2020 research framework (call H2020-FETHPC-2014, n. 671553) to study the adoption of clusters of low-cost, power-efficient, Linux-based 64-bit ARM processors for Exascale-class systems. The ExaNeSt consortium pools partners with industrial and academic research expertise in storage, interconnects and applications who share a vision of a European Exascale-class supercomputer. The common goal is to design and implement a physical rack prototype together with its cooling system, its non-volatile memory (NVM) architecture, and a unified low-latency interconnect able to test different options for network and storage. Furthermore, the consortium aims to provide real HPC applications to validate the system. In this paper, we describe the unified data and storage network architecture, report on the status of development of the different testbeds, and highlight preliminary benchmark results obtained by executing scalable scientific, engineering and data-analytics application kernels

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing more applications and data on computing platforms has led to a reliance on multi-/many-core chips, which facilitate parallel processing. At the same time, these platforms must be energy-efficient and reliable, and they must perform computations securely. This book provides perspectives on these aspects from leading researchers, covering state-of-the-art contributions and upcoming trends

    Targeted, interactive dynamic verification of programs through a modular architecture

    The development cycle of an application covers multiple stages, from code writing to technical support after release. One crucial phase is program verification and debugging. During this stage, the developers need to make sure that the program they deliver corresponds to both its explicit and implicit specification, meaning that it has to behave correctly, without any bug, whatever input it is given. Multiple tools exist to assist developers in this task. Among them, formal verification tools prove the validity of a program mathematically by modeling its behavior. However, this type of static analysis struggles to handle complex programs, and developers often also rely on dynamic tools, which check the integrity of a running program. A large number of specialized and efficient tools exist in that domain, but they tend to take a lean approach, with little flexibility or adaptability. This is partly due to the lack of a common framework for high-performance runtime verification tools: most tools have to reimplement every basic feature from the ground up, so they often limit their scope to what is strictly necessary in order to reduce development costs. Features such as dynamic instrumentation or even a graphical user interface are therefore seldom available. As part of this research project, we propose a solution to this problem, following the example of recent developments in integrated development environments. The goal is to provide modularity so that underlying features are shared as much as possible. This removes the need to rewrite those basic features and lets developers focus on more advanced tasks, which in turn produces better verification tools

    Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy

    Monte Carlo (MC) simulation is generally considered the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay's MRIdian system) brings an urgent need for fast MC algorithms, because the strong magnetic field these platforms introduce can cause large errors in other algorithms. My dissertation focuses on resolving the conflict between the accuracy and the efficiency of MC simulations through four approaches: (1) GPU parallel computation, (2) transport mechanism simplification, (3) variance reduction, and (4) DVH constraints. Accordingly, we took several steps to thoroughly study the performance and accuracy impact of these methods. As a result, three Monte Carlo simulation packages named gPENELOPE, gDPMvr and gDVH were developed to balance performance and accuracy in different application scenarios. For example, the most accurate package, gPENELOPE, is typically used as a gold standard for radiation meter modeling, while the fastest, gDVH, is used for quick in-patient dose calculation, reducing the calculation time from 5 hours to 1.2 minutes (250 times faster) while introducing only 1% error. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly. After this fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) validating the vendor-provided Co-60 radiation head model by comparing doses calculated by gPENELOPE to experimental data; (2) quantitatively studying the effect of the magnetic field on dose distribution and proposing a strategy to improve treatment-planning efficiency; (3) evaluating the accuracy of the built-in MC algorithm of MRIdian's treatment planning system; and (4) performing quick quality assurance (QA) for online adaptive radiation therapy, which does not leave enough time for experimental QA. Many other time-sensitive applications (e.g. motional dose accumulation) will also benefit from this fast MC infrastructure
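
    As a toy illustration of the sampling step at the heart of such dose engines, the following single-threaded C sketch transports photons through a homogeneous slab, sampling interaction distances from the exponential attenuation law and tallying deposited energy per depth bin. The attenuation coefficient, deposit fraction, and geometry are invented for illustration and bear no relation to the internals of gPENELOPE, gDPMvr or gDVH.

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        #define NBINS        50
        #define SLAB_CM      20.0
        #define MU           0.2       /* assumed attenuation coefficient, 1/cm */
        #define N_PHOTONS    1000000L
        #define DEP_FRACTION 0.3       /* crude surrogate for scattering losses */

        int main(void)
        {
            double dose[NBINS] = { 0.0 };
            srand(12345);                        /* fixed seed for reproducibility */

            for (long i = 0; i < N_PHOTONS; i++) {
                double z = 0.0, energy = 1.0;    /* arbitrary energy units */
                while (energy > 1e-3) {
                    /* uniform u in (0,1), then free path s = -ln(u)/mu */
                    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
                    z += -log(u) / MU;
                    if (z >= SLAB_CM)
                        break;                   /* photon escapes the slab */
                    int bin = (int)(z / SLAB_CM * NBINS);
                    double dep = energy * DEP_FRACTION;
                    dose[bin] += dep;            /* score deposited energy */
                    energy    -= dep;
                }
            }
            for (int b = 0; b < NBINS; b++)
                printf("%5.1f cm  %g\n", (b + 0.5) * SLAB_CM / NBINS, dose[b]);
            return 0;
        }

    A GPU version of this loop assigns one photon history per thread with an independent RNG stream and accumulates into the tally with atomic adds, which is essentially where the hours-to-minutes speedups reported above come from.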

    A new architecture for integrated development environments and software tracing

    Designing and developing software often requires an Integrated Development Environment (IDE) to assist and facilitate the developers' work. Through a graphical interface, IDEs provide tools for editing, compiling and debugging code. Nonetheless, when those tools are not adapted or sufficient for detecting performance defects in large, complex, multithreaded systems such as distributed systems, developers turn to tracing techniques. A program called a tracer collects precise information during the execution of an instrumented system and gathers it into a trace. A trace can contain a large amount of data, and specialized tools have been developed to analyze traces automatically and show the results in interactive views. As software grows and becomes more complex, trace analysis and visualization tools become as important to the development process as the debugger. However, these tools are sophisticated, standalone, and hard to reuse in other systems such as an IDE. Moreover, each supports its own trace formats, use cases, and analyses, so developers often need to install and use several such tools to fulfill their needs.
In this research project, we aim to solve those problems and to integrate trace analysis and visualization not only into IDEs but into any other system that could benefit from them, such as continuous integration servers or monitoring systems. We therefore propose a flexible software architecture based on a client-server, service-oriented, layered approach. We implemented the server side within the Trace Compass project and the client side within a new project called TraceScape. All of our contributions are available as open source. We also benchmarked the proposed architecture against the previous standalone approach, and the results show that it introduces an acceptable overhead
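
    To give a flavor of the client side of such a service-oriented split, here is a minimal C sketch that queries a trace-analysis server over HTTP with libcurl. The host, endpoint path, and response shape are hypothetical placeholders, not the actual protocol exposed by the Trace Compass server.

        #include <stdio.h>
        #include <string.h>
        #include <curl/curl.h>

        /* Append the HTTP response body into a fixed-size buffer. */
        static size_t collect(char *data, size_t size, size_t nmemb, void *userp)
        {
            char *buf = (char *)userp;
            size_t n = size * nmemb;
            size_t used = strlen(buf);
            if (used + n >= 65536)
                n = 65536 - used - 1;        /* truncate instead of overflowing */
            memcpy(buf + used, data, n);
            buf[used + n] = '\0';
            return size * nmemb;
        }

        int main(void)
        {
            static char body[65536];
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *curl = curl_easy_init();
            if (!curl)
                return 1;

            /* Hypothetical endpoint: fetch the results of an analysis
             * previously started on trace 42. */
            curl_easy_setopt(curl, CURLOPT_URL,
                             "http://localhost:8080/traces/42/analyses/cpu-usage");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, body);

            CURLcode rc = curl_easy_perform(curl);
            if (rc == CURLE_OK)
                printf("server replied: %s\n", body);   /* e.g. JSON results */
            else
                fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return 0;
        }

    The benefit of the split is that any front end, whether an IDE plugin, a CI dashboard, or a monitoring console, can reuse the same server-side analyses by speaking the same protocol.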