155 research outputs found

    An (Almost) Constant-Effort Solution-Verification Proof-of-Work Protocol based on Merkle Trees

    No full text
    Cryptology ePrint Archive 2007/433. International audience. Proof-of-work schemes are economic measures to deter denial-of-service attacks: service requesters compute moderately hard functions that are easy to check by the provider. We present such a new scheme for solution-verification protocols. Although most schemes to date are probabilistic unbounded iterative processes with a high variance of the requester effort, our Merkle tree scheme is deterministic, with an almost constant effort and null variance, and is computation-optimal.
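    The protocol itself is not detailed in this abstract; as a rough illustration of the underlying data structure, the Python sketch below builds a binary Merkle tree over a fixed set of leaves and returns its root. The hash choice and helper names are assumptions for illustration, not taken from the paper.

        # Minimal Merkle tree sketch (illustrative only, not the paper's protocol).
        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves: list[bytes]) -> bytes:
            """Root of a binary Merkle tree over the given leaves."""
            level = [h(leaf) for leaf in leaves]
            while len(level) > 1:
                if len(level) % 2 == 1:      # duplicate the last node on odd levels
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        # A requester committing to 8 values derived from a service request:
        print(merkle_root([f"request-{i}".encode() for i in range(8)]).hex())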

    Modeling the Temperature Bias of Power Consumption for Nanometer-Scale CPUs in Application Processors

    Full text link
    We introduce and experimentally validate a new macro-level model of the CPU temperature/power relationship within nanometer-scale application processors or systems-on-chip. By adopting a holistic view, this model is able to take into account many of the physical effects that occur within such systems. Together with two algorithms described in the paper, our results can be used, for instance by engineers designing power or thermal management units, to cancel the temperature-induced bias on power measurements. This will help them gather temperature-neutral power data while running multiple instances of their benchmarks. Power requirements and system failure rates can also be decreased by controlling the CPU's thermal behavior. Although the temperature/power relationship is usually assumed to be exponential, there is a lack of publicly available physical temperature/power measurements to back up this assumption, something our paper corrects. Via measurements on two pertinent platforms sporting nanometer-scale application processors, we show that the power/temperature relationship is indeed very likely exponential over a 20°C to 85°C temperature range. Our data suggest that, for application processors operating between 20°C and 50°C, a quadratic model is still accurate and a linear approximation is acceptable. Comment: submitted to SAMOS 2014, the International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS XIV).
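    The model shapes named in the abstract can be sketched as follows; all coefficients are invented for illustration, since the paper's fitted values are not quoted here.

        # Candidate CPU power/temperature model shapes; coefficients are invented.
        import numpy as np

        def p_exponential(t, p0=1.2, a=0.05, b=0.045):
            """P(T) = p0 + a * exp(b * T): the shape the measurements support."""
            return p0 + a * np.exp(b * t)

        # Synthetic "measurements" over the 20-85 degC range studied in the paper.
        temps = np.linspace(20.0, 85.0, 27)
        watts = p_exponential(temps)

        # Between 20 and 50 degC the abstract states a quadratic fit is still accurate.
        low = temps <= 50.0
        c2, c1, c0 = np.polyfit(temps[low], watts[low], deg=2)
        print("quadratic fit coefficients:", c0, c1, c2)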

    Optimal Compilation of HPF Remappings

    No full text
    International audience. Applications with varying array access patterns require array mappings to be changed dynamically on distributed-memory parallel machines. HPF (High Performance Fortran) provides such remappings, on data that can be replicated, explicitly through the realign and redistribute directives and implicitly at procedure calls and returns. However, such features are left out of the HPF subset or of the currently discussed HPF kernel for efficiency reasons. This paper presents a new compilation technique to handle HPF remappings for message-passing parallel architectures. The first phase is global and removes all useless remappings that appear naturally in procedures. The code generated by the second phase takes advantage of replication to shorten the remapping time. It is proved optimal: a minimal number of messages, containing only the required data, is sent over the network. The technique is fully implemented in HPFC, our prototype HPF compiler. Experiments were performed on a DEC Alpha farm.
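    As a loose illustration of the first phase, the Python sketch below drops a remapping when the array is remapped again before any reference observes the new mapping; the event encoding and function are invented for this example and are not HPFC's.

        # Toy "useless remapping" removal; events and names are invented, not HPFC's.
        def remove_useless_remappings(events):
            """events: list of ('remap', array, mapping) or ('use', array) tuples."""
            kept = []
            for i, event in enumerate(events):
                if event[0] == 'remap':
                    array = event[1]
                    useless = False
                    for later in events[i + 1:]:
                        if later[0] == 'use' and later[1] == array:
                            break                    # the new mapping is observed
                        if later[0] == 'remap' and later[1] == array:
                            useless = True           # overwritten before any use
                            break
                    if useless:
                        continue
                kept.append(event)
            return kept

        trace = [('remap', 'A', 'block'), ('remap', 'A', 'cyclic'), ('use', 'A')]
        print(remove_useless_remappings(trace))      # the 'block' remapping is dropped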

    Compiling for a Heterogeneous Vector Image Processor

    No full text
    International audience. We present a new compilation strategy, implemented at a small cost, to optimize image applications developed on top of a high-level image processing library for a heterogeneous processor with a vector image processing accelerator. The library provides the semantics of the image computations. The pipelined structure of the accelerator makes it possible to compute whole expressions with dozens of elementary image instructions, but is constrained in that intermediate image values cannot be extracted. We adapted standard compilation techniques to perform this task automatically. Our strategy is implemented in PIPS, a source-to-source compiler, which greatly reduces the development cost as standard phases are reused and parameterized for the target. Experiments were run on the hardware functional simulator. We compile 1217 cases, from elementary tests to full applications. All but a few are optimal, and those few are mostly within a single accelerator call of optimality. Our contributions include: 1) a general low-cost compilation strategy for image processing applications, based on the semantics provided by library calls, which improves locality by an order of magnitude; 2) a specific heuristic to minimize execution time on the target vector accelerator; 3) numerous experiments that show the effectiveness of our strategy.
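    As a loose analogy for evaluating a whole image expression without extracting intermediate image values, the Python sketch below chains elementary operations inside a single call; the operation set and API are invented and far simpler than the accelerator's pipeline.

        # Toy fused evaluation of an image expression; the operations are invented.
        import numpy as np

        ELEMENTARY_OPS = {
            'add_const': lambda img, k: img + k,
            'threshold': lambda img, k: (img > k).astype(img.dtype),
            'invert':    lambda img, _: img.max() - img,
        }

        def run_pipeline(image, steps):
            """Apply (op_name, parameter) steps without naming intermediate images,
            mimicking a single fused accelerator call."""
            result = image
            for name, param in steps:
                result = ELEMENTARY_OPS[name](result, param)
            return result

        img = np.zeros((4, 4), dtype=np.uint8)
        out = run_pipeline(img, [('add_const', 10), ('threshold', 5), ('invert', None)])
        print(out)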

    Sustainable data management: a review of the issues and a proposal for a performance management approach based on a thematic balanced scorecard

    Get PDF
    Full text freely available at http://aim.asso.fr/index.php/mediatheque/finish/16-aim-2010/8-gestion-durable-des-donnees-point-sur-les-enjeux-et-proposition-d-une-demarche-de-pilotage-de-la-performance-appuyee-sur-un-balanced-scorecard-thematique/0. The objective of this article is to review the issues, stakeholders and key success factors in implementing data management that minimizes environmental impact, and to propose a performance control and steering approach that companies can apply. The article builds on research in which, after a literature review, we analyzed 70 calls for tenders for storage solutions and conducted 12 semi-structured interviews with experts in green IT and in the storage sector, CIO representatives and operations managers. Its main contributions are, on the one hand, to stress the need, beyond the optimization of storage infrastructures, to implement sustainable data governance, whose main principles we define, and on the other hand, to propose, building on the balanced scorecard approach, a performance control and steering framework in this area.

    Contributions to the performance of scientific and embedded computing

    Get PDF
    Habilitation à diriger des recherches (HDR). This document summarizes my research results after twenty years of activity, including the work carried out with three students who defended their doctoral theses: Julien Zory, Youcef Bouchebaba and Mehdi Amini. It gives an overview of the results, organized according to the main motivations that drove their development, namely performance, elegance, experiments and the dissemination of knowledge. The work covers static analysis for compilation or for detecting problems in programs, the generation of communications with polyhedral methods, code generation for hardware accelerators, as well as cryptographic primitives and a distributed algorithm. The document does not, however, give a detailed presentation of the results, for which the reader is referred to the corresponding journal, conference or seminar papers. The four themes addressed are: performance, as most of the work aims at optimizing code performance on various architectures, from distributed-memory supercomputers to graphics cards (GPGPU) and specialized embedded systems; elegance, which is a goal of the design phases, together with finding optimal solutions where possible while remaining practical; experiments, as most of the algorithms presented are implemented, generally in free software, either integrated into large projects such as the PIPS software or distributed independently, so as to run experiments showing the practical interest of the methods; and knowledge, as a large part of my activity is devoted to passing knowledge on to students, professionals and even the general public. The document concludes with a research project presented as a discussion and a set of internship or thesis topics.

    A Field Analysis of Relational Database Schemas in Open-source Software (Extended)

    No full text
    International audience. The relational schemas of 512 open-source projects storing their data in MySQL or PostgreSQL databases are investigated by querying the standard information schema, looking for various issues. These SQL queries are released as the Salix free software. As it is fully relational and relies on standards, it may be installed in any compliant database to help improve schemas. The overall quality of the surveyed schemas is poor: a majority of projects have at least one table without any primary key or unique constraint to identify a tuple; data security features such as referential integrity or transactional back-ends are hardly used; and projects that advertise supporting both databases often have missing tables or attributes. PostgreSQL projects are of better quality than MySQL projects, and the quality is even better for projects with PostgreSQL-only support. However, the difference between the two databases is mostly due to MySQL-specific issues. An overall predictor of bad database quality is that a project chooses MySQL or PHP, while good design is found with PostgreSQL and Java. The few declared constraints allow latent bugs to be detected that are worth fixing: more declarations would certainly help unveil more bugs. Our survey also flags some features of MySQL and PostgreSQL as particularly error-prone. This first survey on the quality of relational schemas in open-source software provides a unique insight into the data engineering practice of these projects.
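    The Salix queries are not reproduced here; the Python sketch below only illustrates the kind of check they perform, listing base tables that declare neither a primary key nor a unique constraint via the standard information schema (the driver, connection string and exact SQL are assumptions, not the Salix code).

        # Illustrative check: base tables with no PRIMARY KEY or UNIQUE constraint.
        # The query targets PostgreSQL; psycopg2 and the DSN are assumed, not Salix.
        import psycopg2

        QUERY = """
        SELECT t.table_schema, t.table_name
        FROM information_schema.tables AS t
        LEFT JOIN information_schema.table_constraints AS c
               ON c.table_schema = t.table_schema
              AND c.table_name   = t.table_name
              AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
        WHERE t.table_type = 'BASE TABLE'
          AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
          AND c.constraint_name IS NULL
        """

        with psycopg2.connect("dbname=mydb") as conn:      # hypothetical database
            with conn.cursor() as cur:
                cur.execute(QUERY)
                for schema, table in cur.fetchall():
                    print(f"no row identifier: {schema}.{table}")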

    On the Quality of Relational Database Schemas in Open-source Software

    No full text
    International audience. The relational schemas of 512 open-source projects storing their data in MySQL or PostgreSQL databases are investigated by querying the standard information schema, looking for overall design issues. The set of SQL queries used in our research is released as the Salix free software. As it is fully relational and relies on standards, it may be installed in any compliant database to help improve schemas. Our research shows that the overall quality of the surveyed schemas is poor: a majority of projects have at least one table without any primary key or unique constraint to identify a tuple; data security features such as referential integrity or transactional back-ends are hardly used; and projects that advertise supporting both databases often have missing tables or attributes. PostgreSQL projects appear to be of higher quality than MySQL projects, and they have been updated more recently, suggesting more active maintenance. Quality is even better for projects with PostgreSQL-only support. However, the quality difference between the two database management systems is mostly due to MySQL-specific issues. An overall predictor of bad database quality is that a project chooses MySQL or PHP, while good design is found with PostgreSQL and Java. The few declared constraints allow latent bugs to be detected that are worth fixing: more declarations would certainly help unveil more bugs. Our survey also suggests that some features of MySQL and PostgreSQL are particularly error-prone. This first survey on the quality of relational schemas in open-source software provides a unique insight into the data engineering practice of these projects.

    Data and Process Abstraction in PIPS Internal Representation

    No full text
    7 pages. International audience. PIPS, a state-of-the-art, source-to-source compilation and optimization platform, has been under development at MINES ParisTech since 1988, and its development is still going strong. Initially designed to perform automatic interprocedural parallelization of Fortran 77 programs, PIPS has been extended over the years to compile HPF (High Performance Fortran), C and Fortran 95 programs. Written in C, the PIPS framework has proven surprisingly resilient, and its analysis and transformation phases have been reused, adapted and extended for new targets, such as generating code for special-purpose hardware accelerators, without requiring significant re-engineering of its core structure. We suggest that one of the key features explaining this adaptability is the PIPS internal representation (IR), which stores an abstract syntax tree. Although fit for source-to-source processing, PIPS IR has emphasized from its origins maximum abstraction over target languages' specificities and generic data structure manipulation services via the Newgen domain-specific language, which provides key features such as type building, automatic serialization and powerful iterators. The state of software technology has advanced significantly over the last 20 years, and many of the pioneering features introduced by Newgen are nowadays present in modern programming frameworks. However, we believe that the methodology used to design the PIPS IR, presented in this paper, remains relevant today and could be put to good use in future compilation platform development projects.
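    As a very loose, Python-only analogy for the combination of declarative node types, automatic serialization and generic iterators attributed to Newgen, consider the sketch below; none of its names or syntax are Newgen's.

        # Tiny dataclass-based IR with a generic iterator and a serializable form;
        # illustrative only, unrelated to Newgen's actual syntax or API.
        from dataclasses import asdict, dataclass, fields, is_dataclass

        @dataclass
        class Expr:
            pass

        @dataclass
        class Constant(Expr):
            value: int

        @dataclass
        class BinaryOp(Expr):
            op: str
            left: Expr
            right: Expr

        def walk(node):
            """Generic pre-order traversal over any dataclass-based IR node."""
            yield node
            if is_dataclass(node):
                for f in fields(node):
                    child = getattr(node, f.name)
                    if isinstance(child, Expr):
                        yield from walk(child)

        tree = BinaryOp('+', Constant(1), BinaryOp('*', Constant(2), Constant(3)))
        print([type(n).__name__ for n in walk(tree)])      # pre-order node names
        print(asdict(tree))                                # nested-dict "serialization"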