281 research outputs found

    Power estimation on functional level for programmable processors

    Get PDF
    In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern digital signal processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is laid on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on the separation of the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. Through an automated, parser-based analysis of the assembler code of the system to be estimated, the input parameters of these arithmetic functions, such as the achieved degree of parallelism or the kind and number of memory accesses, can be computed. The approach is demonstrated and evaluated on two modern digital signal processors using a variety of basic digital signal processing algorithms. The resulting estimates for the inspected algorithms are compared to physically measured values, yielding a maximum estimation error of 3%.
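
    The core idea of FLPA can be written as a simple additive power model; the block-level function below is an illustrative placeholder rather than the exact model fitted in the paper:

    P_{\text{total}} = \sum_{k \in \text{blocks}} f_k(p_1, \dots, p_m), \qquad \text{e.g. } f_{\text{PU}}(\alpha, F) = (a_1 \alpha + a_0)\, F

    where \alpha is the degree of parallelism extracted from the assembler code, F is the clock frequency, and a_0, a_1 are coefficients fitted against measurements of the processing unit.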

    On The Cost of ASIC Hardware Crackers: A SHA-1 Case Study

    Get PDF
    In February 2017, the SHA-1 hashing algorithm was practically broken using an identical-prefix collision attack implemented on a GPU cluster, and in January 2020 a chosen-prefix collision was first computed, with practical implications for various security protocols. These advances opened the door to several research questions, such as the minimal cost of performing these attacks in practice. In particular, one may wonder what the best technology for software/hardware cryptanalysis of such primitives is. In this paper, we address some of these questions by studying the challenges and costs of building an ASIC cluster for performing attacks against a hash function. Our study takes into account different scenarios and includes two cryptanalytic strategies that can be used to find such collisions: a classical generic birthday search, and a state-of-the-art differential attack using neutral bits for SHA-1. We show that for generic attacks, GPUs and ASICs pose a serious practical threat to primitives with a security level of ∼64 bits, with rented GPUs a good solution for a one-off attack, and ASICs more efficient if the attack has to be run a few times. ASICs also pose a non-negligible security risk for primitives with 80-bit security. For differential attacks, GPUs (purchased or rented) are often a very cost-effective choice, but ASICs provide an alternative for organizations that can afford the initial cost and look for a compact, energy-efficient, reusable solution. In the case of SHA-1, we show that an ASIC cluster costing a few million would be able to generate chosen-prefix collisions in a day or even in a minute. This extends the attack surface to TLS and SSH, for which the chosen-prefix collision would need to be generated very quickly.
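
    For orientation on the security levels mentioned above: a generic birthday search for a collision in an n-bit hash needs on the order of

    2^{n/2} \text{ hash evaluations, i.e. } \approx 2^{64} \text{ for } n = 128 \text{ and } \approx 2^{80} \text{ for SHA-1's } n = 160,

    so the total cost of the generic attack scales directly with the cost of one hash evaluation on the chosen platform (GPU or ASIC); the dedicated differential attacks on SHA-1 discussed in the paper are substantially cheaper than this generic bound.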

    Reds for Ed: Class Struggle in the Classroom

    Get PDF
    Utilizing participant observation combined with semi-structured interviews, this ethnographic study analyzes the socio-historical development of the Richmond chapter of the Virginia Caucus of Rank-and-file Educators (VCORE), a left-wing opposition group inside the Virginia Education Association (VEA). It assesses VCORE's politics, origins, growth, transformation, organizational structure, and cultural practices, focusing on the role VCORE members played in the lead-up to and aftermath of the 2020-2022 campaign to reinstate collective bargaining rights for public education employees in Richmond.

    Modellbasiertes Regressionstesten von Varianten und Variantenversionen (Model-Based Regression Testing of Variants and Variant Versions)

    Get PDF
    The quality assurance of software product lines (SPL) achieved via testing is a crucial and challenging activity of SPL engineering. In general, applying single-software testing techniques to SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes, i.e., redundant test-case executions caused by the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality assurance also has to be ensured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy as well as evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it achieves a reduction of the overall effort for testing evolving SPLs.
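
    The selection step can be pictured as intersecting each test case's covered execution dependencies with the change impact computed between two subsequently tested variants. The following minimal Python sketch assumes a simplified data model (plain sets of dependency identifiers) and is not the thesis's exact formalism:

    # Coverage-driven retest selection between two subsequently tested variants.
    # The data model (sets of execution dependencies per test case) is an
    # assumption made for illustration, not the formalism defined in the thesis.
    def retest_selection(test_deps, changed_deps):
        """test_deps: dict test-case id -> set of covered execution dependencies;
        changed_deps: set of dependencies flagged by the change impact analysis."""
        selected = {t for t, deps in test_deps.items() if deps & changed_deps}
        reusable = set(test_deps) - selected  # previous results can be reused
        return selected, reusable

    if __name__ == "__main__":
        deps = {"t1": {"a->b", "b->c"}, "t2": {"c->d"}, "t3": {"a->b"}}
        print(retest_selection(deps, {"b->c"}))  # selects t1; t2 and t3 are reusable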

    An Experimental Evaluation of Datacenter Workloads On Low-Power Embedded Micro Servers

    Get PDF
    This paper presents a comprehensive evaluation of an ultra-low-power cluster built upon Intel Edison based micro servers. The improved performance and high energy efficiency of micro servers have driven both academia and industry to explore the possibility of replacing conventional brawny servers with a larger swarm of embedded micro servers. Existing attempts mostly focus on mobile-class micro servers, whose capacities are similar to those of mobile phones. We, on the other hand, target sensor-class micro servers, which are originally intended for use in wearable technologies, sensor networks, and the Internet of Things. Although sensor-class micro servers have much less capacity, they are touted for minimal power consumption (< 1 Watt), which opens new possibilities for achieving higher energy efficiency in datacenter workloads. Our systematic evaluation of the Edison cluster and comparison to conventional brawny clusters involve careful workload selection and laborious parameter tuning, which ensures maximum server utilization and thus fair comparisons. Results show that the Edison cluster achieves up to a 3.5× improvement in work-done-per-joule for web service applications and data-intensive MapReduce jobs. In terms of scalability, the Edison cluster scales linearly on the throughput of web service workloads, and also shows satisfactory scalability for MapReduce workloads despite coordination overhead. This research was supported in part by NSF grant 13-20209.
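
    The headline metric can be read as a simple ratio, so the reported 3.5× means the Edison cluster completes 3.5 times as much work per joule of consumed energy as the brawny baseline:

    \text{work-done-per-joule} = \frac{\text{completed work (requests or MapReduce tasks)}}{\int P(t)\, dt}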

    A RISC-V-based FPGA Overlay to Simplify Embedded Accelerator Deployment

    Get PDF
    Modern cyber-physical systems (CPS) are increasingly adopting heterogeneous systems-on-chip (HeSoCs) as a computing platform to satisfy the demands of their sophisticated workloads. FPGA-based HeSoCs can reach high performance and energy efficiency at the cost of increased design complexity. High-Level Synthesis (HLS) can ease IP design, but automated tools still lack the maturity to efficiently and easily tackle system-level integration of the many hardware and software blocks included in a modern CPS. We present an innovative hardware overlay offering plug-and-play integration of HLS-compiled or handcrafted acceleration IPs thanks to a customizable wrapper attached to the overlay interconnect and providing shared-memory communication to the overlay cores. The latter are based on the open RISC-V ISA and offer simplified software management of the acceleration IP. Deploying the proposed overlay on a Xilinx ZU9EG shows ≈ 20% LUT usage and a ≈ 4× speedup compared to program execution on the ARM host core.
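
    The wrapper concept amounts to a memory-mapped job interface shared between the overlay cores and the wrapped IP. The register layout, base address and /dev/mem mapping in this Python sketch are purely hypothetical, illustrating only the style of software-side management:

    # Hypothetical memory-mapped interaction with a wrapped accelerator IP.
    import mmap, os, struct

    BASE, SPAN = 0xA0000000, 0x1000                              # assumed wrapper base address
    SRC, DST, LEN, CTRL, STATUS = 0x00, 0x04, 0x08, 0x0C, 0x10   # assumed register offsets

    def run_accelerator(src, dst, n_bytes):
        fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
        regs = mmap.mmap(fd, SPAN, offset=BASE)
        for off, val in ((SRC, src), (DST, dst), (LEN, n_bytes)):
            struct.pack_into("<I", regs, off, val)               # program the job descriptor
        struct.pack_into("<I", regs, CTRL, 1)                    # start the accelerator
        while not struct.unpack_from("<I", regs, STATUS)[0] & 1:
            pass                                                 # poll the (assumed) done bit
        regs.close(); os.close(fd)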

    Towards Power Characterization of FPGA Architectures To Enable Open-Source Power Estimation Using Micro-Benchmarks

    Full text link
    While in the past decade there has been significant progress in open-source synthesis and verification tools and flows, one piece is still missing in the open-source design automation ecosystem: a tool to estimate the power consumption of a design on specific target technologies. We discuss a work-in-progress method to characterize target technologies using generic micro-benchmarks, whose results can be used to establish power models of these target technologies. These models can further be used to predict the power consumption of a design in a given use-case scenario (which is currently out of scope). We demonstrate our characterization method on the publicly documented Lattice iCE40 FPGA technology, and discuss two approaches to generating micro-benchmarks which consume power in the target device: simple lookup table (LUT) instantiation, and a more sophisticated instantiation of ring oscillators. We study three approaches to stimulating the implemented micro-benchmarks in hardware: Verilog testbenches, micro-controller testbenches, and pseudo-random linear-feedback-shift-register (LFSR) based testing. We measure the power consumption of the stimulated target devices. Our ultimate goal is to automate power measurements for technology characterization; currently, we manually measure the consumed power at three shunt resistors using an oscilloscope. Preliminary results indicate that we are able to induce variable power consumption in target devices; however, the sensitivity of the power characterization is still too low to build expressive power estimation models. Presented at the 3rd Workshop on Open-Source Design Automation (OSDA), 2023 (arXiv:2303.18024).
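
    Two pieces of the flow are easy to make concrete: an LFSR as pseudo-random stimulus source and the power computation at a shunt resistor. The taps, shunt value and voltages in this sketch are generic examples, not the ones used in the measurements:

    # 16-bit maximal-length Fibonacci LFSR (taps 16, 14, 13, 11), illustrating
    # the kind of pseudo-random stimulus applied to the micro-benchmarks.
    def lfsr16(seed=0xACE1):
        state = seed
        while True:
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            yield state

    # Approximate rail power from a shunt measurement: I = V_shunt / R_shunt, P ≈ I * V_supply.
    def rail_power(v_shunt, v_supply, r_shunt=0.1):
        return (v_shunt / r_shunt) * v_supply

    gen = lfsr16()
    print([hex(next(gen)) for _ in range(4)], rail_power(0.012, 1.2))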

    Revolutionary Aeropropulsion Concept for Sustainable Aviation: Turboelectric Distributed Propulsion

    Get PDF
    In response to growing aviation demands and concerns about the environment and energy usage, a team at NASA proposed and examined a revolutionary aeropropulsion concept, a turboelectric distributed propulsion system, which employs multiple electric motor-driven propulsors distributed on a large transport vehicle. The power to drive these electric propulsors is generated by separately located gas-turbine-driven electric generators on the airframe. This arrangement enables the use of many small distributed propulsors, allowing a very high effective bypass ratio while retaining the superior efficiency of large core engines, which are physically separated from but connected to the propulsors through electric power lines. Because of the physical separation of propulsors from power-generating devices, a new class of vehicles with unprecedented performance employing such a revolutionary propulsion system becomes possible in vehicle design. One such vehicle currently being investigated by NASA, called the "N3-X", uses a hybrid wing body for its airframe and superconducting generators, motors, and transmission lines for its propulsion system. On the N3-X these new degrees of design freedom are used (1) to place two large turboshaft engines driving generators in freestream conditions to minimize total pressure losses and (2) to embed a broad continuous array of 14 motor-driven fans on the upper surface of the aircraft near the trailing edge of the hybrid-wing-body airframe to maximize propulsive efficiency by ingesting the thick airframe boundary-layer flow. Through a system analysis of the engine cycle and weight estimation, it was determined that the N3-X would be able to achieve a reduction of 70% or 72% (depending on the cooling system) in energy usage relative to the reference aircraft, a Boeing 777-200LR. Since a high-power electric system is used for propulsion, a study of the electric power distribution system was performed to identify critical dynamic and safety issues. This paper presents some of the features and issues associated with the turboelectric distributed propulsion system and summarizes the recent study results, including the high-power electric distribution, in the analysis of the N3-X vehicle.

    The semantics of N-soft sets, their applications, and a coda about three-way decision

    Get PDF
    This paper presents the first detailed analysis of the semantics of N-soft sets. The two benchmark semantics associated with soft sets are perfect fits for N-soft sets. We argue that N-soft sets also allow for an utterly new interpretation in logical terms, whereby they can be viewed as a generalized form of incomplete soft sets. Applications include aggregation strategies for these settings. Finally, three-way decision models are designed with both a qualitative and a quantitative character. The first is based on the concepts of V-kernel, V-core and V-support. The second uses an extended form of cardinality that is reminiscent of the idea of the scalar sigma-count as a proxy for the cardinality of a fuzzy set.
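
    For reference, the scalar sigma-count that the quantitative model is reminiscent of is the usual cardinality proxy for a fuzzy set A over a finite universe X:

    \Sigma\text{count}(A) = \sum_{x \in X} \mu_A(x)

    i.e. the membership degrees are simply summed; the paper's extended cardinality plays the analogous role for N-soft sets.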

    Measuring covariation between preference parameters: A simulation study

    Full text link
    Much of the empirical success of Rank-Dependent Expected Utility Theory and Cumulative Prospect Theory is due to the fact that they allow for nonlinearity towards both outcomes (through the utility function) and probabilities (through the probability weighting function). Since risk attitude is jointly determined by the shapes of the two functions, it would be instructive to measure how the degree of risk aversion incorporated in the utility function empirically covaries with its counterpart from the probability weighting function. We conduct a large-scale simulation to assess whether an elicitation procedure based on the trade-off method, which essentially equals that used in recent empirical studies, allows the quantity of interest to be measured reliably. We find a strong systematic distortion of measurement, which points to the limitations of the presently available elicitation techniques.
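
    The interplay the simulation probes shows up directly in the standard rank-dependent evaluation of a gains lottery with outcomes x_1 \ge \dots \ge x_n and probabilities p_1, \dots, p_n:

    RDU = \sum_{i=1}^{n} \left[ w\Big(\sum_{j \le i} p_j\Big) - w\Big(\sum_{j < i} p_j\Big) \right] u(x_i)

    so any elicited risk attitude mixes the curvature of u with the shape of w, which is why the covariation between the two sets of estimated parameters matters for the trade-off-method elicitation studied here.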