    A Detailed Analysis of Contemporary ARM and x86 Architectures

    RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement-based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.
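
    The study above is about measured design points rather than any single formula, but the basic bookkeeping behind such comparisons is easy to illustrate. The C sketch below, with placeholder numbers that are not taken from the study, computes total energy and energy-delay product from a measured runtime and average power, the kind of metric cross-ISA comparisons commonly report.

        /* Minimal sketch, not the paper's methodology: comparing two measured
         * design points by total energy and energy-delay product (EDP).
         * The numbers are placeholders, not measurements from the study. */
        #include <stdio.h>

        struct datapoint {
            const char *name;
            double runtime_s;   /* measured execution time, seconds */
            double avg_power_w; /* measured average power, watts */
        };

        int main(void) {
            struct datapoint points[] = {
                { "core A (hypothetical)", 12.0, 0.6 },
                { "core B (hypothetical)",  3.0, 4.0 },
            };
            for (int i = 0; i < 2; i++) {
                double energy_j = points[i].avg_power_w * points[i].runtime_s;
                double edp      = energy_j * points[i].runtime_s;
                printf("%s: energy = %.2f J, EDP = %.2f J*s\n",
                       points[i].name, energy_j, edp);
            }
            return 0;
        }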

    Programming MPSoC platforms: Road works ahead

    This paper summarizes a special session on multicore/multi-processor system-on-chip (MPSoC) programming challenges. The current trend towards MPSoC platforms in most computing domains does not only mean a radical change in computer architecture. Even more important from a SW developer's viewpoint, the classical sequential von Neumann programming model needs to be overcome at the same time. Efficient utilization of the MPSoC HW resources demands radically new models and corresponding SW development tools, capable of exploiting the available parallelism and guaranteeing bug-free parallel SW. While several standards are established in the high-performance computing domain (e.g. OpenMP), it is clear that more innovations are required for successful deployment of heterogeneous embedded MPSoCs. On the other hand, at least for the coming years, the freedom for disruptive programming technologies is limited by the huge amount of certified sequential code, which demands a more pragmatic, gradual tool and code replacement strategy.
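
    OpenMP, cited above as an established standard in the high-performance computing domain, gives a feel for the shared-memory parallel programming model the authors contrast with embedded MPSoC needs. The following minimal C sketch (a generic example, not an MPSoC-specific solution) parallelizes a loop and combines per-thread partial sums with a reduction clause.

        /* Minimal OpenMP sketch of the shared-memory parallelism the abstract
         * cites as established in HPC; build with e.g. gcc -fopenmp. */
        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        int main(void) {
            static double a[N], b[N];
            double sum = 0.0;

            /* Iterations are split across threads; the reduction clause
             * combines the per-thread partial sums without data races. */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++) {
                a[i] = 0.5 * i;
                b[i] = 2.0 * i;
                sum += a[i] * b[i];
            }

            printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
            return 0;
        }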

    The AURORA Gigabit Testbed

    AURORA is one of five U.S. networking testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit per second or higher bandwidths. The emphasis of the AURORA testbed, distinct from the other four testbeds, BLANCA, CASA, NECTAR, and VISTANET, is research into the supporting technologies for gigabit networking. Like the other testbeds, AURORA itself is an experiment in collaboration, where government initiative (in the form of the Corporation for National Research Initiatives, which is funded by DARPA and the National Science Foundation) has spurred interaction among pre-existing centers of excellence in industry, academia, and government. AURORA has been charged with research into networking technologies that will underpin future high-speed networks. This paper provides an overview of the goals and methodologies employed in AURORA, and points to some preliminary results from our first year of research, ranging from analytic results to experimental prototype hardware. This paper enunciates our targets, which include new software architectures, network abstractions, and hardware technologies, as well as applications for our work.

    An Overview of the AURORA Gigabit Testbed

    AURORA is one of five U.S. testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit per second or higher bandwidths. AURORA is also an experiment in collaboration, where government support (through the Corporation for National Research Initiatives, which is in turn funded by DARPA and the NSF) has spurred interaction among centers of excellence in industry, academia, and government. The emphasis of the AURORA testbed, distinct from the other four testbeds, is research into the supporting technologies for gigabit networking. Our targets include new software architectures, network abstractions, hardware technologies, and applications. This paper provides an overview of the goals and methodologies employed in AURORA, and reports preliminary results from our first year of research.

    Citadel: Enclaves with Strong Microarchitectural Isolation and Secure Shared Memory on a Speculative Out-of-Order Processor

    We present Citadel, to our knowledge, the first enclave platform with strong microarchitectural isolation to run realistic secure programs on a speculative out-of-order multicore processor. First, we develop a new hardware mechanism to enable secure shared memory while defending against transient execution attacks by blocking speculative accesses to shared memory. Then, we develop an efficient dynamic cache partitioning scheme, improving both enclaves' and unprotected processes' performance. We conduct an in-depth security analysis and a performance evaluation of our new mechanisms. Finally, we build the hardware and software infrastructure required to run our secure enclaves. Our multicore processor runs on an FPGA and boots untrusted Linux from which users can securely launch and interact with enclaves. We open-source our end-to-end hardware and software infrastructure, hoping to spark more research and bridge the gap between conceptual proposals and FPGA prototypes.
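
    The abstract does not detail the partitioning mechanism, so the following C sketch is only an assumed illustration of way-based cache partitioning bookkeeping, not Citadel's actual scheme: each security domain is restricted to a disjoint subset of the ways of a set-associative cache, expressed as a bitmask consulted on allocation.

        /* Assumed illustration of way-based cache partitioning bookkeeping,
         * not Citadel's actual mechanism. Each domain may only allocate into
         * the ways named by its bitmask, so cache lines of the enclave and of
         * untrusted code never compete for the same ways. */
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_WAYS 16u

        static uint16_t way_mask[2];   /* [0] enclave domain, [1] untrusted */

        static void partition(unsigned enclave_ways) {
            if (enclave_ways > NUM_WAYS)
                enclave_ways = NUM_WAYS;
            uint16_t enclave = (uint16_t)((1u << enclave_ways) - 1u);
            way_mask[0] = enclave;               /* low ways for the enclave */
            way_mask[1] = (uint16_t)~enclave;    /* remaining ways for the rest */
        }

        static int may_allocate(int domain, unsigned way) {
            return (way_mask[domain] >> way) & 1u;
        }

        int main(void) {
            partition(4);   /* e.g. reserve 4 of 16 ways for the enclave */
            printf("enclave may use way 2:   %d\n", may_allocate(0, 2));
            printf("untrusted may use way 2: %d\n", may_allocate(1, 2));
            return 0;
        }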

    Closed Terminologies and Temporal Reasoning in Description Logic for Concept and Plan Recognition

    Description logics are knowledge representation formalisms in the tradition of frames and semantic networks, but with an emphasis on formal semantics. A terminology contains descriptions of concepts, such as UNIVERSITY, which are automatically classified in a taxonomy via subsumption inferences. Individuals such as COLUMBIA are described in terms of those concepts. This thesis enhances the scope and utility of description logics by exploiting new completeness assumptions during problem solving and by extending the expressiveness of descriptions. First, we introduce a predictive concept recognition methodology based on a new closed terminology assumption (CTA). The terminology is dynamically partitioned by modalities (necessary, optional, and impossible) with respect to individuals as they are specified. In our interactive configuration application, a user incrementally specifies an individual computer system and its components in collaboration with a configuration engine. Choices can be made in any order and at any level of abstraction. We distinguish between abstract and concrete concepts to formally define when an individual's description may be considered finished. We also exploit CTA, together with the terminology's subsumption-based organization, to efficiently track the types of systems and components consistent with current choices, infer additional constraints on current choices, and appropriately restrict future choices. Thus, we can help focus the efforts of both user and configuration engine. This work is implemented in the K-REP system. Second, we show that a new class of complex descriptions can be formed via constraint networks over standard descriptions. For example, we model plans as constraint networks whose nodes represent actions. Arcs represent qualitative and metric temporal constraints, plus co-reference constraints, between actions. By combining terminological reasoning with constraint satisfaction techniques, subsumption is extended to constraint networks, allowing automatic classification of a plan library. This work is implemented in the T-REX system, which integrates and builds upon an existing description logic system (K-REP or CLASSIC) and temporal reasoner (MATS). Finally, we combine the preceding, orthogonal results to conduct predictive recognition of constraint network concepts. As an example, this synthesis enables a new approach to deductive plan recognition, illustrated with travel plans. This work is also realized in T-REX.
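
    Subsumption is the central inference here. As a toy illustration only, with purely conjunctive concepts encoded as bitmasks of atomic restrictions (far simpler than the logics used in K-REP, CLASSIC, or T-REX), the C sketch below shows why a more general concept subsumes a more specific one when all of its restrictions are included.

        /* Toy illustration only: purely conjunctive concepts encoded as
         * bitmasks of atomic restrictions, far simpler than the description
         * logics used in K-REP, CLASSIC, or T-REX. Concept C subsumes D when
         * every restriction of C also appears in D, i.e. D is at least as
         * specific as C. */
        #include <stdint.h>
        #include <stdio.h>

        enum {
            HAS_CAMPUS     = 1u << 0,
            GRANTS_DEGREES = 1u << 1,
            LOCATED_IN_NYC = 1u << 2
        };

        static int subsumes(uint32_t c, uint32_t d) {
            return (c & d) == c;   /* all of C's restrictions hold in D */
        }

        int main(void) {
            uint32_t university     = HAS_CAMPUS | GRANTS_DEGREES;
            uint32_t nyc_university = HAS_CAMPUS | GRANTS_DEGREES | LOCATED_IN_NYC;

            printf("UNIVERSITY subsumes NYC-UNIVERSITY: %d\n",
                   subsumes(university, nyc_university));   /* 1: more general */
            printf("NYC-UNIVERSITY subsumes UNIVERSITY: %d\n",
                   subsumes(nyc_university, university));   /* 0 */
            return 0;
        }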

    From specialized to core course in Telecommunications degree: Experiences from digital electronic design and verification

    [EN] The European Higher Education Area (EHEA) defines the competences for professional practice of a Telecommunications Engineer. The School of Telecommunication Engineering of the Universitat Politècnica de València (Valencia, Spain) provides an integrated education program consisting of a Graduate (GITST) + Master (MUIT). The GITST course offers four specialization tracks: Electronics, Telematics, Communication Systems, and Multimedia, for the proper acquisition of knowledge and competences of the future Telecommunications Engineers. In 2018, the graduate program implemented a structural change in the organization of subjects to reinforce important skills, in which a course on digital electronics design and verification (Integration of Digital Systems, ISDIGI) was transformed into a core subject of the study plan. In this paper, we describe the methodology and adaptation of ISDIGI (i.e. a project-based learning intermediate HDL course that includes design and verification abilities) to the new GITST Curriculum. In addition, this paper describes the process of moving from a specialized to a core subject. Martínez Millana, A.; Liberos Mascarell, A.; Monzó Ferrer, JM.; Martínez Peiró, MA.; Martínez Pérez, JD.; Gadea Gironés, R. (2020). From specialized to core course in Telecommunications degree: Experiences from digital electronic design and verification. Editorial Universitat Politècnica de València. 229-238. https://doi.org/10.4995/INN2019.2019.10133

    Development of a benchmark suite for large vector architectures into a continuous integration workflow

    In the High-Performance Computing world, the processor is essential. In recent years, Europe has devoted a lot of effort to promoting European technology. The European Processor Initiative stems from this effort. As part of the initiative, multiple processors are being developed, some implementing the RISC-V architecture, an open-source ISA. During the development of a processor, tools are fundamental to ease testing and automate tasks. This final degree project focuses on improving a Continuous Integration pipeline used to detect bugs in a Field Programmable Gate Array (FPGA) and in Linux environments, emulating end-user behaviour.
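
    As an assumed illustration of what such a pipeline runs, and not part of the project itself, the C sketch below shows a self-checking benchmark: it times a placeholder kernel, verifies the result, and signals failure through its exit status, which is how a Continuous Integration stage typically flags a regression.

        /* Assumed illustration, not part of the project above: a self-checking
         * benchmark a CI job can run. It times a placeholder kernel, verifies
         * the result, and reports failure through its exit status so the
         * pipeline can flag wrong results or gross slowdowns. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 20)

        int main(void) {
            static double v[N];
            double sum = 0.0;
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < N; i++) {   /* placeholder vector kernel */
                v[i] = 1.0;
                sum += v[i];
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (double)(t1.tv_sec - t0.tv_sec)
                        + (double)(t1.tv_nsec - t0.tv_nsec) * 1e-9;
            printf("elapsed: %.6f s, sum: %.1f\n", secs, sum);

            /* Placeholder thresholds; a nonzero exit makes the CI stage fail. */
            if (sum != (double)N || secs > 10.0)
                return EXIT_FAILURE;
            return EXIT_SUCCESS;
        }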

    A Verified Information-Flow Architecture

    SAFE is a clean-slate design for a highly secure computer system, with pervasive mechanisms for tracking and limiting information flows. At the lowest level, the SAFE hardware supports fine-grained programmable tags, with efficient and flexible propagation and combination of tags as instructions are executed. The operating system virtualizes these generic facilities to present an information-flow abstract machine that allows user programs to label sensitive data with rich confidentiality policies. We present a formal, machine-checked model of the key hardware and software mechanisms used to dynamically control information flow in SAFE and an end-to-end proof of noninterference for this model. We use a refinement proof methodology to propagate the noninterference property of the abstract machine down to the concrete machine level. We use an intermediate layer in the refinement chain that factors out the details of the information-flow control policy and devise a code generator for compiling such information-flow policies into low-level monitor code. Finally, we verify the correctness of this generator using a dedicated Hoare logic that abstracts from low-level machine instructions into a reusable set of verified structured code generators.
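
    The tagging model can be illustrated in software even though SAFE implements it in hardware. The C sketch below is an assumed, simplified illustration using a two-point confidentiality lattice, not SAFE's actual tag rules or monitor code: the result of each operation carries the join of its operands' labels.

        /* Assumed, simplified illustration of dynamic tag propagation on a
         * two-point confidentiality lattice (PUBLIC <= SECRET); SAFE's actual
         * tag rules and monitor code are far richer. The result of an
         * operation carries the join (least upper bound) of its operands'
         * labels. */
        #include <stdio.h>

        typedef enum { PUBLIC = 0, SECRET = 1 } label_t;

        typedef struct {
            int     value;
            label_t label;
        } tagged_t;

        static label_t join(label_t a, label_t b) {
            return a > b ? a : b;   /* least upper bound in the lattice */
        }

        static tagged_t tagged_add(tagged_t a, tagged_t b) {
            tagged_t r = { a.value + b.value, join(a.label, b.label) };
            return r;
        }

        int main(void) {
            tagged_t salary = { 90000, SECRET };
            tagged_t bonus  = {  5000, PUBLIC };
            tagged_t total  = tagged_add(salary, bonus);

            printf("total = %d, label = %s\n", total.value,
                   total.label == SECRET ? "SECRET" : "PUBLIC");
            return 0;
        }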