
    Design of multimedia processor based on metric computation

    Media-processing applications, such as signal processing, 2D and 3D graphics rendering, and image compression, are the dominant workloads in many embedded systems today. The real-time constraints of these applications place taxing demands on processor performance while requiring low cost, low power, and short design cycles. A fast and efficient strategy for meeting these challenges is to upgrade a low-cost general-purpose processor core: a generic RISC core is customized according to the requirements of the target multimedia application. If the extra cost is justified, the general-purpose processor (GPP) core can be reinforced with instruction-level coprocessors, coarse-grain dedicated hardware, ad hoc memories, or additional GPP cores, so that the final design is tailored to the application. The proposed approach consists of three main steps: first, analysis of the target application using efficient metrics; second, selection of an appropriate architecture template based on the results and recommendations of the first step; third, generation of the architecture. The approach is evaluated on various image and video algorithms, demonstrating its feasibility.
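
    As a rough illustration of the first two steps, the sketch below (Python, with invented metric names, thresholds, and instruction-mix profile, since the abstract does not specify the actual metrics) computes workload metrics from a dynamic instruction profile and maps them to a coarse architecture template.

        # Hypothetical sketch of the metric-driven flow: compute workload
        # metrics from an instruction-mix profile, then pick an architecture
        # template. Metric names and thresholds are illustrative only.

        def compute_metrics(profile):
            """profile: dict mapping instruction class -> dynamic count."""
            total = sum(profile.values())
            return {
                "mac_ratio": profile.get("mac", 0) / total,         # multiply-accumulate density
                "mem_ratio": profile.get("load_store", 0) / total,  # memory pressure
                "branch_ratio": profile.get("branch", 0) / total,   # control-flow intensity
            }

        def select_template(metrics):
            """Map metrics to a coarse architecture template (illustrative rules)."""
            if metrics["mac_ratio"] > 0.30:
                return "GPP + coarse-grain DSP coprocessor"
            if metrics["mem_ratio"] > 0.40:
                return "GPP + ad hoc scratchpad memory"
            return "baseline RISC GPP core"

        profile = {"mac": 4200, "load_store": 2500, "branch": 800, "alu": 2500}
        print(select_template(compute_metrics(profile)))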

    A New Approach for Quality Management in Pervasive Computing Environments

    This paper presents an extension of MDA called Context-aware Quality Model Driven Architecture (CQ-MDA), which can be used for quality control in pervasive computing environments. The proposed CQ-MDA approach is based on ContextualArchRQMM (Contextual ARCHitecture Quality Requirement MetaModel) and, as an extension of the MDA, allows quality and resource awareness to be considered throughout the design process. The contributions of this paper are a meta-model for architecture quality control of context-aware applications and a model-driven approach that separates architecture concerns from context and quality concerns and configures reconfigurable software architectures of distributed systems. The utility of the approach is demonstrated on a videoconference system.
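
    The separation of concerns at the heart of CQ-MDA can be sketched roughly as follows; the class and attribute names are hypothetical stand-ins for the (much richer) ContextualArchRQMM metamodel, using the paper's videoconference scenario for flavor.

        # Minimal sketch: architecture, context, and quality are modeled
        # independently and only combined at (re)configuration time.
        # All names and thresholds here are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class Context:                  # observed runtime environment
            bandwidth_kbps: int
            battery_pct: int

        @dataclass
        class QualityRequirement:       # desired quality level
            min_video_quality: str      # e.g. "low", "high"

        @dataclass
        class ArchitectureConfig:       # a deployable architecture variant
            codec: str
            resolution: str

        def reconfigure(ctx: Context, q: QualityRequirement) -> ArchitectureConfig:
            """Pick an architecture variant satisfying quality under the context."""
            if ctx.bandwidth_kbps < 500 or ctx.battery_pct < 20:
                return ArchitectureConfig(codec="h264_low", resolution="480p")
            if q.min_video_quality == "high":
                return ArchitectureConfig(codec="h264_high", resolution="1080p")
            return ArchitectureConfig(codec="h264_main", resolution="720p")

        print(reconfigure(Context(300, 80), QualityRequirement("high")))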

    An Effective AMS Top-Down Methodology Applied to the Design of a Mixed-Signal UWB System-on-Chip

    The design of Ultra-Wideband (UWB) mixed-signal SoCs for localization applications in wireless personal area networks is currently investigated by several research groups. The complexity of the design calls for effective top-down methodologies. We propose a layered approach based on VHDL-AMS for the early design stages and on an intelligent use of a circuit-level simulator for the transistor-level phase: the latter is applied to one block at a time, which is wrapped within the system-level VHDL-AMS description. This method captures the impact of circuit-level design choices and non-idealities on system performance. To demonstrate the effectiveness of the methodology, we show how design refinement affects specific UWB system parameters such as bit-error rate and localization estimates.
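
    The block-wise refinement idea can be caricatured in a few lines: keep the system model abstract, swap in a non-ideal model for one block at a time, and measure the impact on bit-error rate. The Python stand-in below (the paper uses VHDL-AMS plus a circuit-level simulator; the noise and offset figures are invented) models a threshold detector whose circuit-level offset degrades the BER of a binary link.

        # Monte-Carlo BER estimate with one refined block: the detector's
        # 'offset' parameter stands in for a transistor-level non-ideality.
        import random

        def detector(sample, offset=0.0):
            """Threshold detector; 'offset' models a circuit-level non-ideality."""
            return 1 if sample > offset else 0

        def ber(n_bits, noise_sigma, offset):
            errors = 0
            for _ in range(n_bits):
                bit = random.randint(0, 1)
                sample = (1.0 if bit else -1.0) + random.gauss(0.0, noise_sigma)
                errors += detector(sample, offset) != bit
            return errors / n_bits

        print("ideal detector:    BER =", ber(100_000, 0.5, 0.0))
        print("with 0.2 V offset: BER =", ber(100_000, 0.5, 0.2))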

    Parallel Architectures for Many-Core Systems-On-Chip in Deep Sub-Micron Technology

    Despite the many issues faced in the past, the evolution of silicon technology has kept its constant pace, and today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable with the many-core paradigm is limited by several factors: memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computational capability. Moreover, the huge HW/SW design space requires accurate and flexible tools for architectural exploration and for validating design choices. This thesis addresses these aspects: a flexible and accurate virtual platform targeting a reference many-core architecture has been developed and used for architectural exploration, focusing on the instruction-caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the ultra-low-power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. However, the physical characteristics of modern deep sub-micron technology severely limit the performance and reliability of such designs. Reliability becomes a major obstacle when operating in NTC: memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, to improve energy efficiency through aggressive voltage scaling whenever workload requirements allow it. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture; by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
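
    A toy model (all constants invented; the thesis itself reports measured figures) illustrates why near-threshold operation can pay off: dynamic energy scales as V^2, but as V approaches the threshold voltage the clock frequency collapses and leakage energy per operation grows, so total energy per operation has a minimum near the threshold.

        # Illustrative energy-per-operation model for voltage scaling.
        # Constants are assumptions, not measurements from the thesis.

        C_EFF = 1e-9     # effective switched capacitance per op (F), assumed
        V_TH = 0.35      # threshold voltage (V), assumed
        P_LEAK = 5e-2    # leakage power (W), assumed

        def energy_per_op(v):
            freq = max(v - V_TH, 1e-3) * 1e9   # crude alpha-power-law stand-in (Hz)
            dynamic = C_EFF * v * v            # E_dyn = C * V^2
            leakage = (P_LEAK * v) / freq      # leakage energy grows as f drops
            return dynamic + leakage

        # Energy falls from nominal voltage down toward the threshold,
        # then rises again as leakage dominates the slower clock.
        for v in (1.1, 0.9, 0.7, 0.5, 0.4):
            print(f"V = {v:.1f} V -> {energy_per_op(v) * 1e9:.3f} nJ/op")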

    Resource Optimized Quantum Architectures for Surface Code Implementations of Magic-State Distillation

    Quantum computers capable of solving classically intractable problems are under construction, and intermediate-scale devices are approaching completion. Current efforts to design large-scale devices require allocating immense resources to error correction, with the majority dedicated to the production of high-fidelity ancillary states known as magic states. Leading techniques focus on dedicating a large, contiguous region of the processor as a single "magic-state distillation factory" responsible for meeting the magic-state demands of applications. In this work we design and analyze a set of optimized factory architectural layouts that divide a single factory into spatially distributed factories located throughout the processor. We find that distributed factory architectures minimize the space-time volume overhead imposed by distillation, and that the number of distributed components in each optimal configuration is sensitive to application characteristics and underlying physical device error rates. More specifically, the rate at which T gates are demanded by an application has a significant impact on the optimal distillation architecture. We develop an optimization procedure that discovers the optimal number of factory distillation rounds and the number of output magic states per factory, as well as an overall system architecture that interacts with the factories. This yields between a 10x and 20x resource reduction compared to commonly accepted single-factory designs. Performance is analyzed across representative application classes such as quantum simulation and quantum chemistry.
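
    The flavor of the optimization can be sketched as a one-dimensional sweep: for each candidate number of distributed factories, estimate the space-time volume (qubits × rounds) and keep the minimum. The cost model below is invented for illustration; the paper derives its costs from surface-code distillation layouts and device error rates.

        # Sweep the number of distributed factories and minimize an
        # illustrative space-time volume: area grows with n, while T-gate
        # stalls (service latency plus routing distance) shrink with n.
        import math

        FACTORY_QUBITS = 1000     # qubits per factory (assumed)
        FACTORY_LATENCY = 100     # rounds per magic state per factory (assumed)
        ROUTE_COST = 200          # routing penalty, shrinks as factories spread out
        CHIP_QUBITS = 10_000      # qubits devoted to computation (assumed)
        WORKLOAD_ROUNDS = 1e6     # baseline application runtime in rounds (assumed)

        def spacetime_volume(n, t_rate):
            service = FACTORY_LATENCY / n + ROUTE_COST / math.sqrt(n)
            stall = max(t_rate * service - 1.0, 0.0)   # relative T-gate slowdown
            runtime = WORKLOAD_ROUNDS * (1.0 + stall)
            return (CHIP_QUBITS + n * FACTORY_QUBITS) * runtime

        demand = 0.05  # T gates demanded per round (assumed)
        best = min(range(1, 33), key=lambda n: spacetime_volume(n, demand))
        print("optimal number of distributed factories:", best)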

    Multi-Criteria Analysis in Compound Decision Processes. The AHP and the Architectural Competition for the Chamber of Deputies in Rome (Italy)

    In 1967, a national architectural competition was announced for a preliminary project proposal aimed at the realization of the new building for the Chamber of Deputies in Rome. The outcome of that competition was unusual: eighteen projects were declared joint winners, and consequently no single winner was selected. With reference to that event, this research examines the usefulness of the evaluation tools currently employed in such processes and highlights the positive effects that one of these techniques would have had in supporting the identification of the "winning" project. A hypothetical re-examination of the competition's decision process through the Analytic Hierarchy Process (AHP) is therefore developed, analyzing the effect that the outputs of this technique would have had on the final decision. In addition to confirming the usefulness of evaluation tools for compound and conflicting decision processes, the results of this experiment lead to a further understanding of the socio-cultural dynamics behind the original outcome of the competition.
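
    The core AHP computation is standard and easy to reproduce: derive priority weights from a pairwise-comparison matrix via its principal eigenvector and check Saaty's consistency ratio. The matrix below is illustrative, not the judgments used in the study.

        # AHP priority weights and consistency check for a 3-criteria example.
        import numpy as np

        # Pairwise comparisons of three hypothetical criteria on Saaty's 1-9 scale
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                  # principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                     # normalized priorities

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)         # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
        print("weights:", weights.round(3), " CR =", round(ci / ri, 3))

    A consistency ratio below 0.1 is conventionally taken to mean the judgments are coherent enough to trust the resulting ranking.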

    Bézier curves that are close to elastica

    We study the problem of identifying those cubic Bézier curves that are close in the L2 norm to planar elastic curves. The problem arises in design situations where the manufacturing process produces elastic curves, which are difficult to work with in a digital environment; we seek a sub-class of special Bézier curves as a proxy. We identify an easily computable quantity, which we call the lambda-residual, that accurately predicts a small L2 distance. We then identify geometric criteria on the control polygon that guarantee that a Bézier curve has lambda-residual below 0.4, which effectively implies that the curve is within 1 percent of its arc length, in the L2 norm, of an elastic curve. Finally we give two projection algorithms that take an input Bézier curve and adjust its length and shape, whilst keeping the end-points and end-tangent angles fixed, until it is close to an elastic curve.
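
    The distance measure underlying these results is straightforward to approximate numerically. The sketch below samples two planar curves at matched parameters and estimates their L2 distance; the reference curve is a placeholder, since computing a true elastica (or the paper's lambda-residual) requires the authors' machinery.

        # Evaluate a cubic Bézier and approximate the L2 distance between
        # two parametric curves over [0, 1] by the midpoint rule.
        import math

        def bezier(p, t):
            """Evaluate a cubic Bézier with control points p at parameter t."""
            u = 1.0 - t
            x = u**3*p[0][0] + 3*u*u*t*p[1][0] + 3*u*t*t*p[2][0] + t**3*p[3][0]
            y = u**3*p[0][1] + 3*u*u*t*p[1][1] + 3*u*t*t*p[2][1] + t**3*p[3][1]
            return x, y

        def l2_distance(f, g, n=1000):
            """Approximate the L2 norm of f - g over [0, 1]."""
            s = 0.0
            for i in range(n):
                t = (i + 0.5) / n
                (fx, fy), (gx, gy) = f(t), g(t)
                s += ((fx - gx)**2 + (fy - gy)**2) / n
            return math.sqrt(s)

        ctrl = [(0, 0), (1, 1), (2, 1), (3, 0)]
        ref = lambda t: (3*t, 0.7*math.sin(math.pi*t))   # placeholder reference curve
        print("L2 distance:", l2_distance(lambda t: bezier(ctrl, t), ref))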