5,934 research outputs found

    Chisel: Reliability- and Accuracy-Aware Optimization of Approximate Computational Kernels

    The accuracy of an approximate computation is the distance between the result that the computation produces and the corresponding fully accurate result. The reliability of the computation is the probability that it will produce an acceptably accurate result. Emerging approximate hardware platforms provide approximate operations that, in return for reduced energy consumption and/or increased performance, exhibit reduced reliability and/or accuracy. We present Chisel, a system for reliability- and accuracy-aware optimization of approximate computational kernels that run on approximate hardware platforms. Given a combined reliability and/or accuracy specification, Chisel automatically selects approximate kernel operations to synthesize an approximate computation that minimizes energy consumption while satisfying its reliability and accuracy specification. We evaluate Chisel on five applications from the image processing, scientific computing, and financial analysis domains. The experimental results show that our implemented optimization algorithm enables Chisel to optimize our set of benchmark kernels to obtain energy savings from 8.7% to 19.8% compared to the fully reliable kernel implementations while preserving important reliability guarantees.

    National Science Foundation (U.S.) (Grants CCF-1036241, CCF-1138967, IIS-0835652); United States Dept. of Energy (Grant DE-SC0008923); United States Defense Advanced Research Projects Agency (Grants FA8650-11-C-7192, FA8750-12-2-0110, FA-8750-14-2-0004)
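    To make the optimization problem concrete, the sketch below selects, by exhaustive search over an invented three-operation kernel, which operations to approximate so that energy is minimized while a product-form reliability estimate stays above the specification. The numbers, the reliability model, and the search are illustrative stand-ins for Chisel's actual formulation and optimization algorithm.

    // Hypothetical sketch: each kernel operation has an exact variant and an
    // approximate variant that saves energy but is correct only with some
    // probability. Pick the variant per operation that minimizes total energy
    // while the product of per-operation reliabilities meets the spec.
    #include <cstdio>
    #include <vector>

    struct Op {
        double exact_energy;   // energy of the fully reliable variant
        double approx_energy;  // energy of the approximate variant
        double approx_rel;     // probability the approximate variant is correct
    };

    int main() {
        std::vector<Op> ops = {  // invented example numbers
            {10.0, 6.0, 0.995}, {8.0, 5.0, 0.999}, {12.0, 7.0, 0.990}};
        const double rel_spec = 0.99;  // reliability specification

        double best_energy = 1e300;
        unsigned best_mask = 0;  // bit i set = operation i runs approximately
        for (unsigned mask = 0; mask < (1u << ops.size()); ++mask) {
            double energy = 0.0, rel = 1.0;
            for (size_t i = 0; i < ops.size(); ++i) {
                if (mask & (1u << i)) {
                    energy += ops[i].approx_energy;
                    rel *= ops[i].approx_rel;
                } else {
                    energy += ops[i].exact_energy;
                }
            }
            if (rel >= rel_spec && energy < best_energy) {
                best_energy = energy;
                best_mask = mask;
            }
        }
        std::printf("best config: mask=%u, energy=%.1f\n", best_mask, best_energy);
        return 0;
    }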

    Virtual sensor networks: collaboration and resource sharing

    This thesis contributes to the advancement of Sensing as a Service (Se-aaS), based on cloud infrastructures, through the development of models and algorithms that make efficient use of both sensor and cloud resources while reducing the delay associated with the data flow between the cloud and client sides, resulting in a better quality of experience for users. The first models and algorithms developed are suitable for the case of mashups managed at the client side; models and algorithms considering mashups managed at the cloud were developed afterwards. This requires solving multiple problems: i) clustering of compatible mashup elements; ii) allocation of devices to clusters, meaning that a device will serve multiple applications/mashups; iii) reduction of the amount of data flowing between workplaces (cloud processing locations), and of the associated delay, which depends on the clustering, the device allocation, and the placement of workplaces. The developed strategies can be adopted by cloud service providers wishing to improve the performance of their clouds. Several steps towards an efficient Se-aaS business model were performed. A mathematical model was developed to assess the impact of resource allocations on scalability, QoE, and elasticity. Regarding the clustering of mashup elements, a first mathematical model was developed for the selection of the best pre-calculated clusters of mashup elements (virtual Things), and a second model is proposed for the best virtual Things to be built (non-pre-calculated clusters); their evaluation is done through heuristic algorithms having these models as a basis. These models and algorithms were first developed for the case of mashups managed at the client side, and were then extended for the case of mashups managed at the cloud. To improve these last results, a mathematical programming optimization model was developed that allows optimal clustering and resource allocation solutions to be obtained. Although this is a computationally difficult approach, its added value is that the problem is rigorously outlined, and that knowledge is used as a guide in the development of a better heuristic algorithm.
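    As a concrete illustration of problem i), the sketch below greedily groups mashup elements into virtual Things under an invented compatibility rule (same sensor type, sampling periods that are integer multiples of the cluster's period), so that one physical device can serve several mashups. The thesis's actual models are richer and also drive device allocation and workplace placement.

    // Hypothetical sketch of clustering compatible mashup elements into
    // "virtual Things": elements that request the same sensor type and whose
    // sampling periods the cluster's base period divides are served together.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Element { std::string sensor_type; int period_ms; };
    struct Cluster { std::string sensor_type; int period_ms; std::vector<int> members; };

    int main() {
        std::vector<Element> elems = {  // invented example requests
            {"temperature", 1000}, {"temperature", 2000},
            {"humidity", 500}, {"temperature", 3000}};
        std::vector<Cluster> clusters;
        for (size_t i = 0; i < elems.size(); ++i) {
            bool placed = false;
            for (auto& c : clusters) {
                // Compatible if same sensor and the cluster's base period can
                // also serve the element's (coarser, multiple) period.
                if (c.sensor_type == elems[i].sensor_type &&
                    elems[i].period_ms % c.period_ms == 0) {
                    c.members.push_back((int)i);
                    placed = true;
                    break;
                }
            }
            if (!placed)
                clusters.push_back({elems[i].sensor_type, elems[i].period_ms, {(int)i}});
        }
        std::printf("%zu elements -> %zu devices\n", elems.size(), clusters.size());
        return 0;
    }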


    Optimizing energy-efficiency for multi-core packet processing systems in a compiler framework

    Network applications are becoming increasingly computation-intensive and the amount of traffic is soaring unprecedentedly. Multi-core and multi-threaded techniques are thus widely employed in packet processing systems to meet these changing requirements. However, the processing power cannot be fully utilized without a suitable programming environment. The compilation procedure is decisive for the quality of the code: it largely determines the overall system performance in terms of packet throughput, individual packet latency, core utilization, and energy efficiency. This thesis first investigates compilation issues in the networking domain, with a particular focus on energy consumption. As a cornerstone for any compiler optimization, a code analysis module for collecting program dependencies is presented and incorporated into a compiler framework. Using that dependency information, a strategy based on graph bi-partitioning and mapping is proposed to search for an optimal configuration in a parallel-pipeline fashion. The energy-aware extension is specifically effective in enhancing the energy efficiency of the whole system. Finally, a generic evaluation framework for simulating the performance and energy consumption of a packet processing system is given. It accepts flexible architectural configurations and is capable of performing arbitrary code mappings. The simulation time is extremely short compared to full-fledged simulators. A set of our optimization results is gathered using the framework.
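    As a toy illustration of the graph bi-partitioning step, the sketch below enumerates balanced two-way splits of a small task graph and keeps the split with the least cut traffic, so the two halves can run as pipeline stages on different cores. The edge weights are invented; the thesis's strategy works on real program dependency graphs with mapping and energy models on top.

    // Hypothetical sketch: find a balanced bipartition of a 4-task graph
    // that minimizes the weight of cut (inter-stage) edges.
    #include <cstdio>

    int main() {
        const int n = 4;
        // Edge weights between tasks (amount of data exchanged); 0 = no edge.
        int w[n][n] = {{0, 8, 1, 0},
                       {8, 0, 1, 1},
                       {1, 1, 0, 9},
                       {0, 1, 9, 0}};
        int best_mask = 0, best_cut = 1 << 30;
        for (int mask = 0; mask < (1 << n); ++mask) {
            if (__builtin_popcount(mask) != n / 2) continue;  // balance constraint
            int cut = 0;
            for (int i = 0; i < n; ++i)
                for (int j = i + 1; j < n; ++j)
                    if (((mask >> i) & 1) != ((mask >> j) & 1)) cut += w[i][j];
            if (cut < best_cut) { best_cut = cut; best_mask = mask; }
        }
        std::printf("stage A gets tasks in mask %x, cut traffic = %d\n",
                    best_mask, best_cut);
        return 0;
    }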

    KAVUAKA: a low-power application-specific processor architecture for digital hearing aids

    The power consumption of digital hearing aids is very restricted due to their small physical size, and the available hardware resources for signal processing are limited. However, there is a demand for more processing performance to make future hearing aids more useful and smarter. Future hearing aids should be able to detect, localize, and recognize target speakers in complex acoustic environments to further improve the speech intelligibility of the individual hearing aid user. Computationally intensive algorithms are required for this task. To maintain acceptable battery life, the hearing aid processing architecture must be highly optimized for extremely low power consumption and high processing performance.

    The integration of application-specific instruction-set processors (ASIPs) into hearing aids enables a wide range of architectural customizations to meet the stringent power consumption and performance requirements. In this thesis, the application-specific hearing aid processor KAVUAKA is presented, which is customized and optimized for state-of-the-art hearing aid algorithms such as speaker localization, noise reduction, beamforming, and speech recognition. Specialized and application-specific instructions are designed and added to the baseline instruction set architecture (ISA). Among the major contributions are a multiply-accumulate (MAC) unit for real- and complex-valued numbers, architectures for power reduction during register accesses, co-processors, and a low-latency audio interface. With the proposed MAC architecture, the KAVUAKA processor requires 16% fewer cycles for the computation of a 128-point fast Fourier transform (FFT) compared to related programmable digital signal processors. The power consumption during register file accesses is decreased by 6% to 17% with isolation and bypass techniques. The hardware-induced audio latency is 34% lower compared to related audio interfaces for a frame size of 64 samples.

    The final hearing aid system-on-chip (SoC) with four KAVUAKA processor cores and ten co-processors is integrated as an application-specific integrated circuit (ASIC) using a 40 nm low-power technology. The die size is 3.6 mm². Each of the processors and co-processors contains individual customizations and hardware features, with datapath widths varying between 24 and 64 bits. The core area of the 64-bit processor configuration is 0.134 mm². The processors are organized in two clusters that share memory, an audio interface, co-processors, and serial interfaces. The average power consumption at a clock speed of 10 MHz is 2.4 mW for the SoC and 0.6 mW for the 64-bit processor.

    Case studies with four reference hearing aid algorithms are used to present and evaluate the proposed hardware architectures and optimizations. The program code for each processor and co-processor is generated and optimized with evolutionary algorithms for operation merging, instruction scheduling, and register allocation. The KAVUAKA processor architecture is compared to related processor architectures in terms of processing performance, average power consumption, and silicon area requirements.
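    As a rough illustration of the arithmetic that a complex-valued MAC instruction fuses, the following scalar C++ reference computes the same per-element update (four real multiplies and four additions, plus the accumulator update) that such a unit can retire as a single operation. The data values are invented, and the real datapath operates on the processor's fixed-point formats rather than on floats.

    // Scalar reference for the update acc += a[i] * b[i] on complex operands,
    // i.e. the work one complex MAC instruction would perform per iteration.
    #include <complex>
    #include <cstdio>

    int main() {
        std::complex<float> acc(0.0f, 0.0f);
        std::complex<float> a[] = {{1, 2}, {3, -1}};
        std::complex<float> b[] = {{0, 1}, {2, 2}};
        for (int i = 0; i < 2; ++i) {
            // (re + im*j) = acc + a*b, expanded into real arithmetic:
            float re = acc.real() + a[i].real() * b[i].real()
                                  - a[i].imag() * b[i].imag();
            float im = acc.imag() + a[i].real() * b[i].imag()
                                  + a[i].imag() * b[i].real();
            acc = {re, im};
        }
        std::printf("acc = %.1f + %.1fi\n", acc.real(), acc.imag());
        return 0;
    }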

    Stencil codes on a vector length agnostic architecture

    Data-level parallelism is frequently ignored or underutilized. Achieved through vector/SIMD capabilities, it can provide substantial performance improvements on top of widely used techniques such as thread-level parallelism. However, manual vectorization is a tedious and costly process that needs to be repeated for each specific instruction set or register size. In addition, automatic compiler vectorization is susceptible to code complexity and is usually limited by data and control dependencies. To address some of these issues, Arm recently released a new vector ISA, the Scalable Vector Extension (SVE), which is Vector-Length Agnostic (VLA). VLA enables the generation of binary files that run regardless of the physical vector register length. In this paper we leverage the main characteristics of SVE to implement and optimize stencil computations, which are ubiquitous in scientific computing. We show that SVE enables easy deployment of textbook optimizations like loop unrolling, loop fusion, load trading, and data reuse. Our detailed simulations using vector lengths ranging from 128 to 2,048 bits show that these optimizations can lead to performance improvements over straightforward vectorized code of up to 56.6% for 2,048-bit vectors. In addition, we show that certain optimizations can hurt performance due to a reduction in arithmetic intensity, and provide insight useful for compiler optimizers.

    This work has been partially supported by the European HiPEAC Network of Excellence, by the Spanish Ministry of Economy and Competitiveness (contract TIN2015-65316-P), and by the Generalitat de Catalunya (contracts 2017-SGR-1328 and 2017-SGR-1414). The Mont-Blanc project receives funding from the EU's H2020 Framework Programme (H2020/2014-2020) under grant agreements no. 671697 and no. 779877. M. Moreto has been partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under Ramon y Cajal fellowship number RYC-2016-21104. Finally, A. Armejach has been partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under Juan de la Cierva postdoctoral fellowship number FJCI-2015-24753.
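    The sketch below shows the vector-length-agnostic style the paper builds on: a 3-point stencil written with the standard Arm SVE ACLE intrinsics (svcntd, svwhilelt_b64, svld1_f64, and friends), where the loop advances by the hardware vector length and a predicate masks the tail, so the same binary runs on any register size. It is an illustrative kernel, not one of the paper's benchmarks, and omits the unrolling, fusion, and data-reuse optimizations the paper studies; it needs an SVE-enabled toolchain (e.g. -march=armv8-a+sve) and SVE hardware or a simulator to run.

    #include <arm_sve.h>
    #include <cstdint>
    #include <cstdio>

    void stencil3(const double* in, double* out, int64_t n) {
        for (int64_t i = 1; i < n - 1; i += svcntd()) {  // VLA stride
            svbool_t pg = svwhilelt_b64(i, n - 1);       // tail predicate
            svfloat64_t left   = svld1_f64(pg, &in[i - 1]);
            svfloat64_t center = svld1_f64(pg, &in[i]);
            svfloat64_t right  = svld1_f64(pg, &in[i + 1]);
            svfloat64_t sum = svadd_f64_x(pg, svadd_f64_x(pg, left, center), right);
            svst1_f64(pg, &out[i], svmul_n_f64_x(pg, sum, 1.0 / 3.0));
        }
    }

    int main() {
        double in[16], out[16] = {0};
        for (int i = 0; i < 16; ++i) in[i] = i;
        stencil3(in, out, 16);
        std::printf("out[1] = %.2f\n", out[1]);  // (0 + 1 + 2) / 3 = 1.00
        return 0;
    }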

    Breaking Instance-Independent Symmetries In Exact Graph Coloring

    Code optimization and high-level synthesis can be posed as constraint satisfaction and optimization problems, such as graph coloring used in register allocation. Graph coloring is also used to model more traditional CSPs relevant to AI, such as planning, time-tabling and scheduling. Provably optimal solutions may be desirable for commercial and defense applications. Additionally, for applications such as register allocation and code optimization, naturally-occurring instances of graph coloring are often small and can be solved optimally. A recent wave of improvements in algorithms for Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests generic problem-reduction methods, rather than problem-specific heuristics, because (1) heuristics may be upset by new constraints, (2) heuristics tend to ignore structure, and (3) many relevant problems are provably inapproximable. Problem reductions often lead to highly symmetric SAT instances, and symmetries are known to slow down SAT solvers. In this work, we compare several avenues for symmetry breaking, in particular when certain kinds of symmetry are present in all generated instances. Our focus on reducing CSPs to SAT allows us to leverage recent dramatic improvement in SAT solvers and automatically benefit from future progress. We can use a variety of black-box SAT solvers without modifying their source code because our symmetry-breaking techniques are static, i.e., we detect symmetries and add symmetry breaking predicates (SBPs) during pre-processing. An important result of our work is that among the types of instance-independent SBPs we studied and their combinations, the simplest and least complete constructions are the most effective. Our experiments also clearly indicate that instance-independent symmetries should mostly be processed together with instance-specific symmetries rather than at the specification level, contrary to what has been suggested in the literature.
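    To make the approach concrete, the sketch below emits DIMACS clauses during pre-processing for the color symmetry of k-coloring: every permutation of the colors maps solutions to solutions, so restricting vertex i to colors 0..i is a sound, instance-independent SBP (any coloring can be renamed to satisfy it). This particular construction is a textbook value-symmetry break chosen for brevity, in the spirit of the paper's finding that simple constructions are effective; it is not necessarily one of the constructions the paper compares, and the edge constraints of a real instance are omitted.

    // Emit "at least one color per vertex" clauses plus instance-independent
    // symmetry-breaking unit clauses, in DIMACS CNF clause format.
    #include <cstdio>

    int var(int v, int c, int k) { return v * k + c + 1; }  // DIMACS ids are 1-based

    int main() {
        const int n = 4, k = 3;  // invented small instance
        for (int v = 0; v < n; ++v) {  // each vertex gets at least one color
            for (int c = 0; c < k; ++c) std::printf("%d ", var(v, c, k));
            std::printf("0\n");
        }
        // SBP: forbid color c on vertex v whenever c > v, so vertex 0 is
        // forced to color 0, vertex 1 to colors {0,1}, and so on.
        for (int v = 0; v < n; ++v)
            for (int c = v + 1; c < k; ++c)
                std::printf("-%d 0\n", var(v, c, k));
        return 0;
    }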

    VINEA: a policy-based virtual network embedding architecture

    Network virtualization has enabled new business models by allowing infrastructure providers to lease or share their physical network. To concurrently run multiple customized virtual network services, such infrastructure providers need to run a virtual network embedding protocol. Virtual network embedding is the (NP-hard) problem of matching constrained virtual networks onto the physical network. We present the design and implementation of a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the (invariant) embedding mechanisms: resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, and has bounds on embedding efficiency and on embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel policy configurations is compared over a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed that enables users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities.

    National Science Foundation (CNS-0963974)
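    The following is a hypothetical sketch of the mechanism/policy split described above: the node-mapping mechanism is fixed (assign each virtual node to some physical node with enough residual capacity), while the scoring rule that ranks candidate hosts is the pluggable policy. The capacity numbers are invented, and none of this is VINEA's actual object model; swapping the comparator changes embedding behavior without touching the mechanism.

    #include <cstdio>
    #include <functional>
    #include <vector>

    int main() {
        std::vector<double> residual = {4.0, 9.0, 2.0};  // physical node capacities
        std::vector<double> demand   = {3.0, 3.0, 1.0};  // virtual node requests

        // Policy: prefer the host with the largest residual capacity.
        // (Another policy could prefer the smallest feasible host, etc.)
        std::function<bool(double, double)> better =
            [](double a, double b) { return a > b; };

        for (size_t v = 0; v < demand.size(); ++v) {
            int host = -1;
            for (size_t p = 0; p < residual.size(); ++p)
                if (residual[p] >= demand[v] &&
                    (host < 0 || better(residual[p], residual[host])))
                    host = (int)p;
            if (host < 0) { std::printf("virtual node %zu rejected\n", v); continue; }
            residual[host] -= demand[v];  // mechanism: reserve capacity
            std::printf("virtual node %zu -> physical node %d\n", v, host);
        }
        return 0;
    }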

    Make the most out of your SIMD investments: counter control flow divergence in compiled query pipelines

    The increasing single instruction multiple data (SIMD) capabilities of modern hardware allow for the compilation of data-parallel query pipelines. This raises GPU-like challenges: control flow divergence causes underutilization of the vector-processing units. In this paper, we present efficient algorithms for the AVX-512 architecture to address this issue. These algorithms allow for the fine-grained assignment of new tuples to idle SIMD lanes. Furthermore, we present strategies for their integration with compiled query pipelines so that tuples are never evicted from registers. We evaluate our approach with three query types: (i) a table scan query based on TPC-H Query 1, which performs up to 34% faster when addressing underutilization, (ii) a hash join query, where we observe up to 25% higher performance, and (iii) an approximate geospatial join query, which shows performance improvements of up to 30%.
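    Below is a minimal sketch of the lane-refill idea on AVX-512, assuming 32-bit tuple values and an invented selection predicate: after the predicate leaves some lanes inactive, the surviving tuples are compressed to the low lanes and the freed high lanes are topped up with fresh tuples, restoring full utilization. The compress/expand intrinsics are standard AVX-512F, but the integration with compiled query pipelines that the paper describes is not shown; compile with AVX-512F support (e.g. -mavx512f).

    #include <immintrin.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        int32_t input[32];
        for (int i = 0; i < 32; ++i) input[i] = i;

        __m512i tuples = _mm512_loadu_si512(input);  // lanes hold tuples 0..15
        int next = 16;                               // next tuple to fetch

        // Invented predicate: only tuples with value < 8 survive (8 idle lanes).
        __mmask16 alive = _mm512_cmplt_epi32_mask(tuples, _mm512_set1_epi32(8));
        int n_alive = _mm_popcnt_u32((unsigned)alive);

        // Refill: compress survivors into the low lanes, then expand fresh
        // tuples into the freed high lanes.
        __m512i packed = _mm512_maskz_compress_epi32(alive, tuples);
        __mmask16 freed = (__mmask16)(0xFFFFu << n_alive);
        __m512i fresh = _mm512_loadu_si512(&input[next]);
        tuples = _mm512_mask_expand_epi32(packed, freed, fresh);
        next += 16 - n_alive;

        int32_t out[16];
        _mm512_storeu_si512(out, tuples);
        for (int i = 0; i < 16; ++i) std::printf("%d ", out[i]);  // 0..7 then 16..23
        std::printf("\n");
        return 0;
    }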
