
    Implementable Wireless Access for B3G Networks - III: Complexity Reducing Transceiver Structures

    This article presents a comprehensive overview of some of the research conducted within Mobile VCE’s Core Wireless Access Research Programme, a key focus of which has naturally been on MIMO transceivers. The series of articles offers a coherent view of how the work was structured and comprises a compilation of material that has been presented in detail elsewhere (see references within the article). In this article, MIMO channel measurements, analysis, and modeling, which were presented previously in the first article in this series of four, are utilized to develop compact and distributed antenna arrays. Parallel activities led to research into low-complexity MIMO single-user space-time coding techniques, as well as SISO and MIMO multi-user CDMA-based transceivers for B3G systems. As well as feeding into the industry’s in-house research programs, significant extensions of this work are now in hand, within Mobile VCE’s own core activity, aiming toward securing major improvements in delivery efficiency in future wireless systems through cross-layer operation.

    Finding Best Compiler Options for Critical Software Using Parallel Algorithms

    The efficiency of a software piece is a key factor for many systems. Real-time programs, critical software, device drivers, kernel OS functions and many other software pieces which are executed thousands or even millions of times per day require a very efficient execution. How this software is built can significantly affect the run time for these programs, since the context is that of compile-once/run-many. In this sense, the optimization flags used during compilation are a crucial element for this goal, and they can make a big difference in the final execution time. In this paper, we use parallel metaheuristic techniques to automatically decide which optimization flags should be activated during the compilation of a set of benchmarking programs. Finding the appropriate flag configuration is a complex combinatorial problem, but our approach is able to adapt the flag tuning to the characteristics of the software, improving the final run times with respect to other widespread practices. This research has been partially funded by the Spanish MINECO and FEDER projects TIN2014-57341-R (http://moveon.lcc.uma.es), TIN2016-81766-REDT (http://cirti.es), and TIN2017-88213-R (http://6city.lcc.uma.es). It is also funded by Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
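
    As a minimal sequential sketch of the kind of search involved, the following Python snippet evolves a set of GCC flag configurations by compiling and timing a program. It assumes a local gcc toolchain and a hypothetical benchmark source file bench.c; the flag list, genetic-algorithm parameters, and fitness measure are illustrative rather than taken from the paper. The paper's approach is a parallel metaheuristic, so in practice the fitness evaluations would be distributed (e.g., with multiprocessing), which is omitted here for brevity.

        # Illustrative sketch: evolve GCC flag selections by measured run time.
        # Assumes gcc is on the PATH and a hypothetical benchmark "bench.c" exists.
        import random
        import subprocess
        import time

        FLAGS = ["-funroll-loops", "-finline-functions", "-ftree-vectorize",
                 "-fomit-frame-pointer", "-floop-interchange"]

        def run_time(active_flags):
            """Compile bench.c with the selected flags and return its run time."""
            subprocess.run(["gcc", "-O2", *active_flags, "bench.c", "-o", "bench"],
                           check=True)
            start = time.perf_counter()
            subprocess.run(["./bench"], check=True)
            return time.perf_counter() - start

        def evaluate(bits):
            return run_time([f for f, b in zip(FLAGS, bits) if b])

        def evolve(pop_size=8, generations=10, p_mut=0.1):
            pop = [[random.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=evaluate)         # lower run time is better
                parents = scored[: pop_size // 2]          # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(FLAGS))  # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [1 - g if random.random() < p_mut else g for g in child]
                    children.append(child)
                pop = parents + children
            return min(pop, key=evaluate)

        best = evolve()
        print("best flags:", [f for f, b in zip(FLAGS, best) if b])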

    Transformation-based implementation and optimization of programs exploiting the basic Andorra model

    The characteristics of CC and CLP systems are in principle very different. However, a recent trend towards convergence in the implementation techniques for these systems can be observed. While CLP and Prolog systems have been incorporating capabilities to deal with user-defined suspension and coroutining, CC compilers have been trying to coalesce fine-grained tasks into coarser-grained sequential threads. This convergence of techniques opens up the possibility of having a general-purpose kernel language and abstract machine to serve as a compilation target for a variety of user-level languages. We propose a transformation technique directed towards such an objective. In particular, we report on techniques to support the Andorra computational model, essentially emulating the Andorra-I system via program transformation into a sequential language with delay primitives. The system is automatic, comprising an optional program analyzer and a basic transformer to the kernel language. It turns out that a simple parallel CLP or Prolog system with dynamic scheduling is sufficient as a kernel language for this purpose. The preliminary results are quite encouraging: performance of the resulting system is comparable to the current Andorra-I implementation.
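
    The key mechanism the transformation targets is a delay primitive: a goal suspends until one of its variables becomes bound and is woken by the binding. The Python sketch below illustrates that idea with single-assignment variables that carry suspended goals; the Var class and when_bound function are invented for illustration and are not the kernel-language primitives used in the paper.

        # Illustrative sketch of a "delay until bound" primitive.
        class Var:
            """A single-assignment logic variable with suspended goals attached."""
            def __init__(self):
                self.value = None
                self.bound = False
                self.suspended = []            # goals waiting on this variable

            def bind(self, value):
                self.value = value
                self.bound = True
                woken, self.suspended = self.suspended, []
                for goal in woken:             # wake goals delayed on this variable
                    goal()

        def when_bound(var, goal):
            """Run goal now if var is bound, otherwise suspend it."""
            if var.bound:
                goal()
            else:
                var.suspended.append(goal)

        # Example: the goal "Y is X + 1" delays until X is known.
        x, y = Var(), Var()
        when_bound(x, lambda: y.bind(x.value + 1))
        x.bind(41)                             # binding X wakes the suspended goal
        print(y.value)                         # -> 42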

    HeTM: Transactional Memory for Heterogeneous Systems

    Modern heterogeneous computing architectures, which couple multi-core CPUs with discrete many-core GPUs (or other specialized hardware accelerators), enable unprecedented peak performance and energy efficiency levels. Unfortunately, though, developing applications that can take full advantage of the potential of heterogeneous systems is a notoriously hard task. This work takes a step towards reducing the complexity of programming heterogeneous systems by introducing the abstraction of Heterogeneous Transactional Memory (HeTM). HeTM provides programmers with the illusion of a single memory region, shared among the CPUs and the (discrete) GPU(s) of a heterogeneous system, with support for atomic transactions. Besides introducing the abstract semantics and programming model of HeTM, we present the design and evaluation of a concrete implementation of the proposed abstraction, which we named Speculative HeTM (SHeTM). SHeTM makes use of a novel design that leverages speculative techniques, aiming to hide the inherently large communication latency between CPUs and discrete GPUs and to minimize inter-device synchronization overhead. SHeTM is based on a modular and extensible design that allows for easily integrating alternative TM implementations on the CPU's and GPU's sides, providing the flexibility to adopt, on either side, the TM implementation (e.g., in hardware or software) that best fits the application's workload and the architectural characteristics of the processing unit. We demonstrate the efficiency of SHeTM via an extensive quantitative study based both on synthetic benchmarks and on a porting of a popular object caching system. Comment: this work was accepted at the 28th International Conference on Parallel Architectures and Compilation Techniques (PACT'19).
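
    To make the programming model concrete, the sketch below shows a minimal, single-process software transaction with buffered writes, a read set validated at commit, and retry on conflict, i.e., the kind of speculation-and-rollback scheme the abstraction rests on. It illustrates the transactional interface only: the names and the dictionary-backed "shared region" are invented and do not reflect the actual HeTM/SHeTM API or its CPU-GPU transfer machinery.

        # Illustrative software-transaction sketch (not the HeTM/SHeTM API).
        import threading

        memory = {}                  # the "shared region"; HeTM spans CPU and GPU memory
        versions = {}                # per-location version numbers for validation
        commit_lock = threading.Lock()

        class Transaction:
            def __init__(self):
                self.read_set = {}   # addr -> version observed at read time
                self.write_set = {}  # addr -> buffered (speculative) value

            def read(self, addr):
                if addr in self.write_set:
                    return self.write_set[addr]
                self.read_set[addr] = versions.get(addr, 0)
                return memory.get(addr, 0)

            def write(self, addr, value):
                self.write_set[addr] = value

            def commit(self):
                with commit_lock:
                    # abort if any location read has changed since it was read
                    for addr, ver in self.read_set.items():
                        if versions.get(addr, 0) != ver:
                            return False
                    for addr, value in self.write_set.items():
                        memory[addr] = value
                        versions[addr] = versions.get(addr, 0) + 1
                    return True

        def atomic(body):
            """Retry body in a fresh transaction until it commits."""
            while True:
                tx = Transaction()
                body(tx)
                if tx.commit():
                    return

        atomic(lambda tx: tx.write("counter", tx.read("counter") + 1))
        print(memory["counter"])     # -> 1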

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order of applying optimizations). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for the compiler optimization field, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grain classification among different approaches and, finally, the influential papers of the field. Comment: version 5.0 (updated September 2018), preprint version of our accepted journal article at ACM CSUR 2018 (42 pages). The survey will be updated quarterly (send newly published papers to be added in subsequent versions). History: received November 2016; revised August 2017; revised February 2018; accepted March 2018.
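
    As a toy illustration of the optimization-selection problem cast as supervised learning, the sketch below trains a decision tree that maps static program features to the flag bundle measured fastest for similar programs. It assumes scikit-learn is installed; the features, labels, and data are made up rather than drawn from any surveyed system.

        # Toy optimization-selection model: program features -> best flag bundle.
        from sklearn.tree import DecisionTreeClassifier

        # features: [loop_count, avg_loop_depth, mem_ops_per_instr, branch_density]
        train_features = [
            [12, 3, 0.45, 0.10],
            [ 2, 1, 0.10, 0.30],
            [30, 4, 0.60, 0.05],
            [ 5, 1, 0.20, 0.25],
        ]
        # label: the flag bundle with the best measured run time for that program
        train_best_flags = ["-O3 -funroll-loops", "-O2", "-O3 -funroll-loops", "-O2"]

        model = DecisionTreeClassifier(max_depth=3)
        model.fit(train_features, train_best_flags)

        new_program = [[20, 3, 0.55, 0.08]]    # features of an unseen program
        print(model.predict(new_program)[0])   # predicted flag bundle to compile with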

    A compiler approach to scalable concurrent program design

    The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support. The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
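
    The following sketch illustrates the general shape of such a rule-driven, source-to-source transformer: rewrite rules map user-level abstractions into lower-level "core notation" constructs and are applied until a fixed point is reached. The surface syntax, the core constructs, and the rules are invented for illustration and bear no relation to the toolkit's actual notation.

        # Illustrative rule-driven source-to-source transformer.
        import re

        # each rule rewrites one abstraction into core-notation constructs
        RULES = [
            # "parallel_for i in 0..n: body" -> explicit spawn/sync primitives
            (re.compile(r"parallel_for (\w+) in (\S+)\.\.(\S+): (.+)"),
             r"spawn_each(\1, range(\2, \3), task: \4)\nsync_all()"),
            # "send x to ch" -> core channel primitive
            (re.compile(r"send (\w+) to (\w+)"),
             r"channel_put(\2, \1)"),
        ]

        def transform(source):
            """Apply the rewrite rules repeatedly until a fixed point is reached."""
            changed = True
            while changed:
                changed = False
                for pattern, template in RULES:
                    new_source = pattern.sub(template, source)
                    if new_source != source:
                        source, changed = new_source, True
            return source

        program = "parallel_for i in 0..n: send i to results"
        print(transform(program))
        # -> spawn_each(i, range(0, n), task: channel_put(results, i))
        #    sync_all()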