
    Compilation Techniques for High-Performance Embedded Systems with Multiple Processors

    Institute for Computing Systems Architecture
    Despite the progress made in developing more advanced compilers for embedded systems, programming embedded high-performance computing systems based on Digital Signal Processors (DSPs) is still a highly skilled manual task. This is true for single-processor systems, and even more so for embedded systems based on multiple DSPs. Compilers often fail to optimise existing DSP codes written in C because of the programming style employed, and parallelisation is hampered by the complex multiple-address-space memory architecture found in most commercial multi-DSP configurations. This thesis develops an integrated optimisation and parallelisation strategy that can deal with low-level C codes and produces optimised parallel code for a homogeneous multi-DSP architecture with distributed physical memory and multiple logical address spaces. In a first step, low-level programming idioms are identified and recovered. This enables the application of high-level code and data transformations well known in the field of scientific computing. Iterative, feedback-driven search for “good” transformation sequences is investigated. A novel approach to parallelisation based on a unified data and loop transformation framework is presented and evaluated. Performance optimisation is achieved through exploitation of data locality on the one hand and utilisation of DSP-specific architectural features such as Direct Memory Access (DMA) transfers on the other. The proposed methodology is evaluated against two benchmark suites (DSPstone and UTDSP) and four different high-performance DSPs, one of which is part of a commercial four-processor multi-DSP board also used for evaluation. Experiments confirm the effectiveness of the program recovery techniques as enablers of high-level transformations and automatic parallelisation. Source-to-source transformations of DSP codes yield an average speedup of 2.21 across the four DSP architectures. The parallelisation scheme, in conjunction with a set of locality optimisations, is able to produce linear and even super-linear speedups on a number of relevant DSP kernels and applications.
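
    As a rough illustration of the idiom-recovery step described above, the sketch below contrasts a pointer-walking DSP-style C loop with its recovered array form; the function names and the FIR-like kernel are assumptions for illustration, not code taken from the thesis.

        // Hand-written DSP C often walks post-incremented pointers; this access
        // pattern is opaque to dependence analysis and high-level transformations.
        int fir_pointer_style(const int *x, const int *h, int n) {
            int acc = 0;
            const int *px = x;
            const int *ph = h;
            for (int i = 0; i < n; ++i)
                acc += *px++ * *ph++;   // pointer idiom hides the accesses x[i], h[i]
            return acc;
        }

        // Recovered form: explicit affine subscripts expose x[i] and h[i],
        // enabling the high-level loop and data transformations.
        int fir_array_style(const int x[], const int h[], int n) {
            int acc = 0;
            for (int i = 0; i < n; ++i)
                acc += x[i] * h[i];
            return acc;
        }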

    Doctor of Philosophy

    Stochastic methods, dense free-form mapping, atlas construction, and total variation are examples of advanced image processing techniques which are robust but computationally demanding. These algorithms often require a large amount of computational power as well as massive memory bandwidth. These requirements used to be fulfilled only by supercomputers. The development of heterogeneous parallel subsystems and computation-specialized devices such as Graphics Processing Units (GPUs) has brought the requisite power to commodity hardware, opening up opportunities for scientists to experiment and evaluate the influence of these techniques on their research and practical applications. However, harnessing the processing power of modern hardware is challenging. The differences between multicore parallel processing systems and conventional models are significant, often requiring algorithms and data structures to be redesigned significantly for efficiency. It also demands in-depth knowledge about modern hardware architectures to optimize these implementations, sometimes on a per-architecture basis. The goal of this dissertation is to introduce a solution for this problem based on a 3D image processing framework, using high-performance APIs at the core level to utilize the parallel processing power of GPUs. The design of the framework facilitates an efficient application development process, which does not require scientists to have extensive knowledge about GPU systems, and encourages them to harness this power to solve their computationally challenging problems. To present the development of this framework, four main problems are described, and the solutions are discussed and evaluated: (1) essential components of a general 3D image processing library, data structures and algorithms, as well as how to implement these building blocks on the GPU architecture for optimal performance; (2) an implementation of unbiased atlas construction algorithms, an illustration of how to solve a highly complex and computationally expensive algorithm using this framework; (3) an extension of the framework to account for geometry descriptors to solve registration challenges with large-scale shape changes and high intensity-contrast differences; and (4) an out-of-core streaming model, which enables developers to implement multi-image processing techniques on commodity hardware.
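
    As a hypothetical sketch of one of the building blocks such a 3D image processing library needs, the fragment below shows a dense 3D volume with flat, z-major storage so the same buffer can be uploaded to a GPU or streamed slice by slice; the class and member names are illustrative assumptions, not the dissertation's API.

        #include <cstddef>
        #include <vector>

        // Dense 3D volume with contiguous voxel storage.
        // Linear index: (z * ny + y) * nx + x  (z-major layout).
        template <typename T>
        class Volume3D {
        public:
            Volume3D(std::size_t nx, std::size_t ny, std::size_t nz)
                : nx_(nx), ny_(ny), nz_(nz), data_(nx * ny * nz) {}

            T& at(std::size_t x, std::size_t y, std::size_t z) {
                return data_[(z * ny_ + y) * nx_ + x];
            }

            T* raw() { return data_.data(); }            // contiguous buffer for device upload
            std::size_t voxels() const { return data_.size(); }

        private:
            std::size_t nx_, ny_, nz_;
            std::vector<T> data_;
        };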

    Adaptive Knobs for Resource Efficient Computing

    Performance demands of emerging domains such as artificial intelligence, machine learning, computer vision, and the Internet of Things continue to grow. Meeting such requirements on modern multi-/many-core systems with higher power densities, fixed power and energy budgets, and thermal constraints exacerbates the run-time management challenge. This leaves open the problem of extracting the required performance within the power and energy limits while also ensuring thermal safety. Existing architectural solutions, including asymmetric and heterogeneous cores and custom acceleration, improve performance-per-watt in specific design-time and static scenarios. However, satisfying applications’ performance requirements under dynamic and unknown workload scenarios, subject to varying system dynamics of power, temperature and energy, requires intelligent run-time management. Adaptive strategies are necessary for maximizing resource efficiency, considering i) diverse requirements and characteristics of concurrent applications, ii) dynamic workload variation, iii) core-level heterogeneity and iv) power, thermal and energy constraints. This dissertation proposes such adaptive techniques for efficient run-time resource management to maximize performance within fixed budgets under unknown and dynamic workload scenarios. The resource management strategies proposed in this dissertation comprehensively consider application and workload characteristics and the variable effect of power actuation on performance to make proactive and appropriate allocation decisions. Specific contributions include i) a run-time mapping approach to improve power budgets for higher throughput, ii) thermal-aware performance boosting for efficient utilization of the power budget and higher performance, iii) approximation as a run-time knob exploiting accuracy-performance trade-offs for maximizing performance under power caps at minimal loss of accuracy and iv) coordinated approximation for heterogeneous systems through joint actuation of dynamic approximation and power knobs for performance guarantees with minimal power consumption. The approaches presented in this dissertation focus on adapting existing mapping techniques, performance-boosting strategies, and software and dynamic approximations to meet performance requirements while simultaneously considering system constraints. The proposed strategies are compared against relevant state-of-the-art run-time management frameworks to qualitatively evaluate their efficacy.
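
    As a toy sketch of approximation used as a run-time knob (the names and the control policy below are assumptions, not the dissertation's controller), a loop-perforation stride can be raised when measured power exceeds the cap and lowered again when headroom returns.

        #include <cstddef>
        #include <vector>

        // Approximation knob: skip = 1 computes the exact sum; larger values skip
        // elements, trading accuracy for fewer iterations (less time and energy).
        double approx_sum(const std::vector<double>& v, std::size_t skip) {
            if (skip == 0) skip = 1;
            double sum = 0.0;
            for (std::size_t i = 0; i < v.size(); i += skip)
                sum += v[i];
            return sum * static_cast<double>(skip);   // rescale to estimate the full sum
        }

        // Manager decision: tighten the knob when over the power cap, relax it below.
        std::size_t adjust_knob(std::size_t knob, double watts, double cap_watts) {
            if (watts > cap_watts) return knob + 1;                    // approximate more
            if (knob > 1 && watts < 0.9 * cap_watts) return knob - 1;  // recover accuracy
            return knob;
        }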

    The Moon Beyond 2002, Next Steps in Lunar Science and Exploration: September 12-14, 2002, Taos, New Mexico

    The purpose of this meeting is to capitalize on the recent advances by focusing the planetary science community on the following: (1) What are the key questions that should now be addressed to advance lunar science and exploration? and (2) What actions should the planetary science community carry out to best answer these questions?
    Los Alamos National Laboratory ... [and others]; meeting organizer, David J. Lawrence; scientific organizing committee, Mike Duke ... [and others]
    PARTIAL CONTENTS: The Moon: Keystone to Understanding Planetary Geological Processes and History / James W. Head -- Lunar Solar Power System and Lunar Exploration / D.R. Criswell -- Human Exploration of the Moon / Michael B. Duke -- Volatiles at the Poles of the Moon / B.J. Butler -- Sensitivity of Lunar Resource Economic Model to Lunar Ice Concentration / Brad Blair and Javier Diaz -- Origin of Nanophase Fe in Agglutinates: A Radical New Concept / Lawrence A. Taylor

    Information resources management, 1984-1989: A bibliography with indexes

    This bibliography contains 768 annotated references to reports and journal articles entered into the NASA scientific and technical information database from 1984 to 1989.

    Run-time Variability with First-class Contexts

    Software must be regularly updated to keep up with changing requirements. Unfortunately, to install an update, the system must usually be restarted, which is inconvenient and costly. In this dissertation, we aim to overcome the need for restarts by enabling run-time changes at the programming language level. We argue that the best way to achieve this goal is to improve the support for encapsulation, information hiding and late binding by contextualizing behavior. In our approach, behavioral variations are encapsulated into context objects that alter the behavior of other objects locally. We present three contextual language features that demonstrate our approach. First, we present a feature to evolve software by scoping variations to threads. This way, arbitrary objects can be substituted over time without compromising safety. Second, we present a variant of dynamic proxies that operate by delegation instead of forwarding. The proxies can be used as building blocks to implement contextualization mechanisms from within the language. Third, we contextualize the behavior of objects to intercept exchanges of references between objects. This approach scales information hiding from objects to aggregates. The three language features are supported by formalizations and case studies, showing their soundness and practicality. With these three complementary language features, developers can easily design applications that can accommodate run-time changes.
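
    The dissertation's mechanisms belong to a dynamic object-oriented language, but as a loose analogy the C++ sketch below conveys the flavour of the first feature: a thread-local context object alters another object's behaviour only in the thread that installed it. All names here are illustrative assumptions.

        #include <iostream>
        #include <string>

        // A context encapsulates a behavioural variation.
        struct GreetContext {
            virtual std::string greet(const std::string& name) const { return "Hello, " + name; }
            virtual ~GreetContext() = default;
        };

        struct FormalGreetContext : GreetContext {
            std::string greet(const std::string& name) const override { return "Good day, " + name; }
        };

        // Each thread sees its own active context, so a variation installed in one
        // thread never affects the behaviour observed by other threads.
        thread_local const GreetContext* active_context = nullptr;

        struct Greeter {
            std::string greet(const std::string& name) const {
                static const GreetContext default_ctx;
                const GreetContext* ctx = active_context ? active_context : &default_ctx;
                return ctx->greet(name);   // behaviour is resolved through the context
            }
        };

        int main() {
            Greeter g;
            std::cout << g.greet("Ada") << '\n';   // "Hello, Ada"
            FormalGreetContext formal;
            active_context = &formal;              // variation scoped to this thread only
            std::cout << g.greet("Ada") << '\n';   // "Good day, Ada"
        }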

    Un modèle de programmation à grain fin pour la parallélisation de solveurs linéaires creux (A fine-grained programming model for the parallelization of sparse linear solvers)

    Solving large sparse linear systems is an essential part of numerical simulations; these solves can take up to 80% of the total simulation time. An efficient parallelization of sparse linear kernels therefore leads to better performance. In distributed memory, parallelization of these kernels is usually achieved by changing the numerical scheme. In shared memory, by contrast, a more efficient form of parallelism can be used. It is thus necessary to exploit two levels of parallelism: a first level between the nodes of a cluster and a second level inside each node. When using iterative methods in shared memory, task-based programming makes it possible to describe the parallelism naturally, taking the work on one row of the matrix as the granularity of a task. Unfortunately, this granularity is too fine and does not yield good performance because of the overhead of the task scheduler. In this thesis, we study the granularity problem of task-based parallelization. We propose to increase the grain size of computational tasks by creating aggregates of tasks which themselves become tasks. The set of these aggregates and the new dependencies between aggregates form a coarser task graph, which a task scheduler then uses to obtain better performance. We use the incomplete LU factorization of a sparse matrix as an example and show the improvements brought by this method. We then focus on NUMA architectures. When running a memory-bandwidth-bound algorithm on such a machine, it is worthwhile to reduce NUMA effects by placing the data explicitly. We show how to take these effects into account in a task-based runtime in order to improve the performance of a parallel program.
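
    A minimal sketch of the aggregation idea, here expressed with OpenMP tasks (an assumption; the thesis's own runtime is not named in this summary): rather than one task per matrix row, consecutive rows are grouped into an aggregate that executes as a single coarser task. Dependencies between aggregates, which a real incomplete LU factorization would need, are omitted for brevity.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Stand-in for the real per-row work (e.g. one row of a sparse update).
        void process_row(std::size_t row, std::vector<double>& x) {
            x[row] *= 2.0;
        }

        // Coarser grain: one task per aggregate of `rows_per_task` consecutive rows,
        // instead of one task per row, reducing the task-scheduler overhead.
        void run_aggregated(std::size_t n_rows, std::size_t rows_per_task,
                            std::vector<double>& x) {
            #pragma omp parallel
            #pragma omp single
            for (std::size_t start = 0; start < n_rows; start += rows_per_task) {
                const std::size_t end = std::min(start + rows_per_task, n_rows);
                #pragma omp task firstprivate(start, end) shared(x)
                for (std::size_t row = start; row < end; ++row)
                    process_row(row, x);   // the whole aggregate runs as one task
            }
        }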

    Management: A bibliography for NASA managers

    This bibliography lists 653 reports, articles and other documents introduced into the NASA scientific and technical information system in 1987. Items are selected and grouped according to their usefulness to the manager as manager. Citations are grouped into ten subject categories: human factors and personnel issues; management theory and techniques; industrial management and manufacturing; robotics and expert systems; computers and information management; research and development; economics, costs and markets; logistics and operations management; reliability and quality control; and legality, legislation, and policy.

    Cultural Maps, Networks, and Flows: The History and Impact of the Havana Biennale 1984 to the present

    Since 1984 the Havana Biennale has been known as "the Tri-continental art event," presenting artists from Latin America, Africa, and Asia. It has also intensely debated the nature of recent and contemporary art from a Third World or Global South perspective. The Biennale is a product of Cuba's cultural fruition since the Revolution of 1959. The Wifredo Lam Center, created in 1983, has organized the Biennale since its inception. This dissertation proposes that at the heart of the Biennale has been an alternative cosmopolitan modernism (which we might call "contemporary" or "post-colonial") envisaged by a group of local cultural agents, critics, philosophers, and art historians, and supported by a network of peers around the world. It examines the role of Armando Hart Dávalos, Minister of Culture of Cuba (1976-1997), a key figure in the development of a solid cultural policy, one which produced the Havana Biennale as a cultural project based on an explicit "Third World" consciousness. It explores the role of critics and curators Gerardo Mosquera and Nelson Herrera Ysla, key members of the founding group of the Biennale. It then examines how the work of Llilian Llanes, director of the Lam Center and of the Biennale (1983-1999), shaped the event in structural and conceptual terms. Finally, it examines the most recent developments and projections for the future. Using primary material, interviews, and fieldwork research, the study focuses on the conceptual, contextual, and historical structure that supports the Biennale. It presents, from several optics, the views and world-view of the agents involved from the inside (curators and collaborators), as well as an art-world perspective through an account of the nine editions. Using the Havana Biennale as a case study, this work disentangles and reveals the socio-political and intellectual debates that took place in the formation of what is today called global art. In addition, it recognizes the potential of alternative thinking and cultural subjectivity in the Global South.