337 research outputs found

    A methodology for full-system power modeling in heterogeneous data centers

    The need for energy-awareness in current data centers has encouraged the use of power modeling to estimate their power consumption. However, existing models present noticeable limitations, which make them application-dependent, platform-dependent, inaccurate, or computationally complex. In this paper, we propose a platform- and application-agnostic methodology for full-system power modeling in heterogeneous data centers that overcomes those limitations. It derives a single model per platform, which works with high accuracy for heterogeneous applications with different patterns of resource usage and energy consumption, by systematically selecting a minimum set of resource usage indicators and extracting complex relations among them that capture the impact on energy consumption of all the resources in the system. We demonstrate our methodology by generating power models for heterogeneous platforms with very different power consumption profiles. Our validation experiments with real Cloud applications show that such models provide high accuracy (around 5% average estimation error). This work is supported by the Spanish Ministry of Economy and Competitiveness under contract TIN2015-65316-P, by the Generalitat de Catalunya under contract 2014-SGR-1051, and by the European Commission under FP7-SMARTCITIES-2013 contract 608679 (RenewIT) and FP7-ICT-2013-10 contracts 610874 (ASCETiC) and 610456 (EuroServer). Peer reviewed. Postprint (author's final draft).
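
    The abstract describes the methodology only at a high level. As a rough illustration of the general idea, namely fitting a single full-system power model per platform from a small set of resource-usage indicators and judging it by its average estimation error, the following Python sketch trains a simple polynomial regression on synthetic monitoring data. The indicator names, the synthetic data and the use of scikit-learn are assumptions made for the example; they are not the authors' implementation.

```python
# Minimal sketch of the general idea (not the paper's methodology):
# learn a full-system power model from a few resource-usage indicators
# and report the average estimation error on held-out samples.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Hypothetical monitoring data: one row per interval with utilization
# indicators (CPU, memory, disk I/O, network) in [0, 1] and the
# measured wall power of the node in watts.
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 4))
power = (80.0                              # idle power
         + 120.0 * X[:, 0] + 40.0 * X[:, 1]
         + 25.0 * X[:, 0] * X[:, 1]        # a cross-resource interaction
         + 15.0 * X[:, 2] + 10.0 * X[:, 3]
         + rng.normal(0.0, 3.0, size=n))   # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

# Degree-2 polynomial features let the linear model capture simple
# relations among indicators, in the spirit of the "complex relations"
# the abstract refers to.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X_tr, y_tr)

errors = np.abs(model.predict(X_te) - y_te) / y_te
print(f"average estimation error: {100 * errors.mean():.1f}%")
```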

    Center for Space Microelectronics Technology

    The 1992 Technical Report of the Jet Propulsion Laboratory Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the center during the past year. The report lists 187 publications, 253 presentations, and 111 new technology reports and patents in the areas of solid-state devices, photonics, advanced computing, and custom microcircuits

    Center for Space Microelectronics Technology. 1993 Technical Report

    The 1993 Technical Report of the Jet Propulsion Laboratory Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the Center during the past year. The report lists 170 publications, 193 presentations, and 84 New Technology Reports and patents.

    Computer Science 2019 APR Self-Study & Documents

    UNM Computer Science APR self-study report and review team report for Spring 2019, fulfilling requirements of the Higher Learning Commission

    One machine, one minute, three billion tetrahedra

    This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure, an efficient sorting of the points and the optimization of the insertion algorithm have allowed us to accelerate reference implementations by a factor of three. Our second contribution is a multi-threaded version of the Delaunay kernel that is able to concurrently insert vertices. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performance of our implementation has been measured on three different processors, an Intel Core i7, an Intel Xeon Phi and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second. We finally show how this very efficient parallel Delaunay triangulation can be integrated into a Delaunay refinement mesh generator that takes as input the triangulated surface boundary of the volume to mesh.
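
    As a rough illustration of the partitioning step described above, the following Python sketch orders a point cloud along a space-filling curve and hands contiguous chunks of that ordering to worker threads. A Z-order (Morton) key is used here as a simple stand-in for the Moore curve coordinates of the paper, and the chunking is plain array splitting; none of this is the authors' implementation.

```python
# Illustrative sketch of the partitioning idea only (not the authors'
# code): order the points along a space-filling curve and give each
# worker thread a contiguous chunk of that ordering. A Z-order (Morton)
# key stands in here for the Moore curve used in the paper.
import numpy as np

def morton_key(unit_pts, bits=10):
    """Interleave the bits of the quantized x, y, z coordinates."""
    q = np.clip(unit_pts * (1 << bits), 0, (1 << bits) - 1).astype(np.int64)
    key = np.zeros(len(unit_pts), dtype=np.int64)
    for b in range(bits):
        for d in range(3):
            key |= ((q[:, d] >> b) & 1) << (3 * b + d)
    return key

def partition(points, n_threads):
    """Sort points along the curve and split into one chunk per thread."""
    pts = np.asarray(points, dtype=np.float64)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    unit = (pts - lo) / np.where(hi > lo, hi - lo, 1.0)  # map into the unit cube
    order = np.argsort(morton_key(unit))
    return np.array_split(pts[order], n_threads)

# Consecutive keys correspond to nearby points, so each chunk is
# spatially compact and threads mostly insert into disjoint regions of
# the triangulation; the remaining conflicts near chunk boundaries are
# what the paper resolves by rescaling the space-filling curve.
chunks = partition(np.random.rand(1_000_000, 3), n_threads=8)
print([len(c) for c in chunks])
```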

    Faculty Publications and Creative Works 1997

    One of the ways we recognize our faculty at the University of New Mexico is through this annual publication, which highlights our faculty's scholarly and creative activities and achievements and serves as a compendium of UNM faculty efforts during the 1997 calendar year. Faculty Publications and Creative Works strives to illustrate the depth and breadth of research activities performed throughout our University's laboratories, studios and classrooms. We believe that the communication of individual research is a significant method of sharing concepts and thoughts and ultimately inspiring the birth of new ideas. In support of this, UNM faculty during 1997 produced over 2,770 works, including 2,398 scholarly papers and articles, 72 books, 63 book chapters, 82 reviews, 151 creative works and 4 patents. We are proud of the accomplishments of our faculty, reflected in part in this book, which illustrates the diversity of intellectual pursuits in support of research and education at the University of New Mexico. Nasir Ahmed, Interim Associate Provost for Research and Dean of Graduate Studies.

    Chemically specific multiscale modeling of clay-polymer nanocomposites reveals intercalation dynamics, tactoid self-assembly and emergent materials properties

    A quantitative description is presented of the dynamical process of polymer intercalation into clay tactoids and the ensuing aggregation of polymer-entangled tactoids into larger structures, obtaining various characteristics of these nanocomposites, including clay-layer spacings, out-of-plane clay-sheet bending energies, X-ray diffractograms, and materials properties. This model of clay-polymer interactions is based on a three-level approach, which uses quantum mechanical and atomistic descriptions to derive a coarse-grained yet chemically specific representation that can resolve processes on hitherto inaccessible length and time scales. The approach is applied to study collections of clay mineral tactoids interacting with two synthetic polymers, poly(ethylene glycol) and poly(vinyl alcohol). The controlled behavior of layered materials in a polymer matrix is centrally important for many engineering and manufacturing applications. This approach opens up a route to computing the properties of complex soft materials based on knowledge of their chemical composition, molecular structure, and processing conditions. This work was funded in part by the EU FP7 MAPPER project (grant number RI-261507) and the Qatar National Research Fund (grant number 09–260–1–048). Supercomputing time was provided by PRACE on JUGENE (project PRA044), the Hartree Centre (Daresbury Laboratory) on BlueJoule and BlueWonder via the CGCLAY project, and on HECToR and ARCHER, the UK national supercomputing facility at the University of Edinburgh, via EPSRC through grants EP/F00521/1, EP/E045111/1, EP/I017763/1 and the UK Consortium on Mesoscopic Engineering Sciences (EP/L00030X/1). The authors are grateful to Professor Julian Evans for stimulating discussions during the course of this project. Data-storage and management services were provided by EUDAT (grant number 283304).
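
    One concrete quantity mentioned above, the clay-layer spacing, can be related to an X-ray diffractogram through Bragg's law. The short Python example below does that conversion; the peak position and the Cu K-alpha wavelength are illustrative values chosen for the example, not numbers taken from the paper.

```python
# Small worked example with illustrative numbers (not values from the
# paper): convert the position of a basal reflection in an X-ray
# diffractogram into a clay-layer spacing using Bragg's law,
#   n * lambda = 2 * d * sin(theta).
import math

def d_spacing(two_theta_deg, wavelength_angstrom=1.5406, order=1):
    """Layer spacing d (Angstrom) for a peak at 2*theta degrees, Cu K-alpha by default."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# A hypothetical basal peak at 2*theta = 4.9 degrees corresponds to a
# spacing of roughly 18 Angstrom, i.e. galleries widened by the
# intercalated polymer compared with the roughly 10 Angstrom spacing
# of the dry clay.
print(f"d = {d_spacing(4.9):.1f} Angstrom")
```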

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’ held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds that come at the cost of an enormous amount of resources. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of Petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically. Peer reviewed. "Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jézéquel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortí, Francesco Rizzi, Ulrich Rüde, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thönnes, Andreas Wagner and Barbara Wohlmuth". Postprint (author's final draft).
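
    The cost estimate quoted in the abstract can be reproduced with a few lines of arithmetic, as in the Python snippet below. The energy price of roughly 0.10 Euro per kWh is an assumption chosen to match the quoted figure; the power draw, runtime and exascale rate come from the text.

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
# The energy price (~0.10 EUR/kWh) is an assumption chosen to match
# the quoted cost; the other numbers come from the text.
power_mw = 20          # predicted exascale system power draw
hours = 48             # duration of a typical large-scale run
flops_per_s = 1e18     # exascale rate: 10^18 floating-point ops per second
eur_per_kwh = 0.10     # assumed energy price

energy_kwh = power_mw * 1000 * hours       # 960,000 kWh, about a million kWh
cost_eur = energy_kwh * eur_per_kwh        # about 100k Euro
total_flops = flops_per_s * hours * 3600   # about 1.7e23 operations

print(f"energy {energy_kwh:,.0f} kWh, cost ~{cost_eur:,.0f} EUR, ops {total_flops:.2e}")
```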

    Chemical & Nuclear Engineering 2009 APR Self-Study & Documents

    UNM Chemical & Nuclear Engineering APR self-study report, review team report, response to report, and initial action plan for Spring 2009, fulfilling requirements of the Higher Learning Commission