205 research outputs found
A Survey and Comparative Study of Hard and Soft Real-time Dynamic Resource Allocation Strategies for Multi/Many-core Systems
Multi-/many-core systems are envisioned to satisfy the ever-increasing performance requirements of complex applications in various domains, such as embedded and high-performance computing. Such systems need to cater to increasingly dynamic workloads, requiring efficient dynamic resource allocation strategies to satisfy hard or soft real-time constraints. This article provides an extensive survey of hard and soft real-time dynamic resource allocation strategies proposed since the mid-1990s and highlights the emerging trends for multi-/many-core systems. The survey covers a taxonomy of the resource allocation strategies and considers their various optimization objectives, which are used to provide a comprehensive comparison. The strategies employ various principles, such as market and biological concepts, to perform the optimizations. The trends followed by the resource allocation strategies, open research challenges, and likely emerging research directions are also discussed.
Learning-based run-time power and energy management of multi/many-core systems: current and future trends
Multi-/many-core systems are prevalent in several application domains targeting different scales of computing, such as embedded and cloud computing. These systems are able to fulfil the ever-increasing performance requirements by exploiting their parallel processing capabilities. However, effective power/energy management is required during system operation for several reasons: to increase the operational time of battery-operated systems, reduce the energy cost of datacenters, and improve thermal efficiency and reliability. This article provides an extensive survey of learning-based run-time power/energy management approaches, including a taxonomy of such approaches. These approaches perform design-time and/or run-time power/energy management by employing learning principles such as reinforcement learning. The survey also highlights the trends followed by learning-based run-time power management approaches, their likely future directions, and open research challenges.
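The survey stays at the level of taxonomy, but the kind of approach it covers can be made concrete. Below is a minimal, self-contained sketch of a tabular Q-learning power manager that maps workload-utilization bins to DVFS frequency levels; the state and action spaces, the toy reward model, and the synthetic workload trace are illustrative assumptions, not taken from any surveyed approach.

```python
import random

# Illustrative sketch of a learning-based run-time power manager:
# a tabular Q-learning agent picks a DVFS frequency level from an
# observed utilization bin. States, actions, and the reward model
# are toy assumptions, not a specific surveyed technique.

N_STATES = 4            # utilization bins: 0-25%, 25-50%, 50-75%, 75-100%
N_ACTIONS = 3           # frequency levels: 0 = low, 1 = medium, 2 = high
ALPHA, GAMMA, EPS = 0.1, 0.5, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(state, action):
    """Toy reward: penalize power draw (grows with frequency) and
    performance loss (high load served at a low frequency)."""
    power_cost = action + 1                  # low = 1 ... high = 3
    perf_penalty = 2 * max(0, state - action)
    return -(power_cost + perf_penalty)

def choose(state, rng):
    if rng.random() < EPS:                   # epsilon-greedy exploration
        return rng.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def step(state, next_state, rng):
    a = choose(state, rng)
    target = reward(state, a) + GAMMA * max(Q[next_state])
    Q[state][a] += ALPHA * (target - Q[state][a])

# The workload (the state sequence) is exogenous here: utilization is
# driven by the applications, not by the frequency choice.
rng = random.Random(0)
trace = [rng.randrange(N_STATES) for _ in range(20000)]
for s, s_next in zip(trace, trace[1:]):
    step(s, s_next, rng)

# The learned policy should serve high load at a high frequency
# and low load at a low one.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

In a real governor the state would come from performance counters and the action would set a CPU frequency (e.g., via cpufreq); the table above only shows the learning loop itself.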
Resource-aware Programming in a High-level Language - Improved performance with manageable effort on clustered MPSoCs
Until 2001, Moore's and Dennard's laws meant that improved CPUs doubled execution speed every 18 months. Today, concurrency is the dominant means of acceleration, from supercomputers down to mobile devices. However, newer phenomena such as "dark silicon" increasingly impede further acceleration through hardware alone. To achieve further speedups, software must also become more aware of its hardware resources. Connected with this phenomenon is increasingly heterogeneous hardware: supercomputers integrate accelerators such as GPUs, and mobile SoCs (e.g., in smartphones) integrate ever more capabilities. Exploiting specialized hardware is a well-known way to reduce energy consumption, another important aspect that must be weighed against raw speed; for example, supercomputers are also ranked by "performance per watt". At present, low-level systems programmers are used to reasoning about hardware, while the typical high-level programmer prefers to abstract from the platform as far as possible (e.g., in the cloud). "High-level" does not mean that hardware is irrelevant, but that it can be abstracted: if you develop a Java application for Android, battery life can be an important concern. Eventually, however, high-level languages must also become resource-aware in order to improve speed or energy consumption.
I have worked on these problems within the Transregio "Invasive Computing". In my dissertation, I present a framework for making high-level-language applications resource-aware in order to improve their performance. This can yield, for example, increased efficiency or faster execution for the system as a whole. A core idea is that applications do not optimize themselves. Instead, they pass all relevant information to the operating system, which has a global view and makes the decisions about resources. We call this process "invasion". The application's task is to adapt to these decisions, not to make them itself. The challenge is to define a language in which applications can communicate their resource constraints and performance information. Such a language must be expressive enough for complex information, extensible to new resource types, and convenient for the programmer.
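The invasion protocol described above can be sketched in a few lines. The following Python model is purely illustrative: the names invade/retreat echo invasive computing's vocabulary, but the API, the constraint fields, and the allocation policy are hypothetical, not the framework's actual interface. An application states its constraints, the resource manager with its global view decides, and the application adapts to whatever claim it is granted.

```python
from dataclasses import dataclass

# Hypothetical sketch of the invade/retreat protocol: the application
# only states constraints; all allocation decisions are made centrally.

@dataclass
class Constraint:
    min_cores: int          # the application cannot run with fewer
    max_cores: int          # more cores bring no further speedup
    scalability: float      # hint: parallel efficiency, 0..1

class ResourceManager:
    """Global view over a fixed pool of cores."""
    def __init__(self, total_cores):
        self.free = total_cores

    def invade(self, c: Constraint) -> int:
        """Decide a core count honoring the constraint, or 0 if infeasible."""
        if self.free < c.min_cores:
            return 0                       # request denied
        # Illustrative policy: grant more cores to applications that scale well.
        want = c.min_cores + int((c.max_cores - c.min_cores) * c.scalability)
        grant = min(want, self.free, c.max_cores)
        self.free -= grant
        return grant

    def retreat(self, cores: int):
        self.free += cores                 # application returns its claim

manager = ResourceManager(total_cores=16)
claim_a = manager.invade(Constraint(min_cores=2, max_cores=8, scalability=1.0))
claim_b = manager.invade(Constraint(min_cores=4, max_cores=16, scalability=0.5))
# Each application now adapts its degree of parallelism to its claim.
print(claim_a, claim_b, manager.free)
```

The design point mirrored from the abstract is that invade() is the only place where an allocation decision is made; applications never grab resources themselves.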
The central contributions of this dissertation are: a theoretical model of resource management that captures the essence of the resource-aware framework, justifies the correctness of the operating system's decisions with respect to an application's constraints, and proves my theses of efficiency and speedup in theory; a framework and a compilation path for resource-aware programming in the high-level language X10, evaluated with applications from high-performance computing, for which a speedup of 5x was measured; and a memory consistency model for the X10 programming language, a necessary step toward a formal semantics that links the theoretical model with the concrete implementation. In summary, I show that resource-aware programming in high-level languages is feasible with manageable effort on future many-core architectures and improves performance.
Textile Society of America Newsletter 28:1 – Spring 2016
Letter from the Editor
Volunteer Opportunity: TSA Is Looking for a New Proceedings Editor
Letter from the President
Textiles Close Up Report: Art of the Zo: Textiles from Myanmar, India, and Bangladesh, Chin Weaving at the Philadelphia Museum of Art
R. L. Shep Ethnic Textile Book Award 2015 Nominees
Ossabaw Island, Indigo, and Sea Island Cotton: Two Ways to See a Georgia Barrier Island
Peer-Review Process Yields Range of Exciting Exhibitions for Biennial Symposium
Book Reviews: Symbols of Power: Luxury Items from Islamic Lands, 7th–21st Century; Textiles of the Banjara: Cloth and Culture of a Wandering Tribe; The Handbook of Textile Culture; Traditional Weavers of Guatemala: Their Stories, Their Lives; Designing Identity: The Power of Textiles in Late Antiquity
Conference Review: 21st Annual Weaving History Conference, 2015
Featured Exhibitions: Heirlooms, Catastrophe, and Survival: The Lace and Sampler Collection of the Palazzo Davanzati; The Fabric of India, Victoria and Albert Museum, London; Fashion Meets Technology in #techstyle, Museum of Fine Arts, Boston
Shibori and Ikat in Mesoamerica
International Report: The Centre for Textile Conservation at the University of Glasgow & a New Era for Textile Dye Research in Scotland; The Philippine Textile Research Institute, Taguig City, Philippines
Member News: Member Publications; Member Workshops and Lectures; Member Awards & Honors; Member Exhibitions
Conferences & Opportunities
Symposium Program [Crosscurrents: Land, Labor, and the Port], 15th Biennial Symposium, Savannah, Georgia, October 19–23, 2016
Dependable Embedded Systems
This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems that have emerged particularly within the last five years. It presents the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.
Educational Considerations, Vol. 43(3), Summer 2016 Full Issue
Global norms-domestic practice: the role of community-based organisations in the diffusion of HIV and human rights norms
International norms are central to international relations because they constitute key instruments to influence state behaviour (Finnemore and Sikkink, 1998; Risse and Sikkink, 1999; Acharya, 2004). The process by which international norms, principles and procedures diffuse into national systems is called norm diffusion (Krook and True, 2010; Towns, 2012; Brown, 2014). This thesis contributes to our understanding of the complexities of norm diffusion processes by undertaking the first in-depth analysis of the role that community-based organizations (CBOs) play in such processes. Focusing on the area of global health norms regarding HIV/AIDS, and based on extensive field research undertaken in Honduras, Ukraine, Uganda, and El Salvador, the thesis presents evidence of the CBOs analysed playing various essential roles in the diffusion of international norms domestically. First, they may act as implementers of such norms, ensuring their appropriation among the populations they represent and generating local practice, on occasion even bypassing their own governments when these have rejected such norms. Second, CBOs may also be able to influence their governments and other relevant state actors at the later stages of norm diffusion, when states are expected to implement international norms through their integration into national practice, even to the point of making states change their stated positions on certain international norms. Third, through simultaneous interaction with and entanglement in multiple norm diffusion processes, CBOs may also be able to alter such processes by tactically interlinking them and affecting their respective outcomes.
Co-simulation techniques based on virtual platforms for SoC design and verification in power electronics applications
In recent decades, investment in the energy sector has grown considerably. Numerous companies are currently developing equipment such as power converters and electrical machines with state-of-the-art control systems. The current trend is to use System-on-Chips and Field Programmable Gate Arrays to implement the entire control system. These devices enable more complex and efficient control algorithms, improving equipment efficiency and facilitating the integration of renewable energy systems into the electrical grid. However, the complexity of the control systems has also grown considerably, and with it the difficulty of their verification.
Hardware-in-the-loop (HIL) systems have emerged as a solution for non-destructive verification of power equipment, avoiding accidents and costly test-bench trials. HIL systems simulate the behavior of the power plant and its interface in real time so that the control board can be tested in a safe environment.
This thesis focuses on improving the verification process for control systems in power electronics applications. Its overall contribution is to provide an alternative to HIL for verifying the hardware/software of the control board. The alternative is based on the Software-in-the-loop (SIL) technique and seeks to overcome or mitigate the limitations identified in SIL to date.
To improve SIL, a software tool called COSIL has been developed that co-simulates the final implementation and integration of the control system, whether software (CPU), hardware (FPGA), or a mixture of both, together with its interaction with the power plant. The platform can operate at multiple levels of abstraction and includes support for mixed co-simulation in different languages such as C and VHDL.
Throughout the thesis, particular emphasis is placed on one of SIL's main limitations: its low simulation speed. Several solutions are proposed, such as using software emulators, different abstraction levels for software and hardware, and local clocks in the FPGA modules. In particular, an external synchronization mechanism is contributed for the QEMU software emulator, enabling its multi-core emulation. This contribution enables the use of QEMU in virtual co-simulation platforms such as COSIL.
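The external synchronization idea can be illustrated with a conservative lockstep scheme: each simulator is granted the same time quantum and synchronizes at every quantum boundary, so neither side runs ahead of the other. The sketch below is a generic toy model, not COSIL's or QEMU's actual interface.

```python
# Illustrative lockstep co-simulation: a coarse-stepping CPU emulator
# and a fine-stepping HDL simulator advance only up to a shared quantum
# boundary, so their views of simulated time never diverge by more than
# one quantum. All names and step sizes are assumptions for this sketch.

class Sim:
    def __init__(self, name, step_ns):
        self.name = name
        self.step_ns = step_ns       # how far one internal step advances
        self.now = 0                 # local simulated time (ns)

    def run_until(self, t_ns):
        """Advance local time, never past the granted quantum boundary."""
        while self.now + self.step_ns <= t_ns:
            self.now += self.step_ns

def cosimulate(sims, quantum_ns, end_ns):
    """Conservative lockstep: grant every simulator the same quantum,
    then synchronize before granting the next one."""
    t = 0
    while t < end_ns:
        t = min(t + quantum_ns, end_ns)
        for s in sims:
            s.run_until(t)
        # <-- here signals/transactions would be exchanged between sides

cpu = Sim("cpu-emulator", step_ns=7)    # coarse, fast emulator steps
rtl = Sim("hdl-simulator", step_ns=2)   # fine-grained HDL steps
cosimulate([cpu, rtl], quantum_ns=10, end_ns=50)
print(cpu.now, rtl.now)
```

A smaller quantum tightens the coupling between the two sides at the cost of more synchronization overhead, which is exactly the speed/accuracy trade-off the thesis addresses.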
The entire COSIL platform, including the use of QEMU, has been evaluated with different types of applications and in a real industrial project. Its use was critical for developing and verifying the software and hardware of the control system of a 400 kVA converter.
Supervised Design-Space Exploration
Low-cost Very Large Scale Integration (VLSI) electronics have revolutionized daily life and expanded the role of computation in science and engineering. Meanwhile, process-technology scaling has turned VLSI design into an exploration process that strives for the optimal balance among multiple objectives, such as power, performance, and area, i.e. multi-objective Pareto-set optimization. Moreover, modern VLSI design has shifted to synthesis-centric methodologies in order to boost design productivity, which leads to better design quality given limited time and resources. However, current decade-old synthesis-centric design methodologies suffer from: (i) long synthesis tool runtime, (ii) elusive optimal settings of many synthesis knobs, (iii) limitation to one design implementation per synthesis run, and (iv) limited capability of digesting only component-level designs as opposed to holistic system-wide synthesis. These challenges make Design Space Exploration (DSE) with synthesis tools a daunting task for both novice and experienced VLSI designers, thus stagnating the development of more powerful (i.e. more complex) computer systems.
To address these challenges, I propose Supervised Design-Space Exploration (SDSE), an abstraction layer between a designer and a synthesis tool, aiming to autonomously supervise synthesis jobs for DSE. For system-level exploration, SDSE can approximate a system Pareto set given limited information: only lightweight component characterization is required, yet the necessary component synthesis jobs are discovered on-the-fly in order to compose the system Pareto set. For component-level exploration, SDSE can approximate a component Pareto set by iteratively refining the approximation with promising knob settings, guided by synthesis-result estimation with machine-learning models. Combined, SDSE has been applied with the three major synthesis stages, namely high-level, logic, and physical synthesis, to the design of heterogeneous accelerator cores as well as high-performance processor cores. In particular, SDSE has been successfully integrated into the IBM Synthesis Tuning System, yielding 20% better circuit performance than the original system on the design of a 22nm server processor that is currently in production.
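The component-level refinement loop can be sketched as follows. Everything here is an illustrative stand-in: the two-knob space, the analytic "synthesis" cost function, and the 1-nearest-neighbour surrogate replace a real synthesis tool and the machine-learning models SDSE actually uses; only the shape of the loop (cheap prediction guiding expensive synthesis runs toward the Pareto front) reflects the abstract.

```python
import random

# Hypothetical sketch of ML-guided component-level exploration:
# approximate a Pareto set over synthesis knob settings by iteratively
# synthesizing promising settings chosen with a cheap learned estimator.

KNOBS = [(e, u) for e in range(8) for u in range(8)]   # two 3-bit knobs

def synthesize(knob):
    """Stand-in for an expensive synthesis run: returns (delay, area)."""
    effort, unroll = knob
    delay = 100 - 8 * effort - 3 * unroll + (effort * unroll) % 5
    area = 50 + 5 * effort + 6 * unroll
    return (delay, area)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto(points):
    return {k: v for k, v in points.items()
            if not any(dominates(w, v) for w in points.values())}

def predict(knob, known):
    """1-NN surrogate: predicted cost is that of the nearest synthesized knob."""
    nearest = min(known, key=lambda k: (k[0] - knob[0])**2 + (k[1] - knob[1])**2)
    return known[nearest]

rng = random.Random(1)
known = {k: synthesize(k) for k in rng.sample(KNOBS, 6)}   # cheap seed set

for _ in range(10):                         # refinement iterations
    front = pareto(known)
    # Keep only unexplored knobs whose predicted cost is not dominated
    # by any point already on the approximate front.
    promising = [k for k in KNOBS if k not in known
                 and not any(dominates(v, predict(k, known))
                             for v in front.values())]
    if not promising:
        break
    pick = rng.choice(promising)
    known[pick] = synthesize(pick)          # one real synthesis run per step

print(len(known), len(pareto(known)))       # evaluated points vs. front size
```

At most 16 of the 64 knob settings are ever synthesized here; the surrogate decides where the expensive runs are spent, which is the essence of supervising synthesis jobs for DSE.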
Looking ahead, SDSE can be applied to other VLSI designs beyond the accelerator and the programmable cores. Moreover, SDSE opens several research avenues: (i) new development and deployment platforms for synthesis tools, (ii) large-scale collaborative design engineering, and (iii) new computer-aided design approaches for new classes of systems beyond VLSI chips.
- …