
    Just a Second -- Scheduling Thousands of Time-Triggered Streams in Large-Scale Networks

    Deterministic real-time communication with bounded delay is an essential requirement for many safety-critical cyber-physical systems and has received much attention from major standardization bodies such as the IEEE and IETF. In particular, Ethernet technology has been extended with time-triggered scheduling mechanisms in standards like TTEthernet and Time-Sensitive Networking. Although these scheduling mechanisms have become part of the standards, the traffic-planning algorithms that create time-triggered schedules remain an open and challenging research question due to the problem's high complexity. In particular, so-called plug-and-produce scenarios require the ability to extend schedules on the fly within seconds. The need for scalable scheduling and routing algorithms is further underscored by large-scale distributed real-time systems, such as smart energy grids, with tight communication requirements. In this paper, we tackle this challenge by proposing two novel algorithms, Hierarchical Heuristic Scheduling (H2S) and Cost-Efficient Lazy Forwarding Scheduling (CELF), to calculate time-triggered schedules for TTEthernet. H2S and CELF are highly efficient and scalable, calculating schedules for more than 45,000 streams on random networks with 1,000 bridges, as well as on a realistic energy grid network, in sub-seconds to seconds.
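
    The abstract gives no pseudo-code for H2S or CELF, so the sketch below only illustrates the feasibility constraint any such scheduler must respect: a frame's transmission windows along its route may not overlap windows already reserved on a shared link. The greedy single-period model and all names are assumptions, not the paper's algorithms.

```python
# Minimal greedy sketch of time-triggered offset assignment (illustrative;
# not the paper's H2S/CELF). All streams are assumed to share one period,
# and each hop is assumed to take `duration` time units.

def overlaps(s1, e1, s2, e2):
    """True if the half-open intervals [s1, e1) and [s2, e2) intersect."""
    return s1 < e2 and s2 < e1

def schedule_streams(streams, period):
    """streams: iterable of (stream_id, route, duration); route = list of links.
    Returns {stream_id: offset}, assigning the earliest conflict-free offset."""
    busy = {}      # link -> list of reserved (start, end) intervals
    offsets = {}
    for sid, route, duration in streams:
        for offset in range(period - len(route) * duration + 1):
            # The frame occupies hop h during [offset + h*d, offset + (h+1)*d).
            slots = [(link, offset + h * duration, offset + (h + 1) * duration)
                     for h, link in enumerate(route)]
            if all(not overlaps(s, e, bs, be)
                   for link, s, e in slots
                   for bs, be in busy.get(link, [])):
                for link, s, e in slots:
                    busy.setdefault(link, []).append((s, e))
                offsets[sid] = offset
                break
        else:
            raise ValueError(f"no feasible offset for stream {sid}")
    return offsets

# Example: two streams contending for link "A-B" inside a 10-unit period.
print(schedule_streams([(1, ["A-B", "B-C"], 2), (2, ["A-B"], 2)], period=10))
```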

    Energy Saving Techniques for Phase Change Memory (PCM)

    In recent years, the energy consumption of computing systems has increased, and a large fraction of this energy is consumed in main memory. To address this, researchers have proposed the use of non-volatile memory, such as phase change memory (PCM), which has low read latency and read power, and nearly zero leakage power. However, the write latency and write power of PCM are very high, and this, along with the limited write endurance of PCM, presents significant challenges to its widespread adoption. Several architecture-level techniques have been proposed in response. In this report, we review several techniques for managing the power consumption of PCM and classify them based on their characteristics to provide insights into them. The aim of this work is to encourage researchers to propose even better techniques for improving the energy efficiency of PCM-based main memory.
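
    Two classic write-energy techniques that surveys of this area commonly cover are data-comparison write (program only the bits that actually change) and Flip-N-Write (store the word inverted when that programs fewer cells). The sketch below, with an invented word width and a simplified flag-cell cost, shows how the number of programmed cells would be computed; it is illustrative, not taken from this report.

```python
# Sketch of two classic PCM write-energy savers: data-comparison write
# (program only changed bits) and Flip-N-Write (store the inverted word when
# that flips fewer cells). Illustrative only; the word width is an assumption.

WIDTH = 8
MASK = (1 << WIDTH) - 1

def bits_changed(old, new):
    """Count bit positions that differ between two stored words."""
    return bin(old ^ new).count("1")

def cells_programmed(stored, new):
    """Cells to program for one word update, choosing the cheaper encoding.
    The +1 charges one flag cell whenever the inverted form is chosen
    (tracking the flag's previous value is elided for brevity)."""
    direct = bits_changed(stored, new)               # data-comparison write
    inverted = bits_changed(stored, new ^ MASK) + 1  # Flip-N-Write
    return min(direct, inverted)

# Updating 0b11110000 to 0b11110001 programs 1 cell instead of all 8.
print(cells_programmed(0b11110000, 0b11110001))
```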

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and material discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications may accept only those CSPs that follow HIPAA (Health Insurance Portability and Accountability Act) guidelines in implementing their data services, while wanting services from other CSPs for economic viability. As more and more data is generated, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the cost of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary/confidential data that introduces additional security-posture constraints. Users therefore have to make an informed decision when selecting the resources best suited to their applications, trading off among the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution may not be economical for all users, which has led to the development of new computing paradigms such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements. For volunteered resources to be economical and readily available, it is essential that they integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, the individual stages in the lifecycle of resource brokering for users are tackled: user requirement collection, users' resource preferences, resource brokering, and task scheduling. For the collection of user requirements, a novel approach based on an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise to guide resource selection for their applications. The results showed an improvement in performance (i.e., time to execute) in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources to guide template-solution creation. The proposed solution was able to improve the time to execute for as much as 96 percent of the largest applications.
To fulfill the need for economical computing resources, a new computing paradigm, namely Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results have shown improved execution times for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources. Consequently, we have utilized reinforcement learning based solutions to characterize volunteered resources in terms of their availability and their flexibility toward implementing security policies. This characterization facilitates efficient resource allocation and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
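
    The abstract does not spell out OnTimeURB's formulation; the toy integer linear program below merely conveys the flavor of PACS-style brokering: place each workflow task on exactly one CSP, minimize cost, and respect a deadline and a security floor. The CSP data, constants, and the PuLP model are invented for illustration.

```python
# Toy ILP in the spirit of multi-cloud resource brokering (hypothetical data;
# not the actual OnTimeURB formulation). Requires: pip install pulp
import pulp

csps = {          # name: (cost, exec_time, security_score)
    "csp_a": (10, 4, 0.9),
    "csp_b": (6, 7, 0.7),
    "csp_c": (3, 9, 0.4),
}
tasks = ["align", "annotate"]
DEADLINE, MIN_SECURITY = 14, 0.6

prob = pulp.LpProblem("broker", pulp.LpMinimize)
x = {(t, c): pulp.LpVariable(f"x_{t}_{c}", cat="Binary")
     for t in tasks for c in csps}

# Objective: total monetary cost of the selected placements.
prob += pulp.lpSum(csps[c][0] * x[t, c] for t in tasks for c in csps)
for t in tasks:
    prob += pulp.lpSum(x[t, c] for c in csps) == 1   # place each task once
    for c in csps:                                   # security floor per task
        prob += csps[c][2] * x[t, c] >= MIN_SECURITY * x[t, c]
# Deadline on the (sequential) makespan across tasks.
prob += pulp.lpSum(csps[c][1] * x[t, c] for t in tasks for c in csps) <= DEADLINE

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: c for (t, c), var in x.items() if var.value() == 1})
```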

    Orchestrating datacenters and networks to facilitate the telecom cloud

    In the Internet of services, information technology (IT) infrastructure providers play a critical role in making services accessible to end users. IT infrastructure providers host platforms and services in their datacenters (DCs). The cloud initiative has been accompanied by the introduction of new computing paradigms, such as Infrastructure as a Service (IaaS) and Software as a Service (SaaS), which have dramatically reduced the time and cost required to develop and deploy a service. However, transport networks are crucial for making services accessible to the user and for operating DCs. Transport networks are currently configured as big static fat pipes based on capacity over-provisioning, aiming to guarantee traffic demand and other parameters committed in Service Level Agreement (SLA) contracts. However, such over-dimensioning adds high operational costs for DC operators and service providers, so new mechanisms that provide reconfiguration and adaptability of the transport network are required to reduce the amount of over-provisioned bandwidth. Although a cloud-ready transport network architecture was introduced to handle dynamic cloud and network interaction, and Elastic Optical Networks (EONs) can facilitate elastic network operations, orchestration between the cloud and the interconnection network is ultimately required to coordinate resources in both strata in a coherent manner. In addition, the explosion of Internet Protocol (IP)-based services requiring not only dynamic cloud and network interaction but also additional service-specific SLA parameters, together with the expected benefits of Network Functions Virtualization (NFV), opens an opportunity for telecom operators to exploit that cloud-ready transport network and their current infrastructure to efficiently satisfy the services' network requirements. In the telecom cloud, a pay-per-use model can be offered to support services requiring resources from the transport network and its infrastructure. In this thesis, we study the connectivity requirements of representative cloud-based services and explore connectivity models, architectures, and orchestration schemes to satisfy them, aiming to facilitate the telecom cloud. The main objective of this thesis is to demonstrate, by means of analytical models and simulation, the viability of orchestrating DCs and networks to facilitate the telecom cloud. To achieve this goal, we first study the connectivity requirements for DC interconnection and services in a number of scenarios that require connectivity from the transport network. Specifically, we focus on DC federations, live-TV distribution, and 5G mobile networks. Next, we study different connectivity schemes, algorithms, and architectures aiming to satisfy those connectivity requirements. In particular, we study polling-based models for dynamic inter-DC connectivity and propose a novel notification-based connectivity scheme in which inter-DC connectivity can be delegated to the network operator. Additionally, we explore virtual network topology provisioning models to support services that require service-specific SLA parameters in the telecom cloud. Finally, we focus on DC and network orchestration to simultaneously fulfill SLA contracts for a set of customers requiring connectivity from the transport network.
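
    As a rough illustration of the polling-versus-notification distinction discussed above, the hypothetical sketch below shows how a DC's periodic readiness checks could be replaced by a single demand declaration that delegates connection management to the network operator; the interfaces are invented, not the thesis' actual architecture.

```python
# Toy contrast between polling- and notification-based inter-DC connectivity
# (hypothetical interfaces; the thesis' architectures are far richer).

class NetworkOperator:
    """Owns the transport network and, under the notification model,
    manages inter-DC connections on behalf of the datacenters."""
    def __init__(self):
        self.ready = False
        self.handlers = []

    # Polling model: the DC repeatedly asks whether the connection is ready.
    def poll_connection(self):
        return self.ready

    # Notification model: the DC declares its demand once and is called back.
    def notify_demand(self, src, dst, gbps, on_ready):
        self.handlers.append(on_ready)
        self.ready = True                  # operator provisions the path
        for handler in self.handlers:
            handler(src, dst, gbps)

op = NetworkOperator()
op.notify_demand("DC-1", "DC-2", 40,
                 lambda s, d, g: print(f"{s}->{d} ready at {g} Gb/s"))
```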

    Design of Time-Sensitive Networks For Safety-Critical Cyber-Physical Systems

    A new era of Cyber-Physical Systems (CPSs) is emerging due to the vast growth in computation and communication technologies. Fault-tolerant and timely communication is the backbone of any CPS, interconnecting the distributed controllers with the physical processes. These reliability and timing requirements become even more stringent in safety-critical applications, such as avionics and automotive. Future networks have to meet increasing bandwidth and coverage demands without compromising reliability and timing. Ethernet technology is efficient at providing a low-cost, scalable networking solution; however, non-deterministic queuing delay and packet collisions preclude low-latency communication in standard Ethernet. In this context, the IEEE 802.1 Time-Sensitive Networking (TSN) standard has been introduced as an extension of Ethernet to realize a switched network architecture with real-time capabilities. TSN offers deterministic communication for Time-Triggered (TT) traffic and bounded Worst-Case end-to-end Delay (WCD) for Audio Video Bridging (AVB) traffic. In this thesis, we are interested in TSN design and verification, which are challenging tasks, especially for realistic safety-critical applications. The increasing complexity of CPSs widens the gap between the underlying networks' scale and the capabilities of existing design techniques; TSN's existing scheduling techniques, which are limited to small and medium networks, are a good example of this gap. Moreover, in some applications, e.g., fog computing, TSN has to handle dynamic traffic. Other challenges relate to satisfying the fault-tolerance constraints of mixed-criticality traffic in a resource-efficient manner. Furthermore, in space and avionics applications, the harsh radiation environment requires verifying TSN availability under Single Event Upset (SEU)-induced failures. In other words, TSN design has to manage a large variety of constraints regarding cost, redundancy, and delivery latency, and no single design approach fits all applications. Efficient deployment of TSN therefore demands a flexible design framework that offers several design approaches to meet the broad range of timing, reliability, and cost constraints. This thesis aims to develop such a TSN design framework, enabling TSN deployment in a broad spectrum of CPSs. The framework introduces a set of methods that address the reliability, timing, and scalability aspects, covering topology synthesis, traffic planning, and early-stage modeling and analysis; the proposed methods work together to meet a large variety of CPS constraints. Concretely, this thesis proposes a scalable heuristic-based method for topology synthesis, ILP formulations for reliability-aware AVB traffic routing to address fault-tolerant transmission, and a novel method for scalable scheduling of TT traffic to attain real-time transmission. To optimize TSN for dynamic traffic, we propose a new priority assignment technique based on reinforcement learning. Regarding TSN verification in harsh radiation environments, we introduce formal models to investigate the impact of SEU-induced switch failures on TSN availability; the proposed analysis adopts model checking and statistical model checking techniques to discover and characterize vulnerable design candidates.
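
    The abstract does not detail the reinforcement learning formulation; the generic epsilon-greedy sketch below only shows the shape such a priority assigner could take, with the reward standing in for simulated deadline misses. All parameters are assumptions.

```python
# Generic epsilon-greedy sketch of learning per-flow priority assignment
# (illustrative; the thesis' actual RL formulation is not in the abstract).
import random

PRIORITIES = range(8)  # IEEE 802.1Q traffic classes 0..7

def train(flows, reward_fn, episodes=500, eps=0.1, lr=0.5):
    """Learn a priority per flow; reward_fn(assignment) -> float, e.g. the
    negative count of missed deadlines observed in a network simulation."""
    q = {(f, p): 0.0 for f in flows for p in PRIORITIES}
    for _ in range(episodes):
        assignment = {
            f: (random.choice(list(PRIORITIES)) if random.random() < eps
                else max(PRIORITIES, key=lambda p: q[f, p]))
            for f in flows
        }
        r = reward_fn(assignment)
        for f, p in assignment.items():      # bandit-style value update
            q[f, p] += lr * (r - q[f, p])
    return {f: max(PRIORITIES, key=lambda p: q[f, p]) for f in flows}

# Toy reward: pretend flow "control" only meets deadlines at priority >= 6.
print(train(["control"], lambda a: 1.0 if a["control"] >= 6 else -1.0))
```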

    Power-constrained aware and latency-aware microarchitectural optimizations in many-core processors

    As transistor budgets outpace the power envelope (the power-wall issue), new architectural and microarchitectural techniques are needed to improve, or at least maintain, the power efficiency of next-generation processors. Run-time adaptation, including core, cache, and DVFS adaptations, has recently emerged as a promising way to keep power efficiency acceptable. However, none of the adaptation techniques proposed so far provides good results under the stringent power budgets that will be common in the next decades, so new techniques that attack the problem from several fronts using different specialized mechanisms are necessary. Combining different power management mechanisms, however, brings extra levels of complexity, since other factors such as workload behavior and run-time conditions must also be considered to properly allocate power among cores and threads. To address the power issue, this thesis first proposes Chrysso, an integrated and scalable model-driven power manager that quickly selects the best combination of adaptation methods from different core and uncore microarchitecture adaptations, per-core DVFS, or any combination thereof. Chrysso can quickly search the adaptation space by making performance/power projections to identify Pareto-optimal configurations, effectively pruning the search space. Chrysso achieves 1.9x better chip performance over core-level gating for multi-programmed workloads, and 1.5x higher performance for multi-threaded workloads. Most existing power management schemes use a centralized approach to regulate power dissipation; unfortunately, the complexity and overhead of centralized power management increase significantly with core count, rendering it unviable at fine-grained time slices. This work therefore leverages a two-tier hierarchical power manager, a highly scalable solution with low overhead on a tiled many-core architecture with a shared LLC and per-tile DVFS at fine-grained time slices. The global power budget is first distributed across tiles by a global power manager (GPM) and then within each tile (in parallel across all tiles). Additionally, this work proposes DVFS- and cache-aware thread migration (DCTM) to ensure optimal per-tile co-scheduling of compatible threads at runtime on top of the two-tier hierarchical power manager; DCTM outperforms existing solutions by up to 12% on an adaptive many-core tile processor. With advancements in core microarchitectural techniques and technology scaling, the performance gap between the computational and memory components is increasing significantly (the memory-wall issue). To bridge this gap, the architecture community is pushing toward multi-core architectures with on-die near-memory DRAM cache (faster than conventional DRAM). Gigascale DRAM caches pose the problem of how to manage the tags efficiently. Tags-in-DRAM designs aim to co-locate tags with data efficiently, but they still suffer from high latency, especially at high associativity. This thesis finally proposes the Tag Cache mechanism, an on-chip distributed tag-caching mechanism with limited space and latency overhead that bypasses the tag read operation in multi-way DRAM caches, thereby reducing hit latency. Each Tag Cache, stored in L2, holds the tag information of the most recently used DRAM cache ways.
The Tag Cache is able to exploit the temporal locality of the DRAM Cache, thereby contributing, on average, to 46% of the DRAM Cache hits.
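
    Based on the description above, a minimal model of the Tag Cache idea is an LRU map from (set, tag) to the DRAM-cache way, consulted before the in-DRAM tag probe. The capacity and interfaces below are illustrative assumptions.

```python
# Minimal sketch of an L2-resident "Tag Cache" that remembers which DRAM-cache
# way holds a recently used tag, bypassing the in-DRAM tag read on a hit.
# Sizes and the hit/miss protocol are simplified assumptions.
from collections import OrderedDict

class TagCache:
    def __init__(self, capacity=4096):
        self.capacity = capacity
        self.entries = OrderedDict()   # (set_index, tag) -> way, LRU-ordered

    def lookup(self, set_index, tag):
        """Return the cached way (skipping the DRAM tag probe) or None."""
        key = (set_index, tag)
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh LRU position
            return self.entries[key]
        return None

    def fill(self, set_index, tag, way):
        """Record the way discovered by a full DRAM-cache tag probe."""
        self.entries[(set_index, tag)] = way
        self.entries.move_to_end((set_index, tag))
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

tc = TagCache(capacity=2)
tc.fill(set_index=7, tag=0xAB, way=3)
print(tc.lookup(7, 0xAB))   # 3 -> tag read bypassed
print(tc.lookup(7, 0xCD))   # None -> fall back to multi-way tag probe
```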

    NFV/SDN enabled architecture for efficient adaptive management of renewable and non-renewable energy

    Ever-increasing energy consumption, the depletion of non-renewable resources, the climate impact associated with energy generation, and finite energy-production capacity are important concerns that drive the urgent search for new energy-management solutions. In this regard, by leveraging the massive connectivity provided by emerging 5G communications, this paper proposes a long-term sustainable Demand-Response (DR) architecture for efficiently managing the available energy consumption of Internet of Things (IoT) infrastructures. The proposal uses Network Functions Virtualization (NFV) and Software Defined Networking (SDN) technologies as enablers and promotes the primary use of energy from renewable sources. Together with the architecture, this paper presents a novel consumption model that is conditioned on availability and in which consumers are part of the management process. To use energy from renewable and non-renewable sources efficiently, several management strategies are proposed, such as prioritization of the energy supply and workload scheduling using time-shifting capabilities. The complexity of the proposal is analyzed in order to present an appropriate architectural framework. The energy-management solution is modeled as an Integer Linear Program (ILP) and, to verify the improvements in energy utilization, an algorithmic solution and its evaluation are presented. Finally, open research problems and application scenarios are discussed. This work was supported in part by the Ministerio de Economía y Competitividad of the Spanish Government under Project TEC2016-76795-C6-1-R (AEI/FEDER, UE), and in part by the SGR Project under Grant 2017 SGR 397 from the Generalitat de Catalunya.
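
    As a hedged illustration of two of the strategies named above, supply prioritization and time-shifting, the sketch below greedily places deferrable loads into the slots with the most spare renewable energy, falling back to non-renewable capacity. The hourly profiles and the one-slot-per-load model are invented, not the paper's ILP.

```python
# Greedy sketch of renewable-first, time-shifted workload placement
# (illustrative; the paper formulates this as an ILP with richer constraints).

renewable = [3, 5, 8, 6, 2]   # kWh available from renewables per slot
grid_cap  = [4, 4, 4, 4, 4]   # kWh available from non-renewables per slot

def schedule(loads):
    """loads: list of (name, kwh, deadline_slot). Each load runs in one slot
    at or before its deadline; renewables are consumed first in each slot."""
    plan = {}
    for name, kwh, deadline in sorted(loads, key=lambda l: l[2]):
        slots = range(deadline + 1)
        # Prefer the slot with the most spare renewable energy, then any fit.
        best = max(slots, key=lambda s: renewable[s])
        for s in [best] + list(slots):
            green = min(kwh, renewable[s])
            if green + grid_cap[s] >= kwh:
                renewable[s] -= green
                grid_cap[s] -= kwh - green
                plan[name] = s
                break
        else:
            raise ValueError(f"{name} cannot be scheduled by slot {deadline}")
    return plan

print(schedule([("sensor_batch", 6, 3), ("firmware_push", 5, 4)]))
```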

    Application aware performance, power consumption, and reliability tradeoff

    There has been an unprecedented increase in the drive for microprocessor performance. This drive is motivated by increasing software complexity, the opportunity to solve previously unattempted problems, especially in the scientific domain, and the need to crunch ever-growing 'Big Data' to enable a multitude of technological advances. A consequence of these phenomena is the ever-increasing transistor count in deployed computing systems. Although technology scaling lowers the power consumption per transistor, overall system-level power consumption is on the rise, leading to a variety of power supply related issues. As chip die area is not increasing significantly and supply-voltage reduction is not keeping pace with the reduction in device dimensions, power density increases, which manifests as a hotter temperature profile across the chip floorplan. A rise in temperature necessitates aggressive and costly cooling mechanisms, adding to design complexity and manufacturing effort; it also triggers various failure mechanisms, reducing expected chip lifetime/reliability. Given the conflicting trends in Performance, Power consumption, and chip Reliability (PPR), it is imperative to balance them in a fine-grained fashion to meet system-level goals and expectations. Sole dependence on advancements in manufacturing technology is no longer sufficient, and alternative venues for PPR management are receiving increasing attention. Moreover, PPR demands are usually time-dependent: the constraint on power consumption in a green data center is dictated by the energy reserve; the demand on performance in a cloud-based platform depends on the agreed Quality of Service (QoS) requirements; and the reliability required of a microprocessor depends on the deployment domain. The goal of our research is to address growing microprocessor power consumption subject to performance and/or reliability goals. Through the schemes we develop, we tailor the execution context to match application requirements, leading to judicious use of power while adhering to the aforementioned constraints. The actual demands on performance, power consumption, and reliability are highly variable and depend on the executing applications and operating conditions; we therefore develop schemes that cater to these varying demands and tailor the solutions to current hardware-software interaction characteristics. Two techniques that are very relevant in this area, dynamic voltage and frequency scaling (DVFS) and microarchitectural adaptation, are leveraged to produce the expected PPR characteristics when executing a wide variety of tasks. In this dissertation, we demonstrate how expected chip lifetime can be extended in a real-time setting using DVFS while heeding performance constraints modeled as QoS requirements. Individual tasks in a task queue are assigned specific voltage and frequency pairs for their execution; this assignment is informed by knowledge of per-application hardware-software interactions to reach solutions tailored to the current execution scenario. Our observations indicate that a 2- to 18-fold improvement in chip lifetime can be expected from the schemes we develop in this regard.
Capitalizing on the power of microarchitectural adaptation, we further improve chip lifetime expectations by 2-8x, depending on the failure mechanism investigated. This increase in expected chip lifetime translates directly into reduced operational and replacement costs. We also provide mechanisms to co-manage performance and power consumption constraints. The comprehensive microarchitectural adaptation space is very complex, and using it directly incurs significant runtime overhead; we therefore devote considerable attention to pruning it so as to narrow down on, and utilize, only the most effective adaptations. A two-stage adaptation process is provided to (a) improve the optimality of the solutions delivered and (b) keep the runtime overhead in check. We observe that our schemes provide 20% higher normalized energy efficiency compared to state-of-the-art techniques while using only a very small fraction of the configuration space. We also find that our schemes effectively cater to a wide variety of demands on performance and power consumption, providing the necessary hardware characteristics within a 10% bound. Since only the most useful configuration space is retained for adaptation, a fault that prohibits the use of a certain adaptive control can lead to an inability to satisfy a subset of hardware demands. A detailed analysis has been carried out to understand how the remaining active configurations can preserve the expected hardware behavior; to a good extent, we observe that system behavior under a failure closely tracks (with less than 5% tracking error) the behavior obtainable without the fault. We believe that application-tailored schemes for PPR management will become increasingly relevant as microprocessor design advancements saturate, and they will be essential to extract every possible ounce of performance while conforming to constraints on power consumption and reliability. Given the effectiveness of our schemes, we are confident they are applicable in different markets such as embedded computing, desktop computing, cloud platforms, and high-performance computing. Insights drawn from our research will guide chip designers in providing effective adaptive controls to combat increasing demands on PPR characteristics.
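
    A minimal sketch of the per-task V/f assignment described above: choose, for each task, the lowest-power operating point that still meets its QoS deadline, leaving thermal headroom that generally benefits lifetime. The V/f table and the simple dynamic-power proxy are assumptions, not the dissertation's lifetime model.

```python
# Sketch of per-task V/f assignment under a QoS deadline (illustrative; the
# dissertation's lifetime model and task data are not given in the abstract).
# Lower V/f points run cooler, which generally extends expected lifetime.

VF_POINTS = [(0.8, 1.0), (0.9, 1.5), (1.1, 2.0)]  # (volts, GHz), invented

def pick_vf(cycles, deadline_s):
    """Choose the lowest-power V/f pair that still meets the task deadline.
    Dynamic power is approximated as V^2 * f with the constant folded away."""
    feasible = [(v, f) for v, f in VF_POINTS if cycles / (f * 1e9) <= deadline_s]
    if not feasible:
        raise ValueError("deadline unreachable even at the highest V/f point")
    return min(feasible, key=lambda vf: vf[0] ** 2 * vf[1])  # ~dynamic power

# A 3e9-cycle task with a 2.5 s deadline runs at the coolest point that
# finishes in time, trading slack for temperature and lifetime headroom.
print(pick_vf(cycles=3e9, deadline_s=2.5))
```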

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a greater number of applications and associated data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms need to be energy-efficient and reliable, and to perform secure computations, in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.