14 research outputs found

    Energy Saving and Virtualization Technologies in Switching

    Switching is the key functionality of many devices, such as electronic routers and switches, optical routers, Networks-on-Chip (NoCs) and so on. Basically, switching is responsible for moving data units from one port/location to one or more other ports/locations. In past years, high capacity and low delay were the main concerns when designing high-end switching units. As new demands, requests and technologies emerge, flexibility and low power cost have come to weigh as much as throughput and delay in switching design. On the one hand, highly flexible (i.e., programmable) switching can cope with the variable needs stemming from new applications (e.g., VoIP) and popular user behavior (e.g., P2P downloading); on the other hand, reducing the energy and power dissipation of switching not only cuts bills and contributes to an eco-friendly system, but also extends component lifetime. Many research efforts have been devoted to increasing switching flexibility and reducing its power cost. In the first part of this thesis we exploit virtualization as the main technique for building flexible software routers; in the second part we turn our attention to energy saving in NoCs (i.e., switching fabrics designed to handle on-chip data transmission) and in software routers. In the first part of the thesis, we consider virtualization inside Software Routers (SRs). SRs, i.e., routers running on commodity Personal Computers (PCs), have become an appealing alternative to traditional Proprietary Routing Devices (PRDs) for various reasons, such as cost (the multi-vendor hardware used by SRs can be cheap, while the equipment needed by PRDs is more expensive and carries higher training costs), openness (SRs can make use of a large number of open-source networking applications, while PRDs are more closed) and flexibility. The forwarding performance of SRs has been an obstacle to their deployment in real networks. For this reason, we proposed to aggregate multiple routing units into a powerful SR known as the Multistage Software Router (MSR), overcoming the performance limitations of a single SR. Our results show that throughput increases almost linearly with the number of internal routing devices.
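    As a rough illustration of the multistage idea, the sketch below spreads packets over several back-end routers behind one load-balancing front end, so aggregate throughput grows with the number of routing units. All class and function names are hypothetical, and the real MSR is built from (possibly virtualized) PCs rather than Python objects.

```python
# Minimal sketch of the Multistage Software Router (MSR) idea: a front-end
# load balancer spreads incoming packets over several back-end routers (BRs),
# so aggregate throughput grows with the number of BRs. Hypothetical names;
# not code from the thesis.
import zlib

class BackendRouter:
    """One internal routing unit; forwards the packets it receives."""
    def __init__(self, name):
        self.name = name
        self.forwarded = 0

    def forward(self, packet):
        self.forwarded += 1          # stand-in for a real routing decision
        return (self.name, packet)

class LoadBalancer:
    """Front-end stage: picks a BR per packet.

    Hashing on the flow key keeps all packets of one flow on one BR,
    which avoids reordering; plain round-robin would balance load better.
    """
    def __init__(self, backends):
        self.backends = backends

    def dispatch(self, packet, flow_key):
        idx = zlib.crc32(flow_key.encode()) % len(self.backends)
        return self.backends[idx].forward(packet)

# Toy usage: each BR handles only a fraction of the total traffic.
brs = [BackendRouter(f"BR{i}") for i in range(4)]
lb = LoadBalancer(brs)
for i in range(1000):
    lb.dispatch(packet=i, flow_key=f"10.0.0.{i % 16}->10.0.1.1")
print({br.name: br.forwarded for br in brs})
```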
    Other flexibility-related features (such as power saving, programmability, router migration or ease of management), however, had previously been investigated far less than performance. Virtualization techniques have become practical thanks to the rapid development of PC architectures, which are now able to easily support several logical PCs running in parallel on the same hardware. Virtualization provides many flexibility features: hardware and software decoupling, encapsulation of virtual machine state, failure recovery and security, to name a few. It permits building multiple SRs inside one physical host, and a multistage architecture exploiting only logical devices. By doing so, physical resources can be used more efficiently, energy-saving features (switching devices on and off when needed) can be introduced, and logical resources can be rented on demand instead of being owned. Since virtualization techniques are still difficult to deploy, several challenges must be faced when integrating them into routers. The main aim of the first part of this thesis is to assess the feasibility of the virtualization approach: to build and test virtualized SRs (VSRs), to implement the MSR with logical, i.e. virtualized, resources, to analyze virtualized routing performance, and to propose improvement techniques for the VSR and the virtual MSR (VMSR). More specifically, we considered different virtualization solutions, namely VMware, XEN and KVM, to build the VSR and VMSR; VMware is a closed-source solution with higher performance, while XEN and KVM are open source. We first built and tested each single component of our multistage architecture (i.e., back-end router and load balancer) inside the virtual infrastructure, and then extended the performance experiments to more complex scenarios in which multiple Back-end Routers (BRs) or Load Balancers (LBs) cooperate to route packets. Our results show that virtualization can introduce a 40% performance penalty compared with the hardware-only solution. Keeping this limitation in mind, we developed the whole VMSR and, as expected, obtained low throughput with 64B packet flows. To increase the VMSR throughput, two directions can be considered: improving the performance of the single component (i.e., the VSR), or working from the topology point of view (i.e., finding the best allocation of the VMs onto the hardware). For the first direction, we tuned the VSR inside KVM and closely studied factors such as the Linux driver, the scheduler and the interconnect methodology, which, with proper configuration, can affect performance significantly; we then proposed two ways of allocating VMs to physical servers to enhance VMSR performance. Our results show that with good tuning and allocation of VMs, we can minimize the virtualization penalty, obtain reasonable throughput when running SRs inside a virtual infrastructure, and easily add flexibility functionalities to SRs.

    In the second part of the thesis, we consider the energy-efficient switching design problem, focusing on two main architectures: the NoC and the MSR. As many research works suggest, the energy cost of Information and Communication Technologies (ICT) is constantly increasing. Among the main ICT sectors, a large portion of the energy consumption comes from the telecommunication infrastructure and its devices: routers, switches, cell phones, IP TV set-top boxes, storage home gateways, etc. In more detail, the line cards, links and Systems-on-Chip (SoCs), including the transmitters/receivers on these various devices, are the main power-consuming units. We first present our work on reducing the power of on-chip data transmission, which is carried out by the NoC. The NoC is an approach to designing the communication subsystem between the different Processing Elements (PEs) of a SoC. PEs can be elements such as CPUs, memories, or digital/analog signal processors. Different PEs perform specific tasks depending on the applications running on the chip, and tasks need to exchange data with one another; to this end, the PEs generate flits (chopped packets with limited header information). Flits are injected into the NoC through the proper interface and routed until they reach the destination PE. Throughout this procedure, the NoC behaves as a packet-switched network. Studies show that, in general, information processing in the PEs consumes only 60% of the energy, while the remaining 40% is consumed by the NoC. More importantly, following the prevailing network design principle, NoC capacity is provisioned to handle the peak load; this leaves a clear opportunity for energy saving when the network load is low.
    In our work, we exploit the Dynamic Voltage and Frequency Scaling (DVFS) technique, which jointly decreases or increases the system voltage and frequency as needed, i.e., lowers the voltage and frequency under low load to save energy and reduce power dissipation. More precisely, we studied two different NoC architectures for energy saving, namely the single-plane and the multi-plane chip architecture. In both cases we impose a very strict constraint: all the links and transmitters/receivers on the same plane must work at the same frequency/voltage, to avoid synchronization problems. This is the main difference from many existing works in the literature, which usually assume that different links can work at different frequencies, something hard to implement in reality. For the single-plane NoC, we exploited different routing schemes combined with DVFS to reduce the power of the whole chip. Our results were compared with the optimal values obtained by formally modeling the power saving as a quadratic programming problem. The results suggest that just by using a simple load-balancing routing algorithm, we can save considerable energy in the single-plane NoC architecture. Furthermore, we noticed that in the single-plane NoC architecture the bottleneck link can limit the effectiveness of DVFS. We then observed that the multi-plane NoC architecture is fairly easy to implement and can help with energy saving. We therefore focused on the multi-plane architecture and found that DVFS is more effective when we concentrate more traffic onto one plane and send the remaining flows to the other planes. We compared load concentration against load balancing under different power models, and all simulation results show that load concentration is the better choice for the multi-plane NoC architecture. Finally, we also present an energy-efficient MSR design technique that permits the MSR to follow the day-night traffic pattern more closely with our online energy-saving algorithm.
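    The energy argument behind DVFS can be made concrete with the standard first-order CMOS dynamic-power model. The sketch below is textbook material with illustrative symbols (f_p, lambda_p and rho are ours); the exact quadratic program used in the thesis is not reproduced here.

```latex
% First-order CMOS dynamic power: activity \alpha, switched capacitance C,
% supply voltage V, clock frequency f.
\[
  P_{\mathrm{dyn}} = \alpha\, C\, V^{2} f,
  \qquad V \propto f \;\Longrightarrow\; P_{\mathrm{dyn}} \propto f^{3}.
\]
% The roughly cubic growth is why DVFS pays off: running a plane at half
% frequency/voltage costs about one eighth of the dynamic power while only
% halving capacity. With one shared frequency f_p per plane (the
% synchronization constraint above) and aggregate plane load \lambda_p,
% a schematic power-minimization problem is
\[
  \min_{\{f_p\}} \;\sum_{p} \alpha\, C\, V(f_p)^{2} f_p
  \quad \text{s.t.} \quad \lambda_p \le \rho\, f_p \quad \forall p,
\]
% where \rho converts frequency into usable plane capacity.
```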

    On Fault Tolerance Methods for Networks-on-Chip

    Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method, especially against transient faults at this abstraction level. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
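    As a concrete instance of error control coding at the data link layer, the sketch below implements the classic single-error-correcting Hamming(7,4) code; it is illustrative only and not taken from the thesis.

```python
# Hamming(7,4): encodes 4 data bits into 7 bits and corrects any single
# bit flip, the kind of transient fault that error control coding targets
# on on-chip links.

def hamming74_encode(d):
    """d = [d1, d2, d3, d4] -> codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Returns (corrected data bits, error position 1-7, or 0 if none)."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4           # checks codeword positions 1,3,5,7
    s2 = p2 ^ d1 ^ d3 ^ d4           # checks codeword positions 2,3,6,7
    s3 = p3 ^ d2 ^ d3 ^ d4           # checks codeword positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # binary position of the flipped bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome

# Toy usage: flip one bit in flight and recover the original data.
data = [1, 0, 1, 1]
cw = hamming74_encode(data)
cw[4] ^= 1                           # transient fault on the link
decoded, pos = hamming74_decode(cw)
assert decoded == data and pos == 5
```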

    Resilience mechanisms for carrier-grade networks

    In recent years, the advent of new Future Internet (FI) applications has been creating ever more demanding requirements. These requirements push network carriers towards high transport capacity, energy efficiency, and high-availability services with low latency. A widespread practice for providing FI services is the adoption of a multi-layer network model consisting of IP/MPLS and optical technologies such as Wavelength Division Multiplexing (WDM). Indeed, optical transport technologies are the foundation supporting current telecommunication network backbones, because of the high transmission bandwidth achieved in fiber-optic networks. Traditional optical networks use a fixed 50 GHz grid, resulting in low Optical Spectrum (OS) utilization, specifically at transmission rates above 100 Gbps. Recently, optical networks have been undergoing significant changes aimed at providing a flexible grid that can fully exploit their potential, leading to a new network paradigm termed the Elastic Optical Network (EON). In parallel, a new protection scheme referred to as Network Coding Protection (NCP) has emerged as an innovative solution for proactively enabling protection in an agile and efficient manner by means of throughput-improvement techniques such as Network Coding. Intuitively, the throughput advantages of NCP might be magnified by the flexible grid provided by EONs. The goal of this thesis is threefold. The first goal is to study the advantages of NCP schemes in planning scenarios. For this purpose, this thesis focuses on the performance of NCP assuming both a fixed and a flexible spectrum grid. Conversely to planning scenarios, however, in dynamic scenarios the accuracy of Network State Information (NSI) is crucial, since inaccurate NSI might substantially affect the performance of an NCP scheme. The second contribution of this thesis is the study of the performance of protection schemes in dynamic scenarios under inaccurate NSI. For this purpose, this thesis explores prediction techniques to mitigate the negative effects of inaccurate NSI. On the other hand, Internet users are continuously demanding new requirements that cannot be supported by the current host-oriented communication model. This communication model is not suitable for future Internet architectures such as the so-called Internet of Things (IoT). Fortunately, there is a new trend in network research referred to as ID/Locator Split Architectures (ILSAs), a non-disruptive technique to mitigate the issues of host-oriented communications. Moreover, a new routing architecture referred to as the Path Computation Element (PCE) has emerged with the aim of overcoming the well-known issues of current routing schemes.
    Undoubtedly, routing and protection schemes need to be enhanced to fully exploit the advantages provided by new network architectures. In light of this, the third goal of this thesis is to introduce a novel PCE-like architecture termed the Context-Aware PCE. In a context-aware PCE scenario, the driver of a path computation is not a host/location, as in conventional PCE architectures; rather, it is an interest for a service defined within a context.
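    To make the NCP idea above concrete, the toy sketch below protects two working paths with a single XOR-coded protection path, so one backup resource covers two flows. Names and framing are hypothetical and far simpler than real NCP schemes.

```python
# Toy network-coding protection: sources A and B send data on their working
# paths, and one shared protection path proactively carries A XOR B. If
# either working path fails, the receiver rebuilds the lost data from the
# coded copy and the surviving flow: one backup path protects two flows.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def transmit(data_a, data_b, failed=None):
    """Simulate one round; `failed` is 'A', 'B' or None (no failure)."""
    protection = xor_bytes(data_a, data_b)   # coded backup, sent proactively
    rx_a = None if failed == "A" else data_a
    rx_b = None if failed == "B" else data_b
    if rx_a is None:                         # A's working path failed:
        rx_a = xor_bytes(protection, rx_b)   # recover A = (A^B) ^ B
    if rx_b is None:                         # B's working path failed:
        rx_b = xor_bytes(protection, rx_a)   # recover B = (A^B) ^ A
    return rx_a, rx_b

a, b = b"flow-A-data", b"flow-B-data"
assert transmit(a, b, failed="A") == (a, b)  # A's path cut, still recovered
assert transmit(a, b, failed="B") == (a, b)
```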

    Towards a cloud enabler : from an optical network resource provisioning system to a generalized architecture for dynamic infrastructure services provisioning

    This work was developed during a period when most optical management and provisioning systems were manual and proprietary. It contributed to the evolution of the state of the art of optical networks with new architectures and advanced virtual infrastructure services. The evolution of optical networks, and of the Internet globally, has been very promising during the last decade. The impact of mobile technology, grid and cloud computing, HDTV, augmented reality and big data, among many others, has driven the evolution of optical networks towards current service technologies, mostly based on SDN (Software Defined Networking) architectures and NFV (Network Functions Virtualisation). Moreover, the convergence of IP/optical networks and IT services, and the evolution of the Internet and optical infrastructures, have generated novel service orchestrators and open-source frameworks. In fact, technology has evolved so fast that no one could have foreseen how important the Internet would become in our daily lives. In other words, technology was forced to evolve in a way that made network architectures much more transparent, dynamic and flexible for end users (applications, user interfaces or simple APIs). This thesis presents the work done on defining new architectures for service-oriented networks and its contribution to the state of the art. The research work is divided into three topics. It describes the evolution from a Network Resource Provisioning System to an advanced Service Plane, and ends with a new architecture that virtualizes the optical infrastructure in order to provide coordinated, on-demand and dynamic services between the application and the network infrastructure layer, becoming an enabler for the new generation of cloud network infrastructures. The work done on defining a Network Resource Provisioning System established the first bases for future work on network infrastructure virtualization. The UCLP (User Light Path Provisioning) technology was the first attempt at Customer Empowered Networks and Articulated Private Networks. It empowered users and brought virtualization and partitioning functionalities into the optical data plane, with new interfaces for dynamic service provisioning. The work done on the development of a new Service Plane (named Harmony) allowed the provisioning of on-demand connectivity services from the application, in a multi-domain and multi-technology scenario, based on a virtual network infrastructure composed of resources from different infrastructure providers. This Service Plane facilitated the deployment of applications consuming large amounts of data under deterministic conditions, allowing networks to behave as a Grid-class resource. It became the first on-demand provisioning system that, at lower levels, allowed the creation of one virtual domain composed of resources from different providers. The last research topic presents an architecture that consolidated the work done in virtualization while enhancing the capabilities offered to upper layers, fully integrating the optical network infrastructure into the cloud environment, and thereby providing an architecture that enabled cloud services by handling requests for optical network and IT infrastructure services together at the same level. It set a new trend in the research community and evolved towards the technology we use today based on SDN and NFV.
    Summing up, the work presented focuses on the provisioning of virtual infrastructures from the architectural point of view of optical networks and IT infrastructures, together with the design and definition of novel service layers: architectures that enable the creation of virtual infrastructures composed of optical network and IT resources, isolated and provisioned on demand and in advance, with infrastructure re-planning functionalities and a new set of interfaces to open up those services to applications or third parties.

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book deals with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Fault-Tolerant Multicore Processors for Mixed-Criticality Real-Time Systems

    Current and future computing systems must be appropriately designed to cope with random hardware faults in order to provide a dependable service and correct functionality. Dependability has many facets to be addressed when designing a system, which is especially challenging in mixed-criticality real-time systems, where safety standards play an important role and where responding in time can be as important as responding correctly, or even responding at all. This thesis addresses the dependability of mixed-criticality real-time systems, considering three important requirements: integrity, resilience and real-time behavior. More specifically, it looks into the architectural and performance aspects of achieving dependability, concentrating on error detection and handling in hardware, more specifically in the Network-on-Chip (NoC), the backbone of modern MPSoCs, and on the performance of error handling and recovery in software. The thesis starts by examining the impact of random hardware faults on the NoC and on the system, with special focus on soft errors. It then addresses the uncovered weaknesses in the NoC by proposing a resilient NoC for mixed-criticality real-time systems that is able to provide a highly reliable service with transparent protection for the applications. A formal communication-time analysis is provided, with common ARQ protocols modeled for NoCs, including a novel ARQ-based protocol optimized for DMAs. After addressing the efficient use of ARQ-based protocols in NoCs, the thesis proposes the Advanced Integrity Q-service (AIQ), a low-overhead mechanism to achieve integrity and real-time guarantees for NoC transactions on an End-to-End (E2E) basis. Inspired by transactions in distributed systems, the mechanism differs from the previous approach in that it does not provide error recovery in hardware but delegates the task to software, making use of existing functionality in cross-layer fault-tolerance solutions. Finally, the thesis addresses error handling in software as seen in cross-layer approaches, focusing on the performance of replicated software execution on many-core platforms. Replicated software execution protects the system against random hardware faults, relying on hardware-supported error detection and on error handling in software. Replica-aware co-scheduling is proposed to achieve high performance with replicated execution, which is not possible with standard real-time schedulers.
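    As background for the ARQ discussion above, the sketch below implements generic stop-and-wait retransmission with a CRC check. It illustrates the protocol family only; it is not the thesis's NoC protocol nor its DMA-optimized variant.

```python
# Generic stop-and-wait ARQ over an unreliable link: the sender keeps a
# copy of each packet until it is acknowledged, retransmitting whenever
# the receiver's checksum fails.
import random
import zlib

def send_with_arq(payload, error_rate=0.3, max_tries=10):
    """Returns (delivered payload, number of transmissions used)."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for attempt in range(1, max_tries + 1):
        received = bytearray(frame)
        if random.random() < error_rate:           # transient fault on the link
            received[random.randrange(len(received))] ^= 0x01
        data, crc = bytes(received[:-4]), bytes(received[-4:])
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return data, attempt                   # receiver ACKs; sender frees its copy
        # else: NACK or timeout, so the sender retransmits its stored copy
    raise RuntimeError("link failed persistently")

random.seed(0)
data, tries = send_with_arq(b"flit payload")
assert data == b"flit payload"
```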

    Internet of Things From Hype to Reality

    The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, communicate inexpensively, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.

    Discrete Event Simulations

    Considered by many authors to be a technique for modelling stochastic, dynamic and discretely evolving systems, discrete event simulation (DES) has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about DES: each author describes how it is understood and applied within their context of work, providing an extensive understanding of what DES is. It can be said that the name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods and applications in a wide range of sectors and problem areas, categorized into five groups. Beyond this variety of perspectives on DES, one additional thing is worth remarking about this book: its richness in actual data and analyses based on actual data. While most academic areas lack application cases, roughly half of the chapters included in this book deal with actual problems or are at least based on actual data. Thus, the editor firmly believes that this book will be interesting for both beginners and practitioners in the area of DES.
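    At its core, a discrete event simulation advances a clock from one scheduled event to the next rather than in fixed time steps. The minimal sketch below shows that event-queue loop; it is a generic illustration of the technique, not code from the book.

```python
# Minimal discrete-event simulation core: events are (time, action) pairs
# kept in a priority queue; the clock jumps to the time of the next event.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []           # heap of (time, seq, action)
        self._seq = 0              # tie-breaker for simultaneous events

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()               # an action may schedule further events

# Toy model: a machine that breaks down and gets repaired, repeatedly.
sim = Simulator()
def breakdown():
    print(f"t={sim.now:.1f}: machine down")
    sim.schedule(2.0, repair)      # repair takes 2 time units
def repair():
    print(f"t={sim.now:.1f}: machine repaired")
    sim.schedule(8.0, breakdown)   # next failure 8 time units later
sim.schedule(8.0, breakdown)
sim.run(until=30.0)
```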

    Advanced Signal Processing Techniques Applied to Power Systems Control and Analysis

    The work published in this book relates to the application of advanced signal processing in smart grids, including power quality, data management, stability and economic management in the presence of renewable energy sources, energy storage systems, and electric vehicles. The distinct architecture of smart grids has prompted investigations into the use of advanced algorithms combined with signal processing methods to provide optimal results. The presented applications are focused on data management with cloud computing, power quality assessment, photovoltaic power plant control, and electric vehicle charging stations, all supported by modern AI-based optimization methods.