Design and Performance Analysis of Low Latency Routing Algorithm based NoC for MPSoC
Network-on-Chip (NoC) is a communication architecture that makes System-on-Chip technology scalable and adaptable, offering benefits such as scalability, flexibility, and reusability for applications built on a Multiprocessor System-on-Chip (MPSoC). However, designing an efficient, high-performance NoC fabric is critically complex because of its many architectural parameters. Identifying a suitable scheduling algorithm to resolve arbitration among ports and obtain high-speed data transfer in the router is one of the most significant steps in designing a NoC-based MPSoC, since the router determines the latency, throughput, area utilization, energy consumption, and reliability of the NoC fabric. The performance of the NoC system is hampered by the deadlock issues that plague conventional routing algorithms. This work develops a novel routing algorithm to address the deadlock problem: a deterministic, shortest-path, deadlock-free routing method based on an analysis of the Turn Model. In a 2D-mesh structure, the algorithm applies separate routing rules to odd and even columns, which reduces the number of paths contending for a single channel and thereby reduces congestion and latency. Two test scenarios, one with and one without a load test, were used to evaluate the proposed model. In the zero-load network, packets are transferred in three clock cycles; under load, five clock cycles are required. The measured latency is 3 ns without load and 7 ns with load. The proposed method achieves a throughput of 18.57 Mbps, with 69% IO utilization and 0.128 W power consumption. To validate the proposed method, its latency is compared with existing work, showing a 50% latency reduction both with and without congestion load.
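The odd/even-column rule can be made concrete with a short sketch. The following is a minimal illustration of a turn-model-style deterministic route computation on a 2D mesh, with XY coordinates and port names chosen for illustration; it is not the paper's exact algorithm.

    def route(cur, dst):
        # Turn-model-style next-port choice on a 2D mesh: odd and even columns
        # use different turn rules, restricting the set of allowed turns and
        # breaking the cyclic channel dependencies that cause deadlock.
        # (Coordinates and port names are illustrative assumptions.)
        (cx, cy), (dx, dy) = cur, dst
        if (cx, cy) == (dx, dy):
            return "LOCAL"
        if cx == dx:                         # same column: finish vertically
            return "NORTH" if dy > cy else "SOUTH"
        if cx % 2 == 0:                      # even column: keep moving horizontally
            return "EAST" if dx > cx else "WEST"
        if dy != cy:                         # odd column: take the vertical turn here
            return "NORTH" if dy > cy else "SOUTH"
        return "EAST" if dx > cx else "WEST"

Each hop reduces the Manhattan distance by one, so the route stays on a shortest path while the parity rule limits where turns may occur.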
Adaptive Intelligent Systems for Extreme Environments
As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms are being used in radiation environments to perform routine tasks and reduce radiation risk for human workers. On the one hand, commercial-off-the-shelf devices and components are becoming increasingly popular because their low price makes such tasks more affordable. On the other hand, this presents new challenges: improving radiation tolerance, supporting multiple AI tasks, and delivering power efficiency in embedded systems operating in harsh environments. Three pieces of research work are presented in this thesis: 1) a fast simulation method for analysing single event effects (SEEs) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency.
Variances in the physical parameters of a practical implementation, such as the nature of the particle, the linear energy transfer, and the circuit characteristics, can have a large impact on the final simulation accuracy, which significantly increases the complexity and cost of the transistor-level simulation workflow and makes SEE simulation difficult for large-scale circuits. Therefore, in the first research work, a new SEE simulation scheme is proposed to offer a fast, cost-efficient way to evaluate and compare the performance of large-scale circuits subject to the effects of radiation particles. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and SEE analysis for large-scale circuits can be completed in just a few hours.
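As a rough illustration of how a precomputed digital error model could drive a fast logic-level pass, consider the sketch below. The model format, probabilities, and interface are assumptions for illustration, not the thesis's actual tooling.

    import random

    def inject_see(state, error_model, rng=random.Random(0)):
        # state: dict mapping signal/node names to 0/1 values in an HDL-level run.
        # error_model: per-node upset probabilities characterized once at the
        # transistor level, then reused across many fast runs (assumed format).
        for node, p_upset in error_model.items():
            if rng.random() < p_upset:
                state[node] ^= 1   # model a single-event upset as a bit flip
        return state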
In high-radiation environments, bit-flips in RAMs not only occur but can also accumulate, and typical error mitigation methods cannot handle high error rates at low hardware cost. In the second work, an adaptive scheme combining error-correcting codes with refreshing techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments. The scheme continuously refreshes the data in RAMs so that errors cannot accumulate. Furthermore, because the proposed design shares the same ports as the user module without changing the timing sequence, it can easily be applied to systems whose hardware modules are designed with fixed read and write latency.
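The refresh idea can be pictured as a scrubbing sweep: every word is periodically passed through the ECC decoder and written back clean, so a single-bit error is removed before a second flip in the same word makes it uncorrectable. A minimal sketch, with the ECC functions assumed:

    def scrub(ram, ecc_decode, ecc_encode):
        # Periodically sweep the whole memory. ecc_decode is assumed to return
        # the corrected data word and whether a correctable error was found.
        for addr in range(len(ram)):
            data, had_error = ecc_decode(ram[addr])
            if had_error:
                ram[addr] = ecc_encode(data)   # write back before errors accumulate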
It is a challenge to implement intelligent systems with constrained hardware resources. In the third work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the "refreshing" concept in the second work, we utilise a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage the hardware resources for deep learning acceleration. In the proposed design, the on-chip hardware resources are dynamically managed to improve the flexibility, performance, and power efficiency of deep learning inference systems. The deep learning units provided by Xilinx are used to perform multiple AI tasks simultaneously, and the experiments show significant improvements in power efficiency across a wide range of scenarios with different workloads. To further improve the performance of the system, the reconfiguration concept was extended into an adaptive deep learning software framework that provides a significant level of adaptability for various deep learning algorithms on an FPGA-based edge computing platform. To meet the accuracy and latency requirements derived from the running applications and operating environments, the platform can dynamically update hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency than a static system.
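A sketch of the management idea: pick an accelerator variant that satisfies the current workload at the lowest power, and load only that partial bitstream into the reconfigurable region. All names and the variant model are illustrative assumptions, not Xilinx's API.

    def select_and_load(pending_tasks, variants, load_partial_bitstream):
        # variants: assumed records with .throughput (tasks/s), .power (W) and
        # .bitstream fields describing alternative accelerator configurations.
        demand = len(pending_tasks)
        feasible = [v for v in variants if v.throughput >= demand]
        best = (min(feasible, key=lambda v: v.power) if feasible
                else max(variants, key=lambda v: v.throughput))
        load_partial_bitstream(best.bitstream)   # swap only the accelerator region
        return best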
Run-time Resource Management in CMPs Handling Multiple Aging Mechanisms
Abstract—Run-time resource management is fundamental for the efficient execution of workloads on Chip Multiprocessors. Application- and system-level requirements (e.g., performance vs. power vs. lifetime reliability) generally conflict with each other, and any resource-assignment decision, such as core allocation or frequency tuning, may improve some of them while penalizing others. The effect of such decisions on performance and power consumption can be perceived within a few instants of time, but not their effect on lifetime reliability, which changes very slowly as the effects of many decisions accumulate over a long time horizon. Moreover, aging mechanisms are varied and have different causes: most of them, such as Electromigration (EM), are driven by temperature levels, while Thermal Cycling (TC) is caused mainly by temperature variations (both their amplitude and frequency). Mitigating only EM may aggravate TC, and vice versa. We propose a resource orchestration strategy that balances performance and power consumption constraints in the short term against EM and TC aging in the long term. Experimental results show that the proposed approach improves the average Mean Time To Failure by at least 17% and 20% w.r.t. EM and TC, respectively, while providing the same performance level as the nominal counterpart and staying within the power budget.
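To make the tension concrete: EM stress grows with sustained absolute temperature (Arrhenius-like), while TC stress grows with the amplitude and frequency of temperature swings (Coffin-Manson-like), so a single allocation decision can be scored against both. The constants and weights below are illustrative assumptions, not the paper's model.

    import math

    def em_stress(temp_k, e_a=0.9, k_b=8.617e-5):
        # Electromigration accelerates exponentially with absolute temperature.
        return math.exp(-e_a / (k_b * temp_k))

    def tc_stress(delta_t, cycles_per_hour, m=2.35):
        # Thermal-cycling damage grows with swing amplitude and its frequency.
        return cycles_per_hour * (delta_t ** m)

    def aging_penalty(temp_k, delta_t, cycles_per_hour, w_em=0.5, w_tc=0.5):
        # Lower is better; tuning w_em/w_tc trades one mechanism off against
        # the other, which is exactly the balancing act described above.
        return w_em * em_stress(temp_k) + w_tc * tc_stress(delta_t, cycles_per_hour)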
A fault tolerant, peer-to-peer based scheduler for home grids
This thesis presents a fault-tolerant, Peer-to-Peer (P2P) based grid scheduling system for highly dynamic and highly heterogeneous environments, such as home networks, where we can find a variety of devices (laptops, PCs, game consoles, etc.) and networks.
The number of devices in a home that are capable of processing data has been increasing in recent years. However, being able to process data does not mean that these devices are powerful, and, in a home environment, some applications demand significant computing resources, beyond the capabilities of a single domestic device such as a set-top box (examples of such applications are TV recommender systems, image processing, and photo indexing systems). A computational grid is a possible solution to this problem, but the constrained environment in the home makes it difficult to use conventional grid scheduling technologies, which demand a powerful infrastructure.
Our solution is based on distributing the matchmaking task among providers, leaving the final allocation decision to a central scheduler that can run on a limited device without a significant loss in performance.
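The division of labour might look like the following sketch: each provider computes its own match for a task against local resources, and the central scheduler, which may run on a weak device, only compares the returned bids. Message formats and the scoring rule are illustrative assumptions.

    def provider_bid(task, capacity):
        # Runs on each provider: matchmaking against local resources only.
        if capacity["cpu"] < task["cpu"] or capacity["mem"] < task["mem"]:
            return None                        # provider cannot host this task
        return task["cpu"] / capacity["cpu"]   # fraction of capacity consumed

    def central_allocate(bids):
        # Runs on the (possibly limited) central device: just pick the best bid.
        valid = {provider: b for provider, b in bids.items() if b is not None}
        return min(valid, key=valid.get) if valid else None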
We evaluate our solution by simulating different scenarios and configurations against the Opportunistic Load Balance (OLB) scheduling heuristic, which we found to be the best option for home grids among the existing solutions we analysed. The results show that our solution performs similarly to, or better than, OLB. Furthermore, our solution also provides fault tolerance, which OLB does not, and we have formally verified the behaviour of our solution against two cases of network partition failure.
Design and Implementation of High QoS 3D-NoC using Modified Double Particle Swarm Optimization on FPGA
One technique to overcome the scaling bottleneck is to increase the number of cores on a processor, although having too many cores can cause issues including chip overheating and communication congestion. The on-chip communication bottleneck is now effectively addressed by networks-on-chip (NoC). Thanks to recent developments in IC manufacturing techniques, a 3D stack of chips is now possible, reducing chip area while increasing throughput and lowering power consumption. Automatically mapping applications onto three-dimensional NoC architectures is a significant new direction in 3D NoC research. This work proposes a 3D NoC partitioning approach that identifies the 3D NoC region to be mapped. A double particle swarm optimization (DPSO) inspired algorithmic technique, which combines neighbourhood search with genetic operators, also addresses the tendency of particle swarm algorithms to descend into locally optimal solutions. Experimental evidence supports the claim that this hybrid DPSO-based optimization algorithm outperforms the conventional heuristic technique in terms of output rate and energy loss. The findings also demonstrate that, in a network of the same size, the newly introduced router delivers the lowest loss on the longest path. Three factors, namely energy, latency (delay), and throughput, are compared between the proposed 3D mesh ONoC and its 2D version. Comparing power consumption for 512 IP cores, the 3D ONoC saves roughly 79.9% of the energy used by its electronic counterpart and 24.3% of the energy used by its 2D version. The network efficiency of the 3D mesh ONoC is simulated with DPSO in a variety of configurations, and the outcomes likewise demonstrate an increase in performance over the 2D ONoC. As a flexible communication solution, NoCs have been frequently employed in the development of multiprocessor system-on-chips (MPSoCs): by offloading communication activities, NoCs allow on-chip Intellectual Property (IP) cores to communicate with one another and operate at a higher level. Mapping and scheduling methods are the key components for assigning application tasks, distributing the work to the IPs, and coordinating communication among them. This study aims to present the state of the art in 3D NoC mapping and scheduling, grouping the results according to various parameters and offering several suggestions for further research.
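At the core of any PSO-based mapper is the standard velocity/position update, sketched below; the "double" variant's neighbourhood search and genetic operators, which the work adds to escape local optima, are omitted here. Parameter values are conventional defaults, not the paper's.

    import random

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.Random(0)):
        # x, v: position and velocity of one particle (a candidate mapping
        # encoded as a real vector); pbest/gbest: personal and global bests.
        r1, r2 = rng.random(), rng.random()
        v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
        x = [xi + vi for xi, vi in zip(x, v)]
        return x, v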
Manticore: Hardware-Accelerated RTL Simulation with Static Bulk-Synchronous Parallelism
The demise of Moore's Law and Dennard Scaling has revived interest in specialized computer architectures and accelerators. Verification and testing of this hardware relies heavily on cycle-accurate simulation of register-transfer-level (RTL) designs. The best software RTL simulators can simulate designs at 1-1000 kHz, i.e., more than three orders of magnitude slower than hardware. Faster simulation can increase productivity by speeding design iterations and permitting more exhaustive exploration.

One possibility is to use parallelism, as RTL exposes considerable fine-grain concurrency. However, state-of-the-art RTL simulators generally perform best when single-threaded, since modern processors cannot effectively exploit fine-grain parallelism.

This work presents Manticore: a parallel computer designed to accelerate RTL simulation. Manticore uses a static bulk-synchronous parallel (BSP) execution model to eliminate runtime synchronization barriers among many simple processors. Manticore relies entirely on its compiler to schedule resources and communication. Because RTL code is practically free of long divergent execution paths, static scheduling is feasible. Communication and synchronization no longer incur runtime overhead, enabling efficient fine-grain parallelism. Moreover, static scheduling dramatically simplifies the physical implementation, significantly increasing the potential parallelism on a chip.

Our 225-core FPGA prototype running at 475 MHz outperforms a state-of-the-art RTL simulator on an Intel Xeon processor running at 3.3 GHz by up to 27.9x (geomean 5.3x) on nine Verilog benchmarks.
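The static BSP idea can be caricatured in a few lines: the compiler fixes, per core, the exact operation list for every superstep and the exact values exchanged afterwards, so no runtime barrier or arbitration logic is needed. This toy loop illustrates the execution model only; it is not Manticore's implementation.

    def superstep(schedules, inbox):
        # schedules: per-core, compile-time-fixed lists of operations; each op
        # is a closure that consumes this core's inbox and returns
        # (dest_core, values), with values a dict of produced signal updates.
        outbox = {}
        for core, ops in schedules.items():      # compute phase, no locks needed
            for op in ops:
                dest, values = op(inbox.get(core, {}))
                outbox.setdefault(dest, {}).update(values)
        return outbox                            # exchange phase, statically routed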
Jornadas Nacionales de Investigación en Ciberseguridad: actas de las VIII Jornadas Nacionales de Investigación en ciberseguridad: Vigo, 21 a 23 de junio de 2023
Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo). atlanTTic; AMTEGA: Axencia para a modernización tecnolóxica de Galicia; INCIBE: Instituto Nacional de Ciberseguridad.
Applying Hypervisor-Based Fault Tolerance Techniques to Safety-Critical Embedded Systems
This document details the work conducted through the development of this thesis, and it is structured as follows:
• Chapter 1, Introduction, briefly presents the motivation, objectives, and contributions of this thesis.
• Chapter 2, Fundamentals, presents a series of concepts that are necessary to correctly understand the information in the rest of the thesis, such as virtualization, hypervisors, and software-based fault tolerance. In addition, this chapter includes an exhaustive review and comparison of the different hypervisors used in scientific studies dealing with safety-critical systems, and a brief review of works that try to improve fault tolerance in the hypervisor itself, an area of research that is outside the scope of this work but that complements the mechanism presented and could be established as a line of future work.
• Chapter 3, Problem Statement and Related Work, explains the main reasons why the concept of Hypervisor-Based Fault Tolerance was born and reviews the main articles and research papers on the subject. This review includes both papers related to safety-critical embedded systems (such as the research carried out in this thesis) and papers related to cloud servers and cluster computing that, although not directly applicable to embedded systems, may raise useful concepts that make our solution more complete or allow us to establish future lines of work.
• Chapter 4, Proposed Solution, begins with a brief comparison of the work presented in Chapter 3 to establish the requirements that our solution must meet in order to be as complete and innovative as possible. It then sets out the architecture of the proposed solution and explains in detail its two main elements: the Voter and the Health Monitoring partition.
• Chapter 5, Prototype, explains in detail the prototyping of the proposed solution, including the choice of the hypervisor, the processing board, and the critical functionality to be replicated. With respect to the Voter, it includes prototypes of both the software version (the Voter implemented in a virtual machine) and the hardware version (the Voter implemented as IP cores on the FPGA); a minimal voting sketch follows this outline.
• Chapter 6, Evaluation, covers the evaluation of the prototype developed in Chapter 5. As a preliminary step, and given that there is no prior evidence in this regard, an exercise is carried out to measure the overhead of using the XtratuM hypervisor versus not using it. Subsequently, qualitative tests check that Health Monitoring works as expected, and a fault injection campaign measures the error detection and correction rate of our solution. Finally, the performance of the hardware and software versions of the Voter is compared.
• Chapter 7, Conclusions and Future Work, collects the conclusions obtained and the contributions made during the research (in the form of journal articles, conference papers, and contributions to projects and proposals in industry). In addition, it establishes some lines of future work that could complete and extend
the research carried out during this doctoral thesis.
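As referenced in the Chapter 5 outline above, the Voter's core duty reduces to 2-out-of-3 majority voting over the outputs of the redundant partitions. A minimal sketch follows; the replica names and the health-event convention are illustrative assumptions.

    def vote(a, b, c):
        # Return (majority value, suspected faulty replica or None).
        if a == b == c:
            return a, None
        if a == b:
            return a, "replica_c"
        if a == c:
            return a, "replica_b"
        if b == c:
            return b, "replica_a"
        return None, "no_majority"   # all disagree: escalate to Health Monitoring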