157 research outputs found

    Reconfigurable Tree Architectures Using Subtree Oriented Fault Tolerance

    Funders: Coordinated Science Laboratory (formerly the Control Systems Laboratory); Microelectronics and Computer Technology Corporation (MCC) VLSI/CAD grant; National Aeronautics and Space Administration, NASA NAG 1-613; AT&T Bell Laboratories fellowship; Ope

    Scheduling and reconfiguration of interconnection network switches

    Interconnection networks are important parts of modern computing systems, facilitating communication between a system's components. Switches connecting various nodes of an interconnection network serve to move data in the network. The switch's delay and throughput impact the overall performance of the network and thus the system. Scheduling efficient movement of data through a switch and configuring the switch to realize a schedule are the main themes of this research. We consider various interconnection network switches including (i) crossbar-based switches, (ii) circuit-switched tree switches, and (iii) fat-tree switches. For crossbar-based input-queued switches, a recent result established that logarithmic packet delay is possible. However, this result assumes that packet transmission time through the switch is no less than schedule-generation time. We prove that without this assumption (as is the case in practice) packet delay becomes linear. We also report results of simulations that bear out our result for practical switch sizes and indicate that a fast scheduling algorithm reduces not only packet delay but also buffer size. We also propose a fast mesh-of-trees based distributed switch scheduling (maximal-matching based) algorithm that has polylog complexity. A circuit-switched tree (CST) can serve as an interconnect structure for various computing architectures and models such as the self-reconfigurable gate array and the reconfigurable mesh. A CST is a tree structure with source and destination processing elements as leaves and switches as internal nodes. We design several scheduling and configuration algorithms that distributedly partition a given set of communications into non-conflicting subsets and then establish switch settings and paths on the CST corresponding to the communications. A fat-tree is another widely used interconnection structure in many of today's high-performance clusters. We embed a reconfigurable mesh inside a fat-tree switch to generate efficient connections. We present an R-Mesh-based algorithm for a fat-tree switch that creates buses connecting input and output ports corresponding to various communications using that switch.
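    The crossbar-scheduling part of this abstract centers on computing a maximal matching between input and output ports in each time slot. As a rough illustration only (the request-matrix format, function name, and greedy single-round strategy are assumptions, not the thesis's mesh-of-trees algorithm), the Python sketch below computes one conflict-free maximal matching over a virtual-output-queue request matrix.

```python
# Hypothetical illustration (not the thesis's mesh-of-trees scheduler):
# one greedy round of maximal matching for an N x N input-queued crossbar.
# requests[i][j] is True when input port i has a queued packet for output
# port j; the returned (input, output) pairs are conflict-free.
def greedy_maximal_matching(requests):
    n = len(requests)
    matched_outputs = set()
    matching = []
    for i in range(n):                      # each input grabs at most one free output
        for j in range(n):
            if requests[i][j] and j not in matched_outputs:
                matching.append((i, j))
                matched_outputs.add(j)
                break
    return matching

# Example: 3x3 switch; input 2's only request (output 1) is taken by
# input 1, so input 2 waits for a later scheduling round.
requests = [
    [True,  False, True ],
    [True,  True,  False],
    [False, True,  False],
]
print(greedy_maximal_matching(requests))    # [(0, 0), (1, 1)]
```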

    Tree-structured small-world connected wireless network-on-chip with adaptive routing

    Traditional Network-on-Chip (NoC) systems comprising many cores suffer from debilitating latency bottlenecks and significant power dissipation due to the overhead inherent in multi-hop communication. In addition, these systems remain vulnerable to malicious circuitry incorporated into the design by untrustworthy vendors in a world where complex multi-stage design and manufacturing processes require the collective specialized services of a variety of contractors. This thesis proposes a novel small-world tree-based network-on-chip (SWTNoC) structure designed for high throughput, acceptable energy consumption, and resiliency to attacks and node failures resulting from the insertion of hardware Trojans. This tree-based implementation was devised as a means of reducing average network hop count while providing a large degree of local connectivity and effective long-range connectivity through a novel wireless link approach based on carbon nanotube (CNT) antenna design. Network resiliency is achieved by means of an adaptive routing algorithm devised to work with TRAIN (Tree-based Routing Architecture for Irregular Networks). Comparisons are drawn with benchmark architectures whose wireless link placement is optimized by means of the simulated annealing (SA) metaheuristic. Experimental results demonstrate a 21% throughput improvement and a 23% reduction in dissipated energy per packet over the closest competing architecture. Similar trends are observed at increasing system sizes. In addition, the SWTNoC maintains this throughput and energy advantage in the presence of a fault introduced into the system. By designing a hierarchical topology and assigning a higher level of importance to a subset of the nodes, much higher network throughput can be attained while simultaneously guaranteeing deadlock freedom as well as a high degree of resiliency and fault tolerance.
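    To make the hop-count argument concrete, here is a small, purely illustrative Python sketch: it builds a binary-tree NoC topology, adds a few assumed long-range shortcut links (standing in for the CNT wireless links, whose placement the thesis actually optimizes, e.g., via simulated annealing), and reports the change in average hop count.

```python
# Illustrative sketch (not the thesis's SWTNoC generator): average hop
# count of a binary-tree NoC before and after adding a few long-range
# "wireless" shortcut links between distant subtrees.
from collections import deque

def build_binary_tree(n_nodes):
    adj = {i: set() for i in range(n_nodes)}
    for child in range(1, n_nodes):
        parent = (child - 1) // 2
        adj[parent].add(child)
        adj[child].add(parent)
    return adj

def average_hops(adj):
    nodes = list(adj)
    total, pairs = 0, 0
    for src in nodes:                      # BFS from every node
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(nodes) - 1
    return total / pairs

topology = build_binary_tree(31)           # depth-4 binary tree, 31 routers
print("tree only:", round(average_hops(topology), 2))

# Hypothetical shortcut placement between leaves of distant subtrees; the
# thesis optimizes this placement instead of hard-coding it.
for a, b in [(15, 30), (16, 29), (18, 27)]:
    topology[a].add(b)
    topology[b].add(a)
print("with shortcuts:", round(average_hops(topology), 2))
```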

    Reconfiguration for Fault Tolerance and Performance Analysis

    Architecture reconfiguration, the ability of a system to alter the active interconnection among modules, has a history of different purposes and strategies. Its purposes range from the relatively simple desire to formalize procedures that all processes have in common, to reconfiguration for improved fault tolerance, to reconfiguration for performance enhancement, either through simply maximizing system use or through sophisticated notions of wedding topology to the specific needs of a given process. Strategies range from straightforward redundancy by means of an identical backup system to intricate structures employing multistage interconnection networks. The present discussion surveys the more important contributions to developments in reconfigurable architecture. The strategy here is, in a sense, to approach the field from an historical perspective, with the goal of developing a more coherent theory of reconfiguration. First, the Turing and von Neumann machines are discussed from the perspective of system reconfiguration, and it is seen that this early important theoretical work contains little that anticipates reconfiguration. Then some early developments in reconfiguration are analyzed, including the work of Estrin and associates on the fixed-plus-variable restructurable computer system, the attempt to theorize about configurable computers by Miller and Cocke, and the work of Reddi and Feustel on their restructurable computer system. The discussion then focuses on the most sustained systems for fault tolerance and performance enhancement that have been proposed. An attempt is made to define fault tolerance and to investigate some of the strategies used to achieve it. By investigating four different systems, the Tandem computer, the C.vmp system, the Extra Stage Cube, and the Gamma network, the move from dynamic redundancy to reconfiguration is observed. Then reconfiguration for performance enhancement is discussed. A survey of some proposals is attempted, and the discussion then focuses on the most sustained systems that have been proposed: PASM, the DC architecture, the Star local network, and the NYU Ultracomputer. The discussion is organized around a comparison of control, scheduling, communication, and network topology. Finally, comparisons are drawn between fault tolerance and performance enhancement, in order to clarify the notion of reconfiguration and to reveal the common ground of fault tolerance and performance enhancement as well as the areas in which they diverge. The conclusion attempts to derive from this survey and analysis some observations on the nature of reconfiguration, as well as some remarks on areas requiring further research.

    Flexible multi-layer virtual machine design for virtual laboratory in distributed systems and grids.

    We propose a flexible Multi-layer Virtual Machine (MVM) design intended to improve efficiency in distributed and grid computing and to overcome known problems in traditional virtual machine architectures and in those used in distributed and grid systems. This thesis presents a novel approach to building a virtual laboratory to support e-science by adapting MVMs within distributed systems and grids, providing enhanced flexibility and reconfigurability by raising the level of abstraction. The MVM consists of three layers: the OS-level VM, queue VMs, and component VMs. Together, the MVMs provide virtualized resources, virtualized networks, and a reconfigurable component layer for virtual laboratories. We demonstrate how our reconfigurable virtual machine allows software designers and developers to reuse parallel communication patterns. In our framework, virtual machines can be created on demand and their applications can be distributed at the source-code level, compiled, and instantiated at runtime. (Abstract shortened by UMI.) Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .K56. Source: Masters Abstracts International, Volume: 44-03, page: 1405. Thesis (M.Sc.)--University of Windsor (Canada), 2005
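    As a loose illustration of packaging a parallel communication pattern as a reusable component (the TaskFarm name and its thread-pool realization are assumptions, not the MVM implementation), the sketch below shows one pattern instantiated on demand and reused by two different applications.

```python
# Illustrative sketch only (the layer and pattern names are assumptions):
# a reusable parallel communication pattern, here a simple task farm,
# exposed as a component that a virtual-laboratory layer could instantiate
# on demand for different applications.
from concurrent.futures import ThreadPoolExecutor

class TaskFarm:
    """Reusable master/worker pattern: apply `work_fn` to each item in parallel."""
    def __init__(self, n_workers):
        self.n_workers = n_workers

    def run(self, work_fn, items):
        with ThreadPoolExecutor(max_workers=self.n_workers) as pool:
            return list(pool.map(work_fn, items))

# The same pattern component is reused by two different "applications".
farm = TaskFarm(n_workers=4)
print(farm.run(lambda x: x * x, range(8)))
print(farm.run(str.upper, ["grid", "virtual", "lab"]))
```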

    Onboard Mission- and Contingency Management based on Behavior Trees for Unmanned Aerial Vehicles

    Unmanned Aerial Vehicles (UAVs) have gained significant attention for their potential in various sectors, including surveillance, logistics, and disaster management. This thesis focuses on developing a novel onboard mission and contingency management system based on Behavior Trees for UAVs. The study aims to assess whether behavior trees can be effectively applied to this domain and how they perform with respect to other modelling architectures. Furthermore, this document explores which tree structures are more efficient, as well as good design practices and the limitations of behavior trees. Overall, this thesis addresses the challenge of autonomous onboard decision-making for UAVs in complex and dynamic environments, particularly in the context of delivery missions in offshore wind farms. The developed architecture is tested in a simulated environment. The research integrates a Skill Manager, a Mission Planner, and a Mission and Contingency Manager. The architecture leverages Behavior Trees to facilitate both mission execution and contingency management. The thesis also presents a quantitative analysis of key performance indicators, providing a comparative evaluation against traditional architectures such as Finite State Machines. The results indicate that the proposed system is efficient in mission execution and effective in handling contingencies. This study offers a comprehensive structure targeting onboard planning, contingency management, and concurrent action execution. It also presents a quantitative analysis of Behavior Trees' performance in UAV mission execution and reactivity to contingent situations. It contributes to the ongoing discourse on UAV autonomy, offering insights beneficial for the broader deployment of UAVs in various industrial applications.
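    For readers unfamiliar with the formalism, the following minimal Python sketch shows the two core behavior-tree node types (Sequence and Fallback) driving an assumed delivery mission with a low-battery contingency branch; the node set, mission steps, and battery threshold are illustrative, not the thesis's architecture.

```python
# Minimal behavior-tree sketch (illustrative assumptions throughout).
# Fallback tries children until one succeeds; Sequence runs children
# in order until one fails.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    def __init__(self, children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    def __init__(self, children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return SUCCESS if self.fn(state) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, state):
        state["log"].append(self.name)      # stand-in for a real UAV skill
        return SUCCESS

# Contingency branch first: if battery is low, abort delivery and return home.
mission = Fallback([
    Sequence([Condition(lambda s: s["battery"] < 0.3), Action("return_to_base")]),
    Sequence([Action("fly_to_turbine"), Action("drop_package"), Action("return_to_base")]),
])

state = {"battery": 0.8, "log": []}
mission.tick(state)
print(state["log"])   # ['fly_to_turbine', 'drop_package', 'return_to_base']
```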

    Modeling and formal verification of probabilistic reconfigurable systems

    In this thesis, we propose a new approach to the formal modeling and verification of adaptive probabilistic systems. Dynamically reconfigurable systems are found in many current and future technological applications, such as flight control systems, vehicle electronics, and manufacturing systems. In order to meet user and environmental requirements, such a system has to actively adjust its configuration at run-time by modifying its components and connections whenever changes are detected in the internal or external execution environment. These changes, however, may violate memory, energy, and real-time constraints, since the behavior of the system is unpredictable; they may also make the system's functions unavailable for some time and endanger human life or large financial investments. Thus, updating a system with a new configuration requires that the reconfigured system fully satisfy the related constraints. We introduce the GR-TNCES formalism, an extension of Petri nets, for the optimal functional and temporal specification of probabilistic reconfigurable systems under resource constraints. It enables the specification of the probabilistic, energy, and memory constraints of such a system. To formally verify the correctness and safety of such a probabilistic system specification, and the non-violation of its properties, an automatic transformation from GR-TNCES models into PRISM models is introduced, so that the resulting models can be verified through probabilistic model checking with the established tool PRISM. Moreover, a newly introduced logic, XCTL, allows the properties to be checked to be expressed easily and enables the formal certification of incomplete and reconfigurable systems. A new version of the software ZIZO is also proposed to model, simulate, and verify such GR-TNCES models through an automatic transformation to PRISM. To prove its relevance, the latter was applied to case studies: it was used to model and simulate the behavior of an IPv4 protocol so as to prevent violations of energy and memory resources, and to optimize the energy consumption of an automotive skid conveyor.
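    To illustrate the kind of property such a transformation exposes to a probabilistic model checker, the sketch below uses Python with an invented four-state Markov chain (it is not a GR-TNCES model and uses no PRISM syntax) to compute a bounded reachability probability of the form "the system reaches a failed configuration within k steps".

```python
# Illustrative only: a hand-made four-state Markov chain standing in for a
# reconfigurable system. prob_reach_within computes
# P(reach `target` within k steps | start state), the quantity behind a
# PRISM-style bounded-reachability property.
P = {
    "nominal":       {"nominal": 0.90, "reconfiguring": 0.10},
    "reconfiguring": {"nominal": 0.85, "degraded": 0.10, "failed": 0.05},
    "degraded":      {"nominal": 0.70, "failed": 0.30},
    "failed":        {"failed": 1.0},          # absorbing failure state
}

def prob_reach_within(P, target, k):
    # Backward iteration over the step bound: after i rounds, x[s] equals
    # the probability of hitting `target` within i steps when starting in s.
    x = {s: (1.0 if s == target else 0.0) for s in P}
    for _ in range(k):
        x = {s: (1.0 if s == target else
                 sum(p * x[t] for t, p in P[s].items()))
             for s in P}
    return x

# Check a requirement such as "P(failure within 50 steps) < 0.2" from "nominal".
p_fail = prob_reach_within(P, "failed", 50)["nominal"]
print(round(p_fail, 4), p_fail < 0.2)
```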

    Enabling Lock-Free Concurrent Fine-Grain Access to Massive Distributed Data: Application to Supernovae Detection

    We consider the problem of efficiently managing massive data in a large-scale distributed environment. We consider data strings of sizes on the order of terabytes, shared and accessed by concurrent clients. On each individual access, a segment of a string, on the order of megabytes, is read or modified. Our goal is to provide the clients with efficient fine-grain access to the data string, as concurrently as possible, without locking the string itself. This issue is crucial in the context of applications in the fields of astronomy, databases, data mining, and multimedia. We illustrate these requirements with the case of an application for searching supernovae. Our solution relies on distributed, RAM-based data storage, while leveraging a DHT-based, parallel metadata management scheme. The proposed architecture and algorithms have been validated through a software prototype and evaluated in a cluster environment.
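    As a rough sketch of fine-grain, lock-free access to a huge string (the chunk size, hashing scheme, identifiers, and function names are assumptions, not the paper's protocol), the Python code below maps a byte-range read onto fixed-size chunks and resolves each chunk's storage node with a DHT-style hash, so concurrent clients touch only the chunks they need rather than the whole string.

```python
# Illustrative sketch: a terabyte-scale string is split into fixed-size
# chunks; a read of an arbitrary byte range touches only the overlapping
# chunks, whose owners are resolved through a DHT-style hash.
import hashlib

CHUNK_SIZE = 64 * 1024 * 1024                     # 64 MiB chunks (assumption)
STORAGE_NODES = [f"node{i:02d}" for i in range(16)]

def chunk_owner(string_id, chunk_index):
    key = f"{string_id}:{chunk_index}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest(), "big")
    return STORAGE_NODES[digest % len(STORAGE_NODES)]

def plan_read(string_id, offset, length):
    """Return (chunk_index, node, offset_in_chunk, size) tuples covering
    the byte range [offset, offset + length)."""
    plan = []
    end = offset + length
    first, last = offset // CHUNK_SIZE, (end - 1) // CHUNK_SIZE
    for idx in range(first, last + 1):
        chunk_start = idx * CHUNK_SIZE
        lo = max(offset, chunk_start) - chunk_start
        hi = min(end, chunk_start + CHUNK_SIZE) - chunk_start
        plan.append((idx, chunk_owner(string_id, idx), lo, hi - lo))
    return plan

# A 130 MiB segment read starting inside chunk 1 touches only chunks 1-3.
for step in plan_read("sky-survey-42", 100 * 1024 * 1024, 130 * 1024 * 1024):
    print(step)
```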

    Synthesis of FPGA-based accelerators implementing recursive algorithms

    Doctoral programme in Informatics Engineering. Design of computational systems is a complex multistage process which requires a deep analysis of the problem, taking into account relevant limitations and constraints as well as software/hardware co-design. Such a task involves exploring alternative techniques and computational algorithms, enabling the system to be optimized while satisfying the given requirements. In this context, one of the most important stages is the analysis and implementation of computational algorithms. Tremendous progress in FPGA (Field-Programmable Gate Array) technology has made it possible to design very complex engineering systems. However, the number of available transistors grows faster than the ability to meaningfully design with them. This situation is the well-known design productivity gap, which FPGAs inherited from ASICs (Application-Specific Integrated Circuits) and which is increasing continuously. Developing engineering systems on the basis of high-capacity FPGAs involves a wide variety of design tools, including methods for the efficient implementation of computational algorithms. This thesis is intended to provide a contribution in this area by aiming at reuse, high-level abstraction, automation, and clarity of algorithmic specifications. More specifically, it presents research studies carried out in order to obtain criteria regarding the implementation of recursive versus iterative algorithms in hardware. After describing some of the most relevant strategies for implementing recursion in hardware, a selection of algorithms for solving combinatorial search problems (considered as application examples) is described in detail. Iterative and recursive versions of these algorithms have been implemented and tested on FPGAs. Based on the results obtained, a careful comparative analysis is given. New research-oriented tools and techniques for hardware design developed in the scope of this thesis are also discussed and demonstrated.
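    A central technique when mapping recursive search onto hardware is replacing the call stack with an explicit stack driven by a state machine. The Python sketch below illustrates that transformation on an assumed example (subset-sum backtracking); it is not the thesis's HDL implementation, only the software-level shape of the rewrite that would map onto an FPGA state machine plus block RAM.

```python
# Recursive combinatorial search (subset-sum backtracking, chosen as an
# assumed example) and its explicit-stack equivalent; the latter is the
# form that translates naturally into a hardware state machine.
def subset_sum_recursive(values, target, i=0, acc=0):
    if acc == target:
        return True
    if i == len(values) or acc > target:      # prune (positive values assumed)
        return False
    return (subset_sum_recursive(values, target, i + 1, acc + values[i]) or
            subset_sum_recursive(values, target, i + 1, acc))

def subset_sum_iterative(values, target):
    stack = [(0, 0)]                          # frames: (index, accumulated sum)
    while stack:
        i, acc = stack.pop()
        if acc == target:
            return True
        if i == len(values) or acc > target:
            continue
        stack.append((i + 1, acc))               # branch: skip values[i]
        stack.append((i + 1, acc + values[i]))   # branch: take values[i]
    return False

vals = [7, 3, 5, 12, 8]
assert subset_sum_recursive(vals, 20) == subset_sum_iterative(vals, 20) == True
assert subset_sum_recursive(vals, 2) == subset_sum_iterative(vals, 2) == False
```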