
    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured against the IEC 61508 functional safety standard) and tight power and performance constraints.
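    The core idea, a small trusted controller overseeing fault-prone resources, can be illustrated with a minimal Python sketch. The class, the fault-reporting interface, and the lowest-numbered-core policy are all invented here for illustration; the project itself realizes this in SoC hardware.

```python
# Hypothetical sketch of the DeSyRe idea: a small fault-free manager
# tracks which fault-prone cores remain usable and maps work onto them.

class SoCFaultManager:
    """Fault-free controller overseeing fault-prone processing elements."""

    def __init__(self, num_cores):
        self.healthy = set(range(num_cores))   # cores believed operational

    def report_fault(self, core_id):
        # A detected fault removes the core from the usable pool.
        self.healthy.discard(core_id)

    def map_task(self, task_id):
        # Assign the task to a remaining healthy core, if one exists.
        if not self.healthy:
            raise RuntimeError(f"no healthy cores left for task {task_id}")
        return min(self.healthy)               # toy policy: lowest-numbered core

mgr = SoCFaultManager(num_cores=4)
mgr.report_fault(0)                            # core 0 found defective
print(mgr.map_task("video_decode"))            # -> 1
```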

    A machine learning-based approach to optimize repair and increase yield of embedded flash memories in automotive systems-on-chip

    Nowadays, Embedded Flash Memory cores occupy a significant portion of Automotive Systems-on-Chip area, therefore strongly contributing to the final yield of the devices. Redundancy strategies play a key role in this context: in case of memory failures, a set of spare word- and bit-lines is allocated by a replacement algorithm that complements the memory testing procedure. In this work, we show that replacement algorithms, which are heavily constrained in terms of execution time, may be slightly inaccurate and classify a repairable memory core as unrepairable. We denote this situation as a Flash memory false fail. The proposed approach identifies false fails with a machine learning model that exploits a feature extraction strategy based on shape recognition. Experimental results on manufacturing data show a high capability of predicting false fails.
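    A minimal sketch of this kind of pipeline might extract shape descriptors from the fail bitmap of a memory core and train a standard classifier on them. The features, the toy labelling rule, and the model choice below are illustrative assumptions; the paper's actual features and data are not shown here.

```python
# Hypothetical false-fail predictor: summarise the spatial shape of
# failing cells in a word-line x bit-line map, then classify.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shape_features(fail_map):
    """Simple shape descriptors of a boolean fail bitmap."""
    rows = int(fail_map.any(axis=1).sum())   # word-lines containing fails
    cols = int(fail_map.any(axis=0).sum())   # bit-lines containing fails
    total = int(fail_map.sum())              # failing cells overall
    return [rows, cols, total, total / fail_map.size]

# Synthetic stand-in data: random bitmaps and a toy repairability label.
rng = np.random.default_rng(0)
maps = [rng.random((64, 32)) < p for p in rng.uniform(0.0, 0.05, size=200)]
X = np.array([shape_features(m) for m in maps])
y = (X[:, 2] < 40).astype(int)               # 1 = "actually repairable" (toy rule)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:3]))                    # flag likely false fails
```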

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Space Station Technology, 1983

    This publication is a compilation of the panel summaries presented in the following areas: systems/operations technology; crew and life support: EVA; crew and life support: ECLSS; attitude control and stabilization; human capabilities; auxiliary propulsion; fluid management; communications; structures and mechanisms; data management; power; and thermal control. The objective of the workshop was to aid the Space Station Technology Steering Committee in defining and implementing a technology development program to support the establishment of a permanent human presence in space. This compilation provides the participants and their organizations with the information presented at the workshop in a referenceable format, establishing a stepping stone for users of space station technology to develop new technology and plan future tasks.

    Run-time management for future MPSoC platforms

    In recent years, we are witnessing the dawning of the Multi-Processor System-on-Chip (MPSoC) era. In essence, this era is triggered by the need to handle more complex applications, while reducing the overall cost of embedded (handheld) devices. This cost will mainly be determined by the cost of the hardware platform and the cost of designing applications for that platform. The cost of a hardware platform will partly depend on its production volume. In turn, this means that flexible, (easily) programmable multi-purpose platforms will exhibit a lower cost. A multi-purpose platform not only requires flexibility, but should also combine high performance with low power consumption. To this end, MPSoC devices integrate computer architectural properties of various computing domains. Just like large-scale parallel and distributed systems, they contain multiple heterogeneous processing elements interconnected by a scalable, network-like structure. This helps in achieving scalable high performance. As in most mobile or portable embedded systems, there is a need for low-power operation and real-time behavior. The cost of designing applications is equally important. Indeed, the actual value of future MPSoC devices is not contained within the embedded multiprocessor IC, but in their capability to provide the user of the device with an amount of services or experiences. So from an application viewpoint, MPSoCs are designed to efficiently process multimedia content in applications like video players, video conferencing, 3D gaming, augmented reality, etc. Such applications typically require a lot of processing power and a significant amount of memory. To keep up with ever-evolving user needs and with new application standards appearing at a fast pace, MPSoC platforms need to be easily programmable. Application scalability, i.e. the ability to use just enough platform resources according to the user requirements and with respect to the device capabilities, is also an important factor. Hence scalability, flexibility, real-time behavior, high performance, low power consumption and, finally, programmability are key components in realizing the success of MPSoC platforms.

    The run-time manager is logically located between the application layer and the platform layer. It has a crucial role in realizing these MPSoC requirements. As it abstracts the platform hardware, it improves platform programmability. By deciding on resource assignment at run-time, based on the performance requirements of the user, the needs of the application, and the capabilities of the platform, it contributes to flexibility, scalability, and low-power operation. As it has an arbiter function between different applications, it enables real-time behavior. This thesis details the key components of such an MPSoC run-time manager and provides a proof-of-concept implementation. These key components include application quality management algorithms linked to MPSoC resource management mechanisms and policies, adapted to the provided MPSoC platform services.

    First, we describe the role, the responsibilities, and the boundary conditions of an MPSoC run-time manager in a generic way. This includes a definition of the multiprocessor run-time management design space, a description of the run-time manager design trade-offs, and a brief discussion on how these trade-offs affect the key MPSoC requirements. This design-space definition and the trade-offs are illustrated based on ongoing research and on existing commercial and academic multiprocessor run-time management solutions. Consequently, we introduce a fast and efficient resource allocation heuristic that considers FPGA fabric properties such as fragmentation. In addition, this thesis introduces a novel task assignment algorithm for handling soft IP cores, denoted as hierarchical configuration. Hierarchical configuration managed by the run-time manager enables easier application design and increases the run-time spatial mapping freedom. In turn, this improves the performance of the resource assignment algorithm. Furthermore, we introduce run-time task migration components. We detail a new run-time task migration policy closely coupled to the run-time resource assignment algorithm. In addition to detailing a design-environment-supported mechanism that enables moving tasks between an ISP and fine-grained reconfigurable hardware, we also propose two novel task migration mechanisms tailored to the Network-on-Chip environment. Finally, we propose a novel mechanism for task migration initiation, based on reusing debug registers in modern embedded microprocessors. We propose a reactive on-chip communication management mechanism. We show that by exploiting an injection rate control mechanism it is possible to provide a communication management system capable of providing soft (reactive) QoS in a NoC. We introduce a novel, platform-independent run-time algorithm to perform quality management, i.e. to select an application quality operating point at run-time based on the user requirements and the available platform resources, as reported by the resource manager. This contribution also proposes a novel way to manage the interaction between the quality manager and the resource manager. In order to have a realistic, reproducible, and flexible run-time manager testbench with respect to applications with multiple quality levels and implementation trade-offs, we have created an input data generation tool denoted Pareto Surfaces For Free (PSFF). The PSFF tool is, to the best of our knowledge, the first tool that generates multiple realistic application operating points either based on profiling information of a real-life application or based on a designer-controlled random generator. Finally, we provide a proof-of-concept demonstrator that combines these concepts and shows how these mechanisms and policies can operate in real-life situations. In addition, we show that the proposed solutions can be integrated into existing platform operating systems.
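    One recurring mechanism in this abstract, selecting an application quality operating point against the resources the resource manager reports, can be sketched compactly. This is an assumed simplification, not the thesis algorithm: operating points are treated as (quality, cost) Pareto points and the best feasible one wins.

```python
# Minimal quality-manager sketch: choose the highest-quality operating
# point whose resource demand fits the budget reported by the resource
# manager. Point values and the scalar "cost" are illustrative.

def select_operating_point(points, budget):
    """points: iterable of (quality, resource_cost) Pareto points."""
    feasible = [p for p in points if p[1] <= budget]
    if not feasible:
        return None                         # no feasible point: degrade or reject
    return max(feasible, key=lambda p: p[0])

pareto = [(10, 2), (25, 5), (40, 9)]        # e.g. (frames/s, cores needed)
print(select_operating_point(pareto, budget=6))   # -> (25, 5)
```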

    Service Quality and Profit Control in Utility Computing Service Life Cycles

    Utility Computing is one of the most discussed business models in the context of Cloud Computing. Service providers are increasingly pushed into the role of utilities by their customers' expectations. Consequently, the demand for predictable service availability and pay-per-use pricing models increases. Furthermore, new virtualisation techniques offer providers a new opportunity to optimise resource usage. In this context, the control of service quality and profit depends on a deep understanding of the relationship between business and technology. This research analyses the relationship between the business model of Utility Computing and Service-oriented Computing architectures hosted in Cloud environments. The relations are clarified in detail for the entire service life cycle and throughout all architectural layers. Based on the elaborated relations, a delivery framework is developed that enables the optimisation of the relation attributes while the service implementation passes through business planning, development, and operations. Related work from the academic literature does not cover the collected requirements on service offers in this context; this finding is revealed by a critical review of approaches in the fields of Cloud Computing, Grid Computing, and Application Clusters. The related work is analysed regarding appropriate provision architectures and quality assurance approaches. The main concepts of the delivery framework are evaluated based on a simulation model. To demonstrate the ability of the framework to model complex pay-per-use service cascades in Cloud environments, several experiments have been conducted. First outcomes prove that the contributions of this research enable the optimisation of service quality and profit in Cloud-based Service-oriented Computing architectures.
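    The pay-per-use relation at the heart of the profit question can be reduced to a toy calculation: revenue follows metered usage while cost follows provisioned capacity. The linear model and the prices below are assumptions for illustration, not figures from the thesis.

```python
# Toy pay-per-use profit model: the provider bills per used hour but
# pays for every provisioned hour, so over-provisioning erodes profit.

def profit(used_hours, price_per_hour, provisioned_hours, cost_per_hour):
    revenue = used_hours * price_per_hour          # customer pays per use
    cost = provisioned_hours * cost_per_hour       # provider pays for capacity
    return revenue - cost

print(profit(used_hours=700, price_per_hour=0.12,
             provisioned_hours=744, cost_per_hour=0.05))   # -> 46.8
```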

    Medium Access Control Layer Implementation on Field Programmable Gate Array Board for Wireless Networks

    Triple play services are playing an important role in modern telecommunications systems. Nowadays, more researchers are engaged in investigating the most efficient approaches to integrate these services at a reduced level of operation costs. Field Programmable Gate Array (FPGA) boards have been found to be the most suitable platform to test new protocols, as they offer high levels of flexibility and customization. This thesis focuses on implementing a framework for the Triple Play Time Division Multiple Access (TP-TDMA) protocol using the Xilinx FPGA Virtex-5 board. This flexible framework design offers network systems engineers a reconfigurable platform for triple-play systems development. In this work, MicroBlaze is used to perform memory and connectivity tests aiming to ensure the establishment of connectivity as well as the board's processor stability. Two different approaches are followed to achieve the TP-TDMA implementation: systematic and conceptual. In the systematic approach, a bottom-up design is chosen where four subsystems are built from various components. Each component is then tested individually to investigate its response. On the other hand, the conceptual approach is designed with only two components, one of which is created with the help of the Xilinx Integrated Software Environment (ISE) Core Generator. The system is integrated and then tested to check its overall response. In summary, the work of this thesis is divided into three sections. The first section presents a testing method for the Virtex-5 board using the MicroBlaze soft processor. The following two sections concentrate on implementing the TP-TDMA protocol on the board using two design approaches: one based on designing each component from scratch, while the other focuses more on the system's broader picture.
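    The TDMA principle behind TP-TDMA can be modelled in a few lines of software: a frame is divided into fixed slots and each of the three triple-play service classes receives slots in a configured proportion. The slot counts below are invented for illustration; the thesis realizes the protocol in FPGA hardware, not Python.

```python
# Hypothetical TP-TDMA frame model: voice, video, and data share one
# frame of fixed slots according to a configured allocation.

FRAME_SLOTS = 10
ALLOCATION = {"voice": 2, "video": 5, "data": 3}   # slots per frame (assumed)

def build_schedule(allocation, frame_slots):
    schedule = []
    for service, count in allocation.items():
        schedule.extend([service] * count)          # consecutive slots per class
    assert len(schedule) == frame_slots, "allocation must fill the frame"
    return schedule

print(build_schedule(ALLOCATION, FRAME_SLOTS))
# ['voice', 'voice', 'video', 'video', 'video', 'video', 'video',
#  'data', 'data', 'data']
```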