
    A Survey of Fault-Tolerance Techniques for Embedded Systems from the Perspective of Power, Energy, and Thermal Issues

    Relentless technology scaling has provided a significant increase in processor performance, but it has also had adverse impacts on system reliability. In particular, technology scaling increases the processor's susceptibility to radiation-induced transient faults. Moreover, with the discontinuation of Dennard scaling, technology scaling increases power densities, and thereby temperatures, on the chip. High temperature, in turn, accelerates transistor aging mechanisms, which may ultimately lead to permanent faults on the chip. To ensure reliable system operation despite these reliability concerns, fault-tolerance techniques have emerged; they employ some form of redundancy to satisfy specific reliability requirements. However, integrating fault-tolerance techniques into real-time embedded systems makes it harder to preserve timing constraints. As a remedy, many task mapping/scheduling policies have been proposed that account for fault-tolerance techniques and enforce both timing and reliability guarantees for real-time embedded systems. More advanced techniques additionally aim to minimize power and energy while satisfying timing and reliability constraints. Recently, some scheduling techniques have started to tackle a new challenge: the temperature increase induced by employing fault-tolerance techniques. These emerging techniques aim to satisfy temperature constraints in addition to timing and reliability constraints. This paper provides an in-depth survey of the research efforts that exploit fault-tolerance techniques while considering timing, power/energy, and temperature from the real-time embedded systems' design perspective. In particular, the task mapping/scheduling policies for fault-tolerant real-time embedded systems are reviewed and classified according to their goals and constraints. Moreover, the employed fault-tolerance techniques, application models, and hardware models are considered as additional dimensions of the classification. Lastly, this survey gives deep insights into the main achievements and shortcomings of the existing approaches and highlights the most promising ones.
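
    A minimal sketch of the kind of check such scheduling policies build on, assuming (hypothetically) an EDF-style utilization test in which each task reserves extra re-executions for transient-fault recovery, and reliability follows a Poisson fault model with rate lam; the task parameters below are illustrative and not taken from the survey:

        import math
        from dataclasses import dataclass

        @dataclass
        class Task:
            wcet: float      # worst-case execution time (ms)
            period: float    # period, assumed equal to the deadline (ms)
            recoveries: int  # re-executions reserved for transient-fault recovery

        def utilization_with_recovery(tasks):
            # EDF-style utilization test: each task budgets (1 + recoveries) executions.
            return sum(t.wcet * (1 + t.recoveries) / t.period for t in tasks)

        def task_reliability(task, lam=1e-6):
            # Probability that at least one of the (1 + recoveries) attempts is
            # fault-free, assuming faults arrive as a Poisson process (rate lam per ms).
            p_fail_once = 1.0 - math.exp(-lam * task.wcet)
            return 1.0 - p_fail_once ** (1 + task.recoveries)

        tasks = [Task(wcet=2.0, period=10.0, recoveries=1),
                 Task(wcet=5.0, period=40.0, recoveries=2)]
        print("schedulable:", utilization_with_recovery(tasks) <= 1.0)
        print("per-task reliability:", [task_reliability(t) for t in tasks])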

    Superfluid Helium Tanker (SFHT) study

    Replenishment of superfluid helium (SFHe) offers the potential of extending the on-orbit life of observatories, satellite instruments, sensors, and laboratories that operate in the 2 K temperature regime. A reference set of resupply customers was identified as representing realistic helium servicing requirements and interfaces for the first 10 years of superfluid helium tanker (SFHT) operations. These included the Space Infrared Telescope Facility (SIRTF), the Advanced X-ray Astrophysics Facility (AXAF), the Particle Astrophysics Magnet Facility (Astromag), and the Microgravity and Materials Processing Sciences Facility (MMPS)/Critical Point Phenomena Facility (CPPF). A mixed-fleet approach to SFHT utilization was considered. The tanker permits servicing from the Shuttle cargo bay, in situ when attached to the OMV and carried to the user spacecraft, and as a depot at the Space Station. An SFHT Dewar ground servicing concept was developed which uses a dedicated ground cooling heat exchanger to convert all the liquid, after an initial fill as normal fluid, to superfluid for launch. This concept permits the tanker to be filled to a near-full condition and then cooled without any loss of fluid. The final load condition can be saturated superfluid with any desired ullage volume, or the tank can be totally filled and pressurized. The SFHT Dewar and helium plumbing system design has sufficient component redundancy to meet fail-operational, fail-safe requirements, and is designed structurally to meet a 50-mission life usage requirement. Technology development recommendations were made for the selected SFHT concept, and a Program Plan and cost estimate were prepared for a Phase C/D program spanning 72 months from initiation through first launch in 1997.

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a larger number of applications and related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms need to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers in terms of state-of-the-art contributions and upcoming trends.

    Virtual Runtime Application Partitions for Resource Management in Massively Parallel Architectures

    This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize the on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e., computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g., energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e., the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e., VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) Private Operating Environment (POE), (ii) Private Reliability Environment (PRE), and (iii) Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work, several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
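
    A hypothetical sketch of the "abstract resources" idea described above, assuming a toy runtime manager that adjusts an application's private bundle of tunables (voltage/frequency point, fault-tolerance strength, parallelism) based on observed slack; the class and policy are illustrative, not the thesis's actual VRAP implementation:

        from dataclasses import dataclass

        @dataclass
        class AbstractPartition:
            vf_point: int     # index into a voltage/frequency table (0 = lowest)
            ft_strength: int  # e.g. 0 = none, 1 = duplication, 2 = triplication
            parallelism: int  # processing elements assigned to the application

        def adapt(partition, slack_ms):
            # Toy policy: raise the V/F point after a deadline miss, lower it when
            # there is ample slack, leaving the other abstract resources untouched.
            if slack_ms < 0 and partition.vf_point < 3:
                partition.vf_point += 1
            elif slack_ms > 5 and partition.vf_point > 0:
                partition.vf_point -= 1
            return partition

        app = AbstractPartition(vf_point=1, ft_strength=2, parallelism=4)
        print(adapt(app, slack_ms=-2))  # deadline miss -> higher V/F point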

    Reconfigurable Battery Techniques and Systems: A Survey

    Battery packs with a large number of battery cells are being adopted more and more widely in electronic systems, such as robotics, renewable energy systems, energy storage in smart grids, and electric vehicles. Therefore, a well-designed battery pack is essential for battery applications. In the literature, the majority of research on battery pack design focuses on battery management systems, safety circuits, and cell-balancing strategies. Recently, reconfigurable battery pack design has gained increasing attention as a promising solution to the problems of conventional battery packs and their associated battery management systems, such as low energy efficiency, short pack lifespan, safety issues, and low reliability. One of the most prominent features of reconfigurable battery packs is that the battery cell topology can be dynamically reconfigured in real time based on the current condition (in terms of state of charge and state of health) of the battery cells. So far, several reconfigurable battery schemes have been proposed and validated in the literature, all sharing the advantage of cell topology reconfiguration that ensures balanced cell states during charging and discharging while providing strong fault tolerance. This survey is undertaken with the intent of identifying the state-of-the-art technologies for reconfigurable batteries, as well as providing a review of related technologies and insight into future research in this emerging area.
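
    A simplified, assumed sketch of the reconfiguration principle described above: during discharge, keep the cells with the highest state of charge in the active series string and bypass the weakest ones, so cell states stay balanced; the threshold and cell values are illustrative:

        def select_active_cells(soc, n_active, soc_min=0.05):
            # soc: per-cell state-of-charge values in [0, 1].
            # Returns the indices of the n_active healthiest cells to keep in the string.
            usable = [i for i, s in enumerate(soc) if s > soc_min]
            if len(usable) < n_active:
                raise RuntimeError("not enough usable cells for the requested string length")
            return sorted(usable, key=lambda i: soc[i], reverse=True)[:n_active]

        soc = [0.82, 0.40, 0.91, 0.03, 0.77, 0.65]
        print(select_active_cells(soc, n_active=4))  # bypasses cell 3 (nearly empty) and cell 1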

    Sustainable Edge Computing: Challenges and Future Directions

    An increasing amount of data is being injected into the network from IoT (Internet of Things) applications. Many of these applications, developed to improve society's quality of life, are latency-critical and inject large amounts of data into the network. These requirements of IoT applications have triggered the emergence of the Edge computing paradigm. Currently, data centers account for between 2% and 3% of global energy use. However, this trend is difficult to maintain, as bringing computing infrastructures closer to the edge of the network comes with its own set of challenges for energy efficiency. In this paper, we propose our approach for the sustainability of future computing infrastructures to provide (i) an energy-efficient and economically viable deployment, (ii) a fault-tolerant automated operation, and (iii) a collaborative resource management to improve resource efficiency. We identify the main limitations of applying Cloud-based approaches close to the data sources and present the research challenges to Edge sustainability arising from these constraints. We propose two-phase immersion cooling, formal modeling, machine learning, and energy-centric federated management as Edge-enabling technologies. We present our early results towards the sustainability of an Edge infrastructure to demonstrate the benefits of our approach for future computing environments and deployments.

    2020 NASA Technology Taxonomy

    This document is an update (with new photos) of the PDF version of the 2020 NASA Technology Taxonomy that will be available to download on the OCT Public Website. The updated 2020 NASA Technology Taxonomy, or "technology dictionary", uses a technology-discipline-based approach that realigns like technologies independent of their application within the NASA mission portfolio. This tool is meant to serve as a common technology-discipline-based communication tool across the agency and with its partners in other government agencies, academia, industry, and across the world.

    Integrated design of motor drives using random heuristic optimization for aerospace applications

    High power density for aerospace motor drives is a key factor in the successful realization of the More Electric Aircraft (MEA) concept. An integrated system design approach offers optimization opportunities that could lead to further improvements in power density. However, this requires multi-disciplinary modelling and the handling of a complex optimization problem that is discrete and non-linear in nature. This paper proposes a multi-level approach to applying random heuristic optimization to the integrated motor design problem. Integrated optimizations are performed independently and sequentially at different levels assigned according to the 4-level modelling paradigm for electric systems. This paper also details a motor drive sizing procedure, which serves as the optimization problem solved here. Finally, results comparing the proposed multi-level approach with a more traditional single-level approach are presented for a 2.5 kW actuator motor drive design. The multi-level approach is found to be more computationally efficient than its counterpart.
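
    An illustrative sketch of the multi-level idea, with a made-up cost function standing in for the paper's motor drive sizing procedure: the outer level fixes a discrete design choice (here, pole count) and the inner level runs a random search over a continuous sizing variable for that choice:

        import random

        def drive_cost(poles, slot_depth_mm):
            # Hypothetical power-density surrogate for the sizing procedure: lower is better.
            return (poles - 6) ** 2 + 0.5 * (slot_depth_mm - 12.0) ** 2

        def inner_search(poles, iters=200):
            # Inner level: random search over the continuous variable for a fixed pole count.
            best = min((random.uniform(5.0, 25.0) for _ in range(iters)),
                       key=lambda d: drive_cost(poles, d))
            return best, drive_cost(poles, best)

        def multi_level_search(pole_options=(4, 6, 8, 10)):
            # Outer level: enumerate the discrete options; keep the best inner result.
            results = {p: inner_search(p) for p in pole_options}
            best_poles = min(results, key=lambda p: results[p][1])
            return best_poles, results[best_poles]

        random.seed(0)
        print(multi_level_search())  # expected to pick 6 poles with slot depth near 12 mm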