Redundant Logic Insertion and Fault Tolerance Improvement in Combinational Circuits
This paper presents a novel method to identify and insert redundant logic into a combinational circuit to improve its fault tolerance without having to replicate the entire circuit, as is the case with conventional redundancy techniques. In this context, it is discussed how to estimate the fault masking capability of a combinational circuit using the truth-cum-fault enumeration table, and it is then shown how to identify logic that can be introduced to add redundancy into the original circuit without affecting its native functionality, with the aim of improving its fault tolerance, albeit at some trade-off in the design metrics. However, care should be taken while introducing redundant logic, since redundant logic insertion may give rise to new internal nodes, and faults on those nodes may impact the fault tolerance of the resulting circuit. The combinational circuit considered and its redundant counterparts are all implemented in a semi-custom design style using a 32/28nm CMOS digital cell library, and their respective design metrics and fault tolerances are compared.
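As a rough illustration of the kind of enumeration described above, the sketch below pairs every input vector of a toy combinational circuit with every single stuck-at fault on its nodes and reports the fraction of pairs in which the output is unaffected, in the spirit of a truth-cum-fault enumeration table. The example netlist, node names, and fault list are illustrative assumptions, not the paper's benchmark or tool.

```python
# Minimal sketch (not the paper's method): estimate fault-masking capability
# by exhaustively pairing every input vector with every single stuck-at fault.
from itertools import product

# Toy circuit: y = (a AND b) OR (NOT c), evaluated gate by gate so a
# stuck-at fault can be injected on any node (hypothetical node names).
def evaluate(a, b, c, stuck=None):
    nodes = {"n1": a & b,       # AND gate
             "n2": 1 - c}       # inverter
    if stuck is not None and stuck[0] in nodes:
        nodes[stuck[0]] = stuck[1]   # override the faulty node
    y = nodes["n1"] | nodes["n2"]
    if stuck is not None and stuck[0] == "y":
        y = stuck[1]
    return y

faults = [(node, v) for node in ("n1", "n2", "y") for v in (0, 1)]
masked = total = 0
for a, b, c in product((0, 1), repeat=3):
    good = evaluate(a, b, c)
    for fault in faults:
        total += 1
        masked += evaluate(a, b, c, stuck=fault) == good

print(f"fault-masking ratio: {masked}/{total} = {masked / total:.2f}")
```

For realistic circuits this exhaustive table grows quickly, which is why the estimate matters most for the small regions of logic where redundancy is being considered.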
On the engineering of crucial software
The various aspects of the conventional software development cycle are examined. This cycle was the basis of the augmented approach contained in the original grant proposal, but it was found inadequate for crucial software development, and the justification for this opinion is presented. Several possible enhancements to the conventional software cycle are discussed. Software fault tolerance, a possible enhancement of major importance, is discussed separately. Formal verification using mathematical proof is considered, and automatic programming is discussed as a radical alternative to the conventional cycle. Recommendations for a comprehensive approach are presented, and various experiments which could be conducted in AIRLAB are described.
LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing
LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.
FPGA acceleration of a quantized neural network for remote-sensed cloud detection
The capture and transmission of remote-sensed imagery for Earth observation is both computationally and bandwidth expensive. In the analyses of remote-sensed imagery in the visual band, atmospheric cloud cover can obstruct up to two-thirds of observations, resulting in costly imagery being discarded. Mission objectives and satellite operational details vary; however, assuming a cloud-free observation requirement, a doubling of useful data downlinked with an associated halving of delivery cost is possible through effective cloud detection. A minimal-resource, real-time inference neural network is ideally suited to perform automatic cloud detection, both for pre-processing captured images prior to transmission and for preventing unnecessary images being taken by larger payload sensors. Much of the hardware complexity of modern neural network implementations resides in high-precision floating-point calculation pipelines. In recent years, research has been conducted in identifying quantized, or low-integer precision, equivalents to known deep learning models, which do not require the extensive resources of their floating-point, full-precision counterparts. Our work leverages existing research on binary and quantized neural networks to develop a real-time, remote-sensed cloud detection solution using a commodity field-programmable gate array. This follows on from developments of the Forwards Looking Imager for predictive cloud detection developed by Craft Prospect, a space engineering practice based in Glasgow, UK. The synthesized cloud detection accelerator achieved an inference throughput of 358.1 images per second with a maximum power consumption of 2.4 W. This throughput is an order of magnitude faster than alternate algorithmic options for the Forwards Looking Imager, with around a one-third reduction in classification accuracy, and approximately two orders of magnitude faster than the CloudScout deep neural network, deployed with HyperScout 2 on the European Space Agency PhiSat-1 mission. Strategies for incorporating fault tolerance mechanisms are expounded.
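To illustrate the quantization idea the abstract refers to, the sketch below replaces the floating-point multiply-accumulates of a dense layer with 8-bit integer ones plus a single rescale; an integer-only datapath of this kind is what makes such models inexpensive to map onto FPGA fabric. The layer sizes, scales, and the quantize helper are illustrative assumptions, not the deployed network.

```python
# Minimal sketch of quantized inference (not the paper's network): an
# 8-bit-quantized dense layer computed with integer arithmetic.
import numpy as np

def quantize(x, n_bits=8):
    """Symmetric linear quantization of a float tensor to signed integers."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).astype(np.int32), scale

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 16)).astype(np.float32)   # float reference
inputs = rng.normal(size=(1, 64)).astype(np.float32)

w_q, w_scale = quantize(weights)
x_q, x_scale = quantize(inputs)

# Integer matrix multiply; only one float multiply per output to rescale,
# which is the part a hardened DSP or LUT fabric handles cheaply.
acc = x_q @ w_q
output = acc.astype(np.float32) * (w_scale * x_scale)

reference = inputs @ weights
print("max abs error vs. float reference:", np.abs(output - reference).max())
```

Binary networks push the same idea further, reducing the multiply-accumulate to XNOR and popcount operations at a further cost in accuracy.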
Susceptible Workload Evaluation and Protection using Selective Fault Tolerance
Low power fault tolerance design techniques trade reliability to reduce the area cost and the power overhead of integrated circuits by protecting only a subset of their workload or their most vulnerable parts. However, in the presence of faults, not all workloads are equally susceptible to errors. In this paper, we present a low power fault tolerance design technique that selects and protects the most susceptible workload. We propose to rank the workload susceptibility as the likelihood of any error to bypass the logic masking of the circuit and propagate to its outputs. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. We evaluate the proposed technique on timing-independent and timing-dependent errors induced by permanent and transient faults. In comparison with an unranked selective fault tolerance approach, we demonstrate a) a similar error coverage with a 39.7% average reduction of the area overhead or b) an 86.9% average error coverage improvement for a similar area overhead. For the same area overhead case, we observe an error coverage improvement of 53.1% and 53.5% against permanent stuck-at and transition faults, respectively, and an average error coverage improvement of 151.8% and 89.0% against timing-dependent and timing-independent transient faults, respectively. Compared to TMR, the proposed technique achieves an area and power overhead reduction of 145.8% to 182.0%.
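As a rough sketch of the ranking idea, the code below estimates, for each input vector of a toy circuit, how likely a randomly injected fault is to bypass logic masking and reach an output, then selects the most susceptible vectors for protection under a fixed partial-TMR budget. The circuit, fault model, and budget are illustrative assumptions, not the paper's evaluation flow.

```python
# Minimal sketch (not the paper's flow): rank workload vectors by how often
# an injected fault escapes logic masking, then protect only the top few.
import random
from itertools import product

def circuit(a, b, c, flip=None):
    n1 = a ^ b          # toy netlist with two internal nodes
    n2 = b & c
    if flip == "n1":
        n1 ^= 1
    if flip == "n2":
        n2 ^= 1
    return n1 | n2

NODES = ("n1", "n2")

def susceptibility(vector, trials=200):
    """Fraction of injected faults whose error propagates to the output."""
    a, b, c = vector
    good = circuit(a, b, c)
    hits = sum(circuit(a, b, c, flip=random.choice(NODES)) != good
               for _ in range(trials))
    return hits / trials

workload = list(product((0, 1), repeat=3))
ranked = sorted(workload, key=susceptibility, reverse=True)

budget = 3                       # hypothetical partial-TMR protection budget
protected = set(ranked[:budget])
print("vectors selected for partial TMR:", protected)
```

The same ranking can be driven by fault simulation or analytical signal-probability estimates; the point is that the protection budget is spent where logic masking alone is weakest.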
Analysis of design alternatives on using dynamic and partial reconfiguration in a space application
Some of the biggest concerns in space systems are power consumption and reliability, due to the limited power generated by the system's energy harvesters and the fact that once deployed, it is almost impossible to perform maintenance or repairs. Another consideration is that during deployment, the high exposure to electromagnetic radiation can cause single event damage effects including SEUs, SEFIs, SETs and others. In order to mitigate these problems inherent to the space environment, a system with dynamic and partial reconfiguration capabilities is proposed. This approach provides the flexibility to reconfigure parts of the FPGA while still in operation, thus making the system more flexible, fault tolerant and less power-consuming. In this paper, several partial reconfiguration approaches are proposed and compared in terms of device occupation, power consumption, reconfiguration speed and size of memory footprints.
Towards Multidimensional Verification: Where Functional Meets Non-Functional
Trends in advanced electronic systems' design have a notable impact on design verification technologies. The recent paradigms of Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) assume devices immersed in physical environments, significantly constrained in resources and expected to provide security, privacy, reliability, performance and low-power features. In recent years, numerous extra-functional aspects of electronic systems have been brought to the fore, implying verification of hardware design models in a multidimensional space along with the functional concerns of the target system. However, unlike in the software domain, such a holistic approach remains underdeveloped. The contributions of this paper are a taxonomy for multidimensional hardware verification aspects, a state-of-the-art survey of related research works, and trends towards the multidimensional verification concept. The concept is motivated by an example covering the functional and power verification dimensions.
NASA SBIR abstracts of 1991 phase 1 projects
The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes provide additional information about the SBIR program and permit cross-reference of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.