Approximation-Based Fault Tolerance in Image Processing Applications
Image processing applications exhibit an intrinsic degree of fault tolerance due to i) the redundant nature of images, and ii) the ability of the consumers of the application output to effectively carry out their task even when the output is slightly corrupted. In this scenario the classical Duplication with Comparison (DWC) scheme, which rejects images (and requires re-execution) whenever the two replicas' outputs differ in a per-pixel comparison, may be over-conservative. In this article, we propose a novel lightweight fault-tolerance scheme specifically tailored to image processing applications. The proposed scheme enhances the state of the art by: i) improving the DWC scheme by replacing one of the two exact replicas with an approximated counterpart, and ii) distinguishing between usable and unusable images, instead of corrupted and uncorrupted ones, by means of a Convolutional Neural Network-based checker. To tune the proposed scheme we introduce a dedicated design methodology that optimizes both the execution time and the fault detection capability of the hardened system. We report the results of applying the proposed approach to two case studies; our proposal achieves an average execution time reduction larger than 30% w.r.t. DWC with re-execution, with less than 4% of unusable images misclassified.
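
A minimal sketch, in Python, of how such a hardened pipeline could look: one exact replica runs next to a cheaper approximated one, and a checker accepts any usable output. All names here (exact_filter, approx_filter, is_usable) and the error threshold are illustrative assumptions; in the article the checker is a trained CNN, which this sketch mimics with a simple mean-error test.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def exact_filter(img):
        # Exact replica of the image processing kernel (here: a Gaussian blur).
        return gaussian_filter(img, sigma=2.0)

    def approx_filter(img):
        # Approximated counterpart: same filter at half resolution, upsampled back.
        small = gaussian_filter(img[::2, ::2], sigma=1.0)
        up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        return up[:img.shape[0], :img.shape[1]]

    def is_usable(exact_out, approx_out, tol=0.05):
        # Stand-in for the CNN-based checker: accept when the two outputs agree
        # closely enough for the image consumer, even if they are not bit-exact.
        return np.mean(np.abs(exact_out - approx_out)) < tol

    def harden(img, max_reexec=1):
        # Approximate-DWC: re-execute only when the output is judged unusable.
        for _ in range(max_reexec + 1):
            out = exact_filter(img)
            ref = approx_filter(img)
            if is_usable(out, ref):
                return out
        raise RuntimeError("unusable output after all re-executions")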
An Approximation-based Fault Detection Scheme for Image Processing Applications
Image processing applications exhibit an intrinsic resilience to faults. In this application field the classical Duplication with Comparison (DWC) scheme, where output images are discarded as soon as the two replicas' outputs differ in at least one pixel, may be over-conservative. This paper introduces a novel lightweight fault detection scheme for image processing applications: i) it extends the DWC scheme by substituting one of the two exact replicas with a faster approximated one; and ii) it features a Neural Network-based checker designed to distinguish between usable and unusable images instead of faulty and fault-free ones. The application of the hardening scheme to a case study has shown an execution time reduction between 27% and 34% w.r.t. DWC, while guaranteeing a comparable fault detection capability.
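
For contrast, a sketch of the per-pixel acceptance test that makes classical DWC over-conservative; compute is a placeholder for the exact kernel, and the two replicas are assumed to run sequentially on the same input.

    import numpy as np

    def dwc(img, compute, max_reexec=1):
        # Classical DWC: run two exact replicas, reject on the first mismatch.
        for _ in range(max_reexec + 1):
            a = compute(img)
            b = compute(img)
            if np.array_equal(a, b):  # a single differing pixel forces re-execution
                return a
        raise RuntimeError("replicas never agreed: image discarded")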
Lightweight Fault Detection and Management for Image Restoration
Image restoration is generally employed to recover an image that has been blurred, for instance for noise suppression purposes. The Richardson-Lucy (RL) algorithm is a widely used iterative approach to image restoration. In this paper we propose a lightweight application-specific fault detection and management scheme for RL that exploits two specific characteristics of the algorithm: i) there is a strong correlation between the input and output images of each iteration, and ii) the algorithm is often able to produce a final output that is very similar to the expected one even when the output of an intermediate iteration has been corrupted by a fault. The proposed scheme exploits these characteristics to detect the occurrence of a fault without requiring duplication, and to determine whether the error in the output of an intermediate iteration would be absorbed by the subsequent iterations (thus avoiding image dropping and algorithm re-execution) or whether the image has to be discarded and the overall elaboration re-executed. An experimental campaign demonstrated that our scheme allows for an execution time reduction of about 54% w.r.t. the classical Duplication with Comparison (DWC), while still providing about 99% fault detection.
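
A minimal sketch of the idea on a 1-D Richardson-Lucy deconvolution, assuming an illustrative correlation threshold tau and a simple "enough iterations remain" test for absorbability; neither value comes from the paper.

    import numpy as np

    def rl_step(u, d, psf):
        # One RL iteration: u <- u * ((d / (u conv psf)) conv mirrored psf).
        conv = np.convolve(u, psf, mode="same")
        ratio = d / np.maximum(conv, 1e-12)
        return u * np.convolve(ratio, psf[::-1], mode="same")

    def restore(d, psf, n_iter=20, tau=0.9, absorb_margin=5):
        u = d.astype(float).copy()  # common choice: start from the observed image
        for k in range(n_iter):
            u_next = rl_step(u, d, psf)
            # Lightweight checker: consecutive iterates are strongly correlated,
            # so a sudden drop flags a fault without a duplicated execution.
            if np.corrcoef(u, u_next)[0, 1] < tau:
                if n_iter - k > absorb_margin:
                    pass  # early fault: remaining iterations will absorb the error
                else:
                    # late fault: the error would survive to the final output
                    raise RuntimeError("discard image and re-execute the RL run")
            u = u_next
        return u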
A Neural Network Based Fault Management Scheme for Reliable Image Processing
Traditional reliability approaches introduce significant costs to achieve unconditional correctness during data processing. However, many application environments are inherently tolerant to a certain degree of inexactness or inaccuracy. In this article, we focus on the practical scenario of image processing in space, a domain where faults are a concrete threat while the applications are inherently tolerant to a certain degree of error. We first introduce the concept of usability of the processed image to relax the traditional requirement of unconditional correctness and to limit the computational overheads related to reliability. We then introduce our new flexible and lightweight fault management methodology for inaccuracy-tolerant application environments. A key novelty of our scheme is the use of neural networks to reduce the costs associated with the occurrence and detection of faults. Experiments on two aerospace image processing case studies show overall time savings of 14.89 and 34.72 percent for the two applications, respectively, compared with the baseline classical Duplication with Comparison scheme.
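
As a back-of-the-envelope view of where such savings can come from, the following sketch models the expected execution time of both schemes under a geometric re-execution model; every timing and probability below is an illustrative assumption, not a figure from the article.

    def expected_time(t_primary, t_second, t_check, p_redo):
        # Cost of one attempt (primary replica + second replica + checker),
        # inflated by the expected number of executions: 1 / (1 - p_redo).
        return (t_primary + t_second + t_check) / (1.0 - p_redo)

    # Classical DWC: two exact replicas and a cheap per-pixel check, but every
    # harmless mismatch still triggers a re-execution.
    t_dwc = expected_time(t_primary=1.0, t_second=1.0, t_check=0.01, p_redo=0.10)

    # Usability-based scheme: a cheaper approximated replica and a costlier
    # NN checker, but only genuinely unusable images are re-executed.
    t_ours = expected_time(t_primary=1.0, t_second=0.35, t_check=0.05, p_redo=0.03)

    print(f"relative time saving: {1 - t_ours / t_dwc:.1%}")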