On the Role of Context in the Design of Mobile Mashups
This paper presents a design methodology and an accompanying platform for the design and fast development of Context-Aware Mobile mashUpS (CAMUS). The approach is characterized by the role given to context as a first-class modeling dimension used to support i) the identification of the most suitable resources to satisfy the users' situational needs and ii) the consequent runtime tailoring of the provided data and functions. Context-based abstractions are exploited to generate models specifying how the data returned by the selected services have to be merged and visualized by means of integrated views. Thanks to the adoption of Model-Driven Engineering (MDE) techniques, these models drive the flexible execution of the final mobile app on target mobile devices. A prototype of the platform, making use of novel and advanced Web and mobile technologies, is also illustrated.
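To make the context-as-first-class idea concrete, here is a minimal sketch, in plain Python rather than the CAMUS platform's own MDE tooling, of how a context object could drive the selection of the services a mashup integrates. All names (Context, ServiceDescriptor, select_services) and the matching rules are hypothetical illustrations, not the paper's API.

```python
# Minimal sketch (not the CAMUS platform): context as a first-class object
# that drives the selection of the services a mashup integrates.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Context:
    location: str      # e.g. "Milan"
    activity: str      # e.g. "tourism"
    connectivity: str  # e.g. "wifi" or "cellular"

@dataclass
class ServiceDescriptor:
    name: str
    # Predicate deciding whether the service suits the current context.
    matches: Callable[[Context], bool]

def select_services(registry: List[ServiceDescriptor], ctx: Context) -> List[ServiceDescriptor]:
    """Return the registered services applicable to the user's situation."""
    return [s for s in registry if s.matches(ctx)]

registry = [
    ServiceDescriptor("museum-finder", lambda c: c.activity == "tourism"),
    ServiceDescriptor("hd-video-guide", lambda c: c.connectivity == "wifi"),
]
print([s.name for s in select_services(registry, Context("Milan", "tourism", "cellular"))])
# -> ['museum-finder']: data and functions are tailored to the runtime context
```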
Fast and Accurate Error Simulation for CNNs Against Soft Errors
The growing adoption of AI-based computation in safety- and mission-critical applications motivates the need for methods to assess an application's robustness not only w.r.t. its training/tuning but also w.r.t. errors caused by faults, in particular soft errors, affecting the underlying hardware. Two strategies exist: architecture-level fault injection and application-level functional error simulation. We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine that exploits a set of validated error models extracted from a detailed fault injection campaign. These error models are defined based on the corruption patterns that faults induce in the output of CNN operators, and they bridge the gap between fault injection and error simulation, exploiting the advantages of both approaches. We compared our methodology against SASSIFI for the accuracy of functional error simulation w.r.t. fault injection, and against TensorFI in terms of speedup of the error simulation strategy. Experimental results show that our methodology achieves about 99% accuracy in reproducing fault effects w.r.t. SASSIFI, and a speedup ranging from 44x up to 63x w.r.t. TensorFI, which implements only a limited set of error models.
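As an illustration of what application-level functional error simulation looks like, the sketch below applies simplified corruption patterns directly to an operator's output tensor instead of injecting faults into the hardware model. The two patterns (single point, whole row) are our own toy stand-ins for the paper's validated error models, not its actual engine.

```python
# Illustrative sketch: functional error simulation corrupts the output
# tensor of a CNN operator according to an error model, avoiding costly
# low-level fault injection. Patterns below are hypothetical simplifications.
import numpy as np

def corrupt_single_point(t: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Corrupt one randomly chosen element of the operator output."""
    out = t.copy()
    idx = tuple(rng.integers(0, s) for s in out.shape)
    out[idx] = rng.normal(loc=0.0, scale=abs(out).max() + 1.0)
    return out

def corrupt_row(t: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Corrupt a whole row of one feature map (a spatially correlated pattern)."""
    out = t.copy()
    c = rng.integers(0, out.shape[0])
    r = rng.integers(0, out.shape[1])
    out[c, r, :] = rng.normal(size=out.shape[2], scale=abs(out).max() + 1.0)
    return out

rng = np.random.default_rng(0)
feature_maps = rng.standard_normal((8, 16, 16))   # (channels, height, width)
corrupted = corrupt_row(feature_maps, rng)
print("corrupted elements:", int((corrupted != feature_maps).sum()))
```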
A data mining approach to incremental adaptive functional diagnosis
This paper presents a novel approach to functional fault diagnosis that adopts data mining to exploit knowledge extracted from the system model. Such knowledge relates test outcomes to component failures, in order to define an incremental strategy for identifying the candidate faulty component. The diagnosis procedure is built upon a set of sorted, possibly approximate, rules that specify, given a (set of) failing test(s), which component is the faulty candidate. The procedure iteratively selects the most promising rules and requests the execution of the corresponding tests, until either a component is identified as faulty or no diagnosis can be performed. The proposed approach aims at limiting the number of tests to be executed, in order to reduce the time and cost of diagnosis. Results on a set of examples show that the proposed approach allows for a significant reduction in the number of executed tests (the average improvement ranges from 32% to 88%).
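A minimal sketch of the incremental loop described above, under our own simplified rule format (confidence, test, implicated component): the procedure executes the test of the most promising rule and returns a candidate as soon as a requested test fails. The names and the rule encoding are hypothetical, not the paper's.

```python
# Hypothetical sketch of incremental rule-based diagnosis: sorted rules map
# failing tests to candidate components; the procedure requests the test of
# the most promising rule until a candidate is found or no rule is left.
from typing import Callable, Dict, List, Optional, Tuple

# Each rule: (confidence, test_name, faulty_component_if_test_fails)
Rule = Tuple[float, str, str]

def diagnose(rules: List[Rule], run_test: Callable[[str], bool]) -> Optional[str]:
    executed: Dict[str, bool] = {}
    for confidence, test, component in sorted(rules, reverse=True):
        if test not in executed:                # request only tests not yet run
            executed[test] = run_test(test)     # True means the test fails
        if executed[test]:
            return component                    # candidate faulty component
    return None                                 # no diagnosis possible

# Toy system: component B is broken, so T2 fails while T1 passes.
outcome = {"T1": False, "T2": True}
rules = [(0.9, "T1", "A"), (0.8, "T2", "B"), (0.5, "T2", "C")]
print(diagnose(rules, lambda t: outcome[t]))    # -> 'B' after only two tests
```

Caching executed tests is what limits the test count: a rule whose test was already run costs nothing to evaluate again.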
Selective Hardening of CNNs based on Layer Vulnerability Estimation
There is an increasing interest in employing Convolutional Neural Networks (CNNs) in safety-critical application fields. In such scenarios, it is vital to ensure that the application fulfills the reliability requirements expressed by customers and design standards. On the other hand, given the extremely high computational requirements of CNNs, it is also paramount to achieve high performance. To meet both reliability and performance requirements, partial and selective replication of the layers of the CNN can be applied. In this paper, we identify the most critical layers of a CNN in terms of vulnerability to faults and selectively duplicate them to achieve a target reliability vs. execution time trade-off. To this end, we perform a design space exploration to identify the layers to be duplicated. Results of the application of the proposed approach to four case-study CNNs are reported.
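The selection step could look like the following sketch, which replaces the paper's design space exploration with a simple greedy heuristic over assumed per-layer vulnerability estimates and duplication costs. Both the heuristic and the numbers are illustrative assumptions, not the paper's method.

```python
# Sketch of selective hardening under a simplified model: each layer has an
# estimated vulnerability and a duplication cost (extra time); we greedily
# duplicate the most vulnerable layers until a time budget is exhausted.
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str
    vulnerability: float  # estimated contribution to output corruption
    dup_cost_ms: float    # extra execution time if duplicated

def select_layers_to_duplicate(layers: List[Layer], budget_ms: float) -> List[str]:
    chosen, spent = [], 0.0
    # Highest "vulnerability per millisecond" first.
    for layer in sorted(layers, key=lambda l: l.vulnerability / l.dup_cost_ms, reverse=True):
        if spent + layer.dup_cost_ms <= budget_ms:
            chosen.append(layer.name)
            spent += layer.dup_cost_ms
    return chosen

layers = [Layer("conv1", 0.40, 3.0), Layer("conv2", 0.35, 5.0), Layer("fc", 0.05, 1.0)]
print(select_layers_to_duplicate(layers, budget_ms=5.0))  # -> ['conv1', 'fc']
```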
Analyzing the Reliability of Alternative Convolution Implementations for Deep Learning Applications
Convolution represents the core of Deep Learning (DL) applications, enabling the automatic extraction of features from raw input data. Several implementations of the convolution operator have been proposed, and their impact on the performance of DL applications has been studied. However, no specific reliability-related analysis has been carried out. In this paper, we apply the CLASSES cross-layer reliability analysis methodology for an in-depth study aimed at: i) analyzing and characterizing the effects of Single Event Upsets occurring in Graphics Processing Units while executing the convolution operators; and ii) identifying whether any convolution implementation is more robust than the others. The outcomes can then be exploited to tailor better hardening schemes for DL applications, improving reliability while reducing overhead.
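For intuition about the underlying fault model, the toy helper below flips a single bit of a float32 value, the way a Single Event Upset would corrupt a value sitting in a GPU buffer. It is our own illustration and not part of the CLASSES methodology.

```python
# Toy illustration of the SEU fault model: flip one bit of a 32-bit float.
import numpy as np

def flip_bit(x: np.float32, bit: int) -> np.float32:
    """Flip one bit (0..31) of a float32 value, as an SEU would in memory."""
    as_int = np.frombuffer(np.float32(x).tobytes(), dtype=np.uint32)[0]
    flipped = as_int ^ np.uint32(1 << bit)
    return np.frombuffer(np.uint32(flipped).tobytes(), dtype=np.float32)[0]

x = np.float32(1.5)
for bit in (0, 23, 30):
    print(f"bit {bit:2d}: {x} -> {flip_bit(x, bit)}")
# Low mantissa bits (0) barely change the value; exponent bits (23, 30) can
# halve it or turn it into NaN. Different convolution implementations touch
# different intermediate buffers, so the same SEU can propagate differently.
```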
Approximation-Based Fault Tolerance in Image Processing Applications
Image processing applications exhibit an intrinsic degree of fault tolerance due to i) the redundant nature of images, and ii) the possible ability of the consumers of the application's output to carry out their task effectively even when the output is slightly corrupted. In this scenario, the classical Duplication with Comparison (DWC) scheme, which rejects images (and requires re-executions) when the outputs of the two replicas differ in a per-pixel comparison, may be over-conservative. In this article, we propose a novel lightweight fault-tolerance scheme specifically tailored to image processing applications. The proposed scheme enhances the state of the art by: i) improving the DWC scheme by replacing one of the two exact replicas with an approximated counterpart, and ii) distinguishing between usable and unusable images, instead of corrupted and uncorrupted ones, by means of a Convolutional Neural Network-based checker. To tune the proposed scheme, we introduce a specific design methodology that optimizes both the execution time and the fault detection capability of the hardened system. We report the results of applying the proposed approach to two case studies; our proposal achieves an average execution time reduction larger than 30% w.r.t. DWC with re-execution, with less than 4% of unusable images misclassified.
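A structural sketch of such a scheme follows, with our own stand-ins for every component: a toy exact filter, a subsampled approximate replica, a tolerance-based comparison, and a trivial heuristic in place of the CNN-based usability checker.

```python
# Structural sketch of approximation-based DWC: one exact replica, one cheap
# approximate replica, and a usability check when the two disagree.
import numpy as np

def exact_filter(img: np.ndarray) -> np.ndarray:
    return np.clip(img * 1.2, 0, 255)            # stand-in for the real filter

def approx_filter(img: np.ndarray) -> np.ndarray:
    small = img[::2, ::2]                        # subsampled, cheaper replica
    return np.repeat(np.repeat(np.clip(small * 1.2, 0, 255), 2, 0), 2, 1)

def usability_checker(out: np.ndarray, diff: np.ndarray) -> bool:
    # Placeholder for the CNN checker: accept if the disagreement is mild.
    return float(diff.mean()) < 10.0

def harden(img: np.ndarray) -> tuple[np.ndarray, bool]:
    out = exact_filter(img)
    diff = np.abs(out - approx_filter(img))
    if diff.max() < 1.0:                         # replicas agree: no fault seen
        return out, True
    return out, usability_checker(out, diff)     # usable despite a mismatch?

img = np.tile(np.linspace(0, 200, 64), (64, 1))  # smooth 64x64 test image
out, usable = harden(img)
print("usable:", usable)
```

The key design point this mirrors is that a mismatch between the exact and approximate replicas no longer forces a re-execution; it only triggers the checker, which decides whether the consumer can still use the image.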
Data Modeling for Ambient Home Care Systems
Ambient assisted living (AAL) services are usually designed to work on the assumption that real-time context information about users and their environment is available. Systems handling acquisition and context inference need a versatile data model, expressive and scalable enough to handle complex context information and heterogeneous data sources. In this paper, we describe an ontology to be used in a system providing AAL services. The ontology reuses previous ontologies and models the partners in the value chain and their service offerings. With our proposal, we aim at providing an effective AAL data model that is easily adaptable to specific domain needs and services.
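As a flavor of such a model, the toy fragment below encodes a few context facts as RDF triples with the rdflib library; the namespace, classes, and properties are invented for illustration and are not the paper's ontology.

```python
# Toy RDF fragment: heterogeneous AAL sources (users, rooms, sensors)
# sharing one extensible context graph that any service can query.
from rdflib import Graph, Literal, Namespace, RDF, XSD

AAL = Namespace("http://example.org/aal#")  # hypothetical namespace
g = Graph()
g.bind("aal", AAL)

# A user, a room, and a sensor reading linked into one context graph.
g.add((AAL.alice, RDF.type, AAL.User))
g.add((AAL.alice, AAL.locatedIn, AAL.livingRoom))
g.add((AAL.tempSensor1, AAL.observes, AAL.livingRoom))
g.add((AAL.tempSensor1, AAL.hasReading, Literal(21.5, datatype=XSD.double)))

# Any AAL service can query the shared model, e.g. "where is alice?"
for _, _, room in g.triples((AAL.alice, AAL.locatedIn, None)):
    print("alice is in", room)
```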