
    Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: a Goal-Oriented Approach

    Goals are first-class entities in a self-adaptive system (SAS) as they guide the self-adaptation. A SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Moreover, besides the environment, other classes of uncertainty have been identified. However, these various classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. Uncertainty typically makes it infeasible to provide assurances for SAS goals exclusively at design time, which calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide the synthesis of adaptation policies by engineers. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising and show that our approach is able to systematically tame multiple classes of uncertainty, and that it is effective and efficient in providing assurances for the goals of self-adaptive systems.
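
    As an illustration of the runtime step described above, the following Python sketch shows how a parametric reliability/cost formula could be re-evaluated with monitored values to pick an adaptation action. The formula structure, parameter names, thresholds, and policy actions are assumptions for illustration; they are not the authors' generated PRISM formulae or the BSN code.

    # Hypothetical parametric formulae: overall reliability as a product of
    # task reliabilities, cost as a sum of task costs (placeholder structure).
    def reliability(p):
        return p["r_sense"] * p["r_process"] * p["r_send"]

    def cost(p):
        return p["c_sense"] + p["c_process"] + p["c_send"]

    RELIABILITY_GOAL = 0.95   # assumed threshold taken from the goal model
    COST_BUDGET = 10.0        # assumed cost constraint

    def adapt(monitored):
        """Resolve uncertainty with monitored values and choose a policy action."""
        if reliability(monitored) < RELIABILITY_GOAL:
            return "activate redundant sensing"      # hypothetical policy
        if cost(monitored) > COST_BUDGET:
            return "reduce sensor duty cycle"        # hypothetical policy
        return "keep current configuration"

    print(adapt({"r_sense": 0.99, "r_process": 0.97, "r_send": 0.93,
                 "c_sense": 3.0, "c_process": 2.5, "c_send": 4.0}))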

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving the performance of RAs, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode and effects analysis (FMEA), and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs have been discussed. According to the results of our survey, we suggest that currently the best design of RAs is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still some improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.

    Combining case based reasoning with neural networks

    This paper presents a neural-network-based technique for mapping problem situations to problem solutions for Case-Based Reasoning (CBR) applications. Both neural networks and CBR are instance-based learning techniques, although neural nets work with numerical data and CBR systems work with symbolic data. This paper discusses how the application scope of both paradigms could be enhanced by the use of hybrid concepts. To make the use of neural networks possible, the problem's situation and solution features are transformed into continuous features, using techniques similar to CBR's definition of similarity metrics. Radial Basis Function (RBF) neural nets are used to create a multivariable, continuous input-output mapping. As the mapping is continuous, this technique also provides generalisation between cases, replacing the domain-specific solution adaptation techniques required by conventional CBR. This continuous representation also allows, as in fuzzy logic, an associated membership measure to be output with each symbolic feature, aiding the prioritisation of various possible solutions. A further advantage is that, as the RBF neurons are only active in a limited area of the input space, the solution can be accompanied by local estimates of accuracy, based on the sufficiency of the cases present in that area as well as the results measured during testing. We describe how the application of this technique could be of benefit to the real-world problem of sales advisory systems, among others.
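
    To make the mapping concrete, here is a minimal Python sketch of an RBF layer with one neuron per stored case: the weighted activations interpolate a solution, and the summed activation acts as a local confidence estimate that drops where few cases exist. The toy data and kernel width are assumptions for illustration, not the paper's system.

    import numpy as np

    class RBFCaseMapper:
        def __init__(self, case_inputs, case_solutions, width=0.5):
            self.centres = np.asarray(case_inputs, dtype=float)      # one RBF centre per case
            self.solutions = np.asarray(case_solutions, dtype=float) # continuous solution features
            self.width = width

        def query(self, x):
            d2 = np.sum((self.centres - np.asarray(x, dtype=float)) ** 2, axis=1)
            a = np.exp(-d2 / (2.0 * self.width ** 2))   # RBF activations
            support = a.sum()                           # low support = sparse case coverage
            if support < 1e-9:
                return None, 0.0                        # no nearby cases: refuse to generalise
            return a @ self.solutions / support, support

    # Two toy cases with 2-D situation features and a 1-D solution feature.
    mapper = RBFCaseMapper([[0.0, 0.0], [1.0, 1.0]], [[0.0], [1.0]])
    print(mapper.query([0.4, 0.5]))   # interpolated solution plus local support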

    Communication Subsystems for Emerging Wireless Technologies

    The paper describes a multi-disciplinary design of modern communication systems. The design starts with the analysis of a system in order to define requirements on its individual components. The design exploits proper models of communication channels to adapt the systems to expected transmission conditions. Input filtering of signals, both in the frequency domain and in the spatial domain, is ensured by a properly designed antenna. Further signal processing (amplification and further filtering) is done by electronic circuits. Finally, signal processing techniques are applied to yield information about the current properties of the frequency spectrum and to distribute the transmission over free subcarrier channels.
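
    The last step, estimating which parts of the spectrum are currently free, can be sketched with simple energy detection. The Python snippet below flags subcarriers whose average power stays below a threshold as candidates for transmission; the FFT size, threshold, and test signal are assumptions for illustration, not the paper's actual subsystem.

    import numpy as np

    def free_subcarriers(samples, n_fft=64, threshold_db=-10.0):
        """Return indices of subcarriers whose average power is below the threshold."""
        frames = samples[: len(samples) // n_fft * n_fft].reshape(-1, n_fft)
        spectrum = np.fft.fft(frames, axis=1)
        power_db = 10 * np.log10(np.mean(np.abs(spectrum) ** 2, axis=0) + 1e-12)
        return np.flatnonzero(power_db < threshold_db)

    # Toy baseband signal: one occupied subcarrier (index 5) plus weak noise.
    rng = np.random.default_rng(0)
    n = np.arange(64 * 100)
    signal = np.exp(2j * np.pi * 5 * n / 64)
    noise = 0.01 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
    print(free_subcarriers(signal + noise))   # every index except the occupied one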

    Using fuzzy logic to integrate neural networks and knowledge-based systems

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between the low-level data processing mechanisms of neural nets and artificial intelligence systems.
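
    A minimal Python sketch of the idea follows: fuzzy memberships interpret a neural network's output confidence, and a fuzzy action rule decides which control task the symbolic layer should trigger. The membership functions, rule set, and action names are assumptions for illustration, not the author's architecture.

    def low(x):       # membership of "confidence is low"
        return max(0.0, min(1.0, (0.5 - x) / 0.5))

    def high(x):      # membership of "confidence is high"
        return max(0.0, min(1.0, (x - 0.5) / 0.5))

    def control_action(net_confidence):
        """Fuzzy action rules mapping network confidence to a control task."""
        retrain = low(net_confidence)    # IF confidence is low THEN retrain the network
        accept = high(net_confidence)    # IF confidence is high THEN accept its output
        return "retrain network" if retrain > accept else "accept classification"

    print(control_action(0.3))   # -> retrain network
    print(control_action(0.9))   # -> accept classification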

    A Psychogenetic Algorithm for Behavioral Sequence Learning

    This work presents an original algorithmic model of some essential features of psychogenetic theory, as was proposed by J.Piaget. Specifically, we modeled some elements of cognitive structure learning in children from 0 to 4 months of life. We are in fact convinced that the study of well-established cognitive models of human learning can suggest new, interesting approaches to problem so far not satisfactorily solved in the field of machine learning. Further, we discussed the possible parallels between our model and subsymbolic machine learning and neuroscience. The model was implemented and tested in some simple experimental settings, with reference to the task of learning sensorimotor sequences
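
    One illustrative reading of the sequence-learning element, given as an assumption rather than the authors' algorithm, is that sensorimotor sequences which repeatedly produce the expected outcome are consolidated into stable schemas, loosely echoing Piagetian circular reactions. A small Python sketch of that reading follows.

    from collections import defaultdict

    class SchemaLearner:
        def __init__(self, threshold=3):
            self.counts = defaultdict(int)   # (action sequence, outcome) -> repetitions
            self.threshold = threshold       # repetitions needed before a schema forms

        def observe(self, actions, outcome):
            self.counts[(tuple(actions), outcome)] += 1

        def schemas(self):
            return [pair for pair, c in self.counts.items() if c >= self.threshold]

    learner = SchemaLearner()
    for _ in range(3):
        learner.observe(["reach", "grasp"], "object_in_hand")
    print(learner.schemas())   # the reach-grasp sequence has become a schema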

    Probabilistic Model Checking for Energy Analysis in Software Product Lines

    In a software product line (SPL), a collection of software products is defined by their commonalities in terms of features rather than by explicitly specifying all products one by one. Several verification techniques have been adapted to establish temporal properties of SPLs. Symbolic and family-based model checking have proven successful for tackling the combinatorial blow-up that arises when reasoning about several feature combinations. However, most formal verification approaches for SPLs presented in the literature focus on static SPLs, where the features of a product are fixed and cannot be changed at runtime. This is in contrast to dynamic SPLs, which allow the feature combinations of a product to be adapted dynamically after deployment. The main contribution of the paper is a compositional modeling framework for dynamic SPLs, which supports probabilistic and nondeterministic choices and allows for quantitative analysis. We specify the feature changes during runtime within an automata-based coordination component, enabling reasoning about strategies for triggering dynamic feature changes that optimize various quantitative objectives, e.g., energy or monetary costs and reliability. For our framework there is a natural and conceptually simple translation into the input language of the prominent probabilistic model checker PRISM. This facilitates the application of PRISM's powerful symbolic engine to the operational behavior of dynamic SPLs and their family-based analysis against various quantitative queries. We demonstrate the feasibility of our approach with a case study of an energy-aware bonding network device.
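
    To give a flavour of the kind of quantitative question asked here, the Python sketch below explores the feature combinations reachable through a coordination component's allowed runtime switches and picks the cheapest one that still meets a reliability goal. The feature combinations, energy costs, and reliability values are invented for illustration; the paper itself encodes such models in PRISM and analyses them with probabilistic model checking.

    # Hypothetical per-configuration quantities: (energy cost, reliability).
    QUANT = {
        ("base",): (2.0, 0.90),
        ("base", "retry"): (3.0, 0.97),
        ("base", "encrypt"): (4.0, 0.89),
        ("base", "retry", "encrypt"): (5.0, 0.96),
    }

    # Allowed runtime feature switches (a stand-in for the coordination component).
    SWITCHES = {
        ("base",): [("base", "retry"), ("base", "encrypt")],
        ("base", "retry"): [("base",), ("base", "retry", "encrypt")],
        ("base", "encrypt"): [("base",)],
        ("base", "retry", "encrypt"): [("base", "retry")],
    }

    def reachable(start):
        seen, frontier = {start}, [start]
        while frontier:
            for nxt in SWITCHES.get(frontier.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def best_configuration(start, reliability_goal=0.95):
        ok = [c for c in reachable(start) if QUANT[c][1] >= reliability_goal]
        return min(ok, key=lambda c: QUANT[c][0]) if ok else None

    print(best_configuration(("base",)))   # -> ('base', 'retry')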