145 research outputs found

    Discrete-time dynamic modeling for software and services composition as an extension of the Markov chain approach

    Get PDF
    Discrete Time Markov Chains (DTMCs) and Continuous Time Markov Chains (CTMCs) are often used to model various types of phenomena, such as the behavior of software products. In that case, Markov chains are widely used to describe the possible time-varying behavior of “self-adaptive” software systems, where the transition from one state to another represents alternative choices at the software code level, taken according to a certain probability distribution. From a control-theoretical standpoint, some of these probabilities can be interpreted as control signals, while others can only be observed. However, the translation between a DTMC or CTMC model and a corresponding first-principles model that can be used to design a control system is not immediate. This paper investigates a possible solution for translating a CTMC model into a dynamic system, with a focus on the control of computing-system components. Notice that DTMC models can be translated as well, providing additional information
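
    To make the control-theoretic reading concrete, here is a minimal sketch (not the paper's construction) of a DTMC viewed as a discrete-time linear system, with one transition probability treated as a control signal. The 3-state chain and all numbers are invented for illustration.

        # A DTMC as a discrete-time linear system: x[k+1] = x[k] @ P(u),
        # where the transition probability u plays the role of a control input.
        import numpy as np

        def step(x, u):
            # Illustrative 3-state chain; u in [0, 0.9] steers how the
            # probability mass leaving state 0 is split. Values are made up.
            P = np.array([
                [0.1, u,   0.9 - u],
                [0.0, 0.8, 0.2],
                [0.3, 0.0, 0.7],
            ])
            assert np.allclose(P.sum(axis=1), 1.0)  # rows must be stochastic
            return x @ P

        x = np.array([1.0, 0.0, 0.0])  # start deterministically in state 0
        for _ in range(50):
            x = step(x, u=0.4)
        print(x)  # near the stationary distribution induced by u = 0.4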

    A formal approach to adaptive software: continuous assurance of non-functional requirements

    No full text
    Modern software systems are increasingly requested to be adaptive to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. Non-functional requirements—such as reliability and performance—heavily depend on the environment in which the application is embedded. Thus, changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.
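
    As a minimal illustration of the kind of model involved (a sketch, not the paper's tool chain), reliability can be expressed as the probability of reaching a "success" absorbing state in a DTMC and recomputed whenever monitoring updates a parameter. The one-retry chain below and its numbers are invented.

        # Reliability as an absorption probability in a DTMC, via the
        # standard fundamental-matrix formulation N = (I - Q)^-1.
        import numpy as np

        def reliability(p):
            # States: 0 = first attempt, 1 = retry (transient);
            # success and fail are absorbing. A failed first attempt
            # is retried once. Structure and numbers are illustrative.
            Q = np.array([[0.0, p],
                          [0.0, 0.0]])        # transient -> transient
            R = np.array([[1.0 - p, 0.0],
                          [1.0 - p, p]])      # transient -> [success, fail]
            N = np.linalg.inv(np.eye(2) - Q)  # expected visits to transients
            return (N @ R)[0, 0]              # P(reach success from state 0)

        print(reliability(0.10))  # 0.99, i.e. 1 - p**2 thanks to the retry
        print(reliability(0.25))  # re-checked after an observed degradation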

    Run-time efficient probabilistic model checking

    No full text
    Since the inception of discontinuous Galerkin (DG) methods for elliptic problems, there has been a question of whether DG methods can be made more computationally efficient than continuous Galerkin (CG) methods. Fewer degrees of freedom and good approximation properties for elliptic problems, together with a number of optimization techniques available within the CG framework, such as static condensation, made it challenging for DG methods to be competitive until recently. However, with the introduction of a static-condensation-amenable DG method—the hybridizable discontinuous Galerkin (HDG) method—it has become possible to perform a realistic comparison of CG and HDG methods when applied to elliptic problems. In this work, we build on an earlier 2D comparative study, providing numerical results and a discussion of CG and HDG performance in three dimensions. The comparison covers steady-state elliptic and time-dependent parabolic problems, various element types, and serial and parallel performance. The postprocessing technique that yields superconvergence in the HDG case is also discussed. Depending on the direct linear system solver used and the type of problem in question (steady-state vs. time-dependent), the HDG method either outperforms or matches the CG method. The HDG method falls behind, however, when an iterative solver is used, which indicates the need for an effective preconditioning strategy for the method.
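
    The static condensation mentioned above boils down to a Schur-complement elimination of element-interior unknowns. A hedged numpy sketch, with a random SPD block system standing in for an actual finite-element matrix:

        # Static condensation on a block system
        #   [A_ii A_ib][u_i]   [f_i]
        #   [A_bi A_bb][u_b] = [f_b]
        # interior unknowns u_i are eliminated, leaving a smaller
        # Schur-complement system for the boundary/trace unknowns u_b.
        import numpy as np

        rng = np.random.default_rng(0)
        n_i, n_b = 6, 3
        A = rng.standard_normal((n_i + n_b, n_i + n_b))
        A = A @ A.T + (n_i + n_b) * np.eye(n_i + n_b)  # make it SPD
        f = rng.standard_normal(n_i + n_b)

        A_ii, A_ib = A[:n_i, :n_i], A[:n_i, n_i:]
        A_bi, A_bb = A[n_i:, :n_i], A[n_i:, n_i:]
        f_i, f_b = f[:n_i], f[n_i:]

        S = A_bb - A_bi @ np.linalg.solve(A_ii, A_ib)   # Schur complement
        g = f_b - A_bi @ np.linalg.solve(A_ii, f_i)
        u_b = np.linalg.solve(S, g)                     # small, condensed solve
        u_i = np.linalg.solve(A_ii, f_i - A_ib @ u_b)   # local back-substitution

        assert np.allclose(A @ np.concatenate([u_i, u_b]), f)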

    Syntax-driven program verification of matching logic properties

    No full text
    We describe a novel approach to program verification and its application to the verification of C programs, where properties are expressed in matching logic. The general approach is syntax-directed: semantic rules, expressed as Knuth's attribute grammars, specify how verification conditions can be computed. Evaluation interleaves attribute computation and propagation through the syntax tree with invocations of a solver for logic formulae. The benefit of a general syntax-driven approach is that it provides a reusable reference scheme for implementing verifiers for different languages. We show that instantiating the general approach for a specific language does not penalize the efficiency of the resulting verifier, by comparing our C verifier for matching logic with an existing tool for the same programming language and logic. A further key advantage of the syntax-directed approach is that it can be the starting point for an incremental verifier -- which is our long-term research target.
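
    The syntax-directed flavor can be conveyed with a toy (this is not the paper's C verifier): verification conditions are synthesized bottom-up over an abstract syntax tree, attribute-grammar style, here via weakest preconditions for assignments and sequencing. The resulting formula would be handed to a logic solver; the sketch just prints it.

        # Weakest precondition as a synthesized attribute over a tiny AST.
        # Caveat: naive textual substitution, fine only for this toy.

        def wp(stmt, post):
            kind = stmt[0]
            if kind == "assign":                 # ("assign", var, expr)
                _, var, expr = stmt
                return post.replace(var, f"({expr})")
            if kind == "seq":                    # ("seq", s1, s2)
                _, s1, s2 = stmt
                return wp(s1, wp(s2, post))
            raise ValueError(f"unknown node: {kind}")

        prog = ("seq", ("assign", "x", "x + 1"),
                       ("assign", "y", "x * 2"))
        print(wp(prog, "y > 0"))  # ((x + 1) * 2) > 0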

    Compositional Solution Space Quantification for Probabilistic Software Analysis

    Get PDF
    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions under which the target event is reached and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while in general only approximate solutions can be provided through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain that contain the sought solutions. Preliminary experiments show significant improvements over previous approaches in both accuracy and analysis time.
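
    The gist of combining interval reasoning with sampling can be sketched as follows (a simplification, not the paper's algorithm): a coarse interval check discards subboxes that cannot contain solutions, and sampling is then concentrated on the surviving cells, weighted by their volume. The constraint, domain, and grid size below are invented.

        # Estimate the fraction of [-2,2]^2 satisfying x^2 + y^2 <= 1 by
        # sampling only grid cells a crude interval check cannot rule out.
        import random

        def satisfies(x, y):
            return x * x + y * y <= 1.0

        def box_may_satisfy(xlo, xhi, ylo, yhi):
            # Interval lower bound of x^2 + y^2 over the box (0 if the box
            # straddles an axis) -- a stand-in for real interval propagation.
            lx = 0.0 if xlo <= 0.0 <= xhi else min(xlo * xlo, xhi * xhi)
            ly = 0.0 if ylo <= 0.0 <= yhi else min(ylo * ylo, yhi * yhi)
            return lx + ly <= 1.0

        lo, hi, k, n = -2.0, 2.0, 8, 20000
        w = (hi - lo) / k
        cells = []
        for i in range(k):
            for j in range(k):
                b = (lo + i * w, lo + (i + 1) * w, lo + j * w, lo + (j + 1) * w)
                if box_may_satisfy(*b):
                    cells.append(b)

        hits = 0
        for _ in range(n):
            xlo, xhi, ylo, yhi = random.choice(cells)
            if satisfies(random.uniform(xlo, xhi), random.uniform(ylo, yhi)):
                hits += 1
        est = (hits / n) * (len(cells) * w * w) / (hi - lo) ** 2
        print(est)  # ~ pi/16 = 0.196..., the area fraction of the unit disk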

    Supporting self-adaptation via quantitative verification and sensitivity analysis at run time

    Get PDF
    Modern software-intensive systems often interact with an environment whose behavior changes over time, frequently in unpredictable ways. Such changes may jeopardize the systems' ability to meet the desired requirements. It is therefore desirable to design software so that it can self-adapt to changes with limited, or even no, human intervention. Self-adaptation can be achieved by bringing software models and model checking to run time, to support perpetual automatic reasoning about changes. Once a change is detected, the system itself can predict whether requirements violations may occur and enable appropriate counter-actions. However, existing mainstream model checking techniques and tools were not conceived for run-time usage; hence they hardly meet the constraints imposed by on-the-fly analysis in terms of execution time and memory usage. This paper addresses this issue and focuses on the perpetual satisfaction of non-functional requirements, such as reliability or energy consumption. Its main contribution is a mathematical framework for run-time efficient probabilistic model checking. Our approach statically generates a set of verification conditions that can be efficiently evaluated at run time as soon as changes occur. The proposed approach also supports sensitivity analysis, which enables reasoning about the effects of changes and can drive effective adaptation strategies.
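
    The pre-computation idea can be pictured with the one-retry chain from above (a sketch under invented numbers, not the paper's framework): the reachability probability is reduced offline to a closed-form expression in the changing parameter, so the run-time check, and its derivative for sensitivity analysis, cost only an expression evaluation.

        # Offline: derive a closed form for P(success) of the one-retry
        # chain, (1 - p) + p * (1 - p) = 1 - p**2, and its sensitivity.
        # Online: evaluate both in constant time as monitoring updates p.
        import sympy as sp

        p = sp.symbols("p")
        expr = sp.simplify((1 - p) + p * (1 - p))  # -> 1 - p**2
        sens = sp.diff(expr, p)                    # -> -2*p
        f, df = sp.lambdify(p, expr), sp.lambdify(p, sens)

        for observed in (0.02, 0.10, 0.25):        # monitored failure rates
            print(observed, f(observed), df(observed))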

    Counterexample-Driven Synthesis for Probabilistic Program Sketches

    Full text link
    Probabilistic programs are key to dealing with uncertainty in, e.g., controller synthesis. They are typically small but intricate. Their development is complex and error-prone, requiring quantitative reasoning over a myriad of alternative designs. To mitigate this complexity, we adopt counterexample-guided inductive synthesis (CEGIS) to automatically synthesise finite-state probabilistic programs. Our approach leverages efficient model checking, modern SMT solving, and counterexample generation at the program level. Experiments on practically relevant case studies show that design spaces with millions of candidate designs can be fully explored using a few thousand verification queries.
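
    A heavily simplified rendering of the CEGIS loop (the real approach uses SMT solving and probabilistic model checking; here the synthesizer enumerates, the verifier tests exhaustively, and all names are invented):

        # Find a value for the integer hole h in "return x + h" such that
        # the output is >= 2x for all x in 0..7. Stored counterexamples let
        # the synthesizer discard candidates without calling the verifier.

        def program(h, x):
            return x + h                       # sketch with one hole h

        def spec(x, y):
            return y >= 2 * x                  # required property

        def verify(h):
            for x in range(8):                 # "model check" by exhaustion
                if not spec(x, program(h, x)):
                    return False, x            # counterexample input
            return True, None

        def cegis(holes):
            cexs = []
            for h in holes:
                if any(not spec(x, program(h, x)) for x in cexs):
                    continue                   # pruned by a past counterexample
                ok, cex = verify(h)
                if ok:
                    return h
                cexs.append(cex)
            return None

        print(cegis(range(32)))                # 7: smallest h with h >= x on 0..7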

    Syntactic-semantic incrementality for agile verification

    Get PDF
    Modern software systems are continuously evolving, often because system requirements change over time. Responding to requirements changes is one of the principles of agile methodologies. In this paper we envision the seamless integration of automated verification techniques within agile methodologies, thanks to support for incrementality. Incremental verification accommodates the changes that occur within the frequent-release schedule of agile software processes. We propose a general approach to developing families of verifiers that can support incremental verification for different kinds of artifacts and properties. The proposed syntactic-semantic approach is rooted in operator precedence grammars and their support for incremental parsing. Incremental verification procedures are encoded as attribute grammars, whose incremental evaluation goes hand in hand with incremental parsing.
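
    A toy rendering of the incrementality idea (operator-precedence parsing and the actual attribute schemes are out of scope here): attribute values are memoized per subtree, so a local edit invalidates, and later recomputes, only the path from the changed node to the root.

        # Synthesized attributes with per-node memoization; an edit only
        # dirties the spine from the edited node up to the root.

        class Node:
            def __init__(self, children=(), value=0):
                self.children, self.value = list(children), value
                self.parent, self.cache = None, None
                for c in self.children:
                    c.parent = self

            def attr(self):                    # memoized synthesized attribute
                if self.cache is None:
                    self.cache = self.value + sum(c.attr() for c in self.children)
                return self.cache

            def edit(self, value):             # local change, spine invalidation
                self.value, n = value, self
                while n is not None:
                    n.cache, n = None, n.parent

        leaves = [Node(value=v) for v in (1, 2, 3)]
        root = Node(children=leaves)
        print(root.attr())   # 6, full bottom-up evaluation
        leaves[1].edit(7)    # only leaves[1] and root lose their caches
        print(root.attr())   # 11, the siblings' attributes are reused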

    An iterative decision-making scheme for Markov decision processes and its application to self-adaptive systems

    Get PDF
    Software is often governed by, and thus adapts to, phenomena that occur at runtime. Unlike traditional decision problems, where a decision-making model is fixed in advance, the adaptation logic of such software must work with empirical data and is subject to practical constraints. We present an Iterative Decision-Making Scheme (IDMS) that infers both point and interval estimates for the undetermined transition probabilities in a Markov Decision Process (MDP) based on sampled data, and iteratively computes a confidently optimal scheduler from a given finite subset of schedulers. The most important feature of IDMS is the flexibility to adjust the criterion of confident optimality and the sample size within the iteration, leading to a tradeoff between accuracy, data usage, and computational overhead. We apply IDMS to the existing self-adaptation framework Rainbow and conduct a case study using a Rainbow system to demonstrate the flexibility of IDMS.
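
    One ingredient of such a scheme can be sketched in a few lines (this is not IDMS itself): a point estimate plus a Hoeffding-style confidence interval for an unknown transition probability, tightening as more transitions are sampled. A scheduler could then be declared confidently optimal once its value interval dominates those of its competitors.

        # Point and interval estimates for a transition probability from
        # Bernoulli samples, with a Hoeffding half-width at confidence 1-delta.
        import math, random

        def estimate(samples, delta=0.05):
            n = len(samples)
            p_hat = sum(samples) / n
            eps = math.sqrt(math.log(2 / delta) / (2 * n))
            return p_hat, max(0.0, p_hat - eps), min(1.0, p_hat + eps)

        random.seed(1)
        true_p = 0.3
        for n in (50, 500, 5000):              # more data -> tighter interval
            samples = [random.random() < true_p for _ in range(n)]
            print(n, estimate(samples))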

    Influencers' “organic” persuasion through electronic word of mouth: a case of sincerity over brains and beauty

    Get PDF
    Despite the growing importance of influencers’ word-of-mouth through audio-visual content, little is known about its effect on consumers’ brand evaluation, purchase intentions, and decisions. Drawing on Ohanian’s (2010) source credibility framework, we conducted two studies across different influencers, product categories, and respondent genders, and tested the hypotheses using covariance-based and partial least squares structural equation modelling. Study 1 focuses on a mega-influencer of cosmetic and beauty brands and involves predominantly female respondents. The findings show that the influencer’s attractiveness affects perceptions of source expertise and source trustworthiness but not brand attitude; source expertise predicts source trustworthiness, and both mediate the effect of source attractiveness on brand attitude. In turn, brand attitude predicts purchase intention and mediates the impact of the source credibility dimensions. Study 2 focuses on various influencers of hedonic products (lifestyle, fashion, beauty). The results confirm the influence of source attractiveness and expertise on source trustworthiness, which ultimately predicts consumer purchase decisions. This study reveals the interdependencies between different source constructs, contributing to source credibility theory. Furthermore, we show that source dimensions that are relevant in the celebrity endorsement literature, such as source attractiveness, do not directly influence consumers’ intentions and decisions in the context of influencers’ eWOM. Finally, the two studies confirm that only influencers perceived as honest and sincere can influence consumers’ purchase decisions.