Discrete-time dynamic modeling for software and services composition as an extension of the Markov chain approach
Discrete Time Markov Chains (DTMCs) and Continuous Time Markov Chains (CTMCs) are often used to model various types of phenomena, such as the behavior of software products. In particular, Markov chains are widely used to describe the possible time-varying behavior of "self-adaptive" software systems, where a transition from one state to another represents an alternative choice at the software code level, taken according to a certain probability distribution. From a control-theoretical standpoint, some of these probabilities can be interpreted as control signals, while others can only be observed. However, the translation between a DTMC or CTMC model and a corresponding first-principles model that can be used to design a control system is not immediate. This paper investigates a possible solution for translating a CTMC model into a dynamic system, with a focus on the control of computing system components. Notice that DTMC models can be translated as well, providing additional information.
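As a minimal sketch of the underlying idea (not the paper's actual model): a DTMC with transition matrix P induces the linear dynamics pi_{k+1} = pi_k P on the state distribution, and treating a selected transition probability as a control input turns this into a controllable dynamic model. The matrix and the controlled probability u below are illustrative assumptions.

```python
import numpy as np

def step(pi, u):
    """One step of the distribution dynamics pi_{k+1} = pi @ P(u).

    u is a transition probability treated as a control signal;
    the remaining entries are fixed, merely observed probabilities.
    """
    P = np.array([
        [1 - u, u,   0.0],   # from state 0: stay, or move to 1 with prob u
        [0.2,   0.5, 0.3],
        [0.0,   0.1, 0.9],
    ])
    return pi @ P

pi = np.array([1.0, 0.0, 0.0])   # start deterministically in state 0
for k in range(50):
    pi = step(pi, u=0.4)
print(pi)  # approaches the stationary distribution induced by u
```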
Concept and System in Kant. Transcendental Imagination and the Unity of Reason.
The thesis addresses the themes of the transcendental imagination within the two editions of the transcendental deduction of the pure concepts of the understanding, and with reference to the chapter on schematism in Kant's Critique of Pure Reason. The aim of the work is to grasp the different functions of the imagination and, on that basis, to assess the overall theoretical balance of the deduction as a moment in which the systematic project of Reason is realized. By thinking of schematism as a systematic requirement, we shall see to what extent the imagination plays a role in determining the unity of Reason in its very articulation into faculties.
Iterative test suites refinement for elastic computing systems
Elastic computing systems can dynamically scale to continuously and cost-effectively provide their required Quality of Service in the face of time-varying workloads, and they are usually implemented in the cloud. Despite their widespread adoption by industry, a formal definition of elasticity and suitable procedures for its assessment and verification are still missing. Both academia and industry are trying to adapt established testing procedures for functional and non-functional properties, with limited effectiveness with respect to elasticity. In this paper we propose a new methodology to automatically generate test suites for testing the elastic properties of systems. Elasticity, plasticity, and oscillations are first formalized through a convenient behavioral abstraction of the elastic system and then used to drive an iterative test suite refinement process. The outcomes of our approach are a test suite tailored to the violation of elasticity properties and a human-readable abstraction of the system behavior to further support diagnosis and fixing.
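As a rough illustration of what one of the formalized properties could look like operationally (the window size and traces are invented for the example, not taken from the paper): an oscillation check flags a scaling trace in which the system scales up and back down within a short window.

```python
def oscillates(allocations, window=4):
    """Flag an oscillation: within a sliding window, the number of
    allocated instances both increases and decreases, i.e. the system
    scales up and back down in quick succession.

    `allocations` is a time-ordered list of instance counts; the
    window length is an illustrative parameter.
    """
    for i in range(len(allocations) - window + 1):
        w = allocations[i:i + window]
        diffs = [b - a for a, b in zip(w, w[1:])]
        if any(d > 0 for d in diffs) and any(d < 0 for d in diffs):
            return True
    return False

print(oscillates([1, 2, 3, 3, 3]))   # False: monotone scale-up
print(oscillates([1, 3, 1, 3, 1]))   # True: thrashing behavior
```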
On the probabilistic symbolic analysis of programs
Recently we have proposed symbolic execution techniques for the probabilistic analysis of programs. These techniques seek to quantify the probability that a program satisfies a property of interest under a relevant usage profile. We describe recent advances in probabilistic symbolic analysis, including the handling of complex floating-point constraints and nondeterminism, and the use of statistical techniques for increased scalability.
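To make the core idea concrete: for integer inputs over a bounded domain, the probability of following a symbolic path can be computed by counting the inputs that satisfy its path condition, weighted by the usage profile. A toy sketch, where the path condition, the domain bounds, and the uniform profile are all assumptions for illustration:

```python
from itertools import product

def path_condition(x, y):
    # Path condition of a hypothetical program path of interest.
    return x + y > 15 and x < 10

DOMAIN = range(0, 11)  # assumed bounded input domain, uniform usage profile

hits = sum(1 for x, y in product(DOMAIN, DOMAIN) if path_condition(x, y))
total = len(DOMAIN) ** 2
print(f"P(path) = {hits}/{total} = {hits / total:.3f}")
```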
The how and why of consumers' co-creation: evidence from the Fiat 500 case study
In the current business environment, firms need to continuously renew their products to maintain their competitive advantage. Customers' needs are increasingly atomized and change at an unprecedented pace. Thus, some companies have started to tap consumer knowledge over the internet. These companies are using the web and social media to build virtual spaces to connect with actual and potential customers in order to engage them at different stages of the new product development process. In such virtual spaces, consumers spontaneously decide to contribute their knowledge, ideas and activities to a company's co-creation initiatives. In marketing research, there is a dearth of studies on how companies involve ordinary consumers in their innovation processes through internet applications. Moreover, few studies have investigated the implications of co-creation activities for innovation outputs and brand image. This study therefore adopts a single case study approach and explores the how and why of co-creation in the Fiat 500 open innovation project ('500 wants you'), and the results achieved in terms of the innovation generated and the impact on the corporate brand image.
Statistical Symbolic Execution with Informed Sampling
Symbolic execution techniques have recently been proposed for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but suffer from scalability issues due to their high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can still give meaningful results under the same time and memory limits.
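A minimal sketch of the statistical core, assuming a Bernoulli "path hits the target event" experiment with a Beta prior (the sampler and numbers are illustrative, not the Symbolic PathFinder implementation):

```python
import random

random.seed(0)
alpha, beta = 1.0, 1.0   # uniform Beta(1,1) prior on the hit probability
p_true = 0.12            # unknown probability of reaching the target event

for _ in range(1000):    # Monte Carlo sampling of (abstracted) paths
    hit = random.random() < p_true
    alpha += hit         # Bayesian update of the Beta posterior
    beta += 1 - hit

mean = alpha / (alpha + beta)
print(f"posterior mean = {mean:.3f}")  # estimate of P(target event)
```

Informed sampling would additionally prune high-probability paths once they have been analyzed exactly, combining their exact probability mass with the posterior estimate for the remaining paths.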
A formal approach to adaptive software: continuous assurance of non-functional requirements
Modern software systems are increasingly required to adapt to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. Non-functional requirements, such as reliability and performance, heavily depend on the environment in which the application is embedded. Thus, changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.
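For instance, when the composition is modeled as an absorbing DTMC with "success" and "failure" states, reliability is the probability of eventually reaching "success", and it can be recomputed at run time whenever monitored transition probabilities change. A small sketch under those assumptions (the matrix entries are invented):

```python
import numpy as np

# Transient states 0..2; absorbing states: success (3), failure (4).
# Entries are monitored/estimated transition probabilities (illustrative).
P = np.array([
    [0.0, 0.7, 0.3, 0.0,  0.0],
    [0.0, 0.0, 0.0, 0.95, 0.05],
    [0.0, 0.0, 0.0, 0.90, 0.10],
])
Q = P[:, :3]   # transient-to-transient block
R = P[:, 3:]   # transient-to-absorbing block

# Absorption probabilities: B = (I - Q)^{-1} R
B = np.linalg.solve(np.eye(3) - Q, R)
print(f"reliability from initial state: {B[0, 0]:.4f}")  # P(reach success)
```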
Syntax-driven program verification of matching logic properties
We describe a novel approach to program verification and its application to the verification of C programs, where properties are expressed in matching logic. The general approach is syntax-directed: semantic rules, expressed according to Knuth's attribute grammars, specify how verification conditions can be computed. Evaluation is performed by interleaving attribute computation and propagation through the syntax tree with invocations of a solver for logic formulae. The benefit of a general syntax-driven approach is that it provides a reusable reference scheme for implementing verifiers for different languages. We show that instantiating the general approach for a specific language does not penalize the efficiency of the resulting verifier. This is done by comparing our C verifier for matching logic with an existing tool for the same programming language and logic. A further key advantage of the syntax-directed approach is that it can be the starting point for an incremental verifier, which is our long-term research target.
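To illustrate the syntax-directed flavor (not the actual tool or matching logic): verification conditions can be synthesized bottom-up as attributes of the syntax tree, here by propagating a weakest-precondition-style substitution through assignments. The tiny AST encoding and rules are assumptions for the example.

```python
# Each node synthesizes a verification condition as an attribute,
# computed bottom-up in the style of Knuth's attribute grammars.

def wp(stmt, post):
    """Weakest precondition of `post` (a string formula) w.r.t. `stmt`.
    Naive textual substitution; adequate only for this toy example."""
    kind = stmt[0]
    if kind == "assign":                 # ("assign", var, expr)
        _, var, expr = stmt
        return post.replace(var, f"({expr})")
    if kind == "seq":                    # ("seq", s1, s2)
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    raise ValueError(kind)

# {x >= 0}  y := x + 1; z := y * 2  {z >= 2} ?
prog = ("seq", ("assign", "y", "x + 1"),
               ("assign", "z", "y * 2"))
vc = f"x >= 0 ==> {wp(prog, 'z >= 2')}"
print(vc)  # hand this formula to a solver of logic formulae
```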
Lightweight adaptive filtering for efficient learning and updating of probabilistic models
Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. For these systems, reasoning mechanisms are needed to drive evolution, usually based on models capturing relevant aspects of the running software. The continuous update of these models in evolving environments requires efficient learning procedures that have low overhead and are robust to changes. Most of the available approaches achieve one of these goals at the price of the other. In this paper we propose a lightweight adaptive filter to accurately learn the time-varying transition probabilities of discrete-time Markov models, which provides robustness to noise and fast adaptation to changes with very low overhead. A formal assessment of the stability, unbiasedness, and consistency of the learning approach is provided, as well as an experimental comparison with state-of-the-art alternatives.
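A rough sketch of the kind of estimator involved, assuming a fixed-gain exponential forgetting filter; the paper's filter adapts its gain, whereas the gain value and the synthetic observation stream here are illustrative.

```python
import random

random.seed(1)

def learn(observations, gain=0.05):
    """Exponential-forgetting estimate of a time-varying transition
    probability from a stream of 0/1 transition observations.
    A fixed gain is used for simplicity; an adaptive gain would
    balance noise rejection against tracking speed.
    """
    p_hat = 0.5                      # uninformative initial estimate
    for obs in observations:
        p_hat += gain * (obs - p_hat)
    return p_hat

# True probability jumps from 0.2 to 0.8 halfway through the stream.
stream = [random.random() < 0.2 for _ in range(500)] + \
         [random.random() < 0.8 for _ in range(500)]
print(f"estimate after change: {learn(stream):.2f}")  # tracks ~0.8
```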
