
    Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: a Goal-Oriented Approach

    Goals are first-class entities in a self-adaptive system (SAS), as they guide the self-adaptation. A SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Moreover, besides the environment, other classes of uncertainty have been identified. However, these various classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. In general, uncertainty makes it infeasible to provide assurances for SAS goals exclusively at design time. This calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide the synthesis of adaptation policies by engineers. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising: they show that our approach is able to systematically tame multiple classes of uncertainty, and that it is effective and efficient in providing assurances for the goals of self-adaptive systems.
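
    To make the runtime step concrete, here is a minimal Python sketch (using sympy, with hypothetical parameter names and a hypothetical threshold; the paper's actual formulae come from symbolic model checking of the BSN goal model): a parametric reliability formula fixed at design time is evaluated against monitored probabilities at runtime to decide whether to adapt.

    import sympy as sp

    # Symbolic uncertainty parameters, e.g. success probabilities of a
    # sensing task and a transmission task (hypothetical stand-ins for the
    # parameterized uncertainties in the generated formulae).
    p_sense, p_send = sp.symbols("p_sense p_send", positive=True)

    # Design time: a closed-form reliability formula for a sense-then-send
    # goal, of the kind a parametric symbolic model checker would emit.
    reliability = p_sense * p_send

    RELIABILITY_GOAL = 0.9  # threshold taken from the goal model (assumed)

    def evaluate_and_adapt(monitored):
        """Runtime step: substitute monitored values to resolve the
        uncertainty, then trigger a policy if the goal is violated."""
        r = float(reliability.subs(monitored))
        if r >= RELIABILITY_GOAL:
            return f"reliability={r:.3f}: goal satisfied, no adaptation"
        return f"reliability={r:.3f}: goal violated, apply adaptation policy"

    print(evaluate_and_adapt({p_sense: 0.97, p_send: 0.95}))  # satisfied
    print(evaluate_and_adapt({p_sense: 0.85, p_send: 0.90}))  # violated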

    Part 3: Systemic risk in ecology and engineering

    The Federal Reserve Bank of New York released a report, New Directions for Understanding Systemic Risk, that presents key findings from a cross-disciplinary conference it cosponsored in May 2006 with the National Academy of Sciences' Board on Mathematical Sciences and Their Applications.

    The pace of financial innovation over the past decade has increased the complexity and interconnectedness of the financial system. This development is important to central banks, such as the Federal Reserve, because of their traditional role in addressing systemic risks to the financial system.

    To encourage innovative thinking about systemic issues, the New York Fed partnered with the National Academy of Sciences to bring together more than 100 experts on systemic risk from 22 countries to compare cross-disciplinary perspectives on monitoring, addressing, and preventing this type of risk.

    This report, released as part of the Bank's Economic Policy Review series, outlines some of the key points concerning systemic risk made by the various disciplines represented, including economic research, ecology, physics, and engineering, as well as presentations on market-oriented models of financial crises and on systemic risk in the payments system and the interbank funds market. The report concludes with observations gathered from the sessions and a discussion of potential applications to policy.

    The three papers presented in this conference session highlighted the positive feedback effects that produce herdlike behavior in markets, and the subsequent discussion focused in part on means of encouraging heterogeneous investment strategies to counter such behavior. Participants in the session also discussed the types of models used to study systemic risk and commented on the challenges and trade-offs researchers face in developing their models.

    Keywords: financial risk management; financial markets; financial stability; financial crises

    Dynamic decision networks for decision-making in self-adaptive systems: a case study

    Bayesian decision theory is increasingly applied to support decision-making processes under environmental variability and uncertainty. Researchers from application areas like psychology and biomedicine have applied these techniques successfully. However, in the area of software engineering, and specifically in the area of self-adaptive systems (SASs), little progress has been made in the application of Bayesian decision theory. We believe that techniques based on Bayesian Networks (BNs) are useful for systems that dynamically adapt themselves at runtime to a changing environment, which is usually uncertain. In this paper, we make the case for the use of BNs, specifically Dynamic Decision Networks (DDNs), to support the decision-making of self-adaptive systems. We show how such a probabilistic model can be used to support decision-making in SASs and justify its applicability. We have applied our DDN-based approach to the case of an adaptive remote data mirroring system. We discuss the results, implications, and potential benefits of the DDN in enhancing the development and operation of self-adaptive systems, by providing mechanisms to cope with uncertainty and automatically make the best decision.
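
    As a minimal illustration of the decision step a DDN supports (all probabilities and utilities below are hypothetical, not taken from the paper's case study), the following Python sketch selects the adaptation action with the highest expected utility given a belief about the environment of a remote data mirroring system:

    # Belief state over the (partially observable) environment.
    belief = {"link_ok": 0.7, "link_degraded": 0.3}

    # Utility of each (action, state) pair: synchronous mirroring protects
    # data best but degrades badly on a poor link (illustrative numbers).
    utility = {
        ("sync_mirror", "link_ok"): 8.0,
        ("sync_mirror", "link_degraded"): 2.0,
        ("async_mirror", "link_ok"): 6.0,
        ("async_mirror", "link_degraded"): 5.0,
    }

    def expected_utility(action):
        return sum(p * utility[(action, state)] for state, p in belief.items())

    actions = {a for a, _ in utility}
    best = max(actions, key=expected_utility)
    print(f"chosen action: {best} (EU = {expected_utility(best):.2f})")

    In a full DDN, the belief itself would be updated at each time slice from new observations before this maximization is repeated.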

    State of the art of learning styles-based adaptive educational hypermedia systems (LS-BAEHSs)

    The notion that learning can be enhanced when a teaching approach matches a learner's learning style has been widely accepted in classroom settings, since the latter is a predictor of students' attitudes and preferences. As such, the traditional 'one-size-fits-all' approach to teaching delivery in Educational Hypermedia Systems (EHSs) has to be replaced with an approach that responds to users' needs by exploiting their individual differences. However, establishing and implementing reliable approaches for matching teaching delivery and modalities to learning styles still represents an innovation challenge to be tackled. In this paper, seventy-six studies are objectively analysed with several goals in mind. First, to reveal the value of integrating learning styles in EHSs, different perspectives in this context are discussed. A second goal is to identify the most effective learning style models as incorporated within adaptive EHSs (AEHSs). A third is to investigate the effectiveness of different approaches for modelling students' individual learning traits. The paper thus highlights a number of theoretical and technical issues of LS-BAEHSs, to serve as comprehensive guidance for researchers interested in this area.

    Confronting input, parameter, structural, and measurement uncertainty in multi-site multiple-response watershed modeling using Bayesian inferences

    Simulation modeling is arguably one of the most powerful scientific tools available to address questions, assess alternatives, and support decision making for environmental management. Watershed models are used to describe and understand hydrologic and water quality responses of land and water systems under prevailing and projected conditions. Since the promulgation of the Clean Water Act of 1972 in the United States, models have been increasingly used to evaluate potential impacts of mitigation strategies and to support policy instruments for pollution control, such as the Total Maximum Daily Load (TMDL) program. Generation, fate, and transport of water and contaminants within watershed systems comprise a highly complex network of interactions. It is difficult, if not impossible, to capture all important processes within a modeling framework. Although critical natural processes and management actions can be resolved at varying spatial and temporal scales, simulation models will always remain an approximation of the real system. As a result, using models with limited knowledge of the system and model structure is fraught with uncertainty, and environmental decisions drawn from model applications must account for factors that could undermine the credibility of model outcomes.

    The main goal of this study is to develop a novel Bayesian computational framework for characterizing and incorporating uncertainties from forcing inputs, model parameters, model structures, and measured responses in the parameter estimation process for multi-site multiple-response watershed modeling. Specifically, the following objectives are defined: (i) to evaluate the effectiveness and efficiency of different computational strategies in sampling the model parameter space; (ii) to examine the role of measured responses at various locations in the stream network, as well as intra-watershed processes, in enhancing the credibility of model performance; (iii) to facilitate combining predictions from competing model structures; and (iv) to develop a statistically rigorous procedure for incorporating errors from input, parameter, structural, and measurement sources in the parameter estimation process.

    The proposed framework was applied to simulating streamflow and total nitrogen at multiple locations within a 248 square kilometer watershed in the Midwestern United States using the Soil and Water Assessment Tool (SWAT). Results underlined the importance of treating all sources of uncertainty simultaneously during parameter estimation. In particular, it became evident that incorporating input uncertainties was critical for determining the model structure for runoff generation and for representing intra-watershed processes such as the denitrification rate and the dominant pathways for nitrate transport within the system. The computational framework developed in this study can be implemented to establish credibility for modeling watershed processes. More importantly, the framework can reveal how collecting data from different responses at different locations within a watershed system of interest would enhance the predictive capability of watershed models by reducing input, parametric, structural, and measurement uncertainties.
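
    The following toy Metropolis sampler (all model structure and numbers are invented for illustration; the study itself uses SWAT and a far richer error model) sketches the core idea of sampling input, parameter, and measurement uncertainties jointly: a rainfall multiplier represents input error, a runoff coefficient is the model parameter, and a noise scale represents measurement error.

    import numpy as np

    rng = np.random.default_rng(0)
    rain = rng.gamma(2.0, 5.0, size=100)              # rainfall forcing
    obs = 0.4 * 1.1 * rain + rng.normal(0, 1.0, 100)  # noisy "observed" flow

    def log_post(theta):
        """Unnormalized log-posterior with flat priors on bounded supports."""
        coeff, mult, sigma = theta  # runoff coeff, input multiplier, noise sd
        if not (0 < coeff < 1 and 0.5 < mult < 2 and sigma > 0):
            return -np.inf
        resid = obs - coeff * mult * rain  # input error enters the forcing
        return -0.5 * np.sum(resid**2) / sigma**2 - len(obs) * np.log(sigma)

    theta = np.array([0.5, 1.0, 2.0])
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(0, [0.02, 0.02, 0.1])  # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        samples.append(theta)
    print(np.mean(samples[1000:], axis=0))  # posterior means after burn-in

    Note that in this toy model the runoff coefficient and the input multiplier are identified only through their product, which illustrates why ignoring input uncertainty can bias parameter estimates.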

    A Review and Characterization of Progressive Visual Analytics

    Progressive Visual Analytics (PVA) has gained increasing attention over the past years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result, even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms and definitions, as well as long lists of practical requirements and design guidelines spread across different scientific communities. This makes it increasingly difficult to get a succinct understanding of PVA's principal concepts, let alone an overview of this diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, and (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions.
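
    The essence of this idea can be shown in a few lines of Python (a minimal sketch; the chunked running mean below merely stands in for any expensive analytic computation): a generator emits an intermediate partial result after each chunk, which a visualization front end could render and let the user act on while the computation continues.

    import random

    def progressive_mean(data, chunk_size=10_000):
        """Yield (fraction_done, running_estimate) after every chunk."""
        total, count = 0.0, 0
        for start in range(0, len(data), chunk_size):
            chunk = data[start:start + chunk_size]
            total += sum(chunk)
            count += len(chunk)
            yield count / len(data), total / count  # partial result for the UI

    data = [random.gauss(5.0, 2.0) for _ in range(100_000)]
    for done, estimate in progressive_mean(data):
        # In a PVA system this is where the view would update and the user
        # could steer or cancel the computation early.
        print(f"{done:5.0%} processed, estimate so far: {estimate:.4f}")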