28,418 research outputs found

    Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: a Goal-Oriented Approach

    Full text link
    Goals are first-class entities in a self-adaptive system (SAS) as they guide the self-adaptation. A SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Moreover, besides the environment, other classes of uncertainty have been identified. However, these various classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. In general, uncertainty typically makes it infeasible to provide assurances for SAS goals exclusively at design time. This calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide the synthesis of adaptation policies by engineers. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising and show that our approach is able to systematically tame multiple classes of uncertainty, and that it is effective and efficient in providing assurances for the goals of self-adaptive systems.
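    The following Python sketch (not the authors' implementation) illustrates the runtime step described above: a parametric reliability formula, of the kind produced offline by symbolic model checking, is evaluated with monitored parameter values and compared against a goal threshold to select an adaptation policy. The serial-chain formula, the parameter names, and the policy rule are assumptions made for illustration.

```python
# Minimal sketch, assuming a hypothetical parametric reliability formula of the
# kind produced offline by (parametric) symbolic model checking. Not the
# authors' implementation; formula shape, parameter names, and the policy rule
# are illustrative.

def reliability(p_sensor: float, p_link: float, p_hub: float) -> float:
    """Reliability of an assumed serial sensor -> link -> hub chain."""
    return p_sensor * p_link * p_hub

def choose_policy(monitored: dict, goal: float = 0.9) -> str:
    """Resolve the parameterized uncertainty with monitored values and pick a policy."""
    r = reliability(monitored["p_sensor"], monitored["p_link"], monitored["p_hub"])
    if r >= goal:
        return "keep-current-configuration"
    # Illustrative adaptation: trade cost for reliability when the goal is violated.
    return "activate-redundant-sensor"

if __name__ == "__main__":
    runtime_observation = {"p_sensor": 0.98, "p_link": 0.97, "p_hub": 0.99}
    print(choose_policy(runtime_observation))  # -> keep-current-configuration
```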

    Forensic science evidence in question

    Get PDF
    How should forensic scientists and other expert witnesses present their evidence in court? What kinds and quality of data can experts properly draw on in formulating their conclusions? In an important recent decision in R. v T the Court of Appeal revisited these perennial questions, with the complicating twist that the evidence in question incorporated quantified probabilities, not all of which were based on statistical data. Recalling the sceptical tenor of previous judgments addressing the role of probability in the evaluation of scientific evidence, the Court of Appeal in R. v T condemned the expert’s methodology and served notice that it should not be repeated in future, a ruling which rapidly reverberated around the forensic science community causing consternation, and even dismay, amongst many seasoned practitioners. At such moments of perceived crisis it is essential to retain a sense of perspective. There is, in fact, much to welcome in the Court of Appeal’s judgment in R. v T, starting with the court’s commendable determination to subject the quality of expert evidence adduced in criminal litigation to searching scrutiny. English courts have not consistently risen to this challenge, sometimes accepting rather too easily the validity of questionable scientific techniques. However, the Court of Appeal’s reasoning in R. v T is not always easy to follow, and there are certain passages in the judgment which, taken out of context, might even appear to confirm forensic scientists’ worst fears. This article offers a constructive reading of R. v T, emphasising its positive features whilst rejecting interpretations which threaten, despite the Court of Appeal’s best intentions, to diminish the integrity of scientific evidence adduced in English criminal trials and distort its probative value.

    Rehabilitating Statistical Evidence

    Get PDF
    Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument (endorsed by Blome-Tillmann 2017; Smith 2018; and Littlejohn 2018) rejects the use of such evidence by appealing to epistemic norms that apply to individual inquirers. My aim in this paper is to rehabilitate purely statistical evidence by arguing that, given the broader aims of legal systems, there are scenarios in which relying on such evidence is appropriate. Along the way I explain why popular arguments appealing to individual epistemic norms to reject legal reliance on bare statistics are unconvincing, by showing that courts and individuals face different epistemic predicaments (in short, individuals can hedge when confronted with statistical evidence, whilst legal tribunals cannot). I also correct some misconceptions about legal practice that have found their way into the recent literature.

    Convex Optimization for Binary Classifier Aggregation in Multiclass Problems

    Full text link
    Multiclass problems are often decomposed into multiple binary problems that are solved by individual binary classifiers whose results are integrated into a final answer. Various methods for decomposing multiclass problems into binary problems have been studied, including all-pairs (AP), one-versus-all (OVA), and error-correcting output codes (ECOC). However, little work has addressed how to optimally aggregate the binary classifiers to determine a final answer to the multiclass problem. In this paper we present a convex optimization method for an optimal aggregation of binary classifiers to estimate class membership probabilities in multiclass problems. We model the class membership probability as a softmax function that takes as input a conic combination of the discrepancies induced by the individual binary classifiers. With this model, we formulate the regularized maximum likelihood estimation as a convex optimization problem, which is solved by the primal-dual interior point method. Connections of our method to large margin classifiers are presented, showing that the large margin formulation can be considered as a limiting case of our convex formulation. Numerical experiments on synthetic and real-world data sets demonstrate that our method outperforms existing aggregation methods as well as direct methods, in terms of the classification accuracy and the quality of class membership probability estimates. Comment: Appeared in Proceedings of the 2014 SIAM International Conference on Data Mining (SDM 2014).
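    To make the aggregation model concrete, here is a minimal Python sketch of the idea under simplifying assumptions: per-class discrepancies are taken to be hinge-style disagreements between one-vs-all code entries and binary margins, the conic combination uses nonnegative weights, and class probabilities are a softmax of the negated weighted discrepancies. The regularized maximum likelihood problem is solved with an off-the-shelf bound-constrained solver rather than the primal-dual interior-point method used in the paper; the discrepancy choice and all names are illustrative, not the authors' code.

```python
# Sketch of convex aggregation of binary classifiers into class probabilities.
import numpy as np
from scipy.optimize import minimize

def discrepancies(F, M):
    """F: (n, J) binary margins, M: (K, J) code matrix in {-1, 0, +1}.
    D[i, k, j] = max(0, 1 - M[k, j] * F[i, j]) (assumed hinge discrepancy)."""
    return np.maximum(0.0, 1.0 - M[None, :, :] * F[:, None, :])

def class_probs(w, D):
    scores = -np.einsum("ikj,j->ik", D, w)       # conic combination, w >= 0
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(scores)
    return P / P.sum(axis=1, keepdims=True)

def neg_log_likelihood(w, D, y, lam=1e-2):
    P = class_probs(w, D)
    n = len(y)
    return -np.log(P[np.arange(n), y] + 1e-12).mean() + lam * np.dot(w, w)

def fit_weights(F, y, M, lam=1e-2):
    """Regularized MLE with nonnegativity constraints on the weights."""
    D = discrepancies(F, M)
    J = F.shape[1]
    res = minimize(neg_log_likelihood, x0=np.ones(J), args=(D, y, lam),
                   bounds=[(0.0, None)] * J, method="L-BFGS-B")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, J, n = 3, 3, 200
    M = -np.ones((K, J)); M[np.arange(K), np.arange(J)] = 1.0  # one-vs-all codes
    y = rng.integers(0, K, size=n)
    F = M[y] + 0.5 * rng.standard_normal((n, J))               # noisy margins
    w = fit_weights(F, y, M)
    acc = (class_probs(w, discrepancies(F, M)).argmax(axis=1) == y).mean()
    print("weights:", np.round(w, 3), "train accuracy:", acc)
```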

    The Political Economy of Corruption and the Role of Financial Institutions

    Get PDF
    In many developing countries, we observe rather high levels of corruption. This is surprising from a political economy perspective, as the majority of people generally suffer from high corruption levels. We explain why citizens do not exert enough political pressure to reduce corruption if financial institutions are missing. Our model is based on the fact that corrupt officials have to pay entry fees to get lucrative positions. The mode of financing this entry fee determines the distribution of the rents from corruption. In a probabilistic voting model, we show that a lack of financial institutions can lead to more corruption as more voters are part of the corrupt system. Thus, the economic system has an effect on political outcomes. Well-functioning financial institutions, in turn, can increase the political support for anti-corruption measures.

    Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints

    Full text link
    This paper presents a stochastic model predictive control approach for nonlinear systems subject to time-invariant probabilistic uncertainties in model parameters and initial conditions. The stochastic optimal control problem entails a cost function in terms of expected values and higher moments of the states, and chance constraints that ensure probabilistic constraint satisfaction. The generalized polynomial chaos framework is used to propagate the time-invariant stochastic uncertainties through the nonlinear system dynamics, and to efficiently sample from the probability densities of the states to approximate the satisfaction probability of the chance constraints. To increase computational efficiency by avoiding excessive sampling, a statistical analysis is proposed to systematically determine a priori the least conservative constraint tightening required at a given sample size to guarantee a desired feasibility probability of the sample-approximated chance-constrained optimization problem. In addition, a method is presented for sample-based approximation of the analytic gradients of the chance constraints, which increases the optimization efficiency significantly. The proposed stochastic nonlinear model predictive control approach is applicable to a broad class of nonlinear systems, with the sufficient condition that each term is analytic with respect to the states and separable with respect to the inputs, states, and parameters. The closed-loop performance of the proposed approach is evaluated using the Williams-Otto reactor with seven states and ten uncertain parameters and initial conditions. The results demonstrate the efficiency of the approach for real-time stochastic model predictive control and its capability to systematically account for probabilistic uncertainties, in contrast to a nonlinear model predictive control approach. Comment: Submitted to Journal of Process Control.
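    As an illustration of the sample-approximation idea (not the paper's Williams-Otto case study), the Python sketch below draws parameter samples, propagates them through toy one-step nonlinear dynamics as a stand-in for the polynomial chaos surrogate, and enforces a tightened, sample-approximated chance constraint when selecting an input. The dynamics, the risk level, and the back-off value are assumptions made for illustration.

```python
# Minimal sketch: sample-based approximation of a chance constraint
# P[g(x_next) <= 0] >= 1 - eps for a toy scalar nonlinear system
# x_next = x + (a*x - b*x**2 + u)*dt with uncertain parameters a, b.
# The samples stand in for draws from a polynomial chaos surrogate of the
# state densities; "delta" mimics an a priori constraint tightening.
import numpy as np

rng = np.random.default_rng(1)
N = 2000                                   # sample size
a = rng.normal(1.0, 0.05, size=N)          # uncertain parameters
b = rng.normal(0.2, 0.02, size=N)
x0, dt, eps, delta = 1.0, 0.1, 0.05, 0.02  # state, step, risk level, back-off

def next_state(u, a, b, x=x0, dt=dt):
    """One step of the toy nonlinear dynamics under each parameter sample."""
    return x + (a * x - b * x**2 + u) * dt

def chance_constraint_ok(u, x_max=1.2):
    """Tightened, sample-approximated chance constraint:
    empirical P[x_next <= x_max] must be at least 1 - eps + delta."""
    x_next = next_state(u, a, b)
    return np.mean(x_next <= x_max) >= 1.0 - eps + delta

# Pick the largest input on a grid that keeps the tightened constraint feasible
# (a stand-in for the chance-constrained OCP solved at each MPC step).
candidates = np.linspace(0.0, 2.0, 201)
feasible = [u for u in candidates if chance_constraint_ok(u)]
print("largest admissible input:", max(feasible) if feasible else None)
```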