
    Constraint-based Modelling of Organisations

    Modern organisations are characterised by a great variety of forms and often involve many actors with diverse goals, performing a wide range of tasks in changing environmental conditions. Due to this high complexity, mistakes and inconsistencies are not rare in organisations. To provide better insight into organisational operation and to identify different types of organisational problems, an explicit specification of the relations and rules on which the structure and behaviour of an organisation are based is required. Before it is used, the specification of an organisation should be checked for internal consistency and for validity with respect to the domain. To this end, the paper introduces a framework for the formal specification of constraints that ensure the correctness of organisational specifications. To verify the satisfaction of these constraints, efficient and scalable algorithms have been developed and implemented. The application of the proposed approach is illustrated by a case study from the air traffic domain.
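
    As a loose illustration of the kind of constraint checking the abstract describes, the sketch below treats an organisational specification as plain data and verifies a couple of structural constraints over it. The roles, agents, relations, and constraints are hypothetical examples, not the paper's formal framework.

```python
# A minimal sketch, assuming a toy organisational specification; not the
# paper's constraint language or verification algorithms.
spec = {
    "roles": {"controller", "supervisor", "pilot"},
    "agents": {"anna": {"controller"}, "ben": {"supervisor", "controller"}},
    "reports_to": {("controller", "supervisor")},
}

def every_role_is_assigned(spec):
    """Each declared role is allocated to at least one agent."""
    assigned = set().union(*spec["agents"].values())
    return spec["roles"] <= assigned

def reporting_uses_declared_roles(spec):
    """Authority relations only mention roles that exist in the specification."""
    return all(a in spec["roles"] and b in spec["roles"]
               for a, b in spec["reports_to"])

violations = [name for name, check in
              [("every_role_is_assigned", every_role_is_assigned),
               ("reporting_uses_declared_roles", reporting_uses_declared_roles)]
              if not check(spec)]
print(violations or "specification is internally consistent")
# Here the unassigned 'pilot' role is reported as a violation.
```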

    Specification Reformulation During Specification Validation

    The goal of the ARIES Simulation Component (ASC) is to uncover behavioral errors by 'running' a specification at the earliest possible points during the specification development process. The problems to be overcome are the obvious ones: the specification may be large, incomplete, underconstrained, and/or uncompilable. This paper describes how specification reformulation is used to mitigate these problems. ASC begins by decomposing validation into specific validation questions. Next, the specification is reformulated to abstract out all features unrelated to the identified validation question, thus creating a new specialized specification. ASC relies on a precise statement of the validation question and a careful application of transformations so as to preserve the essential specification semantics in the resulting specialized specification. This technique is a win if the resulting specialized specification is small enough that the user may easily handle any remaining obstacles to execution. This paper will: (1) describe what a validation question is; (2) outline analysis techniques for identifying which concepts are and are not relevant to a validation question; and (3) identify and apply transformations that remove the less relevant concepts while preserving those that are relevant.
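
    The following sketch illustrates the reformulation idea in spirit only: given a hypothetical dependency graph between specification concepts, it keeps the concepts a validation question transitively depends on and discards the rest. ASC's actual analyses and semantics-preserving transformations are far richer; the concept names below are invented.

```python
# A minimal sketch, assuming the specification's concepts and their
# dependencies are available as a simple graph; not ASC itself.
from collections import deque

dependencies = {
    "landing_clearance": {"runway_status", "aircraft_position"},
    "runway_status": {"weather"},
    "aircraft_position": set(),
    "weather": set(),
    "catering_schedule": {"crew_roster"},   # unrelated to the question below
    "crew_roster": set(),
}

def slice_specification(deps, question_concepts):
    """Return the sub-specification relevant to the validation question."""
    relevant, frontier = set(question_concepts), deque(question_concepts)
    while frontier:
        for dep in deps.get(frontier.popleft(), ()):
            if dep not in relevant:
                relevant.add(dep)
                frontier.append(dep)
    return {c: deps[c] for c in relevant}

# Validation question: "is a landing clearance ever issued while the runway
# is closed?" -- only clearance-related concepts survive the slice.
print(sorted(slice_specification(dependencies, {"landing_clearance"})))
```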

    Computational Techniques for Stochastic Reachability

    As automated control systems grow in prevalence and complexity, there is an increasing demand for verification and controller synthesis methods to ensure that these systems perform safely and to desired specifications. In addition, uncertain or stochastic behaviors are often exhibited (such as wind affecting the motion of an aircraft), making probabilistic verification desirable. Stochastic reachability analysis provides a formal means of generating the set of initial states that meets a given objective (such as safety or reachability) with a desired level of probability; this set is known as the reachable (or safe) set, depending on the objective. However, the applicability of reachability analysis is limited in the scope and size of the systems it can address. First, generating stochastic reachable or viable sets is computationally intensive, and most existing methods rely on an optimal control formulation that requires solving a dynamic program and scales exponentially in the dimension of the state space. Second, almost no results exist for extending stochastic reachability analysis to systems with incomplete information, in which the controller does not have access to the full state of the system. This thesis addresses both of these limitations and introduces novel computational methods for generating stochastic reachable sets for both perfectly and partially observable systems. We initially consider a linear system with additive Gaussian noise, and introduce two methods for computing stochastic reachable sets that do not require dynamic programming. The first method uses a particle approximation to formulate a deterministic mixed integer linear program that produces an estimate of the reachability probabilities. The second method uses a convex chance-constrained optimization problem to generate an under-approximation of the reachable set. Using these methods we are able to generate stochastic reachable sets for a four-dimensional spacecraft docking example in far less time than a dynamic program would require. We then focus on discrete time stochastic hybrid systems, which provide a flexible modeling framework for systems that exhibit mode-dependent behavior and whose state space has both discrete and continuous components. We incorporate a stochastic observation process into the hybrid system model, and derive both theoretical and computational results for generating stochastic reachable sets subject to an observation process. The derivation of an information state allows us to recast the problem as one of perfect information, and we prove that solving a dynamic program over the information state is equivalent to solving the original problem. We then demonstrate that the dynamic program for the reachability problem of a partially observable stochastic hybrid system shares the same properties as that of a partially observable Markov decision process (POMDP) with an additive cost function, so we can exploit approximation strategies designed for POMDPs to solve the reachability problem. To do so, however, we first generate approximate representations of the information state and value function as either vectors or Gaussian mixtures, through a finite state approximation to the hybrid system or using a Gaussian mixture approximation to an indicator function defined over a convex region. For a system with linear dynamics and Gaussian measurement noise, we show that the system exhibits special properties that make an approximation of the information state unnecessary, which enables much more efficient computation of the reachable set. In all cases we provide convergence results and numerical examples.
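
    As a rough illustration of the particle idea mentioned above (without the mixed integer program or any controller synthesis), the sketch below uses a Monte Carlo particle approximation to estimate the probability that a linear system with additive Gaussian noise and an assumed fixed feedback gain stays inside a box-shaped safe set over a finite horizon. All matrices, gains, and bounds are illustrative assumptions, not values from the thesis.

```python
# A minimal particle-based sketch of a safety-probability estimate for a
# linear system with additive Gaussian noise; not the thesis's MILP method.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # assumed double-integrator-like dynamics
B = np.array([[0.005], [0.1]])
K = np.array([[-1.0, -1.8]])      # assumed stabilising feedback u = K x
sigma = 0.01                       # std dev of additive Gaussian noise
horizon = 20
safe_lo, safe_hi = -1.0, 1.0       # safe set: each state component in [-1, 1]

def stays_safe(x0, n_particles=5000):
    """Fraction of noise realisations (particles) whose trajectory stays safe."""
    x = np.tile(x0.reshape(2, 1), (1, n_particles))
    safe = np.ones(n_particles, dtype=bool)
    for _ in range(horizon):
        u = K @ x
        w = sigma * rng.standard_normal(x.shape)
        x = A @ x + B @ u + w
        safe &= np.all((x >= safe_lo) & (x <= safe_hi), axis=0)
    return safe.mean()

print(stays_safe(np.array([0.2, 0.0])))  # estimated safety probability from x0
```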

    Measuring Sociality in Driving Interaction

    Interacting with other human road users is one of the most challenging tasks for autonomous vehicles. To produce congruent driving behaviors, it is essential to recognize and comprehend sociality, encompassing both the implicit social norms and the individualized social preferences of human drivers. To understand and quantify the complex sociality in driving interactions, we propose a Virtual-Game-based Interaction Model (VGIM) that is parameterized by a social preference measurement, the Interaction Preference Value (IPV). The IPV is designed to capture the driver's relative inclination towards individual rewards over group rewards. A method for identifying the IPV from observed driving trajectories is also developed, with which we assessed human drivers' IPV using driving data recorded in a typical interactive driving scenario, the unprotected left turn. Our findings reveal that (1) human drivers exhibit particular social preference patterns while undertaking specific tasks, such as turning left or proceeding straight, and (2) competitive actions can be conducted strategically by human drivers in order to coordinate with others. Finally, we discuss the potential of learning sociality-aware navigation from human demonstrations by incorporating a rule-based humanlike IPV expressing strategy into VGIM and optimization-based motion planners. Simulation experiments demonstrate that (1) IPV identification improves motion prediction performance in interactive driving scenarios and (2) the dynamic IPV expressing strategy extracted from human driving data makes it possible to reproduce humanlike coordination patterns in the driving interaction.
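
    The sketch below is one possible, highly simplified reading of the IPV idea: the driver's utility is a blend of its own reward and the other agent's reward, weighted by an angle-like preference parameter, and the parameter is identified as the value under which the observed choices best maximise that blended utility. The actual VGIM formulation and identification method in the paper are richer; all names and numbers here are assumptions for illustration.

```python
# A minimal sketch of a social-preference parameter and its identification
# from observed choices; not the paper's VGIM or its trajectory-based method.
import numpy as np

def blended_utility(own_reward, other_reward, ipv):
    """IPV in [-pi/2, pi/2]: 0 = purely self-interested,
    positive = cooperative, negative = competitive (assumed convention)."""
    return np.cos(ipv) * own_reward + np.sin(ipv) * other_reward

def identify_ipv(own_rewards, other_rewards, chosen_idx, candidates=None):
    """Pick the IPV under which the observed choices (chosen_idx) maximise
    the blended utility most often -- a crude stand-in for identification."""
    if candidates is None:
        candidates = np.linspace(-np.pi / 2, np.pi / 2, 181)
    own_rewards = np.asarray(own_rewards)      # shape: (steps, actions)
    other_rewards = np.asarray(other_rewards)
    best_ipv, best_hits = 0.0, -1
    for ipv in candidates:
        u = blended_utility(own_rewards, other_rewards, ipv)
        hits = np.sum(u.argmax(axis=1) == chosen_idx)
        if hits > best_hits:
            best_ipv, best_hits = ipv, hits
    return best_ipv

# Toy usage: two candidate actions ("yield" vs "go") over three time steps;
# the driver repeatedly yields, so a cooperative (positive) IPV fits best.
own = [[0.2, 1.0], [0.3, 1.0], [0.1, 0.9]]
oth = [[0.9, 0.1], [0.8, 0.2], [0.9, 0.1]]
print(identify_ipv(own, oth, chosen_idx=np.array([0, 0, 0])))
```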

    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains. We have expanded greatly from our origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

    LCCC Workshop on Process Control


    Parameter Synthesis for Markov Models

    Markov chain analysis is a key technique in reliability engineering. A practical obstacle is that all probabilities in Markov models need to be known. However, system quantities such as failure rates or packet loss ratios are often not known, or only partially known. This motivates considering parametric models in which transitions are labeled with functions over parameters. Whereas traditional Markov chain analysis evaluates a reliability metric for a single, fixed set of probabilities, analysing parametric Markov models focuses on synthesising parameter values that establish a given reliability or performance specification φ. Examples are: which component failure rates ensure that the probability of a system breakdown is below 0.00000001, and which failure rates maximise reliability? This paper presents various analysis algorithms for parametric Markov chains and Markov decision processes. We focus on three problems: (a) do all parameter values within a given region satisfy φ?, (b) which regions satisfy φ and which ones do not?, and (c) an approximate version of (b) that focuses on covering a large fraction of all possible parameter values. We give a detailed account of the various algorithms, present a software tool realising these techniques, and report on an extensive experimental evaluation on benchmarks that span a wide range of applications.
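
    As a small, concrete instance of problem (a) above, the sketch below solves a tiny parametric Markov chain symbolically and checks a rectangular parameter region against a reachability threshold. The chain, the region, and the threshold are illustrative assumptions, and the single corner check only works because this particular reachability probability happens to be monotone in both parameters; general region verification needs the techniques surveyed in the paper.

```python
# A minimal sketch of parametric reachability analysis with sympy; not the
# tool described in the paper. States: s0 (init), s1 (retrying), goal, fail.
# From s0 the system succeeds a step with probability p (to s1) and fails
# otherwise; from s1 it reaches the goal with probability q, else returns to s0.
import sympy as sp

p, q = sp.symbols('p q', positive=True)
x0, x1 = sp.symbols('x0 x1')  # probability of eventually reaching the goal

# Reachability equations x_s = sum_t P(s -> t) * x_t, with x_goal = 1, x_fail = 0.
sol = sp.solve([sp.Eq(x0, p * x1),                # s0: p -> s1, (1-p) -> fail
                sp.Eq(x1, q * 1 + (1 - q) * x0)], # s1: q -> goal, (1-q) -> s0
               [x0, x1])
reach = sp.simplify(sol[x0])
print(reach)   # equivalent to p*q / (1 - p + p*q)

# Problem (a): do all (p, q) in the region [0.8, 0.95] x [0.6, 0.9] satisfy
# "reachability probability >= 0.7"? The probability is monotonically
# increasing in both parameters for this chain, so the lower-left corner
# of the region is the worst case.
corner = reach.subs({p: sp.Rational(8, 10), q: sp.Rational(6, 10)})
print(corner >= sp.Rational(7, 10))   # True iff the whole region satisfies φ
```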