
    Active Sampling-based Binary Verification of Dynamical Systems

    Nonlinear, adaptive, or otherwise complex control techniques are increasingly relied upon to ensure the safety of systems operating in uncertain environments. However, the nonlinearity of the resulting closed-loop system complicates verification that the system does in fact satisfy its safety requirements at all possible operating conditions. While analytical proof-based techniques and finite abstractions can be used to provably verify the closed-loop system's response at different operating conditions, they often produce conservative approximations due to restrictive assumptions and are difficult to construct in many applications. In contrast, popular statistical verification techniques relax those restrictions and instead rely upon simulations to construct statistical or probabilistic guarantees. This work presents a data-driven statistical verification procedure that constructs statistical learning models from simulated training data to separate the set of possible perturbations into "safe" and "unsafe" subsets. Binary evaluations of closed-loop requirement satisfaction at various realizations of the uncertainties are obtained through temporal logic robustness metrics and used to construct predictive models of requirement satisfaction over the full set of possible uncertainties. Because the accuracy of these predictive models is inherently coupled to the quality of the training data, an active learning algorithm selects additional sample points so as to maximize the expected change in the data-driven model and thus, indirectly, minimize the prediction error. Various case studies demonstrate the closed-loop verification procedure and highlight improvements in prediction error over both existing analytical and statistical verification techniques.
    Comment: 23 pages
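    As a rough illustration of the procedure described above, the following Python sketch trains a classifier on binary satisfaction labels and actively picks the next simulation near the model's decision boundary. It is not the paper's implementation: the robustness function, the Gaussian process classifier, and the boundary-proximity acquisition rule are illustrative stand-ins (the paper maximizes expected model change).

        # Hedged sketch of statistical verification with active sampling;
        # the simulator and acquisition rule are stand-ins, not the paper's method.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier

        def robustness(theta):
            # Hypothetical stand-in for a temporal-logic robustness metric:
            # positive means the requirement holds, negative means it is violated.
            return 1.0 - np.linalg.norm(theta)

        rng = np.random.default_rng(0)
        X = rng.uniform(-2, 2, size=(25, 2))          # initial perturbation samples
        y = np.array([robustness(x) > 0 for x in X])  # binary satisfaction labels

        model = GaussianProcessClassifier().fit(X, y)

        for _ in range(30):
            # Pick the candidate whose label the model is least sure about,
            # a common proxy for maximizing expected model change.
            cand = rng.uniform(-2, 2, size=(500, 2))
            p = model.predict_proba(cand)[:, 1]
            x_new = cand[np.argmin(np.abs(p - 0.5))]
            X = np.vstack([X, x_new])
            y = np.append(y, robustness(x_new) > 0)
            model.fit(X, y)

        # model.predict(...) now separates "safe" from "unsafe" perturbations.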

    Teaching Concurrent Software Design: A Case Study Using Android

    In this article, we explore various parallel and distributed computing topics from a user-centric software engineering perspective. Specifically, in the context of mobile application development, we study the basic building blocks of interactive applications in the form of events, timers, and asynchronous activities, along with related software modeling, architecture, and design topics.
    Comment: Submitted to the CDER NSF/IEEE-TCPP Curriculum Initiative on Parallel and Distributed Computing - Core Topics for Undergraduates
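    The three building blocks the article names carry over to any event-driven runtime. The sketch below illustrates them in Python's asyncio purely to keep this digest in one language; the article itself works in Android/Java.

        # Hedged, language-agnostic sketch of events, timers, and
        # asynchronous activities; not Android code.
        import asyncio

        async def on_click(n):
            # Event: react to a discrete user action.
            print(f"click {n} handled")

        async def tick(period, repetitions):
            # Timer: recurring callback on a fixed period.
            for i in range(repetitions):
                await asyncio.sleep(period)
                print(f"tick {i}")

        async def background_fetch():
            # Asynchronous activity: long-running work kept off the
            # "UI" path, delivering a result when done.
            await asyncio.sleep(0.5)  # simulated latency
            return "result"

        async def main():
            fetch = asyncio.create_task(background_fetch())
            await asyncio.gather(on_click(1), tick(0.2, 3))
            print(await fetch)

        asyncio.run(main())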

    Hypothesis exploration with visualization of variance

    Background: The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches to psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics, the wide-scale, systematic study of phenotypes, to neuropsychiatry research.

    Results: This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. One of these methods, VISOVA, combines visualization with analysis of variance, retaining the exploratory flavor associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles (patterns of values across phenotypes) that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes.

    Conclusions: The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports 'natural selection' on a pool of hypotheses and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics.
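    As a generic sketch of the VISOVA idea (the actual ViVA methods are not reproduced here), the following Python fragment runs a per-phenotype one-way ANOVA across diagnostic groups and plots group phenotype profiles; the phenotype names and synthetic data are placeholders.

        # Hedged sketch: per-phenotype ANOVA plus a phenotype-profile plot.
        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.stats import f_oneway

        rng = np.random.default_rng(1)
        phenotypes = ["memory", "response_inhibition", "attention"]
        groups = {name: rng.normal(loc=shift, size=(40, 3))
                  for name, shift in [("control", 0.0), ("ADHD", 0.4), ("bipolar", -0.3)]}

        # Which phenotypes vary significantly across groups?
        for j, ph in enumerate(phenotypes):
            f, p = f_oneway(*(data[:, j] for data in groups.values()))
            print(f"{ph}: F={f:.2f}, p={p:.3f}")

        # Phenotype profiles: per-group means across all phenotypes.
        for name, data in groups.items():
            plt.plot(phenotypes, data.mean(axis=0), marker="o", label=name)
        plt.ylabel("standardized score")
        plt.legend()
        plt.show()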

    A fast passivity test for descriptor systems via structure-preserving transformations of Skew-Hamiltonian/Hamiltonian matrix pencils

    Passivity in a VLSI model is an important property to guarantee stable global simulation. Most VLSI models are naturally described as descriptor systems (DSs) or singular state spaces. Passivity tests for DSs, however, are much less developed compared to their non-singular state space counterparts. For large-scale DSs, the existing test based on linear matrix inequality (LMI) is computationally prohibitive. Other system decoupling techniques involve complicated coding and sometimes ill-conditioned transformations. This paper proposes a simple DS passivity test based on the key insight that the sum of a passive system and its adjoint must be impulse-free. A sidetrack shows that the proper (non-impulsive) part of a passive DS can be easily decoupled along the test flow. Numerical examples confirm the effectiveness of the proposed DS passivity test over conventional approaches. Copyright 2006 ACM.
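    In standard descriptor-system notation (a sketch of the setting, not the paper's exact derivation), the system E\dot{x} = Ax + Bu, y = Cx + Du has transfer function

        G(s) = C(sE - A)^{-1}B + D,

    and its adjoint realizes G(-s)^{\mathsf{T}}. The key insight above then concerns the parahermitian sum

        \Phi(s) = G(s) + G(-s)^{\mathsf{T}},

    whose realization must be impulse-free whenever the original system is passive; this property can be read off the finite/infinite eigenstructure of an associated skew-Hamiltonian/Hamiltonian pencil rather than by solving an LMI.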

    Software components as invariant-typed arrows

    Keynote talk at CBSOFT, Natal, September 2012. Invariants are constraints on software components which restrict their behavior in some desirable way, but whose maintenance entails some kind of proof obligation discharge. Such constraints may act not only over the input and output domains, as in a purely functional setting, but also over the underlying state space, as in the case of reactive components. This talk introduces an approach for reasoning about invariants which is both compositional and calculational: compositional because it is based on rules which break the complexity of such proof obligations across the structures involved; calculational because such rules are derived thanks to an algebra of invariants encoded in the language of binary relations. A main tool of this approach is the pointfree transform of the predicate calculus, which opens the possibility of changing the underlying mathematical space so as to enable agile algebraic calculation. The development of a theory of invariant preservation requires a broad but uniform view of computational processes embodied in software components, able to take into account data persistence and continued interaction. Such is the plan for this talk: we first introduce such processes as arrows, and then invariants as their types.
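    In the pointfree relational notation this line of work uses (a sketch; coreflexive relations \Phi \subseteq id represent predicates), "invariants as types" can be rendered as

        R : \Phi \leftarrow \Psi \;\equiv\; R \cdot \Psi \subseteq \Phi \cdot R,

    read: component R, fed inputs satisfying \Psi, produces only outputs satisfying \Phi. The compositional rules the talk refers to then come out calculationally, for instance

        R : \Phi \leftarrow \Psi \;\wedge\; S : \Psi \leftarrow \Omega \;\Rightarrow\; R \cdot S : \Phi \leftarrow \Omega,

    so the proof obligation for a composite breaks into obligations for its parts.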
