    Design and Validation of Embedded Real-Time Applications

    The design and validation of embedded real-time applications is challenging, especially when legacy sub-systems are involved. To account for the uncertainty in system development at early design stages, we use statistical modelling and discrete event simulation to perform sensitivity analysis. The analysis results provide vital information about the system characteristics and indicate usage scenarios where the behaviour of the system differs significantly from the average case. Based on the simulation results and the initial system requirements, a usage model for the application is set up. The model represents the requirements in an unambiguous and traceably correct manner: for each possible path through the model, considering stimuli and their timing, a unique system reaction is defined, which clarifies the requirements. The usage model allows the derivation of test cases that can be used in the design phase to validate the model and in the acceptance phase to test the final system. Through the combination of the simulation results and the usage modelling we are able to:
    • identify critical system conditions;
    • validate the system design with respect to the usage model.
    The proposed methods are currently applied in both the design and validation of safety-critical applications.
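    To make the usage-model idea concrete, the sketch below encodes a small usage model as a Markov chain whose transitions carry stimuli, probabilities, and next states, and derives test cases by random walks through the model. The states, stimuli, and probabilities are invented for illustration; they are not taken from the paper.

```python
import random

# Hypothetical usage model, invented for illustration. Each transition carries
# a stimulus, a probability, and the next state, so every generated path
# doubles as a test case with a defined system reaction at each step.
USAGE_MODEL = {
    "Idle":        [("power_on", 0.9, "Running"), ("self_test", 0.1, "Diagnostics")],
    "Running":     [("sensor_ok", 0.8, "Running"), ("sensor_fault", 0.1, "SafeStop"),
                    ("power_off", 0.1, "Idle")],
    "Diagnostics": [("test_pass", 0.95, "Idle"), ("test_fail", 0.05, "SafeStop")],
    "SafeStop":    [],  # terminal state
}

def derive_test_case(max_steps=20, seed=None):
    """Random walk through the usage model; the stimulus sequence is the test case."""
    rng = random.Random(seed)
    state, steps = "Idle", []
    for _ in range(max_steps):
        transitions = USAGE_MODEL[state]
        if not transitions:
            break  # reached a terminal state
        weights = [p for _, p, _ in transitions]
        stimulus, _, next_state = rng.choices(transitions, weights=weights)[0]
        steps.append((state, stimulus, next_state))
        state = next_state
    return steps

for state, stimulus, reaction in derive_test_case(seed=42):
    print(f"{state} --{stimulus}--> {reaction}")
```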

    Hierarchical testing designs for pattern recognition

    We explore the theoretical foundations of a "twenty questions" approach to pattern recognition. The object of the analysis is the computational process itself rather than probability distributions (Bayesian inference) or decision boundaries (statistical learning). Our formulation is motivated by applications to scene interpretation in which there are a great many possible explanations for the data, one ("background") is statistically dominant, and it is imperative to restrict intensive computation to genuinely ambiguous regions. The focus here is then on pattern filtering: given a large set \mathcal{Y} of possible patterns or explanations, narrow down the true one Y to a small (random) subset \hat{Y} \subset \mathcal{Y} of "detected" patterns to be subjected to further, more intensive, processing. To this end, we consider a family of hypothesis tests for Y \in A versus the nonspecific alternatives Y \in A^c. Each test has null type I error, and the candidate sets A \subset \mathcal{Y} are arranged in a hierarchy of nested partitions. These tests are then characterized by scope (|A|), power (or type II error) and algorithmic cost. We consider sequential testing strategies in which decisions are made iteratively, based on past outcomes, about which test to perform next and when to stop testing. The set \hat{Y} is then taken to be the set of patterns that have not been ruled out by the tests performed. The total cost of a strategy is the sum of the "testing cost" and the "postprocessing cost" (proportional to |\hat{Y}|), and the corresponding optimization problem is analyzed. Published at http://dx.doi.org/10.1214/009053605000000174 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
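    The coarse-to-fine flavour of such sequential strategies can be sketched in a few lines. In the toy model below the patterns are integers, the nested partitions are dyadic splits, and test(A) is a stand-in oracle with null type I error but imperfect power; all of these choices are illustrative assumptions, not the paper's construction.

```python
import random

# Coarse-to-fine sketch of the "twenty questions" strategy, under invented
# assumptions: patterns are the integers 0..15, the nested partitions are
# dyadic splits, and test(A) is a stand-in oracle with null type I error
# (a cell containing the true pattern always survives) but imperfect power
# (an empty cell survives with probability MISS_RATE).

PATTERNS = list(range(16))
TRUE_Y = 11          # the true pattern, unknown to the strategy
MISS_RATE = 0.3      # probability a test fails to rule out a cell without Y

def test(cell, rng):
    """Return True if the hypothesis 'Y in cell' survives the test."""
    if TRUE_Y in cell:
        return True  # null type I error: the truth is never ruled out
    return rng.random() < MISS_RATE

def coarse_to_fine(cell, rng, cost):
    """Split surviving cells recursively; surviving singletons form Y_hat."""
    cost[0] += 1
    if not test(cell, rng):
        return []    # a whole subtree pruned at the price of one test
    if len(cell) == 1:
        return list(cell)
    mid = len(cell) // 2
    return (coarse_to_fine(cell[:mid], rng, cost) +
            coarse_to_fine(cell[mid:], rng, cost))

rng, cost = random.Random(0), [0]
detected = coarse_to_fine(PATTERNS, rng, cost)
print(f"Y_hat = {detected}, testing cost = {cost[0]} tests, "
      f"postprocessing cost ~ |Y_hat| = {len(detected)}")
```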

    Modern software cybernetics: new trends

    Software cybernetics research applies techniques from cybernetics to software engineering research. Since 2001, more than fifteen years ago, there has been a dramatic increase in work relating to software cybernetics. From a cybernetics viewpoint, this work is mainly at the first-order level, namely, the software under observation and control. Beyond first-order cybernetics, the software, its developers/users, and its running environments influence each other and thus create feedback that forms more complicated systems. We classify software cybernetics based on first-order cybernetics as Software Cybernetics I, and software cybernetics based on higher-order cybernetics as Software Cybernetics II. This paper provides a review of the literature on software cybernetics, particularly focusing on the transition from Software Cybernetics I to Software Cybernetics II. The results of the survey indicate that new research areas such as the Internet of Things, big data, cloud computing, cyber-physical systems, and even creative computing are related to Software Cybernetics II. The paper identifies the relationships between the techniques of Software Cybernetics II and the new research areas to which they have been applied, formulates research problems and challenges of software cybernetics by applying the principles of its second phase, and identifies and highlights new research trends of software cybernetics for further research.
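    As a toy illustration of the first-order loop (Software Cybernetics I), the sketch below closes a feedback loop around a software process: a proportional controller adjusts testing effort based on the observed failure intensity. The plant model, gain, and numbers are all invented for illustration and are not from the paper.

```python
# The software process is the plant; a simple proportional controller adjusts
# the testing effort based on the gap between the observed failure intensity
# and a target. All quantities here are invented.

TARGET_INTENSITY = 0.5   # acceptable failures per test-hour (assumption)
GAIN = 10.0              # proportional gain of the controller (assumption)

def observe(effort, remaining_defects):
    """Toy plant model: more effort and more latent defects -> more failures seen."""
    found = min(remaining_defects, 0.01 * effort * remaining_defects)
    intensity = found / max(effort, 1e-9)
    return found, intensity

effort, defects = 20.0, 100.0
for cycle in range(1, 9):
    found, intensity = observe(effort, defects)
    defects -= found
    # Feedback: raise effort while intensity is above target, lower it otherwise.
    effort = max(1.0, effort + GAIN * (intensity - TARGET_INTENSITY))
    print(f"cycle {cycle}: effort={effort:5.1f}h, "
          f"intensity={intensity:.2f}, defects left={defects:5.1f}")
```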

    Deep Learning on Smart Meter Data: Non-Intrusive Load Monitoring and Stealthy Black-Box Attacks

    Climate change and environmental concerns are instigating widespread changes in modern electricity sectors, driven by energy policy initiatives and advances in sustainable technologies. To raise awareness of sustainable energy usage and capitalize on advanced metering infrastructure (AMI), a novel deep learning non-intrusive load monitoring (NILM) model is proposed to disaggregate smart meter readings and identify the operation of individual appliances. This model can be used by electric power utility (EPU) companies and third-party entities to perform active or passive consumer power demand management. Although machine learning (ML) algorithms are powerful, they remain vulnerable to adversarial attacks. In this thesis, a novel stealthy black-box attack that targets NILM models is proposed. This work sheds light on both the effectiveness and the vulnerabilities of ML models in the smart grid context and provides valuable insights for maintaining security, especially with the increasing proliferation of artificial intelligence in the power system.
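    For concreteness, the sketch below shows one common deep NILM formulation, sequence-to-point: a 1-D CNN maps a window of aggregate mains readings to the target appliance's power at the window midpoint. The architecture, window length, and hyperparameters are generic assumptions, not necessarily those of the thesis.

```python
import torch
import torch.nn as nn

# Sequence-to-point NILM sketch under assumed hyperparameters: a 1-D CNN maps
# a window of aggregate mains readings to the target appliance's power at the
# window midpoint. Layer sizes are illustrative, not the thesis's model.

WINDOW = 99  # aggregate samples per input window (assumption)

class Seq2PointNILM(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 30, kernel_size=10), nn.ReLU(),
            nn.Conv1d(30, 40, kernel_size=8), nn.ReLU(),
            nn.Conv1d(40, 50, kernel_size=6), nn.ReLU(),
            nn.Flatten(),
            # Each valid convolution shortens the sequence by (kernel_size - 1).
            nn.Linear(50 * (WINDOW - 9 - 7 - 5), 1024), nn.ReLU(),
            nn.Linear(1024, 1),  # appliance power at the window midpoint
        )

    def forward(self, x):        # x: (batch, 1, WINDOW) aggregate power
        return self.net(x)

model = Seq2PointNILM()
mains = torch.randn(4, 1, WINDOW)   # stand-in for normalized meter readings
print(model(mains).shape)           # torch.Size([4, 1])
```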

    Supporting self-adaptation via quantitative verification and sensitivity analysis at run time

    Modern software-intensive systems often interact with an environment whose behavior changes over time, often unpredictably. The occurrence of changes may jeopardize their ability to meet the desired requirements. It is therefore desirable to design software in such a way that it can self-adapt to changes with limited, or even without, human intervention. Self-adaptation can be achieved by bringing software models and model checking to run time, to support perpetual automatic reasoning about changes. Once a change is detected, the system itself can predict whether requirements violations may occur and enable appropriate counter-actions. However, existing mainstream model checking techniques and tools were not conceived for run-time usage; hence they hardly meet the constraints imposed by on-the-fly analysis in terms of execution time and memory usage. This paper addresses this issue and focuses on the perpetual satisfaction of non-functional requirements, such as reliability or energy consumption. Its main contribution is the description of a mathematical framework for run-time efficient probabilistic model checking. Our approach statically generates a set of verification conditions that can be efficiently evaluated at run time as soon as changes occur. The proposed approach also supports sensitivity analysis, which enables reasoning about the effects of changes and can drive effective adaptation strategies.
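    The precomputation idea can be illustrated on a toy parametric model. In the sketch below, a closed-form reliability expression and its partial derivatives (for sensitivity analysis) are derived once, offline, so that a detected change only triggers cheap expression evaluations at run time. The two-service model, its parameters, and the use of sympy are illustrative assumptions, not the paper's formalism.

```python
import sympy as sp

# Toy parametric model: two services invoked in sequence, succeeding with
# probabilities p1 and p2; a failed first invocation is retried once with
# probability r. The closed-form reliability formula is derived offline.
p1, p2, r = sp.symbols("p1 p2 r")
reliability = (p1 + (1 - p1) * r * p1) * p2   # success, possibly after one retry
sensitivities = {s: sp.diff(reliability, s) for s in (p1, p2, r)}

# "Run time": when monitoring observes new parameter values, only cheap
# expression evaluations remain -- no model checking from scratch.
observed = {p1: 0.92, p2: 0.97, r: 0.5}
print("reliability:", float(reliability.subs(observed)))
for s, expr in sensitivities.items():
    print(f"d(reliability)/d({s}) =", float(expr.subs(observed)))
```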

    Novel Monte Carlo Methods for Large-Scale Linear Algebra Operations

    Linear algebra operations play an important role in scientific computing and data analysis. With increasing data volume and complexity in the Big Data era, linear algebra operations are important tools for processing massive datasets. On one hand, the advent of modern high-performance computing architectures with increasing computing power has greatly enhanced our capability to deal with large volumes of data. On the other hand, many classical, deterministic numerical linear algebra algorithms have difficulty scaling to large data sets. Monte Carlo methods, which are based on statistical sampling, exhibit many attractive properties when dealing with large volumes of data, including fast approximated results, memory efficiency, reduced data accesses, natural parallelism, and inherent fault tolerance. In this dissertation, we present new Monte Carlo methods for a set of fundamental and ubiquitous large-scale linear algebra operations, including solving large-scale linear systems, constructing low-rank matrix approximations, and approximating extreme eigenvalues/eigenvectors, across modern distributed and parallel computing architectures. First, we revisit the classical Ulam-von Neumann Monte Carlo algorithm and derive the necessary and sufficient condition for its convergence. To support a broader family of linear systems, we develop Krylov subspace Monte Carlo solvers that go beyond the use of the Neumann series. New algorithms used in the Krylov subspace Monte Carlo solvers include (1) a Breakdown-Free Block Conjugate Gradient (BFBCG) algorithm to address the potential rank deficiency problem that can occur in block Krylov subspace methods; (2) a Block Conjugate Gradient for Least Squares (BCGLS) algorithm to stably approximate the least squares solutions of general linear systems; (3) a BCGLS algorithm with deflation to accelerate convergence; and (4) a Monte Carlo Generalized Minimal Residual algorithm based on sampling matrix-vector products to provide fast approximations of solutions. Second, we design a rank-revealing randomized Singular Value Decomposition (R3SVD) algorithm for adaptively constructing low-rank matrix approximations that satisfy application-specific accuracy requirements. Third, we study the block power method on Markov Chain Monte Carlo transition matrices and find that its convergence actually depends on the number of independent vectors in the block; correspondingly, we develop a sliding window power method to find the stationary distribution, which has demonstrated success in modeling a stochastic luminal calcium release site. Fourth, we take advantage of hybrid CPU-GPU computing platforms to accelerate the performance of the Breakdown-Free Block Conjugate Gradient algorithm and the randomized Singular Value Decomposition algorithm. Finally, we design a Gaussian variant of Freivalds' algorithm to efficiently verify the correctness of matrix-matrix multiplication while avoiding the undetectable fault patterns encountered in deterministic algorithms.
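    As one self-contained example, the sketch below implements a Gaussian variant of Freivalds' verification along the lines the abstract describes (the dissertation's exact algorithm may differ): the product C = AB is checked with matrix-vector products against Gaussian random vectors, so each trial costs O(n^2) rather than the O(n^3) of recomputing the product, and a continuous test vector avoids the structured fault patterns that a fixed deterministic check could miss.

```python
import numpy as np

# Gaussian Freivalds sketch: to check whether C equals A @ B without
# recomputing the product, test A @ (B @ x) against C @ x for random
# Gaussian vectors x. A discrepancy proves a fault; agreement across all
# trials makes an undetected fault extremely unlikely.

def gaussian_freivalds(A, B, C, trials=3, tol=1e-8, rng=None):
    rng = np.random.default_rng(rng)
    n = C.shape[1]
    for _ in range(trials):
        x = rng.standard_normal(n)
        # Two matrix-vector products on the left vs. one on the right.
        if np.linalg.norm(A @ (B @ x) - C @ x) > tol * np.linalg.norm(C @ x):
            return False         # a discrepancy is conclusive proof of a fault
    return True                  # no discrepancy found across all trials

rng = np.random.default_rng(0)
A, B = rng.standard_normal((200, 200)), rng.standard_normal((200, 200))
C = A @ B
print(gaussian_freivalds(A, B, C))   # True: product verified
C[17, 42] += 1e-3                    # inject a single silent fault
print(gaussian_freivalds(A, B, C))   # False: fault detected
```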