2,357 research outputs found

    Formal Verification of a Gain Scheduling Control Scheme

    Gain scheduling is a commonly used closed-loop control approach for safety-critical non-linear systems, such as commercial gas turbine engines. It is preferred over more advanced control strategies because it offers a known route to certification. Nonetheless, the stability of the system is hard to prove analytically, and consequently safety and airworthiness are achieved through burdensome, extensive testing. Model checking can help bring down the development costs of such a control system and simultaneously improve safety by providing guarantees on properties of embedded control systems. Thanks to model checking's exhaustive verification capabilities, it has long been recognised that coverage and error-detection rate can be increased compared to traditional testing methods. However, state-space explosion is still a major computational limitation when applying model checking to verify dynamic system behaviour. This paper demonstrates a practical methodology to incrementally design and formally verify control-system requirements for a gain scheduling scheme, overcoming the computational constraints traditionally imposed by model checking. In this manner, the gain-scheduled controller can be efficiently and safely generated with the aid of the model checker.
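
    The paper itself is about verifying such a scheme with a model checker; as a purely illustrative sketch of the underlying control idea (not the controller, gains, or verification workflow from the paper), the Python fragment below interpolates PI gains between a few hypothetical operating points of a one-dimensional scheduling variable.

        # Minimal gain-scheduled PI controller sketch; operating points and gain
        # values are invented for illustration only.
        import bisect

        # (scheduling variable, Kp, Ki) at a few design operating points
        SCHEDULE = [
            (0.2, 1.8, 0.40),
            (0.5, 1.2, 0.25),
            (0.8, 0.9, 0.15),
        ]

        def scheduled_gains(sigma):
            """Linearly interpolate (Kp, Ki) between the nearest operating points."""
            points = [p for p, _, _ in SCHEDULE]
            if sigma <= points[0]:
                return SCHEDULE[0][1:]
            if sigma >= points[-1]:
                return SCHEDULE[-1][1:]
            i = bisect.bisect_left(points, sigma)
            (p0, kp0, ki0), (p1, kp1, ki1) = SCHEDULE[i - 1], SCHEDULE[i]
            t = (sigma - p0) / (p1 - p0)
            return kp0 + t * (kp1 - kp0), ki0 + t * (ki1 - ki0)

        def pi_step(error, integral, sigma, dt):
            """One control update using the gains scheduled at the current operating point."""
            kp, ki = scheduled_gains(sigma)
            integral += error * dt
            return kp * error + ki * integral, integral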

    Formal functional testing of graphical user interfaces.

    SIGLE. Available from British Library Document Supply Centre, DSC:DX177960 / BLDSC - British Library Document Supply Centre. GB: United Kingdom.

    The relevance of model-driven engineering thirty years from now

    Although model-driven engineering (MDE) is now an established approach for developing complex software systems, it has not been universally adopted by the software industry. In order to better understand the reasons for this, as well as to identify future opportunities for MDE, we carried out a week-long design thinking experiment with 15 MDE experts. Participants were guided to identify the biggest problems with current MDE technologies, to identify grand challenges for society in the near future, and to identify ways that MDE could help to address these challenges. The outcome is a reflection on the current strengths of MDE, an outlook on the most pressing challenges for society at large over the next three decades, and an analysis of key future MDE research opportunities.

    Exploratory study of barriers to use of Feigenbaum's quality cost strategy within design engineering firms

    It has been more than half a century since Armand Feigenbaum first conceived of the strategy for the manufacturing sector, yet quality costing strategies have not found a foothold among engineering firms. This study was aimed at constructing a set of theories that explains possible barriers to acceptance of the Feigenbaum strategy by analyzing the attitudes and opinions of players in an actual engineering firm and the culture in which they work. Modeled after the Feigenbaum approach and tailored to the business model of engineering and architectural firms, a system was developed to classify and record costs of quality. A local office of a large engineering firm was recruited to apply the system on a hand-picked design project. Using a qualitative study with a phenomenological approach, records were examined and participants were interviewed. The observed concepts and emergent hypotheses begin to tell an interesting story that might hold the key to the ultimate success or failure of the Feigenbaum approach to quality costing in engineering firms. Concepts related to the mechanics of a quality cost reporting system were originally thought to be overwhelming when applied to a working engineering firm; instead, they have been observed to be relatively simple, with practical and straightforward solutions. However, concepts related to perceptions of quality and quality management appear to be much more daunting barriers to a prevention-based system, due to policies and perceptions that have persisted for years.
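
    As a purely illustrative sketch of the mechanics such a system involves (not the classification system developed in the study), the Python fragment below records engineering effort against Feigenbaum's classic prevention, appraisal, internal-failure, and external-failure categories; the record layout and field names are hypothetical.

        # Toy cost-of-quality ledger in the spirit of Feigenbaum's PAF categories;
        # the structure is an assumption for illustration, not the study's system.
        from collections import defaultdict
        from dataclasses import dataclass
        from enum import Enum

        class CoQCategory(Enum):
            PREVENTION = "prevention"              # e.g. design reviews, training
            APPRAISAL = "appraisal"                # e.g. checking, QA audits
            INTERNAL_FAILURE = "internal failure"  # rework caught before delivery
            EXTERNAL_FAILURE = "external failure"  # rework requested by the client

        @dataclass
        class CostEntry:
            project: str
            category: CoQCategory
            hours: float
            rate: float  # cost per hour

            @property
            def cost(self) -> float:
                return self.hours * self.rate

        def summarize(entries):
            """Total recorded quality cost per category, for reporting."""
            totals = defaultdict(float)
            for entry in entries:
                totals[entry.category] += entry.cost
            return dict(totals)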

    Deploying ontologies in software design

    In this thesis we are concerned with the relation between ontologies and software design. Ontologies are studied in the artificial intelligence community as a means to explicitly represent standardised domain knowledge in order to enable knowledge sharing and reuse. We deploy ontologies in software design with emphasis on a traditional software engineering theme: error detection. In particular, we identify a type of error that is often difficult to detect: conceptual errors. These are related to the description of the domain in which the system will operate, and they require subjective knowledge about correct forms of domain description to detect them. Ontologies provide these forms of domain description, and we are interested in applying them and verifying their correctness (chapter 1). After presenting an in-depth analysis of the fields of ontologies and software testing as conceived and implemented by the software engineering and artificial intelligence communities (chapter 2), we discuss an approach which enabled us to deploy ontologies in the early phases of software development (i.e., specifications) in order to detect conceptual errors (chapter 3). This is based on the provision of ontological axioms which are used to verify the conformance of specification constructs to the underpinning ontology. To facilitate the integration of an ontology with applications that adopt it, we developed an architecture and built tools to implement this form of conceptual error check (chapter 4). We apply and evaluate the architecture in a variety of contexts to identify potential uses (chapter 5). An implication of this method for deploying ontologies to reason about the correctness of applications is to raise our trust in the given ontologies. However, when the ontologies themselves are erroneous we might fail to reveal pernicious discrepancies. To cope with this problem we extended the architecture to a multi-layer form (chapter 4), which gives us the ability to check the ontologies themselves for correctness. We apply this multi-layer architecture to capture errors found in a complex ontology lattice (chapter 6). We further elaborate on the weaknesses in ontology evaluation methods and employ a technique stemming from software engineering, that of experience management, to facilitate ontology testing and deployment (chapter 7). The work presented in this thesis aims to improve practice in ontology use and to identify areas in which ontologies could be of benefit beyond the advocated ones of knowledge sharing and reuse (chapter 8).
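
    As a toy illustration of this kind of conformance check (not the architecture or tools built in the thesis), the Python sketch below compares specification statements against a few ontological axioms and reports violations as candidate conceptual errors; the relations, classes, and instances are invented.

        # Axioms: for each relation, the classes its subject and object must belong to.
        AXIOMS = {
            "manages": ("Manager", "Project"),
            "assigned_to": ("Engineer", "Task"),
        }

        # Class membership asserted by the ontology (subsumption is omitted for brevity).
        INSTANCE_OF = {
            "alice": "Manager",
            "bob": "Engineer",
            "apollo": "Project",
            "t42": "Task",
        }

        def check_specification(statements):
            """Return conceptual errors: statements whose arguments violate the axioms."""
            errors = []
            for relation, subj, obj in statements:
                if relation not in AXIOMS:
                    errors.append(f"unknown relation '{relation}'")
                    continue
                domain, rng = AXIOMS[relation]
                if INSTANCE_OF.get(subj) != domain:
                    errors.append(f"'{subj}' is not a {domain} in {relation}({subj}, {obj})")
                if INSTANCE_OF.get(obj) != rng:
                    errors.append(f"'{obj}' is not a {rng} in {relation}({subj}, {obj})")
            return errors

        # 'manages(bob, apollo)' is flagged because bob is not asserted to be a Manager.
        print(check_specification([("manages", "bob", "apollo"), ("assigned_to", "bob", "t42")]))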

    Abstract Code Injection: A Semantic Approach Based on Abstract Non-Interference

    Code injection attacks have been among the most critical security risks for almost a decade. These attacks arise from an interference between an untrusted input (potentially controlled by an attacker) and the execution of a string-to-code statement, which interprets its parameter as code. In this paper, we provide a semantic-based model for code injection that is parametric on what the programmer considers safe behaviors. In particular, we provide a general (abstract) non-interference-based framework for abstract code injection policies, i.e., policies characterizing safety against code injection w.r.t. a given specification of safe behaviors. We expect this new semantic perspective on code injection to provide a deeper understanding of the very nature of this security threat. Moreover, we devise a mechanism for enforcing (abstract) code injection policies that soundly detects attacks, i.e., avoids false negatives.
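
    To make the threat model concrete, here is a purely illustrative Python sketch (not the abstract non-interference framework of the paper) of a string-to-code statement receiving attacker-controlled input, together with a crude dynamic check enforcing one possible notion of safe behavior, namely that the untrusted fragment may only be a numeric literal.

        import ast

        def vulnerable_filter(user_input, rows):
            matched = []
            for r in rows:
                # eval() interprets its argument as code, so a fragment such as
                # "0 or __import__('os').system('id')" would execute here.
                if eval(f"r['age'] > {user_input}"):
                    matched.append(r)
            return matched

        def enforce_numeric_literal(user_input):
            """Reject any input that is not a single numeric literal (data, not code)."""
            node = ast.parse(user_input, mode="eval").body
            if not (isinstance(node, ast.Constant) and isinstance(node.value, (int, float))):
                raise ValueError(f"rejected: {user_input!r} is not a numeric literal")
            return node.value

        def safer_filter(user_input, rows):
            threshold = enforce_numeric_literal(user_input)  # attack rejected here
            return [r for r in rows if r["age"] > threshold]

        rows = [{"age": 30}, {"age": 17}]
        print(safer_filter("18", rows))  # fine: [{'age': 30}]
        # safer_filter("0 or __import__('os').system('id')", rows)  # raises ValueError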

    Expert system verification and validation study: Workshop and presentation material

    Workshop and presentation material are included. Following an introduction, the basic concepts, techniques, and guidelines are discussed, accompanied by handouts and worksheets.