
    Refinement complements verification and validation.

    Knowledge-based systems are being applied in ever-increasing numbers. The development of knowledge acquisition tools has eased the Knowledge Acquisition Bottleneck. More recently there has been a demand for mechanisms to assure the quality of knowledge-based systems. Checking the contents of the knowledge base and the performance of the knowledge-based system at various stages throughout its life cycle is an important component of quality assurance. Hence, the demand now is for verification and validation tools. Traditionally, however, verification and validation have only identified possible faults in the knowledge base. In contrast, this paper advocates the use of knowledge refinement to correct identified faults in parallel with ongoing verification and validation, thus easing the progress towards correct knowledge-based systems. An automated refinement tool is described which uses the output from verification and validation tools to assemble evidence from which the refinement process can propose repairs. It is hoped that automated refinement in parallel with validation and verification may ease the Knowledge V&V Bottleneck.
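
    The abstract describes a loop in which V&V tools flag possible faults and a refinement component uses the assembled evidence to propose repairs. The following is only a minimal sketch of that idea; the class names, fault kinds, and repair actions are illustrative assumptions, not the paper's actual tool or repair operators.

```python
from dataclasses import dataclass, field

@dataclass
class Fault:
    """A possible fault reported by a verification/validation tool (hypothetical schema)."""
    rule_id: str
    kind: str                         # e.g. "contradiction", "missed_conclusion"
    evidence: dict = field(default_factory=dict)

@dataclass
class Repair:
    """A candidate repair proposed by the refinement step."""
    rule_id: str
    action: str                       # e.g. "specialise_condition", "generalise_condition"
    rationale: str

def refine(knowledge_base: dict, faults: list[Fault]) -> list[Repair]:
    """Assemble evidence from reported faults and propose candidate repairs.

    Illustrative only: the paper's evidence model and repair generation
    are not reproduced here.
    """
    repairs = []
    for fault in faults:
        if fault.kind == "missed_conclusion":
            # A case the rule should have covered but did not.
            repairs.append(Repair(fault.rule_id, "generalise_condition",
                                  f"case {fault.evidence} not covered"))
        elif fault.kind == "contradiction":
            # The rule fires where it should not.
            repairs.append(Repair(fault.rule_id, "specialise_condition",
                                  f"rule fires incorrectly on {fault.evidence}"))
    return repairs
```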

    Iterative criteria-based approach to engineering the requirements of software development methodologies

    Software engineering endeavours are typically based on and governed by the requirements of the target software; requirements identification is therefore an integral part of software development methodologies. Similarly, engineering a software development methodology (SDM) involves identifying the requirements of the target methodology. Methodology engineering approaches pay special attention to this issue; however, they make little use of existing methodologies as sources of insight into methodology requirements. The authors propose an iterative method for eliciting and specifying the requirements of an SDM using existing methodologies as supplementary resources. The method is performed as the analysis phase of a methodology engineering process aimed at the ultimate design and implementation of a target methodology. An initial set of requirements is first identified through analysing the characteristics of the development situation at hand and/or delineating the general features desirable in the target methodology. These initial requirements are used as evaluation criteria and are refined through iterative application to a select set of relevant methodologies. The finalised criteria highlight the qualities that the target methodology is expected to possess, and are therefore used as a basis for defining the final set of requirements. In an example, the authors demonstrate how the proposed elicitation process can be used to identify the requirements of a general object-oriented SDM. Owing to its basis in knowledge gained from existing methodologies and practices, the proposed method can help methodology engineers produce a set of requirements that is not only more complete in span, but also more concrete and rigorous.
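
    The elicitation process described is an iterative loop: initial requirements become evaluation criteria, the criteria are applied to existing methodologies and refined, and the stabilised criteria yield the final requirements. A rough sketch of such a loop follows; the function, the dictionary-based methodology representation, and the convergence test are illustrative assumptions, not the authors' formulation.

```python
def elicit_requirements(situation, desired_features, existing_methodologies,
                        max_iterations=10):
    """Iteratively refine requirements for a target methodology (illustrative sketch).

    `situation` and `desired_features` stand in for the analysis of the
    development situation at hand; `existing_methodologies` is the supplementary
    set of methodologies used to refine the criteria.
    """
    # Initial requirements from the situation at hand and desired features.
    criteria = set(situation) | set(desired_features)

    for _ in range(max_iterations):
        refined = set(criteria)
        for methodology in existing_methodologies:
            # Apply the criteria to an existing methodology and note qualities
            # it exhibits that the criteria do not yet cover.
            refined |= set(methodology.get("qualities", []))
        if refined == criteria:        # criteria have stabilised
            break
        criteria = refined

    # The finalised criteria define the requirements of the target methodology.
    return sorted(criteria)

# Toy usage:
reqs = elicit_requirements(
    situation=["distributed team", "regulatory constraints"],
    desired_features=["traceability"],
    existing_methodologies=[{"qualities": ["iterative delivery", "traceability"]}],
)
```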

    Performance Evaluation of Components Using a Granularity-based Interface Between Real-Time Calculus and Timed Automata

    To analyze complex and heterogeneous real-time embedded systems, recent works have proposed interface techniques between real-time calculus (RTC) and timed automata (TA), in order to take advantage of the strengths of each technique for analyzing various components. However, the time to analyze a state-based component modeled by TA may be prohibitively high due to the state space explosion problem. In this paper, we propose a framework of granularity-based interfacing to speed up the analysis of a TA-modeled component. First, we abstract fine models to work with event streams at coarse granularity. We perform analysis of the component at multiple coarse granularities and then, based on RTC theory, we derive lower and upper bounds on the arrival patterns of the fine output streams using the causality closure algorithm. Our framework can help to achieve tradeoffs between precision and analysis time.
    Comment: QAPL 201
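
    The core move is to work with event streams at a coarse granularity and to bound, rather than enumerate, the fine-grained arrival behaviour. The sketch below only illustrates that coarsening step and a crude window-based lower/upper bound in the spirit of RTC arrival curves; it is not the paper's interface or its causality closure algorithm, and all names are assumptions.

```python
def coarsen(timestamps, granularity):
    """Abstract a fine event stream to coarse granularity: count events per bucket.

    `timestamps` are event arrival times; `granularity` is the bucket width.
    """
    buckets = {}
    for t in timestamps:
        idx = int(t // granularity)
        buckets[idx] = buckets.get(idx, 0) + 1
    return buckets

def arrival_bounds(buckets, window_buckets):
    """Crude lower/upper bounds on the number of events in any window of
    `window_buckets` consecutive buckets (illustrative, not the paper's algorithm)."""
    if not buckets:
        return 0, 0
    first, last = min(buckets), max(buckets)
    counts = [buckets.get(i, 0) for i in range(first, last + 1)]
    windows = [sum(counts[i:i + window_buckets])
               for i in range(len(counts) - window_buckets + 1)] or [sum(counts)]
    return min(windows), max(windows)

# Toy usage: coarsen a fine trace to 10-time-unit buckets, bound any 2-bucket window.
lo, hi = arrival_bounds(coarsen([1.2, 3.4, 12.0, 25.7, 26.1], 10.0), 2)
```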

    Concurrent Specification and Timing Analysis of Digital Hardware using SDL (extended version)

    Digital hardware is treated as a collection of interacting parallel components. This permits the use of a standard formal technique for the specification and analysis of circuit designs. The ANISEED method (Analysis In SDL Enhancing Electronic Design) is presented for specifying and analysing the timing characteristics of hardware designs using SDL (Specification and Description Language). A signal carries a binary value and an optional time-stamp. Components and circuit designs are instances of block types in library packages. The library contains specifications of typical components in single/multi-bit and untimed/timed forms. Timing may be specified at an abstract, behavioural or structural level. Timing properties are investigated using an SDL simulator or validator. Consistency of temporal and functional aspects may be assessed between designs at different levels of detail. Timing characteristics of a design may also be inferred from validator traces. A variety of examples is used, ranging from a simple gate specification to realistic examples drawn from a standard hardware verification benchmark.
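
    ANISEED models hardware as parallel components exchanging signals that carry a binary value and an optional time-stamp. The fragment below is a loose Python analogue of that signal/component view, meant only to make the idea concrete; the SDL block types and the ANISEED library are not reproduced, and the gate model and delay handling are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    """A signal carrying a binary value and an optional time-stamp."""
    value: int                        # 0 or 1
    timestamp: Optional[float] = None

def timed_and_gate(a: Signal, b: Signal, delay: float = 1.0) -> Signal:
    """A timed two-input AND gate: the output value is the conjunction, and the
    output time-stamp is the later input time plus a propagation delay.
    Untimed inputs (timestamp None) yield an untimed output."""
    value = a.value & b.value
    if a.timestamp is None or b.timestamp is None:
        return Signal(value)
    return Signal(value, max(a.timestamp, b.timestamp) + delay)

# Compose two gates into a small circuit and inspect the output timing.
x, y, z = Signal(1, 0.0), Signal(1, 0.5), Signal(0, 0.2)
out = timed_and_gate(timed_and_gate(x, y), z)
print(out)   # Signal(value=0, timestamp=2.5)
```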

    Towards a Method for Combined Model-based Testing and Analysis


    Evaluating Software Architectures: Development Stability and Evolution

    We survey seminal work on software architecture evaluation methods. We then look at an emerging class of methods that explicitly addresses evaluating software architectures for stability and evolution. We define architectural stability and formulate the problem of evaluating software architectures for stability and evolution. We draw attention to the use of Architecture Description Languages (ADLs) for supporting the evaluation of software architectures in general and for architectural stability in particular.

    AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

    This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) the use of movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6% mAP, underscoring the need for developing new approaches for video understanding.
    Comment: To appear in CVPR 2018. Check dataset page https://research.google.com/ava/ for detail
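
    The abstract implies a particular shape for each label: a person box localized in space and time, one of 80 atomic actions, possibly several labels per person at the same keyframe, and person identity linked across consecutive segments. The record below sketches that shape for illustration only; the field names and layout are assumptions, not the released AVA annotation format.

```python
from dataclasses import dataclass

@dataclass
class AvaStyleAnnotation:
    """One AVA-style action label: a person box at a keyframe with an atomic action.

    Field names are illustrative assumptions, not the released AVA file format.
    """
    video_id: str
    timestamp: float        # keyframe time within the 15-minute clip, in seconds
    box: tuple              # (x1, y1, x2, y2), e.g. normalised to [0, 1]
    action: str             # one of the 80 atomic visual actions
    person_track_id: int    # links the same person across consecutive segments

# A person may carry several labels at the same keyframe.
labels = [
    AvaStyleAnnotation("clip_0001", 902.0, (0.10, 0.20, 0.45, 0.95), "sit", 7),
    AvaStyleAnnotation("clip_0001", 902.0, (0.10, 0.20, 0.45, 0.95), "talk to", 7),
]
```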