
    Conformance testing of network simulators based on metamorphic testing technique

    Network simulators, which implement network protocols under simulated conditions, have been widely used to analyze the feasibility of network protocols. Conformance testing of the simulator against the protocol is a very important task in the telecommunications community. However, many current conformance testing methods face the problem of finding a systematic mechanism to verify the test outputs. This paper proposes to use an innovative testing approach, metamorphic testing (MT), to alleviate this problem. We select one ad hoc on-demand distance vector (AODV) simulator for study and test its conformance against the AODV protocol with the MT technique. Through our experiments, we illustrate the applicability of MT to protocol conformance testing, confirm the reliability of the selected AODV simulator, and demonstrate the cost-effectiveness of MT using the mutation analysis technique.
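    The abstract does not spell out the metamorphic relations that were used, so the following is only a minimal sketch of one plausible relation for a route-discovery simulator; `find_route`, the topology encoding, and the determinism assumption are illustrative assumptions, not the paper's actual test harness.

```python
import random

def mr_node_relabelling(topology, src, dst, find_route):
    """Illustrative metamorphic relation (assumed, not taken from the paper):
    renaming the nodes of an otherwise identical topology must not change the
    hop count of the route the simulator discovers.

    topology   -- dict mapping each node to the set of its neighbours
    find_route -- hypothetical wrapper around the AODV simulator under test,
                  returning the list of hops from src to dst
    """
    nodes = list(topology)
    relabel = dict(zip(nodes, random.sample(nodes, len(nodes))))
    follow_up = {relabel[n]: {relabel[m] for m in nbrs}
                 for n, nbrs in topology.items()}

    source_route = find_route(topology, src, dst)
    follow_route = find_route(follow_up, relabel[src], relabel[dst])

    # Assumes the simulator picks routes deterministically by hop count;
    # a length mismatch then indicates a conformance defect.
    assert len(source_route) == len(follow_route), "metamorphic relation violated"
```

    The appeal of MT for the oracle problem described above is that no reference for the "correct" route is needed; only the relation between the two runs is checked.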

    Proceedings of the First NASA Formal Methods Symposium

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000

    Model-based passive testing of safety-critical components

    Passive testing is a complementary technique to active testing. For some types of systems, for example dynamic or adaptive distributed systems that are able to re-configure themselves at runtime in response to changes in their environments, exhaustive active testing before deployment is either theoretically impossible or practically infeasible. For such systems, the additional application of passive testing is recommended. However, as far as we know, a comprehensive theory and taxonomy of methods and techniques for model-based passive testing does not yet exist; from today's perspective it is still very much a topic for future research in this domain. For this reason, the presentation in this chapter is largely example-based, aiming to give the reader some first intuitions about what model-based passive testing is, what kinds of techniques could be used to implement it, and what typical application scenarios look like in the domains of software systems, hardware systems, and embedded software+hardware systems. Note: Section 5 of our chapter, as well as several figures and a number of acknowledgments, which will appear in the above-mentioned book, are omitted in this pre-print version. http://www.crcpress.com/product/isbn/978143981845
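    As a rough illustration of the idea (an assumption of this summary, not an example taken from the chapter), a passive test monitor never injects stimuli; it only observes the events emitted by the running system and checks them against a behavioural model, for instance a small finite-state machine:

```python
# Minimal passive-monitoring sketch: the monitor watches an observed event
# trace and reports the first deviation from a (hypothetical) finite-state
# model of the expected behaviour.

TRANSITIONS = {                      # toy model of a traffic-light controller
    ("red", "go_green"): "green",
    ("green", "go_yellow"): "yellow",
    ("yellow", "go_red"): "red",
}

def monitor(events, state="red"):
    """Consume observed events; return the first violation, or None."""
    for i, event in enumerate(events):
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return f"violation at event {i}: {event!r} not allowed in state {state!r}"
        state = nxt
    return None

print(monitor(["go_green", "go_yellow", "go_green"]))  # flags the third event
```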

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations and provides an environment for experimenting with new designs.
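    The paper's formalization is in TLA+ and follows Crooks et al.; purely as an assumed illustration of the client-centric idea it builds on, the sketch below checks a small trace for serializability by searching for a commit order in which every transaction reads the state produced by its predecessors:

```python
from itertools import permutations

def serializable(txns, init):
    """Simplified client-centric check (an illustration, not the paper's TLA+
    specification): a set of committed transactions is serializable if some
    total order lets every transaction read exactly the state produced by
    applying all earlier transactions to the initial state.

    txns -- dict txn_id -> (reads, writes), each a dict of key -> value
    """
    for order in permutations(txns):
        state, ok = dict(init), True
        for t in order:
            reads, writes = txns[t]
            if any(state.get(k) != v for k, v in reads.items()):
                ok = False
                break
            state.update(writes)
        if ok:
            return True
    return False

# Classic lost-update anomaly: both transactions read x = 0 and write
# different values, so no serial order explains both reads.
txns = {"t1": ({"x": 0}, {"x": 1}), "t2": ({"x": 0}, {"x": 2})}
print(serializable(txns, init={"x": 0}))  # False
```

    The TLC model checker performs this kind of exhaustive search over the specification's state space rather than over Python dictionaries.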

    Applying Process-Oriented Data Science to Dentistry

    Background: Healthcare services now often follow evidence-based principles, so technologies such as process and data mining can help inform their drive towards optimal service delivery. Process mining (PM) can support the monitoring and reporting of this service delivery, measure compliance with guidelines, and assess effectiveness. In this research, PM extracts information about clinical activity recorded in dental electronic health records (EHRs) and converts it into process models, providing stakeholders with unique insights into the dental treatment process. This thesis addresses a gap in prior research by demonstrating how process analytics can enhance our understanding of these processes and of the effects of changes in strategy and policy over time. It also emphasises the importance of a rigorous and documented methodological approach, often missing from the published literature.

    Aim: Apply the emerging technology of PM to an oral health dataset, illustrating the value of the data in the dental repository and demonstrating how it can be presented in a useful and actionable manner to address public health questions. A subsidiary aim is to present the methodology used in this research in a way that provides useful guidance for future applications of dental PM.

    Objectives: Review the dental and healthcare PM literature to establish the state of the art. Evaluate existing PM methods and their applicability to this research's dataset. Extend existing PM methods to achieve the aims of this research. Apply PM methods to the research dataset to address public health questions. Document and present this research's methodology. Apply data mining, PM, and data visualisation to provide insights into the variable pathways leading to different outcomes. Identify the data needed for PM of a dental EHR. Identify challenges to PM of dental EHR data.

    Methods: Extend existing PM methods to facilitate PM research in public health by detailing how data extracts from a dental EHR can be effectively managed, prepared, and used for PM. Use existing dental EHR and PM standards to generate a data reference model for effective PM. Develop a data-quality management framework.

    Results: Comparing the outputs of PM to established care pathways showed that the dataset facilitated the generation of high-level pathways but was less suitable for detailed guidelines. PM was used to identify the care pathway preceding a dental extraction under general anaesthetic and provided unique insights into this pathway and into the effects of policy decisions around school dental screenings.

    Conclusions: The research showed that PM and data mining techniques can be applied to dental EHR data, leading to fresh insights about dental treatment processes. This emerging technology, along with established data mining techniques, should provide valuable insights to policy makers such as principal and chief dental officers to inform care pathways and policy decisions.
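    The abstract does not name the mining algorithms or tools used; as a minimal, assumed illustration of the kind of structure process discovery starts from, the sketch below computes a directly-follows graph from a toy event log (hypothetical case IDs and dental activities):

```python
from collections import Counter

def directly_follows(event_log):
    """event_log: dict case_id -> list of activities in timestamp order.
    Counts every pair (a, b) where b directly follows a within a case --
    the directly-follows graph that many discovery algorithms build on."""
    dfg = Counter()
    for activities in event_log.values():
        dfg.update(zip(activities, activities[1:]))
    return dfg

log = {  # hypothetical cases extracted from a dental EHR
    "case-1": ["exam", "x-ray", "filling", "review"],
    "case-2": ["exam", "x-ray", "extraction", "review"],
    "case-3": ["exam", "filling", "review"],
}
for (a, b), n in directly_follows(log).most_common():
    print(f"{a} -> {b}: {n}")
```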

    INTERACTIVE PROGRAMMING SUPPORT FOR SECURE SOFTWARE DEVELOPMENT

    Software vulnerabilities originating from insecure code are one of the leading causes of security problems people face today. Unfortunately, many software developers have not been adequately trained in writing secure programs that are resistant to attacks violating program confidentiality, integrity, and availability, a style of programming which I refer to as secure programming. Worse, even well-trained developers can still make programming errors, including security ones. This may be because of a lack of understanding of secure programming practices, lapses of attention to security, or both. Much work on software security has focused on detecting software vulnerabilities through automated analysis techniques. While effective, these techniques are neither sufficient nor optimal. For instance, current tool support for secure programming, both from tool vendors and within the research community, focuses on catching security errors after the program is written. Static and dynamic analyzers work in a similar way to early compilers: developers must first run the tool, obtain and analyze the results, diagnose the program, and finally fix the code if necessary. Thus, these tools tend to be used to find vulnerabilities at the end of the development lifecycle. However, their popularity does not guarantee utilization; other business priorities may take precedence. Moreover, using such tools often requires some security expertise and can be costly. What is worse, these approaches exclude programmers from the security loop and therefore do not discourage them from continuing to write insecure code. In this dissertation, I investigate an approach to increase developer awareness and promote good practices of secure programming by interactively reminding programmers of secure programming practices in situ, helping them to either close the secure programming knowledge gap or overcome attention/memory lapses. More specifically, I designed two techniques to help programmers prevent common secure coding errors: interactive code refactoring and interactive code annotation. My thesis is that by providing reminder support in a programming environment, e.g., a modern IDE, one can effectively reduce common security vulnerabilities in software systems. I have implemented interactive code refactoring as a proof-of-concept plugin for Eclipse (32) and Java (57). Extensive evaluation results show that this approach can detect and address common web application vulnerabilities and can serve as an effective aid for programmers in writing secure code. My approach can also effectively complement existing software security best practices and significantly increase developer productivity. I have also implemented interactive code annotation, and conducted user studies to investigate its effectiveness and its impact on developers' programming behaviors and awareness towards writing secure code.
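    The dissertation's plugin targets Java and Eclipse; the Python sketch below is only an assumed, language-neutral illustration of the kind of change an in-IDE reminder could propose for one common web vulnerability (SQL injection), using a hypothetical `users` table:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The pattern an interactive reminder would flag: untrusted input is
    # concatenated into the SQL text, so "' OR '1'='1" changes the query.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The refactoring the tool could propose instead: a parameterized query,
    # so the driver treats the input strictly as data, not as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```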

    Designing Effective Interfaces for Older Users

    The thesis examines the factors that need to be considered in order to undertake successful design of user interfaces for older users. The literature on aging is surveyed for age-related changes that are relevant to interface design. The findings from the literature review are extended and placed in a human context using observational studies of older people and their supporters as these older people attempted to learn about and use computers. These findings are then applied in three case studies of interface design and product development for older users. These case studies are reported and examined in depth. For each case study, results are presented on the acceptance of the final product by older people. These results show that, for each case study, the interfaces used led to products that the older people evaluating them rated as unusually suitable to their needs as older users. The relationship between the case studies and the overall research aims is then examined in a discussion of the research methodology. An evolving approach to developing the interface designs is used across the case studies, one that includes intensive contribution by older people to the shaping of the interface design. This approach is analyzed and presented as an approach to designing user interfaces for older people. It was found that a number of non-standard techniques were useful for maximizing the benefit from the involvement of the older contributors and for ensuring their ethical treatment. These techniques and the rationale behind them are described. Finally, the interface design approach that emerged has strong links to the approach used by the UTOPIA team based at the University of Dundee. The extent to which the thesis provides support for the UTOPIA approach is discussed.

    Trusted product lines

    This thesis describes research undertaken into the application of software product line approaches to the development of high-integrity, embedded real-time software systems that are subject to regulatory approval/certification. The motivation for the research arose from a real business need to reduce the cost and lead time of aerospace software development projects. The thesis hypothesis can be summarised as follows: it is feasible to construct product line models that allow the specification of required behaviour within a reference architecture that can be transformed into an effective product implementation, whilst enabling suitable supporting evidence for certification to be produced. The research concentrates on the following four main areas: 1. Construction of an argument framework in which the application of product line techniques to high-integrity software development can be assessed and critically reviewed. 2. Definition of a product-line reference architecture that can host components containing variation. 3. Design of model transformations that can automatically instantiate products from a set of components hosted within the reference architecture. 4. Identification of verification approaches that may provide evidence that the transformations designed in step 3 preserve properties of interest from the product line model into the product instantiations. Together, these areas form the basis of an approach we term “Trusted Product Lines”. The approach has been evaluated and validated by deployment on a real aerospace project, where it has been used to produce DO-178B/ED-12B Level A applications of over 300 KSLOC in size. The effect of this approach on the software development process has been critically evaluated in this thesis, both quantitatively (in terms of cost and the relative size of process phases) and qualitatively (in terms of software quality). The “Trusted Product Lines” approach, as described in the thesis, shows how product line approaches can be applied to high-integrity software development, and how certification evidence can be created and arguments constructed for products instantiated from the product line. To the best of our knowledge, the development and effective application of product line techniques in a certification environment is novel and unique.
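    The abstract does not describe the transformation tooling itself; as a small assumed sketch of the instantiation idea in area 3, the code below resolves the variation points of a toy reference architecture against a feature selection and rejects selections the variability model forbids:

```python
# Illustrative only: the variation points, variants, and exclusion constraint
# are hypothetical, not taken from the thesis or its aerospace deployment.

REFERENCE_ARCHITECTURE = {            # variation point -> allowed variants
    "sensor_input": {"ARINC429", "CAN"},
    "monitoring":   {"basic", "extended"},
}
EXCLUDES = {("CAN", "extended")}      # cross-variant constraint

def instantiate(selection):
    """selection: variation point -> chosen variant. Returns the product
    configuration, or raises if the selection violates the model."""
    for point, variant in selection.items():
        if variant not in REFERENCE_ARCHITECTURE.get(point, set()):
            raise ValueError(f"{variant!r} is not a variant of {point!r}")
    chosen = set(selection.values())
    for a, b in EXCLUDES:
        if a in chosen and b in chosen:
            raise ValueError(f"variants {a!r} and {b!r} exclude each other")
    return dict(selection)

product = instantiate({"sensor_input": "ARINC429", "monitoring": "extended"})
```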