
    A Formal TLS Handshake Model in LNT

    Testing of network services represents one of the biggest challenges in cyber security. Because new vulnerabilities are detected on a regular basis, more research is needed. These faults have their roots in the software development cycle or in intrinsic leaks in the system specification. Conformance testing checks whether a system behaves according to its specification; here, model-based testing provides several methods for the automated detection of shortcomings. The formal specification of a system's behavior represents the starting point of the testing process. In this paper, a widely used cryptographic protocol is specified and tested for conformance with a test execution framework. The first empirical results are presented and discussed. Comment: In Proceedings MARS/VPT 2018, arXiv:1803.0866
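    The LNT specification itself is not reproduced in this listing; purely as an illustration of the model-based conformance idea described above, the following Python sketch checks an observed message trace against a simplified handshake state machine. The states, message names, and transition table are invented for this example and are not taken from the paper.

```python
# Minimal sketch of model-based conformance checking for a simplified
# TLS-style handshake. The state machine below is an illustrative model,
# not the LNT specification from the paper.

# Allowed transitions: state -> {message: next_state}
HANDSHAKE_MODEL = {
    "START":         {"ClientHello": "HELLO_SENT"},
    "HELLO_SENT":    {"ServerHello": "HELLO_DONE"},
    "HELLO_DONE":    {"Certificate": "CERT_RECEIVED"},
    "CERT_RECEIVED": {"ClientKeyExchange": "KEY_EXCHANGED"},
    "KEY_EXCHANGED": {"Finished": "ESTABLISHED"},
    "ESTABLISHED":   {},
}

def conforms(trace):
    """Return (verdict, detail) for an observed message trace."""
    state = "START"
    for msg in trace:
        allowed = HANDSHAKE_MODEL.get(state, {})
        if msg not in allowed:
            return ("fail", f"unexpected {msg!r} in state {state}")
        state = allowed[msg]
    return ("pass" if state == "ESTABLISHED" else "inconclusive", state)

if __name__ == "__main__":
    good = ["ClientHello", "ServerHello", "Certificate",
            "ClientKeyExchange", "Finished"]
    bad = ["ClientHello", "Finished"]           # skips the key exchange
    print(conforms(good))   # ('pass', 'ESTABLISHED')
    print(conforms(bad))    # ('fail', "unexpected 'Finished' in state HELLO_SENT")
```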

    Testing reactive systems with data : enumerative methods and constraint solving

    Software faults are a well-known phenomenon. In most cases, they are just annoying – if the computer game does not work as expected – or expensive – if once again a space project fails due to some faulty data conversion. In critical systems, however, faults can have life-threatening consequences. It is the task of software quality assurance to avoid such faults, but this is a cumbersome, expensive and error-prone undertaking. For this reason, research has been done over the last years in order to automate this task as much as possible. In this thesis, the connection of constraint solving techniques with formal methods is investigated. The goal is to find faults in the models and implementations of reactive systems with data, such as automatic teller machines (ATMs). In order to do so, we first develop a translation of formal specifications in the process algebra µCRL to a constraint logic program (CLP). In the course of this translation, we pay special attention to the fact that the CLP together with the constraint solver correctly simulates the underlying term rewriting system. One way to validate a system is to test whether it conforms to its specification. In this thesis, we develop a test process to automatically generate and execute test cases for the conformance test of data-oriented systems. The applicability of this process to process-oriented software systems is demonstrated in a case study with an ATM as the system under test. The applicability of the process to document-centered applications is shown by means of the open source web browser Mozilla Firefox. The test process is partially based on the tool TGV, which is an enumerative test case generator. It generates test cases from a system specification and a test purpose. An enumerative approach to the analysis of system specifications always tries to enumerate all possible combinations of values for the system's data elements, i.e. the system's states. The states of the systems regarded here are influenced by data from possibly infinite domains. Hence, the state space of such systems grows beyond all limits, it explodes, and can no longer be handled by enumerative algorithms. For this reason, the state space is limited prior to test case generation by a data abstraction. We use a chaotic abstraction here, with all possible input data from a system's environment being replaced by a single constant. In parallel, we generate a CLP from the system specification. With this CLP, we reintroduce the actual data at the time of test execution. This approach does not only limit the state space of the system, but also leads to a separation of system behavior and data. This allows test cases to be reused by varying only their data parameters. In the developed process, tests are executed by the tool BAiT, which has also been created in the course of this thesis. Some systems do not always show identical behavior under the same circumstances. This phenomenon is known as nondeterminism. There are many reasons for nondeterminism; in most cases, input from a system's environment is asynchronously processed by several components of the system, which do not always terminate in the same order. BAiT works as follows: the tool chooses a trace through the system behavior from the set of traces in the generated test cases. Then, it parameterizes this trace with data and tries to execute it. When the nondeterministic system digresses from the selected trace, BAiT tries to adapt it appropriately. If this can be done according to the system specification, the test can be executed further and a possibly false positive test verdict has been successfully avoided. The test of an implementation significantly reduces the number of faults in a system. However, the system is only tested against its specification. In many cases, this specification already does not completely fulfill a customer's expectations. In order to reduce the risk of faults further, the models of the system themselves also have to be verified. This happens during model checking prior to testing the software. Again, the explosion of the state space of the system must be avoided by a suitable abstraction of the models. A consequence of model abstractions in the context of model checking is the occurrence of so-called false negatives: counterexamples that point out a fault in the abstracted model but do not exist in the concrete one. Usually, these false negatives are ignored. In this thesis, we also develop a methodology to reuse the knowledge of potential faults by abstracting the counterexamples further and deriving a violation pattern from them. Afterwards, we search for a concrete counterexample utilizing a constraint solver.
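    The separation of system behavior and data described above, where a symbolic test trace is instantiated with concrete values at execution time, can be illustrated with a small constraint-solving sketch. The example below uses the z3 solver as a stand-in (the thesis translates µCRL specifications to a CLP instead), and the ATM trace, constraints, and parameter names are invented for illustration.

```python
# Illustrative sketch (not the thesis's muCRL-to-CLP translation):
# a symbolic test trace whose data parameters are instantiated at
# execution time by a constraint solver (here: z3).
from z3 import Int, Solver, sat

# Symbolic data parameters of an ATM withdrawal trace.
balance   = Int("balance")
requested = Int("requested")
dispensed = Int("dispensed")

s = Solver()
s.add(balance == 500)                 # precondition taken from the test purpose
s.add(requested > 0, requested <= balance)
s.add(requested % 20 == 0)            # machine only holds 20-unit notes
s.add(dispensed == requested)         # expected output of the trace

if s.check() == sat:
    m = s.model()
    concrete_trace = [
        ("insertCard", {}),
        ("withdraw", {"amount": m[requested].as_long()}),
        ("dispense", {"amount": m[dispensed].as_long()}),
    ]
    print(concrete_trace)   # trace ready to be executed against the SUT
else:
    print("test purpose not satisfiable with the given constraints")
```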

    Statistical fluctuations in pedestrian evacuation times and the effect of social contagion

    Mathematical models of pedestrian evacuation and the associated simulation software have become essential tools for the assessment of the safety of public facilities and buildings. While a variety of models are now available, their calibration and testing against empirical data are generally restricted to global, averaged quantities; the statistics compiled from the time series of individual escapes ("microscopic" statistics) measured in recent experiments are thus overlooked. In the same spirit, much research has primarily focused on the average global evacuation time, whereas the whole distribution of evacuation times over some set of realizations should matter. In the present paper we propose and discuss the validity of a simple relation between this distribution and the "microscopic" statistics, which is theoretically valid in the absence of correlations. To this end, we develop a minimal cellular automaton, with novel features that afford a semi-quantitative reproduction of the experimental "microscopic" statistics. We then introduce a process of social contagion of impatient behavior in the model and show that the simple relation under test may dramatically fail at high contagion strengths, the latter being responsible for the emergence of strong correlations in the system. We conclude with comments on the potential practical relevance for safety science of calculations based on "microscopic" statistics.
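    The relation under test connects the distribution of the global evacuation time to the "microscopic" statistics of time gaps between successive escapes, and it holds when those gaps are uncorrelated. The following Monte Carlo sketch, with made-up gap statistics rather than the paper's cellular automaton or experimental data, illustrates how a contagion-like common factor that correlates the gaps broadens the distribution of total evacuation times.

```python
# Monte Carlo sketch of the relation between "microscopic" escape-time
# gaps and the distribution of the global evacuation time. The gap
# statistics below are illustrative, not the paper's experimental data.
import numpy as np

rng = np.random.default_rng(0)
n_pedestrians = 50
n_runs = 10_000
mean_gap, cv = 1.0, 0.5          # mean and coeff. of variation of a gap (s)

def evacuation_times(correlated):
    shape = 1.0 / cv**2          # gamma-distributed gaps with the given CV
    scale = mean_gap / shape
    gaps = rng.gamma(shape, scale, size=(n_runs, n_pedestrians))
    if correlated:
        # Contagion-like common factor per run: impatience speeds up
        # (or slows down) *all* gaps of a run together.
        factor = rng.lognormal(mean=0.0, sigma=0.3, size=(n_runs, 1))
        gaps *= factor
    return gaps.sum(axis=1)       # total evacuation time per run

for label, corr in [("independent gaps", False), ("correlated gaps", True)]:
    t = evacuation_times(corr)
    print(f"{label}: mean={t.mean():.1f}s  std={t.std():.1f}s")
# With independent gaps the spread of the total follows directly from the
# gap statistics; the common factor inflates it well beyond that prediction.
```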

    Distributed Load Testing by Modeling and Simulating User Behavior

    Modern human-machine systems such as microservices rely upon agile engineering practices which require changes to be tested and released more frequently than classically engineered systems. A critical step in the testing of such systems is the generation of realistic workloads, or load testing. Generated workload emulates the expected behaviors of users and machines within a system under test in order to find potentially unknown failure states. Typical testing tools rely on static testing artifacts to generate realistic workload conditions. Such artifacts can be cumbersome and costly to maintain, and even model-based alternatives can prevent adaptation to changes in a system or its usage. Lack of adaptation can prevent the integration of load testing into system quality assurance, leading to an incomplete evaluation of system quality. The goal of this research is to improve the state of software engineering by addressing open challenges in load testing of human-machine systems with a novel process that a) models and classifies user behavior from streaming and aggregated log data, b) adapts to changes in system and user behavior, and c) generates distributed workload by realistically simulating user behavior. This research contributes a Learning, Online, Distributed Engine for Simulation and Testing based on the Operational Norms of Entities within a system (LODESTONE): a novel process for distributed load testing by modeling and simulating user behavior. We specify LODESTONE within the context of a human-machine system to illustrate distributed adaptation and execution in load testing processes. LODESTONE uses log data to generate and update user behavior models, cluster them into similar behavior profiles, and instantiate distributed workload on software systems. We analyze user behavioral data having differing characteristics to replicate human-machine interactions in a modern microservice environment. We discuss tools, algorithms, software design, and implementation in two different computational environments: client-server and cloud-based microservices. We illustrate the advantages of LODESTONE through a qualitative comparison of key feature parameters and through experimentation based on shared data and models. LODESTONE continuously adapts to changes in the system to be tested, which allows for the integration of load testing into the quality assurance process for cloud-based microservices.
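    As a rough illustration of the modeling-and-simulation loop described above (and not LODESTONE's actual implementation), the sketch below learns per-user Markov transition frequencies from log sessions, clusters users into behavior profiles, and generates synthetic sessions from a profile. The action names, log data, and clustering choices are invented.

```python
# Hedged sketch (not LODESTONE itself): learn per-user transition
# frequencies from log sessions, cluster them into behavior profiles,
# and simulate new sessions from a profile's Markov chain.
import numpy as np
from sklearn.cluster import KMeans

ACTIONS = ["login", "browse", "add_to_cart", "checkout", "logout"]
IDX = {a: i for i, a in enumerate(ACTIONS)}

def transition_matrix(sessions):
    """Row-normalized transition counts over ACTIONS for one user."""
    m = np.full((len(ACTIONS), len(ACTIONS)), 1e-6)   # smoothing
    for s in sessions:
        for a, b in zip(s, s[1:]):
            m[IDX[a], IDX[b]] += 1
    return m / m.sum(axis=1, keepdims=True)

def simulate(matrix, start="login", steps=6, rng=np.random.default_rng()):
    """Generate one synthetic session by walking the Markov chain."""
    seq, state = [start], IDX[start]
    for _ in range(steps):
        state = rng.choice(len(ACTIONS), p=matrix[state])
        seq.append(ACTIONS[state])
    return seq

# Toy log data: two users with different habits (illustrative only).
logs = {
    "u1": [["login", "browse", "browse", "logout"]] * 5,
    "u2": [["login", "browse", "add_to_cart", "checkout", "logout"]] * 5,
}
profiles = {u: transition_matrix(s) for u, s in logs.items()}
X = np.stack([m.ravel() for m in profiles.values()])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(dict(zip(profiles, labels)))   # user -> behavior profile
print(simulate(profiles["u2"]))      # one synthetic workload session
```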

    A plan classifier based on Chi-square distribution tests

    To make good decisions in a social context, humans often need to recognize the plan underlying the behavior of others, and make predictions based on this recognition. This process, when carried out by software agents or robots, is known as plan recognition, or agent modeling. Most existing techniques for plan recognition assume the availability of carefully hand-crafted plan libraries, which encode the a priori known behavioral repertoire of the observed agents; during run-time, plan recognition algorithms match the observed behavior of the agents against the plan libraries, and matches are reported as hypotheses. Unfortunately, techniques for automatically acquiring plan libraries from observations, e.g., by learning or data mining, are only beginning to emerge. We present an approach for automatically creating a model of an agent's behavior based on the observation and analysis of its atomic behaviors. In this approach, observations of an agent's behavior are transformed into a sequence of atomic behaviors (events). This stream is analyzed in order to obtain the corresponding behavior model, represented by a distribution of relevant events. Once the model has been created, the proposed approach provides a method using a statistical test for classifying an observed behavior. Therefore, in this research, the problem of behavior classification is examined as a problem of learning to characterize the behavior of an agent in terms of sequences of atomic behaviors. The experimental results of this paper show that a system based on our approach can efficiently recognize different behaviors in different domains, in particular UNIX command-line data and RoboCup soccer simulation. This work has been partially supported by the Spanish Government under project TRA2007-67374-C02-0.
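    The core classification step can be sketched as follows: each known behavior is represented by a distribution over atomic events, and an observed event sequence is assigned to the model that a chi-square goodness-of-fit test rejects least strongly. The event names, model frequencies, and decision rule below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of the core classification idea: compare an observed event
# distribution against each learned behavior model with a chi-square
# goodness-of-fit test. Event names and frequencies are illustrative.
from collections import Counter
from scipy.stats import chisquare

EVENTS = ["ls", "cd", "vim", "gcc", "make"]

# Learned behavior models: relative event frequencies per profile.
MODELS = {
    "developer": [0.10, 0.10, 0.30, 0.25, 0.25],
    "sysadmin":  [0.40, 0.35, 0.15, 0.05, 0.05],
}

def classify(observed_events):
    counts = Counter(observed_events)
    f_obs = [counts.get(e, 0) for e in EVENTS]
    n = sum(f_obs)
    best = None
    for name, freqs in MODELS.items():
        f_exp = [n * p for p in freqs]          # expected counts under the model
        stat, pvalue = chisquare(f_obs, f_exp)
        if best is None or pvalue > best[1]:
            best = (name, pvalue)
    return best   # model with the highest p-value (least evidence of mismatch)

trace = ["vim", "gcc", "make", "make", "vim", "gcc", "ls", "vim", "gcc", "make"]
print(classify(trace))   # e.g. ('developer', <p-value>)
```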

    IMPROVING TRACEABILITY RECOVERY TECHNIQUES THROUGH THE STUDY OF TRACING METHODS AND ANALYST BEHAVIOR

    Developing complex software systems often involves multiple stakeholder interactions, coupled with frequent requirements changes, while operating under time constraints and budget pressures. Such conditions can lead to hidden problems, which manifest when software modifications lead to unexpected software component interactions that can cause catastrophic or fatal situations. A critical step in ensuring the success of software systems is to verify that all requirements can be traced to the design, source code, test cases, and any other software artifacts generated during the software development process. The focus of this research is to improve the trace matrix (TM) generation process and to study how human analysts create the final trace matrix using traceability information generated from automated methods. This dissertation presents new results in the automated generation of traceability matrices and in the analysis of analyst actions during a tracing task. The key contributions of this dissertation are as follows: (1) Development of a Proximity-based Vector Space Model for automated generation of TMs. (2) Use of Mean Average Precision (a ranked retrieval-based measure) and the 21-point interpolated precision-recall graph (a set-based measure) for statistical evaluation of automated methods. (3) Logging and visualization of analyst actions during a tracing task. (4) Study of human analyst tracing behavior, with consideration of decisions made during the tracing task and analyst tracing strategies. (5) Use of potential recall, sensitivity, and effort distribution as analyst performance measures. Results show that using both a ranked retrieval-based and a set-based measure with statistical rigor provides a framework for evaluating automated methods. Studying the human analyst provides insight into how analysts use traceability information to create the final trace matrix and identifies areas for improvement in the traceability process. Analyst performance measures can be used to identify analysts who perform the tracing task well and use effective tracing strategies to generate a high-quality final trace matrix.
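    As a baseline for the automated TM generation step, the sketch below ranks candidate trace links with a plain TF-IDF vector space model and scores the ranking with Mean Average Precision. This is the vanilla VSM rather than the dissertation's Proximity-based variant, and the requirements, artifacts, and true links are invented.

```python
# Baseline sketch: a plain TF-IDF vector space model for candidate
# trace-link ranking plus a Mean Average Precision score. This is the
# vanilla VSM, not the dissertation's proximity-based variant.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = {
    "R1": "the system shall encrypt user credentials before storage",
    "R2": "the system shall log every failed login attempt",
}
code_docs = {
    "crypto.py": "encrypt hash credentials password storage aes",
    "audit.py":  "log failed login attempt audit trail",
    "ui.py":     "render login form button css layout",
}
true_links = {"R1": {"crypto.py"}, "R2": {"audit.py"}}   # answer set (invented)

vec = TfidfVectorizer()
tfidf = vec.fit_transform(list(requirements.values()) + list(code_docs.values()))
req_vecs, code_vecs = tfidf[: len(requirements)], tfidf[len(requirements):]
sims = cosine_similarity(req_vecs, code_vecs)

def average_precision(ranked, relevant):
    """AP of a ranked list against a set of relevant documents."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / max(len(relevant), 1)

names = list(code_docs)
aps = []
for r, row in zip(requirements, sims):
    ranked = [names[i] for i in row.argsort()[::-1]]   # descending similarity
    aps.append(average_precision(ranked, true_links[r]))
print("MAP =", sum(aps) / len(aps))
```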

    Stochastic Modeling of Individual Resource Consumption during the Programming Phase of Software Development

    In the past several years there has been a considerable amount of research effort devoted to developing models of individual resource consumption during the software development process. Since many conditions affect individual resource consumption during the software development process, including several which are difficult if not impossible to quantify, it is our contention that a stochastic model is more appropriate than a deterministic model. In order to test our hypothesis we conducted an experiment based upon several student programming assignments. Data from this experiment are used to demonstrate that the two-parameter Log-Normal distribution is appropriate for describing the probabilistic behavior of the random variable 'resource consumption'. In addition, we present a theoretical argument for the applicability of the Log-Normal distribution based on the concept of a proportional effects model for the growth of a program.
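    The distribution-fitting step can be sketched with scipy: fit a two-parameter Log-Normal (location fixed at zero) to per-programmer effort data and check the fit with a Kolmogorov-Smirnov test. The data below are synthetic, not the study's measurements.

```python
# Sketch of the distribution-fitting step: fit a two-parameter
# log-normal to per-programmer effort data and check the fit with a
# Kolmogorov-Smirnov test. The data here are synthetic, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effort_hours = rng.lognormal(mean=2.5, sigma=0.6, size=80)   # synthetic sample

# Two-parameter log-normal: fix the location at 0 so only the shape
# (sigma) and scale (exp(mu)) are estimated.
shape, loc, scale = stats.lognorm.fit(effort_hours, floc=0)
mu, sigma = np.log(scale), shape
print(f"estimated mu={mu:.2f}, sigma={sigma:.2f}")

# Goodness of fit: KS test against the fitted distribution (note that
# estimating the parameters from the same sample makes this test lenient).
stat, pvalue = stats.kstest(effort_hours, "lognorm", args=(shape, loc, scale))
print(f"KS statistic={stat:.3f}, p-value={pvalue:.3f}")
```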

    Time-dependent deformation and associated failure of roof in underground mines

    In underground coal mines, roof falls are a major contributor to injuries and fatalities. Studies have related the occurrence of roof falls to weak immediate roof, high horizontal stress, entry orientation, etc. An often-neglected factor in studies on this topic has been the influence of the time-dependent behavior of the roof rock on roof falls. Time-dependent roof failure involves both intact and failed rock. Numerical simulation techniques are available that include time-dependent behavior; however, they lack constitutive models that consider both the intact and the failed behavior of rock. In addition, input properties for the creep models only include intact rock properties determined through constant-load (time-dependent) tests; for failed rock, standard creep tests cannot be performed on the specimens. This thesis aims to understand this behavior through the following steps: (1) develop a new laboratory test method; (2) develop a new constitutive model that incorporates the pre- and post-failure behavior; (3) implement the constitutive model in 3DEC; and (4) analyze a hypothetical mine geometry using the new constitutive model. First, this study develops a new relaxation equation based on the Burgers model. Relaxation tests studied rock specimens in both the intact and the failed stages. The results from the tests showed a significant difference in the viscous properties between intact and failed rocks, and the relaxation tests were used to determine the viscous parameters of the new relaxation equation. Next, this study constructed numerical models of laboratory-sized specimens in the 3DEC software. The models incorporated the new relaxation equation, and model runs showed that stress relaxation is significantly present in the post-failure region rather than in the pre-failure region. Further, a single-entry mine model in 3DEC analyzed the influence of strength degradation and of the variation in the viscous property on the time-dependent failure process; variation in the viscous parameter showed significant effects on the failure process in the rock mass. A series of unconfined relaxation tests was performed on sandstone specimens and coal measure rocks. For sandstone, specimens were cored from sandstone blocks; for coal measure rocks, which include shale, sandy shale, and shaly limestone, cores were obtained from mine sites. The test results show that the relaxation behaviors of intact and failed rock specimens are different: the stress relaxation curves in the pre-failure region showed a typical, smooth relaxation behavior, while the stress relaxation demonstrates stepped behavior in the post-failure region. For coal measure rocks, the variation in the time-dependent properties of failed rock was insignificant. A viscoelastic strain-softening constitutive model was developed by incorporating both time-dependent and strain-softening behavior. The model was included in the 3DEC software as a user-defined model, and parametric model runs based on the time-dependent laboratory tests verified the accuracy of the proposed model. Finally, the user-defined model was applied to hypothetical conditions to investigate the influence of various factors, including directional horizontal stress, step-wise excavation, and bedding planes, on the time-dependent deformation and failure of massive and bedded mine roof. This research achieved a comprehensive understanding of the time-dependent development of roof falls.
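    At constant strain, a Burgers-type model relaxes as a sum of two decaying exponentials, so the viscous time constants can be recovered by fitting such a curve to relaxation-test data. The sketch below does this with scipy's curve_fit on synthetic data; the functional form, parameter values, and units are illustrative and are not the thesis's new relaxation equation.

```python
# Sketch of fitting a Burgers-type relaxation curve to relaxation-test
# data. At constant strain a Burgers model relaxes as a sum of two
# decaying exponentials; the parameters and "data" below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, s1, tau1, s2, tau2):
    """Two-exponential stress relaxation (MPa), consistent with a Burgers-type model."""
    return s1 * np.exp(-t / tau1) + s2 * np.exp(-t / tau2)

# Synthetic relaxation test: stress sampled over 24 h with measurement noise.
t = np.linspace(0, 24 * 3600, 200)                     # seconds
true = (12.0, 600.0, 8.0, 2.0e4)                       # s1, tau1, s2, tau2
rng = np.random.default_rng(2)
stress = relaxation(t, *true) + rng.normal(0, 0.1, t.size)

p0 = (10.0, 1.0e3, 10.0, 1.0e4)                        # initial guess
popt, _ = curve_fit(relaxation, t, stress, p0=p0)
print("fitted s1, tau1, s2, tau2:", np.round(popt, 1))
# tau1 and tau2 play the role of the viscous time constants that differ
# between intact and failed specimens in the thesis's relaxation tests.
```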

    Doctor of Philosophy

    This dissertation consists of three essays that investigate the general decision process of users' choices regarding information technology (IT) applications and products, focusing on placebo effects of software pricing, incorporating user perceptions and product attributes in modeling software product choices, and firms' practices of green IT. Taking a customer-centric approach to users' assessments of IT applications and products, I address the evaluative responses of individual consumers and organizations to market information including price, product attributes, and key contextual factors. The objective of the first essay is to understand the placebo-like effects invoked by the price of software products on consumers' satisfaction, problem-solving performance, and purchasing behavior. Built upon response expectancy theory, a research framework and a series of hypotheses are proposed. I test the hypotheses with a controlled experiment, and the data support most of the hypotheses. Specifically, a user's outcome expectancy, as activated by software price, affects not only his/her satisfaction but also the problem-solving performance using the software product. Satisfaction and actual problem-solving performance in turn affect the user's willingness-to-pay. In order to better explain and predict consumers' preferential choices of software products, I propose in the second essay a model that incorporates product attributes and consumer perceptions to estimate users' software product selection. The influences of product attributes on users' perceptions of product characteristics are also examined. With a choice-based conjoint study and the collection of additional data on users' perceived product characteristics, I demonstrate that the proposed model can better explain and predict users' software choices than a model with product attributes only, or with user perceptions only, in terms of the in-sample fit and the holdout prediction hit rate at the individual and aggregate levels. The third essay examines important drivers of firms' green IT practices. I propose a framework premised on social contracts theory and institutional theory, and then use it to develop a model that explains firms' decisions. I test the model and the associated hypotheses with survey data collected from 304 major firms in Taiwan. Overall, the results show that global environmental awareness, industry norms, and key stakeholders' attitudes affect a firm's green IT practices directly. Competitors seem to play a limited role, as suggested by an insignificant impact on the firm's green IT practices.
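    For the second essay's choice modeling, a minimal illustration is a multinomial logit in which each alternative's utility is a linear function of product attributes and a perception rating, estimated by maximum likelihood. The attributes, simulated choices, and coefficients below are invented and do not reproduce the essay's conjoint design.

```python
# Minimal multinomial-logit sketch for software product choice, with
# utilities driven by product attributes plus a perception rating.
# Attributes, data, and coefficients are illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_choices, n_alts = 300, 3
# Columns: price (tens of dollars), has_support (0/1), perceived_quality (1-7).
X = np.stack([
    np.column_stack([rng.uniform(2, 10, n_alts),
                     rng.integers(0, 2, n_alts),
                     rng.uniform(1, 7, n_alts)])
    for _ in range(n_choices)
])                                  # shape (n_choices, n_alts, 3)

true_beta = np.array([-0.4, 0.8, 0.5])
util = X @ true_beta + rng.gumbel(size=(n_choices, n_alts))
chosen = util.argmax(axis=1)        # simulated choices

def neg_log_likelihood(beta):
    v = X @ beta
    v -= v.max(axis=1, keepdims=True)            # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_choices), chosen].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(3), method="BFGS")
print("estimated coefficients:", np.round(fit.x, 2))   # close to true_beta
```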

    An approach to verification and validation of a reliable multicasting protocol: Extended Abstract

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and the implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test differed between the model and the implementation, then the differences helped identify inconsistencies between the two. In either case, the dialogue between both teams drove the co-evolution of the model and the implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team focuses on analysis of off-nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
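    The model-versus-implementation comparison described above can be sketched as running the same test case through a state-table model and an implementation stub and flagging the first divergence. The states, events, and the deliberately seeded bug below are invented for illustration and are not the actual RMP behavior or SCR tables.

```python
# Sketch of the model-vs-implementation comparison described above:
# run one test case against a state-table model and an implementation
# stub and flag any divergence. States and events are illustrative, not RMP.

# SCR-style state table for the model: (state, event) -> next state.
MODEL_TABLE = {
    ("IDLE", "join"):    "MEMBER",
    ("MEMBER", "send"):  "MEMBER",
    ("MEMBER", "leave"): "IDLE",
}

def run_model(events):
    """Execute the test case on the state-table model."""
    state, visited = "IDLE", ["IDLE"]
    for e in events:
        state = MODEL_TABLE.get((state, e), "ERROR")
        visited.append(state)
    return visited

class RmpStub:
    """Stand-in for the real implementation; deliberately buggy on 'leave'."""
    def __init__(self):
        self.state = "IDLE"
    def step(self, event):
        if event == "join":
            self.state = "MEMBER"
        elif event == "send" and self.state == "MEMBER":
            pass                        # stays MEMBER
        elif event == "leave":
            self.state = "MEMBER"       # bug: should return to IDLE
        else:
            self.state = "ERROR"
        return self.state

def compare(events):
    """Report the first state where model and implementation disagree."""
    impl = RmpStub()
    impl_states = ["IDLE"] + [impl.step(e) for e in events]
    model_states = run_model(events)
    for i, (m, s) in enumerate(zip(model_states, impl_states)):
        if m != s:
            return f"divergence after step {i}: model={m}, impl={s}"
    return "model and implementation agree"

print(compare(["join", "send", "leave"]))   # reports the divergence on 'leave'
```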