
    An Exploratory Study of Field Failures

    Field failures, that is, failures caused by faults that escape the testing phase and manifest in the field, are unavoidable. Improving verification and validation activities before deployment can identify and promptly remove many, but not all, faults, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures to understand to what extent further improving in-house verification and validation activities can reduce the number of failures in the field, and it frames the need for new approaches that operate in the field. We report the results of an analysis of the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures in the identified classes cannot be detected at design time but must be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.

    Experimental Design in Game Testing

    The gaming industry has been on a constant rise over the last few years, and companies invest huge amounts of money in the release of their games. Part of this money is invested in testing the games. Current game testing methods include the manual execution of pre-written test cases. Each test case may or may not reveal a bug; in a game, a bug is said to occur when the game does not behave according to its intended design. The process of writing test cases for games requires standardization, and we believe this standardization can be achieved by applying experimental design to video game testing. In this thesis, we discuss the application of combinatorial testing to games. Combinatorial testing is a method of experimental design that is used to generate test cases and is primarily used for commercial software testing. In addition to discussing the implementation of combinatorial testing techniques in video game testing, we present a method for finding combinations that result in video game bugs.
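
    As an illustration of the technique the thesis builds on, the sketch below greedily constructs a pairwise (2-way) covering test suite for a handful of game configuration factors. The factor names and values are hypothetical, and the greedy construction is a common textbook approach rather than the specific procedure used in the thesis.

        from itertools import combinations, product

        # Hypothetical game configuration factors (illustrative only).
        factors = {
            "platform":   ["PC", "console", "mobile"],
            "difficulty": ["easy", "normal", "hard"],
            "language":   ["en", "fr", "de"],
            "graphics":   ["low", "high"],
        }

        def pairwise_suite(factors):
            """Greedily build a 2-way (pairwise) covering test suite.

            Every pair of values drawn from any two factors appears in at
            least one test, which needs far fewer tests than the full
            Cartesian product of all factors.
            """
            names = list(factors)
            # All factor-value pairs that still need to be covered.
            uncovered = {((a, va), (b, vb))
                         for a, b in combinations(names, 2)
                         for va in factors[a] for vb in factors[b]}
            suite = []
            while uncovered:
                best, best_gain = None, -1
                # Pick the candidate configuration covering the most pairs.
                for values in product(*factors.values()):
                    test = dict(zip(names, values))
                    gain = sum(1 for (a, va), (b, vb) in uncovered
                               if test[a] == va and test[b] == vb)
                    if gain > best_gain:
                        best, best_gain = test, gain
                suite.append(best)
                uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                             if not (best[a] == va and best[b] == vb)}
            return suite

        tests = pairwise_suite(factors)
        print(len(tests), "pairwise tests instead of 54 exhaustive combinations")

    Exhaustive testing of these four factors would require 3 x 3 x 3 x 2 = 54 configurations, whereas a greedy pairwise suite typically covers every value pair in roughly nine to twelve tests.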

    Process of designing robust, dependable, safe and secure software for medical devices: Point of care testing device as a case study

    This paper presents a holistic methodology for the design of medical device software, which encompasses a new way of eliciting requirements, a system design process, a security design guideline, a cloud architecture design, a combinatorial testing process and agile project management. The paper uses point-of-care diagnostics as a case study, where the software and hardware must be robust and reliable to provide an accurate diagnosis of diseases. As software and software-intensive systems become increasingly complex, the impact of failures can lead to significant property damage or damage to the environment. Within the medical diagnostic device software domain, such failures can result in misdiagnosis leading to clinical complications and, in some cases, death. Software faults can arise from the interaction among the software, the hardware, third-party software and the operating environment. Unanticipated environmental changes and latent coding errors lead to operational faults despite the fact that significant effort has usually been expended on the design, verification and validation of the software system. It is becoming increasingly apparent that one needs to adopt different approaches that will guarantee that a complex software system meets all safety, security and reliability requirements, in addition to complying with standards such as IEC 62304. There are many initiatives for developing safety- and security-critical systems, at different development phases and in different contexts, ranging from infrastructure design to device design, and different approaches are implemented to design error-free software for safety-critical systems. By adopting the strategies and processes presented in this paper, one can overcome the challenges in developing error-free software for medical devices (or safety-critical systems).

    GENROUTE: A genetic algorithm printed wire board (PWB) router

    The major effort of this thesis was to develop an electronic circuit routing system that utilizes genetic algorithms to perform Printed Wire Board (PWB) routing rather than brute-force exhaustive search methods. This problem can be classified as an NP-hard optimization problem searching a large solution space. Some desirable characteristics of an electronic routing system are that it should:
    - minimize the number of potential solutions
    - minimize the number of board layers and tap holes
    - minimize trace lengths and the number of jogs
    - minimize trace cross-talk and the board capacitance
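
    The following sketch shows the general shape of such a genetic-algorithm routing loop with a weighted, multi-objective cost. The route encoding, the metric names and the weights are placeholders for illustration; they are not taken from GENROUTE itself.

        import random

        # Hypothetical weights for the board metrics listed above.
        WEIGHTS = {"layers": 10.0, "tap_holes": 5.0, "trace_length": 1.0,
                   "jogs": 0.5, "crosstalk": 2.0}

        def cost(route):
            """Lower is better: weighted sum of the routing metrics."""
            return sum(WEIGHTS[k] * route[k] for k in WEIGHTS)

        def random_route():
            # Stand-in for "route the board with a random net ordering"
            # and then measure the resulting layers, tap holes, lengths, etc.
            return {k: random.uniform(0.0, 100.0) for k in WEIGHTS}

        def crossover(a, b):
            return {k: random.choice((a[k], b[k])) for k in WEIGHTS}

        def mutate(route, rate=0.1):
            return {k: v * random.uniform(0.8, 1.2) if random.random() < rate else v
                    for k, v in route.items()}

        def evolve(pop_size=50, generations=100):
            population = [random_route() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=cost)
                elite = population[:pop_size // 5]        # keep the best 20%
                children = [mutate(crossover(*random.sample(elite, 2)))
                            for _ in range(pop_size - len(elite))]
                population = elite + children
            return min(population, key=cost)

        print("best weighted routing cost:", round(cost(evolve()), 2))

    In a real router the chromosome would typically encode a net ordering or concrete routing choices, with fitness obtained by running the router on the candidate; the selection, crossover and mutation loop keeps the same structure.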

    Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm

    The approach for setting system reliability in the risk-based reliability allocation (RBRA) method is driven solely by the amount of 'total losses' (the sum of reliability investment and risk of failure) associated with a non-repairable system failure. For a system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if large numbers of alternatives, with different levels of reliability and associated cost, are considered for each component. Furthermore, the complexity of this problem is magnified when the relationship between cost and reliability is assumed to be nonlinear and non-monotone. An optimisation algorithm (OA) is therefore developed in this research to demonstrate the solution of such difficult problems. The core design of the OA originates from the fundamental concepts of basic evolutionary algorithms, which are well known for emulating the natural process of evolution in solving complex optimisation problems through computer simulation of the key genetic operations 'reproduction', 'crossover' and 'mutation'. However, the OA has been designed with a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared to the classical genetic model, to ensure rapid and efficient convergence of the search process towards an optimum solution. The vital features of this OA model are the generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and the application of prudently designed genetic operators on the elite chromosomes with extra emphasis on the mutation operation. For each possible combination of alternatives, both the system reliability and the cost of failure are computed by means of the Monte Carlo simulation technique. For validation purposes, the optimisation algorithm is first applied to solve an already published reliability optimisation problem with a constraint on a target level of system reliability, which is required to be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems under the risk-based reliability allocation method. Each system is assumed to have a discrete choice of component data, showing a monotonically increasing cost-reliability relationship among the alternatives, and a fixed cost of failure. While this optimisation process is the main objective of the research study, two variations are also introduced for the purpose of parametric studies. To study the effects of changes in the reliability investment on system reliability and total loss, the first variation uses a different discrete data set exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effects of the risk of failure, the second variation is introduced by means of a different cost-of-failure amount associated with a given non-repairable system failure. The optimisation processes reveal very interesting relationships between system reliability and total loss.
    For instance, it is observed that while maximum reliability is generally associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. The results therefore exhibit various levels of system reliability and total loss, with both values showing strong sensitivity to the selected combination of component alternatives. The first parametric study shows that the second data set (non-monotone) creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probability. In the second parametric study, it can be seen that a reduction in the cost-of-failure amount reduces the risk of failure, which in turn increases the chances of using cheaper components with lower levels of reliability, hence producing lower values of the loss function. The research study concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting various levels of system reliability and the associated total losses for any given system under consideration. This notion can be further extended to selecting the optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the algorithm, and due to its unique model of evolution it appears to conduct a very detailed multi-directional search across the solution space in fewer generations, a very important attribute for solving the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other dimensions due to its interdisciplinary applications.
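
    To make the loss function concrete, the sketch below evaluates one combination of component alternatives for a small series system: system reliability is estimated by Monte Carlo simulation, and the total loss is the reliability investment plus the risk of failure. The component data, the series structure and the cost-of-failure figure are illustrative assumptions rather than values from the thesis, and the exhaustive search at the end is what the evolutionary algorithm replaces when the number of combinations becomes too large.

        import random
        from itertools import product

        # Hypothetical (cost, reliability) alternatives for three components
        # of a non-repairable series system; illustrative values only.
        ALTERNATIVES = [
            [(10, 0.90), (20, 0.95), (40, 0.99)],
            [(15, 0.92), (30, 0.97), (60, 0.995)],
            [(5,  0.85), (12, 0.93), (25, 0.98)],
        ]
        COST_OF_FAILURE = 1000.0   # assumed fixed cost of a system failure

        def system_reliability(choice, trials=20000):
            """Monte Carlo estimate of series-system reliability for one
            combination of alternatives (one index per component)."""
            survived = sum(
                all(random.random() < ALTERNATIVES[i][c][1]
                    for i, c in enumerate(choice))
                for _ in range(trials))
            return survived / trials

        def total_loss(choice):
            """RBRA loss = reliability investment + risk of failure."""
            investment = sum(ALTERNATIVES[i][c][0] for i, c in enumerate(choice))
            risk = (1.0 - system_reliability(choice)) * COST_OF_FAILURE
            return investment + risk

        # Exhaustive evaluation is feasible for 3**3 = 27 combinations;
        # the optimisation algorithm takes over when this space explodes.
        best = min(product(range(3), repeat=3), key=total_loss)
        print("best combination:", best, "total loss:", round(total_loss(best), 1))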

    Safety system design optimisation

    This thesis investigates the efficiency of a design optimisation scheme that is appropriate for systems which require a high likelihood of functioning on demand. Traditional approaches to the design of safety-critical systems follow the preliminary design, analysis, appraisal and redesign stages until what is regarded as an acceptable design is achieved. For safety systems whose failure could result in loss of life, it is imperative that the best use of the available resources is made and that a system which is optimal, not just adequate, is produced. The object of the design optimisation problem is to minimise system unavailability through manipulation of the design variables, such that the limitations placed on them by constraints are not violated. Commonly, with mathematical optimisation problems, there is an explicit objective function that defines how the characteristic to be minimised is related to the variables. For the safety system problem, an explicit objective function cannot be formulated, and as such, system performance is assessed using the fault tree method. Through the use of house events, a single fault tree is constructed to represent the failure causes of each potential design, which avoids the time-consuming task of constructing a fault tree for each design investigated during the optimisation procedure. Once the fault tree has been constructed for the design in question, it is converted to a binary decision diagram (BDD) for analysis. A genetic algorithm is first employed to perform the system optimisation, and the practicality of this approach is demonstrated initially through application to a High-Integrity Protection System (HIPS) and subsequently to a more complex Firewater Deluge System (FDS). An alternative optimisation scheme achieves the final design specification by solving a sequence of optimisation problems, each of which is defined by assuming some form of the objective function and specifying a sub-region of the design space over which this function will be representative of the system unavailability. The thesis concludes with attention to various optimisation techniques that possess features able to address difficulties in the optimisation of safety-critical systems; specifically, consideration is given to the use of a statistically designed experiment and a logical search approach.
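
    As a companion to the description above, the snippet below evaluates the top-event unavailability of a toy fault tree with independent, non-repeated basic events; the gate arithmetic is standard, while the example tree and probabilities are made up. The HIPS and FDS trees in the thesis are far larger and share basic events, which is precisely why they are converted to BDDs rather than evaluated by this simple recursion.

        # Minimal fault-tree evaluator assuming independent, non-repeated
        # basic events.  A node is either a basic-event name (str) or a
        # tuple ("AND"/"OR", child, child, ...).

        def unavailability(node, q):
            """Top-event probability given basic-event unavailabilities q."""
            if isinstance(node, str):
                return q[node]
            gate, *children = node
            probs = [unavailability(child, q) for child in children]
            if gate == "AND":
                result = 1.0
                for p in probs:
                    result *= p              # AND gate: all children must fail
                return result
            result = 1.0
            for p in probs:
                result *= 1.0 - p            # OR gate: any child failing
            return 1.0 - result

        # Illustrative tree: the system fails if both redundant sensors fail,
        # or if the single shutdown valve fails.
        tree = ("OR", ("AND", "sensor_a", "sensor_b"), "valve")
        q = {"sensor_a": 0.01, "sensor_b": 0.01, "valve": 0.001}
        print("system unavailability:", unavailability(tree, q))  # ~0.0011

    In the optimisation loop, such an evaluator (or, in the thesis, the BDD built from the single house-event fault tree) supplies the unavailability figure that the genetic algorithm tries to minimise subject to the design constraints.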