An Exploratory Study of Field Failures
Field failures, that is, failures caused by faults that escape the testing phase and surface only after deployment, are unavoidable. Improving verification and validation activities before deployment can identify and promptly remove many, but not all, faults, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures, to understand to what extent further improving in-house verification and validation activities can reduce the number of failures in the field, and frames the need for new approaches that operate in the field. We report the results of analysing the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures belonging to the identified classes cannot be detected at design time but must instead be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.
Experimental Design in Game Testing
The gaming industry has been on a constant rise over the last few years. Companies invest huge amounts of money in the release of their games, and a part of this money is invested in testing the games. Current game testing methods rely on manual execution of pre-written test cases, each of which may or may not expose a bug. In a game, a bug is said to occur when the game does not behave according to its intended design. The process of writing test cases for games needs standardization, and we believe that this standardization can be achieved by applying experimental design to video game testing. In this thesis, we discuss the application of combinatorial testing to games. Combinatorial testing is a method of experimental design that is used to generate test cases and is primarily used for commercial software testing. In addition to discussing the application of combinatorial testing techniques to video game testing, we present a method for finding the combinations that result in video game bugs.
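As a loose illustration of the technique the thesis builds on, the sketch below greedily constructs a pairwise (2-way) covering test suite; the game-configuration parameters and their values are invented for the example and are not taken from the thesis.

    # Greedy pairwise (2-way) combinatorial test generation -- a sketch.
    # Parameter names and values are invented for illustration.
    from itertools import combinations, product

    params = {
        "resolution": ["720p", "1080p", "4K"],
        "difficulty": ["easy", "normal", "hard"],
        "controller": ["keyboard", "gamepad"],
    }
    names = list(params)

    def pairs_of(case):
        """All 2-way (parameter, value) pairs covered by one test case."""
        return {((names[i], case[i]), (names[j], case[j]))
                for i, j in combinations(range(len(names)), 2)}

    # Every value pair that must be covered at least once.
    remaining = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}

    all_cases = list(product(*(params[n] for n in names)))
    suite = []
    while remaining:
        # Pick the test case covering the most still-uncovered pairs.
        best = max(all_cases, key=lambda c: len(pairs_of(c) & remaining))
        suite.append(dict(zip(names, best)))
        remaining -= pairs_of(best)

    print(f"{len(suite)} pairwise tests cover what "
          f"{len(all_cases)} exhaustive combinations would")

The greedy loop always makes progress because any uncovered pair belongs to at least one full test case, which is why such suites stay far smaller than exhaustive enumeration.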
Process of designing robust, dependable, safe and secure software for medical devices: Point of care testing device as a case study
This paper presents a holistic methodology for the design of medical device software, which encompasses a new way of eliciting requirements, a system design process, a security design guideline, cloud architecture design, a combinatorial testing process, and agile project management. The paper uses point-of-care diagnostics as a case study, where the software and hardware must be robust and reliable to provide accurate diagnosis of diseases. As software and software-intensive systems become increasingly complex, the impact of failures can lead to significant property damage or damage to the environment. Within the medical diagnostic device software domain such failures can result in misdiagnosis, leading to clinical complications and in some cases death. Software faults can arise from the interaction among the software, the hardware, third-party software and the operating environment. Unanticipated environmental changes and latent coding errors lead to operational faults despite the significant effort usually expended in the design, verification and validation of the software system. It is becoming increasingly apparent that one needs to adopt different approaches which will guarantee that a complex software system meets all safety, security, and reliability requirements, in addition to complying with standards such as IEC 62304. Many initiatives have been taken to develop safety- and security-critical systems, at different development phases and in different contexts, ranging from infrastructure design to device design, and different approaches are implemented to design error-free software for safety-critical systems. By adopting the strategies and processes presented in this paper one can overcome the challenges in developing error-free software for medical devices (or safety-critical systems).
Enhancing Usability and Explainability of Data Systems
The recent growth of data science expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people with different skills and backgrounds alike, leading to the democratization of data systems. Furthermore, a proper understanding of data and data-driven systems is necessary for users to trust the function of systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, the users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: these systems are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, aid all sorts of users, including expert ones, in data and system understanding, and provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has the following three goals: (1) enhancing the usability of data systems for nonexperts, (2) enabling data understanding that can assist users in a variety of tasks such as achieving trust in data-driven machine learning, gaining data understanding, and data cleaning, and (3) explaining the causes of unexpected outcomes involving data and data systems.
For enhancing usability, we focus on example-driven user intent discovery. We develop systems based on example-driven interactions in two different settings: querying relational databases and personalized document summarization. Towards data understanding, we develop a new data-profiling primitive that can characterize tuples for which a machine-learned model is likely to produce untrustworthy predictions. We also develop an explanation framework to explain the causes of such untrustworthy predictions. Additionally, this new data-profiling primitive enables interactive data cleaning. Finally, we develop two explanation frameworks, tailored to provide explanations in debugging data system components, including the data itself. These explanation frameworks focus on explaining the root cause of a concurrent application's intermittent failure and exposing issues in the data that cause a data-driven system to malfunction.
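The abstract does not define the profiling primitive itself; as a rough, hypothetical stand-in for the idea, the sketch below flags tuples whose attributes fall outside the value ranges observed in training, a common proxy for inputs on which a learned model may be untrustworthy.

    # A hypothetical stand-in, not the thesis's actual primitive: flag
    # tuples whose attributes fall outside the training data's envelope.
    import numpy as np

    def fit_profile(train: np.ndarray):
        """Record the per-column min/max envelope of the training data."""
        return train.min(axis=0), train.max(axis=0)

    def flag_untrustworthy(profile, batch: np.ndarray) -> np.ndarray:
        """True for rows with any attribute outside the training envelope."""
        lo, hi = profile
        return ((batch < lo) | (batch > hi)).any(axis=1)

    train = np.array([[0.0, 10.0], [1.0, 12.0], [0.5, 11.0]])
    queries = np.array([[0.7, 11.5],    # inside the envelope
                        [5.0, 11.0]])   # first attribute far out of range
    print(flag_untrustworthy(fit_profile(train), queries))  # [False  True]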
Bicarbonate Resensitization of Methicillin-Resistant Staphylococcus aureus to β-Lactam Antibiotics.
Endovascular infections caused by methicillin-resistant Staphylococcus aureus (MRSA) are a major health care concern, especially infective endocarditis (IE). Standard antimicrobial susceptibility testing (AST) defines most MRSA strains as "resistant" to β-lactams, often leading to the use of costly and/or toxic treatment regimens. In this investigation, five prototype MRSA strains, representing the range of genotypes in current clinical circulation, were studied. We identified two distinct MRSA phenotypes upon AST using standard media, with or without sodium bicarbonate (NaHCO3) supplementation: one highly susceptible to the antistaphylococcal β-lactams oxacillin and cefazolin (NaHCO3 responsive) and one resistant to such agents (NaHCO3 nonresponsive). These phenotypes accurately predicted clearance profiles of MRSA from target tissues in experimental MRSA IE treated with each β-lactam. Mechanistically, NaHCO3 reduced the expression of two key genes involved in the MRSA phenotype, mecA and sarA, leading to decreased production of penicillin-binding protein 2a (that mediates methicillin resistance), in NaHCO3-responsive (but not in NaHCO3-nonresponsive) strains. Moreover, both cefazolin and oxacillin synergistically killed NaHCO3-responsive strains in the presence of the host defense antimicrobial peptide (LL-37) in NaHCO3-supplemented media. These findings suggest that AST of MRSA strains in NaHCO3-containing media may potentially identify infections caused by NaHCO3-responsive strains that are appropriate for β-lactam therapy.
GENROUTE: A Genetic Algorithm Printed Wire Board (PWB) Router
The major effort of this thesis was to develop an electronic circuit routing system that utilizes genetic algorithms to perform Printed Wire Board (PWB) routing rather than brute-force exhaustive search methods. This problem can be classified as an NP-hard optimization problem searching a large solution space. Some desirable characteristics of an electronic routing system are that it:
- Minimize the number of potential solutions
- Minimize the number of board layers and tap holes
- Minimize trace lengths and the number of jogs
- Minimize trace cross-talk and the board capacitance
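A hedged sketch of how a GA fitness function might score one candidate routing against the objectives above; the chromosome encoding, field names and weights are our assumptions for illustration, not GENROUTE's actual design.

    # Sketch: score a candidate routing by a weighted sum of objectives.
    # The encoding and weights are assumptions, not GENROUTE's design.
    from dataclasses import dataclass

    @dataclass
    class Routing:
        layers: int              # board layers used
        tap_holes: int           # layer-change holes
        trace_length: float      # total trace length
        jogs: int                # direction changes in traces
        crosstalk: float         # estimated coupling between traces

    def fitness(r: Routing) -> float:
        """Lower is better: weighted sum of the routing objectives."""
        return (5.0 * r.layers + 2.0 * r.tap_holes + 1.0 * r.trace_length
                + 0.5 * r.jogs + 3.0 * r.crosstalk)

    # A GA would select low-fitness routings as parents, then apply
    # crossover and mutation to breed better candidates.
    print(fitness(Routing(layers=2, tap_holes=14, trace_length=87.5,
                          jogs=22, crosstalk=1.8)))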
Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm
The approach for setting system reliability in the risk-based reliability allocation
(RBRA) method is driven solely by the amount of ‘total losses’ (sum of reliability
investment and risk of failure) associated with a non-repairable system failure. For a
system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if large numbers of alternatives, with different levels of reliability and associated cost, are considered for each component. Furthermore, the complexity of the problem is magnified when the relationship between cost and reliability is assumed to be non-linear and non-monotonic. An optimisation algorithm (OA) is therefore developed in this research to solve such difficult problems.
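In symbols (notation ours, inferred from the description above), the quantity driving the RBRA method can be written as:

    % L(x): total loss for a candidate selection x of component alternatives
    % C(x): reliability investment for the chosen alternatives
    % R(x): resulting system reliability, so 1 - R(x) is the failure probability
    % C_f : cost incurred if the non-repairable system fails
    L(x) = C(x) + \bigl(1 - R(x)\bigr)\, C_f

The allocation problem is then to choose the combination of alternatives x that minimises L(x).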
The core design of the OA originates from the fundamental concepts of basic Evolutionary Algorithms, which are well known for emulating the natural process of evolution to solve complex optimisation problems by simulating the key genetic operations of 'reproduction', 'crossover' and 'mutation'. However, the OA has been designed with a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared to the classical genetic model, to ensure rapid and efficient convergence of the search process towards an optimum solution. The vital features of this OA model are the generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and the application of prudently designed genetic operators on the elite chromosomes with extra emphasis on the mutation operation. For each possible combination of alternatives, both system reliability and cost of failure are computed by means of the Monte Carlo simulation technique.
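A minimal sketch of this fitness evaluation follows, under our own simplifying assumptions: a series system of independent components and invented (cost, reliability) alternatives. The thesis's OA would rank such loss values during its elite-focused search rather than enumerate them exhaustively.

    # Monte Carlo estimate of total loss = investment + risk for one
    # selection of component alternatives. Data and series-system
    # structure are illustrative assumptions, not from the thesis.
    import random

    alternatives = [                                   # per component:
        [(10.0, 0.90), (25.0, 0.97), (60.0, 0.995)],   # (cost, reliability)
        [(8.0, 0.85), (20.0, 0.95)],
    ]
    COST_OF_FAILURE = 500.0

    def monte_carlo_loss(choice, trials=100_000):
        """Estimate total loss for one selection of alternatives."""
        costs = [alternatives[i][a][0] for i, a in enumerate(choice)]
        rels = [alternatives[i][a][1] for i, a in enumerate(choice)]
        # Series system: it fails whenever any component fails.
        failures = sum(any(random.random() > p for p in rels)
                       for _ in range(trials))
        return sum(costs) + (failures / trials) * COST_OF_FAILURE

    # Exhaustive here only because the toy search space is tiny.
    best = min(((a, b) for a in range(3) for b in range(2)),
               key=monte_carlo_loss)
    print("best selection:", best)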
For validation purposes, the optimisation algorithm is first applied to solve an already published reliability optimisation problem with a constraint on a target level of system reliability, which is required to be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems in view of the risk-based reliability allocation method. Each system is assumed to have a discrete choice of component data sets, showing a monotonically increasing cost and reliability relationship among the alternatives, and a fixed cost-of-failure amount. While this optimisation process is the main objective of the research study, two variations are also introduced for the purpose of undertaking parametric studies. To study the effects of changes in the reliability investment on system reliability and total loss, the first variation uses a different discrete data set, exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effects of the risk of failure, the second variation is introduced by means of a different cost-of-failure amount associated with a given non-repairable system failure.
The optimisation processes reveal very interesting relationships between system reliability and total loss. For instance, it is observed that while maximum reliability can generally be associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. The results therefore exhibit various levels of system reliability and total loss, with both values showing strong sensitivity towards the selected combination of component alternatives. The first parametric study shows that the second (non-monotonic) data set creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probabilities. In the second parametric study, it can be seen that reducing the cost-of-failure amount reduces the risk of failure, which in turn increases the chances of using cheaper components with lower levels of reliability, hence producing lower values of the loss function.
The research study concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting the various levels of system reliability and associated total losses for any given system under consideration. This notion can be further extended to selecting the optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the problem, and due to its unique model of evolution it appears to conduct a very detailed multi-directional search across the solution space in fewer generations, a very important attribute for solving the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other dimensions owing to its interdisciplinary applications.
Safety system design optimisation
This thesis investigates the efficiency of a design optimisation scheme that is
appropriate for systems which require a high likelihood of functioning on demand.
Traditional approaches to the design of safety critical systems follow the preliminary
design, analysis, appraisal and redesign stages until what is regarded as an acceptable
design is achieved. For safety systems whose failure could result in loss of life, it is imperative that the best use is made of the available resources and that a system which is optimal, not just adequate, is produced.
The object of the design optimisation problem is to minimise system unavailability
through manipulation of the design variables, such that limitations placed on them by
constraints are not violated.
Commonly, with a mathematical optimisation problem, there will be an explicit objective function which defines how the characteristic to be minimised is related to the variables. As regards the safety system problem, an explicit objective function cannot be formulated, and as such, system performance is assessed using the fault tree method. Through the use of house events, a single fault tree is constructed to represent the failure causes of each potential design, which overcomes the time-consuming task of constructing a separate fault tree for every design investigated during the optimisation procedure. Once the fault tree has been constructed for the design in question, it is converted to a binary decision diagram (BDD) for analysis.
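As a toy illustration of the analysis step (our own example, not from the thesis), the sketch below computes the exact top-event probability of a small fault tree by Shannon expansion over the basic events; a BDD makes this same calculation tractable for large trees.

    # Exact top-event probability by Shannon expansion over basic events.
    # The tree and probabilities are invented for illustration.
    prob = {"a": 0.01, "b": 0.02, "c": 0.05}   # basic-event probabilities

    def top(assign):
        """Top event of the toy fault tree: TOP = AND(OR(a, b), c)."""
        return (assign["a"] or assign["b"]) and assign["c"]

    def unavailability(events, assign=None):
        """Branch on each basic event, weighting branches by probability."""
        assign = assign or {}
        if not events:
            return 1.0 if top(assign) else 0.0
        e, rest = events[0], events[1:]
        return (prob[e] * unavailability(rest, {**assign, e: True})
                + (1 - prob[e]) * unavailability(rest, {**assign, e: False}))

    q = unavailability(list(prob))
    print(round(q, 6))   # (1 - 0.99 * 0.98) * 0.05 = 0.00149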
A genetic algorithm is first employed to perform the system optimisation, and the practicality of this approach is demonstrated initially through application to a High-Integrity Protection System (HIPS) and subsequently to a more complex Firewater Deluge System (FDS).
An alternative optimisation scheme achieves the final design specification by solving
a sequence of optimisation problems. Each of these problems is defined by
assuming some form of the objective function and specifying a sub-region of the
design space over which this function will be representative of the system
unavailability.
The thesis concludes with attention to various optimisation techniques which possess features able to address difficulties in the optimisation of safety-critical systems. Specifically, consideration is given to the use of a statistically designed experiment and a logical search approach.