A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software
Context: Today's safety-critical systems are increasingly reliant on software, which has become responsible for most of their critical functions. Many safety analysis techniques have been developed to identify system hazards; FTA and FMEA are the most commonly used by safety analysts. Recently, STPA has been proposed with the goal of coping better with complex, software-intensive systems. Objective: This research aimed to quantitatively compare these three safety analysis techniques with regard to their effectiveness, applicability, understandability, ease of use, and efficiency in identifying software safety requirements at the system level. Method: We
conducted a controlled experiment with 21 master's and bachelor's students applying these three techniques to three safety-critical systems: train door control, anti-lock braking, and traffic collision avoidance. Results: The results
showed no statistically significant difference between these techniques in terms of applicability, understandability, and ease of use, but a significant difference in terms of effectiveness and efficiency.
Conclusion: We conclude that STPA seems to be an effective method for identifying software safety requirements at the system level. In particular, STPA uncovers a wider range of software safety requirements than the traditional techniques FTA and FMEA, but requires more time when carried out by safety analysts with little or no prior experience.
Comment: 10 pages, 1 figure. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE '15), ACM, 2015.
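As a concrete aside, the sketch below shows one plausible way such a three-technique comparison could be analyzed: a Kruskal-Wallis test, which suits small samples like 21 students. It is not the paper's actual analysis, and every score is fabricated for illustration.

```python
# Hypothetical sketch: comparing effectiveness scores (number of software
# safety requirements identified) across three techniques. All numbers
# below are invented, not taken from the study.
from scipy.stats import kruskal

fta_scores = [4, 5, 3, 6, 4, 5, 4]
fmea_scores = [5, 4, 6, 5, 3, 5, 6]
stpa_scores = [7, 8, 6, 9, 7, 8, 7]

# Kruskal-Wallis is a nonparametric test appropriate for small samples
# where normality cannot be assumed.
stat, p = kruskal(fta_scores, fmea_scores, stpa_scores)
print(f"H = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject H0: at least one technique differs in effectiveness.")
```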
Software component testing: a standard and the effectiveness of techniques
This portfolio comprises two projects linked by the theme of software component testing, which is also
often referred to as module or unit testing. One project covers its standardisation, while the other
considers the analysis and evaluation of the application of selected testing techniques to an existing
avionics system. The evaluation is based on empirical data obtained from fault reports relating to the
avionics system.
The standardisation project is based on the development of the BCS/BSI Software Component Testing Standard and the BCS/BSI Glossary of terms used in software testing, which are both included in the
portfolio. The papers included for this project consider both those issues concerned with the adopted
development process and the resolution of technical matters concerning the definition of the testing
techniques and their associated measures.
The test effectiveness project documents a retrospective analysis of an operational avionics system to
determine the relative effectiveness of several software component testing techniques. The methodology
differs from that used in other test effectiveness experiments in that it considers every possible set of inputs that satisfies a testing technique, rather than arbitrarily chosen values from within this set. The three papers present the experimental methodology used, intermediate results from a failure
analysis of the studied system, and the test effectiveness results for ten testing techniques, definitions for
which were taken from the BCS/BSI Software Component Testing Standard.
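A minimal sketch of the exhaustive idea just described, under my own reading of it rather than the thesis's actual procedure: enumerate every test set that satisfies a technique's criterion and measure the fraction that would have exposed a known fault.

```python
# Hedged sketch (not the thesis's algorithm): effectiveness of a testing
# technique = fraction of all satisfying test sets that detect the fault,
# instead of sampling arbitrary inputs from the satisfying sets.
from itertools import combinations

def effectiveness(inputs, satisfies, exposes_fault, set_size):
    """Fraction of satisfying test sets of a given size that detect the fault."""
    satisfying = detecting = 0
    for candidate in combinations(inputs, set_size):
        if satisfies(candidate):  # meets the technique's criterion
            satisfying += 1
            if any(exposes_fault(t) for t in candidate):
                detecting += 1
    return detecting / satisfying if satisfying else 0.0

# Toy example: criterion = "covers both branches of x > 0" on integers.
inputs = list(range(-3, 4))
satisfies = lambda ts: any(x > 0 for x in ts) and any(x <= 0 for x in ts)
exposes_fault = lambda x: x == 0  # hypothetical off-by-one fault at zero
print(effectiveness(inputs, satisfies, exposes_fault, set_size=2))  # 0.25
```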
The creation of the two standards has filled a gap in both the national and international software testing
standards arenas. Their production required an in-depth knowledge of software component testing
techniques, the identification and use of a development process, and the negotiation of the
standardisation process at a national level. The knowledge gained during this process has been
disseminated by the author in the papers included as part of this portfolio. The investigation of test
effectiveness has introduced a new methodology for determining the test effectiveness of software
component testing techniques by means of a retrospective analysis and so provided a new set of data that
can be added to the body of empirical data on software component testing effectiveness.
Software development: A paradigm for the future
A new paradigm for software development that treats software development as an experimental activity is presented. It provides built-in mechanisms for learning how to develop software better and for reusing previous experience in the form of knowledge, processes, and products. It uses models and measures to aid in the tasks of characterization, evaluation, and motivation. An organization scheme is proposed for separating the project-specific focus from the organization's learning and reuse focuses of software development. The implications of this approach for corporations, research, and education are discussed, and some research activities currently underway at the University of Maryland that support this approach are presented.
Design diversity: an update from research on reliability modelling
Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions in applying diversity and in assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can help conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.
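A key prior result in this literature is Eckhardt and Lee's observation that variation in per-input "difficulty" correlates the failures of independently developed versions. The toy computation below is my own illustration of that effect, with made-up difficulty values; it is not a model from the paper.

```python
# Minimal illustration of the Eckhardt-Lee effect: even with independent
# development, the joint failure probability E[theta(X)^2] exceeds the
# naive independence estimate E[theta(X)]^2 whenever difficulty varies.

# theta[x]: probability that an independently developed version fails on
# input x (fabricated values: a few inputs are much harder than the rest).
theta = [0.001] * 90 + [0.2] * 10

mean_theta = sum(theta) / len(theta)
naive = mean_theta ** 2                          # independence assumption
actual = sum(t * t for t in theta) / len(theta)  # E[theta(X)^2]

print(f"independent-failures estimate: {naive:.6f}")
print(f"model's joint failure prob:    {actual:.6f}")  # noticeably larger
```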
Systematic evaluation of design choices for software development tools
Most design and evaluation of software tools is based on the intuition and experience of the designers. Software tool designers consider themselves typical users of the tools they build and tend to evaluate their products subjectively rather than objectively with established usability methods. This subjective approach is inadequate if the quality of software tools is to improve, so the use of more systematic methods is advocated. This paper summarises a sequence of studies that show how user interface design choices for software development tools can be evaluated using established usability engineering techniques. The techniques used included guideline review, predictive modelling, and experimental studies with users.
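As a hedged illustration of the predictive modelling mentioned above, the sketch below applies the Keystroke-Level Model (KLM). The operator times are the classic Card, Moran, and Newell estimates; the task breakdown and the two tool designs are invented for this example.

```python
# KLM sketch: predict expert task time by summing standard operator times.
# Operator values are the classic published estimates; tasks are made up.
KLM = {
    "K": 0.28,  # keystroke (average typist), seconds
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def predict(sequence):
    """Predicted task time for a string of KLM operators, e.g. 'HMPBB'."""
    return sum(KLM[op] for op in sequence)

# Hypothetical comparison of two tool designs for "rename a variable":
menu_based = "HMPBBMPBB" + "M" + "K" * 8  # navigate menus, then type name
shortcut = "MK" + "M" + "K" * 8           # keyboard shortcut, then type
print(f"menu:     {predict(menu_based):.2f} s")
print(f"shortcut: {predict(shortcut):.2f} s")
```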
Evolution of statistical analysis in empirical software engineering research: Current state and steps forward
Software engineering research is evolving and papers are increasingly based
on empirical data from a multitude of sources, using statistical tests to
determine if and to what degree empirical evidence supports their hypotheses.
To investigate the practices and trends of statistical analysis in empirical
software engineering (ESE), this paper presents a review of a large pool of
papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001--2015. Results from both review steps were used to: i) identify and
analyze the predominant practices in ESE (e.g., using t-test or ANOVA), as well
as relevant trends in usage of specific statistical methods (e.g.,
nonparametric tests and effect size measures), and ii) develop a conceptual
model for a statistical analysis workflow with suggestions on how to apply
different statistical methods as well as guidelines to avoid pitfalls. Lastly,
we confirm existing claims that current ESE practices lack a standard to report
practical significance of results. We illustrate how practical significance can
be discussed in terms of both the statistical analysis and in the
practitioner's context.
Comment: journal submission, 34 pages, 8 figures.
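To make the paper's closing point concrete, here is a minimal sketch (my own example, with fabricated data) of pairing a significance test with an effect size and a statement in the practitioner's units.

```python
# Reporting practical significance alongside statistical significance:
# a t-test p-value plus Cohen's d, then the difference in domain units.
from statistics import mean, stdev
from scipy.stats import ttest_ind

# Fabricated defect-detection times (minutes) under two review techniques.
a = [31, 28, 35, 30, 33, 29, 32, 34]
b = [27, 25, 29, 26, 28, 24, 27, 28]

t, p = ttest_ind(a, b)

# Cohen's d with a pooled standard deviation.
n_a, n_b = len(a), len(b)
pooled = (((n_a - 1) * stdev(a) ** 2 + (n_b - 1) * stdev(b) ** 2)
          / (n_a + n_b - 2)) ** 0.5
d = (mean(a) - mean(b)) / pooled

print(f"p = {p:.4f}, Cohen's d = {d:.2f}")
print(f"mean saving: {mean(a) - mean(b):.1f} minutes per review")
```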
Preliminary Survey on Empirical Research Practices in Requirements Engineering
Context and Motivation:
Based on published output in the premium RE conferences and journals, we observe a growing body of research using both quantitative and qualitative research methods to help understand which RE technique, process, or tool works better in which context. Also, more and more empirical studies in RE aim at comparing and evaluating alternative techniques that are solutions to common problems. However, until now there have been few meta-studies of the current state of knowledge about common practices carried out by researchers and practitioners in empirical RE. Also, surprisingly little has been published on how RE researchers perceive the usefulness of these best practices.

Objective:
The goal of our study is to improve our understanding of which empirical practices are performed by researchers and practitioners in RE, for the purpose of understanding the extent to which the research methods of empirical software engineering are adopted in the RE community.

Method:
We surveyed the practices that participants of the REFSQ conference have been using in their empirical research projects. The survey was part of the REFSQ 2012 Empirical Track.

Conclusions:
We found that 15 practices out of a set of 27 are commonly used. The study has two implications: first, it presents a list of practices that are commonly used in the RE community, and a list of practices that still remain to be adopted. Researchers may now make an informed decision on how to extend the practices they use in producing and executing their research designs, so that their designs improve. Second, we found that senior researchers and PhD students do not always converge in their perceptions of the usefulness of research practices. Whether this is acceptable and whether something needs to be done in the face of this finding remain open questions.
LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems
Mutation testing is a well-studied method for increasing the quality of a
test suite. We designed LittleDarwin as a mutation testing framework able to
cope with large and complex Java software systems, while still being easily
extensible with new experimental components. LittleDarwin addresses two existing problems in the domain of mutation testing: providing a tool able to work within an industrial setting, yet one that remains open to extension with cutting-edge techniques from academia. LittleDarwin already offers higher-order mutation, null type mutants, mutant sampling, manual mutation, and mutant subsumption analysis. No tool available today offers all of these features while also working with typical industrial software systems.
Comment: Pre-proceedings of the 7th IPM International Conference on Fundamentals of Software Engineering.
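For readers unfamiliar with the terms, the toy sketch below shows first-order mutant generation and mutant sampling in miniature. It is not LittleDarwin's implementation, which targets Java; the operator and sampling ratio are chosen only for illustration.

```python
# Toy illustration of two ideas from the abstract: generating first-order
# mutants with a single simple operator, then applying mutant sampling to
# bound the cost of running the test suite against every mutant.
import random
import re

def generate_mutants(source):
    """First-order mutants: flip each '+' to '-' (one change per mutant)."""
    mutants = []
    for m in re.finditer(r"\+", source):
        mutants.append(source[:m.start()] + "-" + source[m.end():])
    return mutants

def sample(mutants, ratio, seed=42):
    """Mutant sampling: keep only a random fraction of all mutants."""
    rng = random.Random(seed)
    k = max(1, int(len(mutants) * ratio))
    return rng.sample(mutants, k)

code = "def f(a, b, c): return a + b + c"
all_mutants = generate_mutants(code)
print(len(all_mutants), "mutants;", len(sample(all_mutants, 0.5)), "sampled")
```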
Annotated bibliography of Software Engineering Laboratory literature
An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. All materials have been grouped into eight general subject areas for easy reference: The Software Engineering Laboratory; The Software Engineering Laboratory: Software Development Documents; Software Tools; Software Models; Software Measurement; Technology Evaluations; Ada Technology; and Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.