Circular Ocean: eco-innovation guide for start-ups, entrepreneurs & small and medium-sized enterprises (SMEs)
A report completed within the Circular Ocean project
Good RE artifacts? I know it when I use it!
The definition of high-quality or good RE artifacts is often provided through normative references, such as quality standards or textbooks (e.g., ISO/IEC/IEEE 29148). We see various problems with such normative references.
Quality standards are incomplete. Several quality standards describe quality through a set of abstract criteria. When analyzing the characteristics in detail, we see that there are two different types of criteria: Some criteria, such as ambiguity, consistency, completeness, and singularity, are factors that describe properties of the RE artifact itself. In contrast, feasibility, traceability, and verifiability state that activities can be performed with the artifact. This is a small yet important difference: While the former can be assessed by analyzing just the artifact by itself, the latter describe a relationship between the artifact and the context of its usage. Yet this usage context is incompletely represented in the quality standards: For example, why is it important that requirements can be implemented (feasible in the terminology of ISO 29148) and verified, while other activities, such as maintenance, are not part of the quality model? Therefore, we argue that normative standards do not take all activities into account systematically and are thus missing relevant quality factors.
Quality standards are only implicitly context-dependent. One could go even further and ask about the value of some artifact-based properties such as singularity. A normative approach does not provide such rationales. This is different for activity-based properties, such as verifiability, since these properties are defined through their usage: If we need to verify the requirements, properties of the artifact that increase verifiability are important. If we do not need to verify this requirement, e.g., because we use the artifacts only for task management in an agile process, these properties might not be relevant. This example shows that, in contrast to the normative definition of quality in RE standards, RE quality usually depends on the context.
Quality standards lack precise reasoning. For most of the aforementioned criteria, the standards remain abstract and vague. For some criteria, such as ambiguity, the standards provide detailed lists of factors to avoid. However, these factors have an imprecise relation to the abstract criteria mentioned above, and, consequently, the harm that they might potentially cause remains unclear.
Requirements Quality Is Quality in Use
The quality of requirements engineering artifacts is widely considered a success factor for software projects. Currently, the definition of high-quality or good RE artifacts is often provided through normative references, such as quality standards, textbooks, or generic guidelines. We see various problems with such normative references: (1) it is hard to ensure that the contained rules are complete, (2) the contained rules are not context-dependent, and (3) the standards lack precise reasoning about why certain criteria constitute bad quality. To change this understanding, we postulate that creating an RE artifact is rarely an end in itself, but just a means to understand and reach the project's goals. Following this line of thought, the purpose of an RE artifact is to support the stakeholders in whatever activities they are performing in the project. This purpose must define high-quality RE artifacts. To express this view, we contribute an activity-based RE quality meta model and show applications of this paradigm. Lastly, we describe the impacts of this view on research and practice.
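To illustrate the activity-based view, here is a minimal sketch in Python; the class and attribute names are hypothetical illustrations, not the paper's formal meta model:

from dataclasses import dataclass

@dataclass
class ArtifactProperty:
    name: str          # e.g., "passive voice" or "ambiguous adverb"

@dataclass
class Activity:
    name: str          # e.g., "implement", "verify", "maintain"

@dataclass
class QualityFactor:
    prop: ArtifactProperty
    activity: Activity
    impact: str        # "positive", "negative", or "neutral"

# Under this view, a property is not bad per se; it is bad *for* the
# activities it demonstrably hinders.
factor = QualityFactor(ArtifactProperty("passive voice"),
                       Activity("domain modelling"),
                       impact="negative")
print(f"{factor.prop.name} -> {factor.activity.name}: {factor.impact}")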
An approach for creating sentence patterns for quality requirements
Requirements are usually categorized into functional requirements (FRs) and quality requirements (QRs). FRs describe "things the product must do" while QRs describe "qualities the product must have". Besides the definition, classification, and representation problems identified by Glinz, there are two further problems with current definitions of quality requirements: (i) the definitions are imprecise and thus difficult to understand and apply, and (ii) the definitions provide no guidance or support for their application in a given organizational context. To tackle these two problems, we propose an approach that, given a quality attribute (e.g., performance) as input, provides a means to specify quality requirements by sentence patterns regarding this quality attribute. In this paper, we contribute a detailed presentation and description of our approach and a discussion of our lessons learnt while instantiating it for performance requirements. Additionally, we give guidance on how to apply our approach to further quality attributes. Through this approach, we aim at encouraging researchers to help us improve the precision of definitions for quality requirements and support practitioners in eliciting and documenting better quality requirements.
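As a rough illustration of what such a sentence pattern could look like for performance, here is a hypothetical sketch in Python; the concrete pattern and slot names are assumptions, not the patterns from the paper:

import re

# Hypothetical pattern: "The system shall <action> within <bound> <unit> under <load>."
PATTERN = re.compile(
    r"^The system shall (?P<action>.+?) "
    r"within (?P<bound>\d+(?:\.\d+)?) (?P<unit>ms|s) "
    r"under (?P<load>.+)\.$"
)

req = "The system shall return search results within 200 ms under 1000 concurrent users."
match = PATTERN.match(req)
print(match.groupdict() if match else "Does not conform to the sentence pattern.")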
In Quest for Requirements Engineering Oracles: Dependent Variables and Measurements for (good) RE
Context: For many years, researchers and practitioners have been proposing various methods and approaches to Requirements Engineering (RE). Those contributions, however, too often remain at the level of apodictic discussions without proper knowledge about the practical problems they purport to address, or about how to measure the success of the contributions when applying them in practical contexts. While the scientific impact of research might not be threatened, the practical impact of the contributions is. Aim: We aim at better understanding practically relevant variables in RE, how those variables relate to each other, and to what extent we can measure those variables. This allows for the establishment of generalisable improvement goals and the measurement of success of solution proposals. Method: We establish a first empirical basis of dependent variables in RE and means for their measurement. We classify the variables according to their dimension (e.g., RE, company, SW project), their measurability, and their actionability. Results: We reveal 93 variables with 167 dependencies, of which a large subset is measurable directly in RE, while further variables remain unmeasurable or have dependencies too complex for reliable measurement. We critically reflect on the results and show direct implications for research in the field of RE. Conclusion: We discuss a variety of conclusions we can draw from our results. For example, we show a set of first improvement goals directly usable for evidence-based RE research, such as "increase flexibility in the RE process", we discuss suitable study types, and, finally, we underpin the importance of replication studies for obtaining generalisability.
Challenging incompleteness of performance requirements by sentence patterns
Performance requirements play an important role in software development. They describe system behavior that directly impacts the user experience. Specifying performance requirements such that all necessary content is contained, i.e., ensuring the completeness of the individual requirements, is challenging, yet project-critical. Furthermore, it is still an open question what content is necessary to make a performance requirement complete. To address this problem, we introduce a framework for specifying performance requirements. This framework (i) consists of a unified model derived from existing performance classifications, (ii) denotes completeness through a content model, and (iii) is operationalized through sentence patterns. We evaluate both the applicability of the framework and its ability to uncover incompleteness, using performance requirements taken from 11 industrial specifications. In our study, we were able to specify 86% of the examined performance requirements by means of our framework. Furthermore, we show that 68% of the specified performance requirements are incomplete with respect to our notion of completeness. We argue that our framework provides an actionable definition of completeness for performance requirements.
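A toy version of such a content-model check might look as follows; the required slots and keyword heuristics are illustrative assumptions, not the paper's actual content model:

import re

REQUIRED_SLOTS = {
    "quantification":  re.compile(r"\d+(?:\.\d+)?\s*(?:ms|s|seconds?)\b"),
    "condition":       re.compile(r"\b(?:under|when|while)\b"),
    "system_function": re.compile(r"\bshall\s+\w+"),
}

def missing_slots(requirement: str) -> list[str]:
    # A requirement is flagged incomplete if a required content slot is absent.
    return [slot for slot, rx in REQUIRED_SLOTS.items() if not rx.search(requirement)]

print(missing_slots("The system shall respond within 2 s under peak load."))  # []
print(missing_slots("The system shall be fast."))  # ['quantification', 'condition']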
Which Requirements Artifact Quality Defects are Automatically Detectable? A Case Study
[Context] The quality of requirements engineering artifacts, e.g., requirements specifications, is acknowledged to be an important success factor for projects. Therefore, many companies spend significant amounts of money to control the quality of their RE artifacts. To reduce spending and improve RE artifact quality, methods were proposed that combine manual quality control, i.e., reviews, with automated approaches. [Problem] So far, we have seen various approaches to automatically detect certain aspects in RE artifacts. However, we still lack an overview of what can and cannot be automatically detected. [Approach] Starting from an industry guideline for RE artifacts, we classify 166 existing rules for RE artifacts along various categories to discuss the share and the characteristics of those rules that can be automated. For those rules that cannot be automated, we discuss the main reasons. [Contribution] We estimate that 53% of the 166 rules can be checked automatically, either perfectly or with a good heuristic. Most rules need only simple techniques for checking. The main reason why some rules resist automation is their imprecise definition. [Impact] By giving first estimates and analyses of automatically detectable and not automatically detectable rule violations, we aim to provide an overview of the potential of automated methods in requirements quality control.
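To give an impression of the "simple techniques" such checks typically need, here is a minimal, hypothetical example of lexical rule checking; the rules and word list are invented for illustration and are not the guideline's actual 166 rules:

import re

VAGUE_TERMS = ("appropriate", "sufficient", "as far as possible", "etc")

def check_requirement(text: str) -> list[str]:
    findings = []
    lowered = text.lower()
    for term in VAGUE_TERMS:
        if term in lowered:
            findings.append(f"vague term: '{term}'")
    if len(re.findall(r"\w+", text)) > 30:  # heuristic sentence-length rule
        findings.append("sentence longer than 30 words")
    return findings

print(check_requirement("The system shall provide appropriate feedback, etc."))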
On The Impact of Passive Voice Requirements on Domain Modelling
Context: The requirements specification is a central artefact in the software engineering (SE) process, and its quality might influence downstream activities like implementation or testing. One quality defect that is often mentioned in standards is the use of passive voice. However, the consequences of this defect are still unclear. Goal: We need to understand whether the use of passive voice in requirements has an influence on other activities in SE. In this work, we focus on domain modelling. Method: We designed an experiment in which we asked students to draw a domain model from a given set of requirements written in active or passive voice. We compared the completeness of the resulting domain models by counting the number of missing actors, domain objects, and their associations with respect to a specified solution. Results: While we could not see a difference in the number of missing actors and objects, participants who received passive sentences missed almost twice as many associations. Conclusion: Our experiment indicates that, against common knowledge, actors and objects in a requirement can often be understood from the context. However, the study also shows that passive sentences complicate understanding of how certain domain concepts are interconnected.
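A crude passive-voice heuristic (a form of "to be" followed by a word that looks like a past participle) can make the defect concrete; note that this regex over-approximates, and reliable detection would need part-of-speech tagging:

import re

PASSIVE = re.compile(r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
                     re.IGNORECASE)

print(bool(PASSIVE.search("The order is created by the clerk.")))  # True
print(bool(PASSIVE.search("The clerk creates the order.")))        # False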
Identifying Relevant Information Cues for Vulnerability Assessment Using CVSS
The assessment of new vulnerabilities is an activity that accounts for information from several data sources and produces a "severity" score for the vulnerability. The Common Vulnerability Scoring System (CVSS) is the reference standard for this assessment. Yet, no guidance currently exists on which information aids a correct assessment and should therefore be considered. In this paper, we address this problem by evaluating which information cues increase (or decrease) assessment accuracy. We devise a block design experiment with 67 software engineering students, varying the vulnerability information provided, and measure scoring accuracy under different information sets. We find that the baseline vulnerability descriptions provided by standard vulnerability sources contain only part of the information needed to achieve an accurate vulnerability assessment. Further, we find that additional information on assets, attacks, and vulnerability type contributes to increasing the accuracy of the assessment; conversely, information on known threats misleads the assessor, decreases assessment accuracy, and should be avoided when assessing vulnerabilities. These results go in the direction of formalizing vulnerability communication in order to, for example, fully automate security assessments.
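For context, the arithmetic that turns assessed base metrics into a severity score can be sketched as follows; the numeric weights follow the public CVSS v3.1 specification (the paper may have used an earlier CVSS v3 revision), and the example metric values are arbitrary:

import math

def roundup(x: float) -> float:
    # CVSS "Roundup": smallest number with one decimal place >= x
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - c) * (1 - i) * (1 - a)  # impact sub-score
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    raw = impact + exploitability
    if scope_changed:
        raw *= 1.08
    return roundup(min(raw, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                 scope_changed=False, c=0.56, i=0.56, a=0.56))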
