101 research outputs found

    Hacking an Ambiguity Detection Tool to Extract Variation Points: an Experience Report

    Natural language (NL) requirements documents can be a valuable source of variability information. This information can later be used to define feature models from which different systems can be instantiated. In this paper, we are interested in validating the approach we recently proposed to extract variability issues from the ambiguity defects found in NL requirements documents. To this end, we single out ambiguities using an available NL analysis tool, QuARS, and we classify the ambiguities returned by the tool by distinguishing among false positives, real ambiguities, and variation points. We consider three medium-sized requirements documents from different domains, namely train control, social web, and home automation, and we report the results of the assessment. Although the validation set is not large, the results obtained are quite uniform and allow us to draw some interesting conclusions. Starting from these results, we can foresee tailoring an NL analysis tool for extracting variability from NL requirements documents.
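    As a rough illustration of the classification step described in this abstract, the sketch below (not the authors' implementation; the warning format, keyword lists, and rules are invented) sorts hypothetical tool warnings into false positives, real ambiguities, and variation points.

```python
# Illustrative sketch only: QuARS' actual output format and the study's
# classification criteria are not given in the abstract, so both are assumed here.

# Hypothetical warnings: (sentence, ambiguity indicator flagged by the tool).
WARNINGS = [
    ("The system may notify the user by email or SMS.", "or"),
    ("The controller shall react quickly to a brake request.", "quickly"),
    ("The train shall stop when the signal is red.", "when"),
]

# Indicators that, read in a product-line context, often point to optional or
# alternative features rather than defects (assumed keyword lists).
VARIATION_INDICATORS = {"or", "optionally", "may", "can"}
VAGUE_INDICATORS = {"quickly", "adequate", "user friendly", "flexible"}

def classify(indicator: str) -> str:
    """Assign a tool warning to one of the three classes used in the assessment."""
    if indicator in VARIATION_INDICATORS:
        return "variation point"   # candidate variability information
    if indicator in VAGUE_INDICATORS:
        return "real ambiguity"    # genuine defect to be rewritten
    return "false positive"        # warning with no practical impact

for sentence, indicator in WARNINGS:
    print(f"{classify(indicator):15} <- {sentence}")
```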

    Identifying nocuous ambiguity in natural language requirements

    This dissertation is an investigation into how ambiguity should be classified for authors and readers of text, and how this process can be automated. Usually, authors and readers disambiguate ambiguity, either consciously or unconsciously. However, disambiguation is not always appropriate. For instance, a linguistic construction may be read differently by different people, with no consensus about which reading is the intended one. This is particularly dangerous if they do not realise that other readings are possible. Misunderstandings may then occur. This is particularly serious in the field of requirements engineering. If requirements are misunderstood, systems may be built incorrectly, and this can prove very costly. Our research uses natural language processing techniques to address ambiguity in requirements. We develop a model of ambiguity, and a method of applying it, which represent a novel approach to the problem described here. Our model is based on the notion that human perception is the only valid criterion for judging ambiguity. If people perceive very differently how an ambiguity should be read, it will cause misunderstandings. Assigning a preferred reading to it is therefore unwise. In text, such ambiguities should be located and rewritten in a less ambiguous form; others need not be reformulated. We classify the former as nocuous and the latter as innocuous. We allow the dividing line between these two classifications to be adjustable. We term this the ambiguity threshold, and it represents a level of intolerance to ambiguity. A nocuous ambiguity can be an unacknowledged or an acknowledged ambiguity for a given set of readers. In the former case, they assign disparate readings to the ambiguity, but each is unaware that the others read it differently. In the latter case, they recognise that the ambiguity has more than one reading, but this fact may be unacknowledged by new readers. We present an automated approach to determine whether ambiguities in text are nocuous or innocuous. We use heuristics to distinguish ambiguities for which there is a strong consensus about how they should be read. These are innocuous ambiguities. The remaining nocuous ambiguities can then be rewritten at a later stage. We find consensus opinions about ambiguities by surveying human perceptions on them. Our heuristics try to predict these perceptions automatically. They utilise various types of linguistic information: generic corpus data, morphology and lexical subcategorisations are the most successful. We use coordination ambiguity as the test case for this research. This occurs where the scope of words such as and and or is unclear. Our research contributes to both the requirements engineering and the natural language processing literatures. Ambiguity is known to be a serious problem in requirements engineering, but has rarely been dealt with effectively and thoroughly. Our approach is an appropriate solution, and our flexible ambiguity threshold is a particularly useful concept. For instance, high ambiguity intolerance can be implemented when writing requirements for safety-critical systems. Coordination ambiguities are widespread and known to cause misunderstandings, but have received comparatively little attention. Our heuristics show that linguistic data can be used successfully to predict preferred readings of very diverse coordinations. Used in combination, these heuristics demonstrate that nocuous ambiguity can be distinguished from innocuous ambiguity under certain conditions. 
    Employing appropriate ambiguity thresholds, accuracy representing a 28% improvement over the baselines can be achieved.
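    The adjustable ambiguity threshold can be pictured with a minimal sketch, assuming invented reader judgments and threshold values (the dissertation's heuristics predict these judgments automatically rather than collecting them): an ambiguity is innocuous only if the readers' consensus on a single reading reaches the threshold; otherwise it is nocuous and should be rewritten.

```python
from collections import Counter

def classify_ambiguity(readings, threshold):
    """Label one ambiguity nocuous or innocuous from surveyed reader judgments.

    `readings` lists the interpretation each reader assigned; `threshold` is the
    ambiguity threshold, i.e. the required level of consensus on a single reading.
    """
    top_count = Counter(readings).most_common(1)[0][1]
    consensus = top_count / len(readings)
    return "innocuous" if consensus >= threshold else "nocuous"

# Invented judgments for the coordination "old cars and trucks":
# does "old" modify only "cars", or the whole coordination?
judgments = ["old modifies both", "old modifies both",
             "old modifies cars only", "old modifies both"]

print(classify_ambiguity(judgments, threshold=0.7))  # innocuous: 75% consensus
print(classify_ambiguity(judgments, threshold=0.9))  # nocuous under a stricter threshold
```

    Raising the threshold models a higher intolerance to ambiguity, as the abstract suggests for safety-critical requirements.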

    Stress and Decision Making: Effects on Valuation, Learning, and Risk-taking

    A wide range of stressful experiences can influence human decision making in complex ways beyond the simple predictions of a fight-or-flight model. Recent advances may provide insight into this complicated interaction, potentially in directions that could result in translational applications. Early research suggests that stress exposure influences basic neural circuits involved in reward processing and learning, while also biasing decisions toward habit and modulating our propensity to engage in risk-taking. That said, a substantial array of theoretical and methodological considerations in research on the topic challenges the strong cross-study comparisons necessary for the field to move forward. In this review, we examine the multifaceted stress construct in the context of human decision making, emphasizing stress's effects on valuation, learning, and risk-taking.

    Ontology Extraction and Semantic Ranking of Unambiguous Requirements

    Abstract: This paper describes a new method for ontology-based standardization of concepts in a domain. In requirements engineering, abstraction of the concepts and entities in a domain is significant, as much software fails due to incorrectly elicited requirements. In this paper, we introduce a framework for requirements engineering that applies semantic ranking and significant-term extraction in a domain. This work aims to identify and present concepts and their relationships as domain-specific ontologies of particular significance. The framework is built to detect and eliminate ambiguities. A semantic graph is constructed using the semantic relatedness between two ontologies, which is computed based on the highest-value path connecting any pair of terms. Based on the nodes of the graph and their significance scores, both single-word and multi-word terms can be extracted from the domain documents. A reference document of ontologies is created that will help the requirements analyst create the SRS and will be useful in the design.
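    A minimal sketch of the semantic-graph idea, assuming invented domain terms, invented relatedness scores, a product-of-edge-weights path score, and a simple sum-of-relatedness significance measure (the paper's actual measures are not detailed in the abstract):

```python
# Assumed toy domain terms and pairwise semantic relatedness scores in [0, 1].
EDGES = {
    ("sensor", "temperature"): 0.8,
    ("temperature", "thermostat"): 0.9,
    ("thermostat", "heating system"): 0.7,
    ("sensor", "heating system"): 0.3,
}

GRAPH = {}
for (a, b), w in EDGES.items():
    GRAPH.setdefault(a, {})[b] = w
    GRAPH.setdefault(b, {})[a] = w

def relatedness(a, b, seen=frozenset()):
    """Highest-value path between two terms, scoring a path by the product of its edge weights."""
    if a == b:
        return 1.0
    best = 0.0
    for nxt, w in GRAPH.get(a, {}).items():
        if nxt not in seen:
            best = max(best, w * relatedness(nxt, b, seen | {a}))
    return best

def significance(term):
    """Node significance proxy: total relatedness to all other terms in the graph."""
    return sum(relatedness(term, other) for other in GRAPH if other != term)

ranked = sorted(GRAPH, key=significance, reverse=True)
print("ranked terms:", ranked)
print("relatedness(sensor, thermostat):", round(relatedness("sensor", "thermostat"), 3))
```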

    Defining linguistic antipatterns towards the improvement of source code quality

    Previous studies showed that the linguistic aspect of source code is a valuable source of information that can help improve program comprehension. The proposed research work focuses on supporting quality improvement of source code by identifying, specifying, and studying common negative practices with respect to linguistic information (i.e., linguistic antipatterns). We expect the definition of linguistic antipatterns to increase awareness of such bad practices and to discourage their use. We also propose to study the relation between negative practices in linguistic information (i.e., linguistic antipatterns) and negative practices in structural information (i.e., design antipatterns) with respect to comprehension effort and fault/change proneness. We discuss the proposed methodology and some preliminary results.
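    As an illustration of what a linguistic antipattern looks like (the example is ours, not drawn from the paper), the method below reads like a side-effect-free accessor but also mutates state, while the renamed variant keeps the identifier consistent with the behaviour:

```python
class Cache:
    def __init__(self):
        self._entries = {}
        self._hits = 0

    # Linguistic antipattern (illustrative): the name promises a plain accessor,
    # but the body also changes object state, misleading the reader.
    def get_entry(self, key):
        self._hits += 1                  # hidden side effect
        return self._entries.get(key)

    # Name aligned with behaviour: the side effect is explicit in the identifier.
    def fetch_entry_and_record_hit(self, key):
        self._hits += 1
        return self._entries.get(key)
```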

    Early Identification of Implicit Requirements with the COTIR Approach using Common Sense, Ontology and Text Mining

    The ability of a system to meet its requirements is a strong determinant of success. Thus, effective Software Requirements Specification (SRS) is crucial. Explicit requirements are well-defined needs for a system to execute. IMplicit Requirements (IMRs) are assumed needs that a system is expected to fulfill even though they are not elicited during requirements gathering. Studies have shown that a major factor in the failure of software systems is the presence of unhandled IMRs. Since the relevance of IMRs is important for efficient system functionality, methods have been developed to aid their identification and management. In this research, we emphasize that commonsense knowledge, from the field of knowledge representation in AI, would be useful for automatically identifying and managing IMRs. This research is aimed at identifying the sources of IMRs and at proposing an automated support tool for managing IMRs within an organizational context. Since this is a gap in current practice, our work makes a contribution here. We propose a novel approach called COTIR (Commonsense, Ontology and Text mining for Implicit Requirements) to identify and manage IMRs. As the name implies, COTIR is based on an integrated framework of three core technologies: commonsense knowledge (CSK), text mining, and ontology. We claim that discovery and handling of unknown and non-elicited requirements would reduce risks and costs in software development.
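    A minimal sketch of how the three ingredients might fit together; the triples, requirement sentences, and keyword-matching rule below are invented for illustration, since COTIR's actual knowledge base and mining pipeline are not described in the abstract:

```python
# Toy commonsense/ontology triples: (concept, relation, implied need).
COMMONSENSE = [
    ("user login", "requires", "password storage shall be encrypted"),
    ("payment", "requires", "transactions shall be logged for audit"),
    ("personal data", "requires", "data shall be deletable on user request"),
]

EXPLICIT_REQUIREMENTS = [
    "The system shall provide user login with email and password.",
    "The system shall accept credit card payment.",
]

def mine_implicit_requirements(requirements, triples):
    """Text-mining step (plain keyword matching here): for every explicit requirement
    that mentions a known concept, suggest the commonsense-implied need unless it is
    already stated somewhere in the document."""
    stated = " ".join(requirements).lower()
    suggestions = []
    for concept, _, implied in triples:
        if concept in stated and implied.lower() not in stated:
            suggestions.append(implied)
    return suggestions

for imr in mine_implicit_requirements(EXPLICIT_REQUIREMENTS, COMMONSENSE):
    print("candidate implicit requirement:", imr)
```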