
    Heuristics-based method for head and modifier detection in Malay sentences from the cultural heritage domain

    Detecting the head and modifier in Malay sentences from the cultural heritage domain is difficult because their positions vary with sentence structure. Language experts have also put forward differing views on the theory and concept of head and modifier detection in compound nouns, and existing research is limited, especially in computational linguistics. Research is therefore needed to identify appropriate methods for detecting the head and modifier in Malay sentences from the cultural heritage domain. The aim of this study is to construct a list of heuristic rules for detecting the position of compound nouns in Malay sentences from the cultural heritage domain. Using 15 rules, called heuristic rules, the positions of the head and modifier within a compound noun can also be detected. These 15 rules were formulated to detect the head and modifier that occur in Malay sentences from the cultural heritage domain. Accuracy was measured using precision, recall and F1-score values. In the experiments, the Sentence Structure of Malay Cultural Heritage Domain (SADWBM) achieved an F1-score of 80.4%, compared with 56% for the Noun Phrase Structure (SFN). SADWBM therefore outperformed SFN, indicating that the approach used in this study is effective in resolving the identified problems.
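    The abstract evaluates the rules with precision, recall and F1-score; a minimal sketch of that standard computation follows. The detection counts here are hypothetical illustrations, not the study's data (though they happen to yield the reported 80.4% F1).

```python
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Standard precision, recall and F1 from detection counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical head/modifier detection counts -- illustrative only.
p, r, f1 = precision_recall_f1(true_positives=80,
                               false_positives=15,
                               false_negatives=24)
print(round(f1, 3))  # -> 0.804
```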

    Bracketing Compound Nouns for Logic Form Derivation

    In this paper we address the issue of bracketing coordinated compound nouns.

    Identifying nocuous ambiguity in natural language requirements

    This dissertation is an investigation into how ambiguity should be classified for authors and readers of text, and how this process can be automated. Usually, authors and readers disambiguate ambiguity, either consciously or unconsciously. However, disambiguation is not always appropriate. For instance, a linguistic construction may be read differently by different people, with no consensus about which reading is the intended one. This is particularly dangerous if they do not realise that other readings are possible. Misunderstandings may then occur. This is particularly serious in the field of requirements engineering. If requirements are misunderstood, systems may be built incorrectly, and this can prove very costly. Our research uses natural language processing techniques to address ambiguity in requirements. We develop a model of ambiguity, and a method of applying it, which represent a novel approach to the problem described here. Our model is based on the notion that human perception is the only valid criterion for judging ambiguity. If people perceive very differently how an ambiguity should be read, it will cause misunderstandings. Assigning a preferred reading to it is therefore unwise. In text, such ambiguities should be located and rewritten in a less ambiguous form; others need not be reformulated. We classify the former as nocuous and the latter as innocuous. We allow the dividing line between these two classifications to be adjustable. We term this the ambiguity threshold, and it represents a level of intolerance to ambiguity. A nocuous ambiguity can be an unacknowledged or an acknowledged ambiguity for a given set of readers. In the former case, they assign disparate readings to the ambiguity, but each is unaware that the others read it differently. In the latter case, they recognise that the ambiguity has more than one reading, but this fact may be unacknowledged by new readers. 
We present an automated approach to determine whether ambiguities in text are nocuous or innocuous. We use heuristics to distinguish ambiguities for which there is a strong consensus about how they should be read. These are innocuous ambiguities. The remaining nocuous ambiguities can then be rewritten at a later stage. We find consensus opinions about ambiguities by surveying human perceptions of them. Our heuristics try to predict these perceptions automatically. They utilise various types of linguistic information: generic corpus data, morphology and lexical subcategorisations are the most successful. We use coordination ambiguity as the test case for this research. This occurs where the scope of words such as "and" and "or" is unclear. Our research contributes to both the requirements engineering and the natural language processing literatures. Ambiguity is known to be a serious problem in requirements engineering, but has rarely been dealt with effectively and thoroughly. Our approach is an appropriate solution, and our flexible ambiguity threshold is a particularly useful concept. For instance, high ambiguity intolerance can be implemented when writing requirements for safety-critical systems. Coordination ambiguities are widespread and known to cause misunderstandings, but have received comparatively little attention. Our heuristics show that linguistic data can be used successfully to predict preferred readings of very diverse coordinations. Used in combination, these heuristics demonstrate that nocuous ambiguity can be distinguished from innocuous ambiguity under certain conditions. Employing appropriate ambiguity thresholds, accuracy representing a 28% improvement over the baselines can be achieved.
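    The adjustable ambiguity threshold described above can be sketched as a simple consensus test over surveyed reader judgments. The function name, threshold value and example judgments here are illustrative assumptions, not the dissertation's implementation.

```python
from collections import Counter

def classify_ambiguity(readings, threshold=0.75):
    """Classify an ambiguity as 'innocuous' when one reading reaches the
    consensus threshold among surveyed readers, else 'nocuous'."""
    counts = Counter(readings)
    majority_share = counts.most_common(1)[0][1] / len(readings)
    return "innocuous" if majority_share >= threshold else "nocuous"

# Seven readers bracket the coordination "old men and women" --
# illustrative judgments only.
judgments = ["[old men] and women", "[old men] and women",
             "old [men and women]", "[old men] and women",
             "old [men and women]", "[old men] and women",
             "old [men and women]"]
print(classify_ambiguity(judgments))  # 4/7 < 0.75 -> "nocuous"
```

Raising the threshold models greater intolerance to ambiguity, as one might want when writing requirements for safety-critical systems.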