
    Optimal-constraint lexicons for requirements specifications

    Constrained Natural Languages (CNLs) are becoming an increasingly popular way of writing technical documents such as requirements specifications. This is because CNLs aim to reduce the ambiguity inherent within natural languages, whilst maintaining their readability and expressiveness. The design of existing CNLs appears not to be focused on achieving specific quality outcomes, in that the majority of lexical selections have been based upon lexicographer preferences rather than an optimal trade-off between quality factors such as ambiguity, readability, expressiveness, and lexical magnitude. In this paper we introduce the concept of 'replaceability' as a way of identifying the lexical redundancy inherent within a sample of requirements. Our novel and practical approach uses Natural Language Processing (NLP) techniques to enable us to make dynamic trade-offs between quality factors to optimise the resultant CNL. We also challenge the concept of a CNL being a one-dimensional static language, and demonstrate that our optimal-constraint process results in a CNL that can adapt to a changing domain while maintaining its expressiveness. © Springer-Verlag Berlin Heidelberg 2007

    Design and evaluation of a method to reduce the lexical ambiguity of requirement specifications

    University of Technology, Sydney. Faculty of Engineering and Information Technology. The requirements engineering process has been criticised for its immaturity. Firstly, in the context of safety-critical systems, missing, misunderstood, and erroneous requirements have been attributed as the cause of many safety-system faults; secondly, in the context of project success factors, many IT projects have identified requirement defects as a primary cause of running over time or over budget. Ambiguity is a requirement defect commonly associated with challenged IT projects; however, there are few empirical studies on how ambiguity can be reduced or eliminated from requirement specifications. Eliminating the ambiguity inherent within a requirement specification is the seemingly unattainable ambition of the systems engineering zealot. This is because ambiguity is considered an unavoidable side-effect of using natural language, and most requirement specifications are written in natural language. One proposed solution to the ambiguity problem is to express requirements in Controlled Natural Language (CNL). CNLs enforce grammatical and/or lexical constraints to reduce the inherent ambiguity of natural language without sacrificing correctness, readability, or expressiveness. There is, however, a view in the literature that CNLs are overly restrictive and unnatural to read and write. Furthermore, the design and development of CNLs is both labour-intensive and time-intensive. This thesis describes how a requirements specification can be automatically re-expressed in a way that significantly reduces its lexical ambiguity, without significantly reducing its correctness or conventionality. The thesis specifically focuses on lexical ambiguity, since this is the form of ambiguity most attributable to the lexicon used to express the specification.
    The term re-expression is used to distinguish this approach from that of CNLs, since the lexicon is not static, but is optimally selected on a word-by-word basis such that lexical ambiguity is minimised, whilst correctness and conventionality are maximised. Fundamental to the optimal word selection is a new concept: replaceability(W1, W2), which is the degree to which word W1 can replace word W2. The replaceability equation developed within this thesis is a function of semantic similarity, polysemy, frequency, and lexical width. We implement a software prototype, and execute it on an existing industry specification. A controlled experiment is used to measure the effects of the re-expression in terms of correctness, conventionality, and lexical ambiguity. Data are collected from project stakeholders using a questionnaire-style approach, and hypothesis testing is used to decide whether or not the optimal re-expression has significantly reduced lexical ambiguity without significantly reducing correctness or conventionality.
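    The abstract names the inputs to the replaceability score (semantic similarity, polysemy, frequency, lexical width) but not the equation itself. The following is a minimal toy sketch of the idea, assuming an invented lexicon, invented similarity scores, and an arbitrary way of combining the factors; it is not the thesis's actual equation.

```python
from dataclasses import dataclass

@dataclass
class WordStats:
    polysemy: int      # number of senses (lower suggests less ambiguity)
    frequency: float   # relative corpus frequency (higher suggests more conventional)
    width: int         # lexical width: how many other words it can stand in for

# Invented pairwise similarity scores in [0, 1] between candidate and target.
SIMILARITY = {("halt", "stop"): 0.9, ("terminate", "stop"): 0.8}

# Invented statistics for a three-word toy lexicon.
LEXICON = {
    "stop":      WordStats(polysemy=11, frequency=0.90, width=4),
    "halt":      WordStats(polysemy=2,  frequency=0.30, width=2),
    "terminate": WordStats(polysemy=3,  frequency=0.20, width=2),
}

def replaceability(w1: str, w2: str) -> float:
    """Score how well w1 can replace w2: reward similarity, frequency,
    and width; penalise polysemy (ambiguity). Weights are arbitrary."""
    sim = SIMILARITY.get((w1, w2), 0.0)
    s = LEXICON[w1]
    return sim * s.frequency * s.width / s.polysemy

# Choose the least ambiguous adequate replacement for the polysemous "stop".
candidates = ["halt", "terminate"]
best = max(candidates, key=lambda w: replaceability(w, "stop"))
```

    Under these made-up numbers, "halt" scores highest, illustrating the word-by-word optimisation the abstract describes: a highly polysemous word is re-expressed with a similar but less ambiguous one.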

    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in the hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare issues. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise. They cannot fully capture the complexities of the types of activities and the full extent of temporal constraints to an extent where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Additionally, current modelling standards do not offer any formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation review technique (PERT), which also have limitations, e.g. the finish-start barrier. It is imperative to specify the temporal constraints between the start and/or end of a process, e.g., the beginning of a process A precedes the start (or end) of a process B. However, these approaches fail to provide a mechanism for handling such temporal situations. If provided, a formal representation can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques. The above issues are addressed in this thesis by proposing a framework that provides a knowledge base to model patient flows for accurate representation, based on point interval temporal logic (PITL), which treats points and intervals as primitives.
    These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, exhaustive temporal constraints derived from the proposed axiomatic system's components serve as a knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while the corresponding instance is considered as its application. Using this approach assists in identifying the core components of the system and their precise operation, representing a real-life domain deemed suitable to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. It also assists in the generalisation of the critical terms (of process modelling standards) based on their ontology. A set of generalised terms is proposed to serve as an enumeration of the theory and subsume the core modelling elements of the process modelling standards. The catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution-based theorem proof is used to show the structural features of the theory (model), to establish that it is sound and complete. After establishing that the theory is sound and complete, the next step is to provide the instantiation of the theory. This is achieved by mapping the core components of the theory to their corresponding instances. Additionally, a formal graphical tool termed the point graph (PG) is used to visualise the cases of the proposed axiomatic system. PG facilitates modelling and scheduling patient flows, and enables analysing existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL.
    Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*) based on the semantics presented by the axiomatic system. A real-life case (from the King's College hospital accident and emergency (A&E) department's trauma patient pathway) is considered to validate the framework. It is divided into three patient flows to depict the journey of a patient with significant trauma: arriving at A&E, undergoing a procedure, and subsequently being discharged. The department's staff relied upon UML AD and BPMN to model the patient flows. An evaluation of their representation is presented to show the shortfalls of the modelling standards in modelling patient flows. The last step is to model these patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
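    A core check in point-based temporal modelling is whether a set of precedence constraints over interval endpoints is consistent. The sketch below, assuming invented A&E step names and a plain precedence relation (it is not the thesis's PITL axiomatisation), treats each activity as a start/end point pair and decides consistency by testing the precedence graph for cycles with Kahn's topological sort.

```python
from collections import defaultdict

def precedes_consistent(edges):
    """edges: list of (p, q) meaning point p strictly precedes point q.
    The constraint set is consistent iff the precedence graph is acyclic,
    checked here via Kahn's topological sort."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for p, q in edges:
        graph[p].append(q)
        indeg[q] += 1
        nodes.update((p, q))
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(nodes)  # all points ordered => no cycle

# Each activity (interval) contributes a start and an end point, with
# start < end; extra edges encode constraints between activities.
flow = [
    ("arrive.s", "arrive.e"), ("triage.s", "triage.e"),
    ("treat.s", "treat.e"),
    ("arrive.e", "triage.s"),   # triage starts after arrival ends
    ("triage.s", "treat.s"),    # treatment starts after triage starts
]
assert precedes_consistent(flow)
# Adding "treatment ends before arrival starts" creates a cycle:
assert not precedes_consistent(flow + [("treat.e", "arrive.s")])
```

    This is the kind of inconsistency detection the abstract attributes to the PG reasoning mechanism: an impossible ordering surfaces as a cycle among the points.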

    Data science for engineering design: State of the art and future directions

    Engineering design (ED) is the process of solving technical problems within requirements and constraints to create new artifacts. Data science (DS) is the inter-disciplinary field that uses computational systems to extract knowledge from structured and unstructured data. The synergies between these two fields have a long history, and throughout the past decades ED has increasingly benefited from an integration with DS. We present a literature review at the intersection between ED and DS, identifying the tools, algorithms, and data sources that show the most potential in contributing to ED, and identifying a set of challenges that future data scientists and designers should tackle to maximize the potential of DS in supporting effective and efficient designs. A rigorous scoping review approach has been supported by Natural Language Processing techniques, in order to offer a review of research across two disciplines with fuzzy boundaries. The paper identifies challenges related to the two fields of research and to their interfaces. The main gaps in the literature revolve around the adaptation of computational techniques to be applied in the peculiar context of design, the identification of data sources to boost design research, and a proper featurization of this data. The challenges have been classified considering their impacts on ED phases and the applicability of DS methods, giving a map for future research across the fields. The scoping review shows that to fully take advantage of DS tools there must be an increase in the collaboration between design practitioners and researchers, in order to open new data-driven opportunities.

    Knowledge Representation with Ontologies: The Present and Future

    Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.

    TEI and LMF crosswalks

    The present paper explores various arguments in favour of making the Text Encoding Initiative (TEI) guidelines an appropriate serialisation for ISO standard 24613:2008 (LMF, Lexical Markup Framework). It also identifies the issues that would have to be resolved in order to reach an appropriate implementation of these ideas, in particular in terms of informational coverage. We show how the customisation facilities offered by the TEI guidelines can provide an adequate background, not only to cover missing components within the current Dictionary chapter of the TEI guidelines, but also to allow specific lexical projects to deal with local constraints. We expect this proposal to be a basis for a future ISO project in the context of the ongoing revision of LMF.

    Constraint-based graphical layout of multimodal presentations

    When developing advanced multimodal interfaces, combining the characteristics of different modalities such as natural language, graphics, animation, virtual realities, etc., the question of automatically designing the graphical layout of such presentations in an appropriate format becomes increasingly important. To communicate information to the user in an expressive and effective way, a knowledge-based layout component has to be integrated into the architecture of an intelligent presentation system. In order to achieve a coherent output, it must be able to reflect certain semantic and pragmatic relations specified by a presentation planner to arrange the visual appearance of a mixture of textual and graphic fragments delivered by mode-specific generators. In this paper we illustrate, by the example of LayLab, the layout manager of the multimodal presentation system WIP, how the complex positioning problem for multimodal information can be treated as a constraint satisfaction problem. The design of an aesthetically pleasing layout is characterized as a combination of a general search problem in a finite discrete search space and an optimization problem. Therefore, we have integrated two dedicated constraint solvers, an incremental hierarchy solver and a finite domain solver, in a layered constraint solver model, CLAY, which is triggered from a common metalevel by rules and defaults. The underlying constraint language is able to encode graphical design knowledge expressed by semantic/pragmatic, geometrical/topological, and temporal relations. Furthermore, this mechanism allows one to prioritize the constraints as well as to handle constraint solving over finite domains. As graphical constraints frequently have only local effects, they are incrementally generated by the system on the fly. Ultimately, we illustrate the functionality of LayLab by some snapshots of an example run.
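    The core idea of treating layout as finite-domain constraint satisfaction can be sketched very compactly. The grid, fragment names, and spatial relations below are invented for illustration; the actual LayLab/CLAY solvers are layered and far richer than this backtracking toy.

```python
from itertools import product

FRAGMENTS = ["text", "figure", "caption"]
CELLS = list(product(range(2), range(2)))  # 2x2 grid of (row, col) cells

def under(a, b):   # a sits directly below b, same column
    return a[0] == b[0] + 1 and a[1] == b[1]

def beside(a, b):  # a sits directly right of b, same row
    return a[0] == b[0] and a[1] == b[1] + 1

# Spatial relations a presentation planner might hand to the layout solver.
CONSTRAINTS = [
    ("caption", "figure", under),   # caption goes under its figure
    ("figure", "text", beside),     # figure goes beside the text
]

def solve(assigned=None, rest=None):
    """Backtracking search assigning each fragment a distinct grid cell
    while satisfying all constraints whose fragments are already placed."""
    assigned = {} if assigned is None else assigned
    rest = FRAGMENTS if rest is None else rest
    if not rest:
        return dict(assigned)
    frag, tail = rest[0], rest[1:]
    for cell in CELLS:
        if cell in assigned.values():
            continue  # cells hold at most one fragment
        assigned[frag] = cell
        if all(rel(assigned[x], assigned[y])
               for x, y, rel in CONSTRAINTS
               if x in assigned and y in assigned):
            result = solve(assigned, tail)
            if result:
                return result
        del assigned[frag]
    return None

layout = solve()
```

    The search places the text at (0, 0), the figure beside it at (0, 1), and the caption under the figure at (1, 1), mirroring how a finite domain solver narrows positions until all declared relations hold.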

    The rephonologization of Hausa loanwords from English: an optimality theory analysis

    Faculty of Humanities, School of Literature, Language and Media, University of the Witwatersrand. A Master's Dissertation. This study investigates how Hausa, a West Chadic language (Afro-Asiatic phylum), remodels loanwords from English (Indo-European) to suit its pre-existing phonology. Loanword adaptation is quite inevitable due to the fact that the languages of the world differ, one from another, in many ways: phonological, syntactic, morphological, and so on (Inkelas & Zoll, 2003, p. 1). Based on this claim, receptor languages employ ways to rephonologize new words borrowed into their vocabularies to fit, and to conform to, native structure demands. Hausa disallows complex onsets, prefers open syllables, and avoids consonant clustering in word-medial positions, as it can at best tolerate no more than a single consonant at a syllable edge (Clements, 2000; Han, 2009). On the contrary, English permits complex onsets as well as closed syllables (Skandera & Burleigh, 2005). Such distinctions between the two phonologies motivate loanword adaptation. Hausa therefore employs repair strategies such as vowel epenthesis, consonant deletion, and segmental substitution and/or replacement (Newman, 2000; Abubakre, 2008; Alqhatani & Musa, 2014) to remodel loanwords. For analytical purposes, this research adopts the theoretical tools of Feature Geometry (FG) (Clements & Hume, 1995) and Optimality Theory (OT) (Prince & Smolensky, 2004) to clearly illustrate how loanwords are modified to satisfy Hausa native demands (Kadenge, 2012). Vowel epenthesis in Hausa involves two main strategies: consonantal assimilation and default insertion. During consonantal assimilation, coronal and labial segments spread place features onto the epenthetic segment, in the process determining the vowel type and/or quality, while in the case of default insertion, fresh segments are introduced context-independently.
    Concerning segmental substitutions, most notably the English consonants /p/ and /v/ are replaced with the similar segments [f] and [b] that exist in Hausa, on the basis that the former and latter segments share the same phonation feature.
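    The OT mechanism the study relies on, evaluating candidate outputs against a ranked set of violable constraints, can be sketched as a toy tableau. The constraint definitions, the ranking, and the pseudo-input "klas" below are simplified illustrations (vowels stand in for syllable structure), not the dissertation's actual Hausa data or constraint set.

```python
VOWELS = "aeiou"

# Each constraint returns a violation count for a (candidate, input) pair.
def no_complex_onset(cand, inp):
    """Markedness: penalise a word-initial consonant cluster (toy check)."""
    return 1 if len(cand) > 1 and cand[0] not in VOWELS and cand[1] not in VOWELS else 0

def max_io(cand, inp):
    """Faithfulness MAX: penalise deleted segments (toy: length lost)."""
    return max(0, len(inp) - len(cand))

def dep(cand, inp):
    """Faithfulness DEP: penalise epenthesised segments (toy: length added)."""
    return max(0, len(cand) - len(inp))

# Ranking (highest first): markedness >> MAX >> DEP, so epenthesis is the
# preferred repair over deletion -- an assumed toy ranking.
RANKING = [no_complex_onset, max_io, dep]

def evaluate(inp, candidates):
    """The optimal candidate minimises violations lexicographically,
    honouring the constraint ranking."""
    return min(candidates, key=lambda c: tuple(con(c, inp) for con in RANKING))

# A borrowed form with a complex onset: the faithful candidate violates the
# top-ranked constraint, deletion violates MAX, so the epenthetic form wins.
winner = evaluate("klas", ["klas", "kalas", "las"])
```

    Here "kalas" beats both the faithful "klas" (fatal complex-onset violation) and the deletion candidate "las" (MAX violation), mirroring how Hausa repairs English clusters by vowel epenthesis rather than consonant deletion.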