
    A framework for software requirement ambiguity avoidance

    This research deals with software requirements ambiguity problems, among them incomplete, incorrect, improper, inaccurate and ambiguous requirements. Notably, published material on Software Requirements Specifications (SRS) problems discusses ambiguity as one of the most frequently debated issues. This paper proposes a Software Requirement Ambiguity Avoidance Framework (SRAAF) to assist and support requirement engineers in writing unambiguous requirements, by selecting the correct elicitation technique based on the evaluation of various attributes and by applying the W6H technique. We explored existing theories and the outcomes of experimental research to construct the framework, and on the basis of existing and inferred knowledge we justify the proposed framework's components. The selection process considers situational attributes related to the project, the stakeholders and the requirement engineer. The newly devised approach chooses techniques beyond the traditional or most common ones and deals with ambiguity to capture correct requirements information from stakeholders. The framework addresses the selection and ambiguity issues more effectively and can handle vagueness, and new evidence related to attributes and the adequacy matrix can be added to the framework without inconvenience.
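    The attribute-driven selection the abstract describes can be pictured as a lookup over an adequacy matrix. The following is a minimal sketch under invented assumptions: the technique names, attribute names, and scores below are illustrative, not SRAAF's actual matrix.

```python
# Hypothetical adequacy matrix: technique -> suitability score per
# situational attribute (higher = more adequate). Values are invented.
ADEQUACY = {
    "interviews":     {"stakeholders_available": 2, "domain_novelty": 1, "time_pressure": 0},
    "prototyping":    {"stakeholders_available": 1, "domain_novelty": 2, "time_pressure": 1},
    "document_study": {"stakeholders_available": 0, "domain_novelty": 1, "time_pressure": 2},
}

def select_technique(situation):
    """Pick the elicitation technique whose scores best match the active attributes."""
    def score(technique):
        return sum(ADEQUACY[technique][attr] for attr in situation)
    return max(ADEQUACY, key=score)

print(select_technique({"time_pressure"}))  # -> document_study
```

    Because the matrix is plain data, new attributes or evidence can be added without changing the selection logic, which mirrors the extensibility claim above.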

    Semi-Automated Development of Conceptual Models from Natural Language Text

    The process of converting natural language specifications into conceptual models requires detailed analysis of natural language text, and designers frequently make mistakes when undertaking this transformation manually. Although many approaches have been used to help designers translate natural language text into conceptual models, each approach has its limitations. One of the main limitations is the lack of a domain-independent ontology that can be used as a repository for entities and relationships, thus guiding the transition from natural language processing into a conceptual model. Such an ontology is not currently available because it would be very difficult and time consuming to produce. In this thesis, a semi-automated system for mapping natural language text into conceptual models is proposed. The model, which is called SACMES, combines a linguistic approach with an ontological approach and human intervention to achieve the task. The model learns from the natural language specifications that it processes, and stores the information that is learnt in a conceptual model ontology and a user history knowledge database. It then uses the stored information to improve performance and reduce the need for human intervention. The evaluation conducted on SACMES demonstrates that (1) designers' creation of conceptual models is improved when using the system compared with not using any system, and (2) the performance of the system improves as more natural language requirements are processed, thus decreasing the need for human intervention. However, these advantages may be improved further through development of the learning and retrieval techniques used by the system.
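    At its simplest, the linguistic side of such a mapping turns sentence patterns into entity-relationship triples. Here is a toy sketch of that idea; the regex pattern and the example sentences are assumptions for illustration only, far simpler than what SACMES actually does.

```python
import re

def extract_triples(text):
    """Map 'X has Y' / 'X contains Y' sentences to (entity, relation, entity) triples."""
    triples = []
    for sentence in re.split(r"[.!?]", text):
        m = re.search(
            r"(?:a|an|the)?\s*(\w+)\s+(has|contains)\s+(?:a|an|the)?\s*(\w+)",
            sentence, re.I)
        if m:
            triples.append((m.group(1).capitalize(), m.group(2).lower(),
                            m.group(3).capitalize()))
    return triples

print(extract_triples("A library has a catalogue. The catalogue contains books."))
```

    Storing such triples in an ontology, as the thesis proposes, is what lets later runs reuse earlier analyses instead of re-asking the designer.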

    IMAGINE Final Report


    Automated energy compliance checking in construction

    Automated energy compliance checking aims to automatically check the compliance of a building design, represented in a building information model (BIM), with applicable energy requirements. A significant number of efforts in both industry and academia have been undertaken to automate the compliance checking process. Such efforts have achieved various levels of automation, expressivity, representativeness, accuracy, and efficiency. Despite their contributions, there are two main gaps in existing automated compliance checking (ACC) efforts. First, existing methods are not fully automated and/or not generalizable across different types of documents: they require varying degrees of manual effort to extract requirements from text into computer-processable representations and to match the concept representations of the extracted requirements to those of the BIM. Second, existing methods have focused only on code checking; there is still a lack of efforts that address contract specification checking. To address these gaps, this thesis aims to develop a fully automated ACC method for checking BIM-represented building designs for compliance with energy codes and contract specifications.
The research included six primary research tasks: (1) conducting a comprehensive literature review; (2) developing a semantic, domain-specific, machine learning-based text classification method and algorithm for classifying energy regulatory documents (including energy codes) and contract specifications for supporting energy ACC in construction; (3) developing a semantic, natural language processing (NLP)-enabled, rule-based information extraction method and algorithm for automated extraction of energy requirements from energy codes; (4) adapting the information extraction method and algorithm for automated extraction of energy requirements from contract specifications; (5) developing a fully automated, semantic information alignment method and algorithm for aligning the representations used in the BIMs with those used in the energy codes and contract specifications; and (6) implementing the aforementioned methods and algorithms in a fully automated energy compliance checking prototype, called EnergyACC, and using it in a case study to identify the feasibility of, and challenges in, developing an ACC method that is fully automated and generalizable across different types of regulatory documents. Promising noncompliance detection performance was achieved for both energy code checking (95.7% recall and 85.9% precision) and contract specification checking (100% recall and 86.5% precision).
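    The core loop of rule-based extraction followed by checking can be sketched in a few lines. This is a hedged illustration only: the sentence pattern, the BIM dictionary shape, and the example R-value requirement are invented for this sketch and are not the thesis's actual rules or schema.

```python
import re

# Invented pattern for one requirement shape: "<element> shall have a(n)
# <property> of at least/at most <value>".
REQ_PATTERN = re.compile(
    r"(?P<element>[\w\s-]+?) shall have an? (?P<property>[\w-]+) of "
    r"(?P<op>at least|at most) (?P<value>[\d.]+)"
)

def check(sentence, bim):
    """Extract a quantitative requirement and compare it with the BIM value."""
    m = REQ_PATTERN.search(sentence)
    element, prop = m["element"].strip().lower(), m["property"].lower()
    required, actual = float(m["value"]), bim[element][prop]
    return actual >= required if m["op"] == "at least" else actual <= required

bim = {"exterior walls": {"r-value": 11.0}}
print(check("Exterior walls shall have an R-value of at least 13.0", bim))  # False
```

    The alignment task (5) above corresponds to the lowercasing/lookup step here: in practice, matching "exterior walls" in a code sentence to the BIM's own element naming is the hard part.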

    Enhanced ontology-based text classification algorithm for structurally organized documents

    Text classification (TC) is an important foundation of information retrieval and text mining. The main task of TC is to predict a text's class according to the type of tag given in advance. Most TC algorithms represent documents using terms alone, without considering the relations among the terms. These algorithms represent documents in a space where every word is assumed to be a dimension; as a result, such representations generate high dimensionality, which has a negative effect on classification performance. The objectives of this thesis are to formulate algorithms for classifying text by creating a suitable feature vector and reducing the dimensionality of the data, thereby enhancing classification accuracy. This research combines ontology and text representation for classification by developing five algorithms. The first and second algorithms, Concept Feature Vector (CFV) and Structure Feature Vector (SFV), create feature vectors to represent the document. The third algorithm, Ontology Based Text Classification (OBTC), is designed to reduce the dimensionality of training sets. The fourth and fifth algorithms, Concept Feature Vector_Text Classification (CFV_TC) and Structure Feature Vector_Text Classification (SFV_TC), classify the document into its related set of classes. These proposed algorithms were tested on five different scientific paper datasets downloaded from different digital libraries and repositories. Experimental results obtained from the proposed algorithms CFV_TC and SFV_TC showed better average results in terms of precision, recall, f-measure and accuracy compared with SVM and RSS approaches. The work in this study contributes to exploring related documents in information retrieval and text mining research by using ontology in TC.
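    The concept-feature-vector idea can be shown in miniature: mapping terms to ontology concepts collapses several related words into one dimension, which is where the dimensionality reduction comes from. The mini-ontology below is an invented example, not the thesis's data or its actual CFV algorithm.

```python
# Toy term -> concept ontology; several terms share one concept dimension.
ONTOLOGY = {
    "car": "vehicle", "truck": "vehicle", "bus": "vehicle",
    "cat": "animal", "dog": "animal",
}

def concept_feature_vector(tokens):
    """Count concept occurrences instead of raw term occurrences."""
    vector = {}
    for token in tokens:
        concept = ONTOLOGY.get(token, token)  # unknown terms keep their own dimension
        vector[concept] = vector.get(concept, 0) + 1
    return vector

doc = ["car", "truck", "dog", "cat", "cat"]
print(concept_feature_vector(doc))  # {'vehicle': 2, 'animal': 3}
```

    Five distinct terms become two dimensions, and "car" and "truck" now reinforce the same feature rather than being unrelated axes.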

    Guided generation of pedagogical concept maps from the Wikipedia

    We propose a new method for guided generation of concept maps from open access online knowledge resources such as wikis. Based on this method we have implemented a prototype that extracts semantic relations from sentences surrounding hyperlinks in Wikipedia's articles and lets a learner create customized learning objects in real time, based on collaborative recommendations that consider her earlier knowledge. Open source modules enable pedagogically motivated exploration in wiki spaces, corresponding to an intelligent tutoring system. The method extracted compact noun–verb–noun phrases, suggested for labeling arcs between nodes that were labeled with article titles. On average, 80 percent of these phrases were useful, while their length was only 20 percent of the length of the original sentences. Experiments indicate that even simple analysis algorithms can well support user-initiated information retrieval and the building of intuitive learning objects that follow the learner's needs.
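    The noun–verb–noun extraction can be sketched as a sliding window over part-of-speech tags. Real systems run a POS tagger first; here the tags are supplied by hand, so the tagging scheme and example sentence are illustrative assumptions rather than the paper's pipeline.

```python
def noun_verb_noun(tagged):
    """Return (noun, verb, noun) triples from a (word, tag) sequence."""
    triples = []
    for i in range(len(tagged) - 2):
        (w1, t1), (w2, t2), (w3, t3) = tagged[i:i + 3]
        if t1 == "NOUN" and t2 == "VERB" and t3 == "NOUN":
            triples.append((w1, w2, w3))
    return triples

sentence = [("Python", "NOUN"), ("supports", "VERB"), ("modules", "NOUN"),
            ("that", "PRON"), ("bundle", "VERB"), ("code", "NOUN")]
print(noun_verb_noun(sentence))  # [('Python', 'supports', 'modules')]
```

    Each triple supplies an arc label ("supports") between two concept nodes, which is how the prototype annotates edges between article-titled nodes.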

    Naturalness vs. Predictability: A Key Debate in Controlled Languages

    In this paper we describe two quite different philosophies used in developing controlled languages (CLs): a "naturalist" approach, in which CL interpretation is treated as a simpler form of full natural language processing; and a "formalist" approach, in which CL interpretation is "deterministic" (context insensitive) and the CL is viewed more as an English-like formal specification language. Despite the philosophical and practical differences, we suggest that a synthesis can be made in which a deterministic core is embedded in a naturalist CL, and illustrate this with our own controlled language, CPL. In the second part of this paper we present a fictitious debate between an ardent "naturalist" and an ardent "formalist", each arguing their respective positions, to illustrate the benefits and tradeoffs of these different philosophies in an accessible way.

    Document-Driven Design for Distributed CAD Services in Service-Oriented Architecture

    Current computer-aided design (CAD) systems only support interactive geometry generation, which is not ideal for distributed engineering services in enterprise-to-enterprise collaboration with a generic thin-client service-oriented architecture. This paper proposes a new feature-based modeling mechanism, document-driven design, to enable batch-mode geometry construction for distributed CAD systems. A semantic feature model is developed to represent informative and communicative design intent. Feature semantics is explicitly captured as a trinary relation, which provides good extensibility and prevents semantics loss. Data interoperability between domains is enhanced by schema mapping and multiresolution semantics. This mechanism aims to enable asynchronous communication in distributed CAD environments with ease of design alternative evaluation and reuse, reduced human errors, and improved system throughput and utilization.
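    A trinary relation for feature semantics can be pictured as a set of (feature, relation, target) facts that can be queried and extended without schema changes. The feature names and relation types below are invented for illustration; the paper's actual semantic model is richer.

```python
from collections import namedtuple

# Each semantic fact relates a feature, a relation type, and a target.
Semantics = namedtuple("Semantics", ["feature", "relation", "target"])

model = [
    Semantics("hole_1", "concentric_with", "boss_2"),
    Semantics("hole_1", "depth", "10mm"),
    Semantics("boss_2", "attached_to", "base_plate"),
]

def facts_about(feature):
    """Query all semantic facts whose subject is the given feature."""
    return [f for f in model if f.feature == feature]

for fact in facts_about("hole_1"):
    print(fact.relation, "->", fact.target)
```

    Because every fact is explicit, exchanging the model between domains is a matter of mapping relation names (schema mapping) rather than reverse-engineering implicit geometry, which is the interoperability point the abstract makes.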