
    Requirements elicitation through viewpoint control in a natural language environment

    While requirements engineering is about building a conceptual model of part of reality, requirements validation involves assessing the model for correctness, completeness, and consistency. Viewpoint resolution is the process of comparing different views of a given situation and reconciling different opinions. In his doctoral dissertation, Leite [72] proposes viewpoint resolution as a means for early validation of the requirements of large systems. Leite concentrates on the representation of two different views in a special language and on the identification of their syntactic differences. His method relies heavily on redundancy: two viewpoints (systems analysts) should consider the same topic, use the same vocabulary, and use the same rule-based language, which constrains how the rules may be expressed. The quality of the discrepancies his method can detect depends on the quality of the viewpoints. The hypothesis of this thesis is that, independently of the quality of the viewpoints, the number of viewpoints, the language, and the domain, it is possible to detect discrepancies of better quality and to point out problems earlier than Leite's method allows. In the first part of this study, viewpoint-oriented requirements engineering methods are classified into categories based on the kind of multiplicity they address: multiple human agents, multiple specification processes, or multiple representation schemes. The classification provides a framework for comparing and evaluating viewpoint-based methods. The study then focuses on a critical evaluation of Leite's method, both analytically and experimentally. Counterexamples were designed to identify the situations the method cannot handle. The second part of the work concentrates on the development of a method for the very early validation of requirements that improves on Leite's method and pushes the boundaries of the validation process upstream, towards fact-finding, and downstream, towards conflict resolution. The Viewpoint Control Method draws its principles from the fields of uncertainty management and natural language engineering. The basic principle of the method is that, in order to make sense of a domain, one must learn about the information sources and create models of their behaviour. These models are used to assess pieces of information, in natural language, received from the sources and to resolve conflicts between them. The models are then reassessed in the light of feedback from the results of the process of information evaluation and conflict resolution. Among the implications of this approach are the very early detection of problems and the treatment of conflict resolution as an explicit and integral part of the requirements engineering process. The method is designed to operate within a large environment called LOLITA that supports relevant aspects of natural language engineering. In the third part of the study, the Viewpoint Control Method is applied and experimentally evaluated using examples and practical case studies. Comparing the proposed approach to Leite's shows that the Viewpoint Control Method has wider scope, detects problems earlier, and points out problems of better quality. The conclusions of the investigation support the view that it is naive to assume the competence or objectivity of any source of information.
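    The abstract gives only the principle, not the mechanics, but the loop it describes (model the sources, score their claims, resolve conflicts by credibility, feed the outcome back into the source models) can be illustrated. The sketch below is a hypothetical toy, not the thesis's implementation: all names, scores, and update rules are invented, and the LOLITA natural language machinery is ignored.

```python
# Hypothetical toy, not the thesis's implementation: model the information
# sources, score their claims by source credibility, resolve conflicts,
# and feed the outcome back into the source models.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    reliability: float = 0.5          # current belief in the source's competence

@dataclass
class Claim:
    source: Source
    topic: str
    value: str                        # the asserted fact, in simplified form

def resolve(claims):
    """Group claims by topic; on conflict, accept the claim from the source
    currently believed most reliable, then update every source model."""
    by_topic = {}
    for c in claims:
        by_topic.setdefault(c.topic, []).append(c)
    accepted = {}
    for topic, cs in by_topic.items():
        winner = max(cs, key=lambda c: c.source.reliability)
        accepted[topic] = winner.value
        for c in cs:                  # feedback: reward agreement, penalise conflict
            delta = 0.05 if c.value == winner.value else -0.05
            c.source.reliability = min(1.0, max(0.0, c.source.reliability + delta))
    return accepted

analyst_a = Source("analyst_a", 0.7)
analyst_b = Source("analyst_b", 0.4)
print(resolve([Claim(analyst_a, "payment", "card only"),
               Claim(analyst_b, "payment", "card or cash")]))
```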

    The Effectiveness of an Academic Literacy Intervention to Help University Freshmen Recognize and Resolve Inconsistencies Across Multiple Texts

    Students must independently complete academic literacy tasks--including reading analytically to identify problems, resolving problems that arise, and using writing to demonstrate advanced knowledge acquisition--if they are to be successful in courses across their university careers. However, a significant portion of students arrives at the university underprepared to meet these expectations for academic literacy. The purpose of this study was to examine the effectiveness of an instructional intervention to help developmental-level freshmen acquire the academic literacy skills that experienced academic readers demonstrate, in order to promote independent learning. The four-week instructional intervention focused on two aspects of advanced academic literacy: 1) identifying inconsistencies across multiple texts and 2) flexibly employing evaluative heuristics (sourcing, corroboration, and contextualization) in order to resolve inconsistencies. The study, which took place at a large, urban, public university over the course of five weeks in two intact sections of a developmental-level academic literacy course taught by one instructor, used a pre-experimental one-group pretest-posttest design. Participants (N = 31) were administered the Multiple Text Tasks as a pretest and a posttest in order to measure three dependent variables: 1) the number of inconsistencies identified, 2) the number of evaluative heuristics used in writing, and 3) the number of evaluative heuristics used in reading. More participants were categorized as High Use in their ability to recognize inconsistencies across multiple texts post-intervention; this result was statistically significant. Although participants did increase their use of evaluative heuristics in writing and in reading post-intervention, these results did not reach statistical significance. One unique finding was that developmental-level freshmen in this study used the contextualization heuristic at higher rates than in previous studies. The results suggest that the instructional intervention contributed to an increase in the number of inconsistencies identified, and may also have contributed to the increased use of evaluative heuristics. However, the failure of the latter result to reach statistical significance suggests that the intervention was not of adequate intensity or duration.
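    As a rough illustration of the paired pretest/posttest comparison described above, the sketch below applies a non-parametric paired test to invented per-participant counts of identified inconsistencies; the study's actual instruments and statistical procedure are not reproduced here.

```python
# Rough illustration only: a paired non-parametric test on invented
# pretest/posttest counts of inconsistencies identified per participant.
from scipy.stats import wilcoxon

pretest  = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]   # before the intervention
posttest = [2, 1, 2, 3, 1, 2, 2, 1, 2, 3]   # after the intervention

stat, p = wilcoxon(pretest, posttest)        # paired samples, no normality assumption
print(f"Wilcoxon W = {stat}, p = {p:.4f}")
```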

    Capture and Maintenance of Constraints in Engineering Design

    The thesis investigates two domains: initially the kite domain, and then part of a more demanding Rolls-Royce domain (jet engine design). Four main types of refinement rules are proposed that use the associated application conditions and domain ontology to support the maintenance of constraints. The refinement rules have been implemented in ConEditor, and the extended system is known as ConEditor+. With the help of ConEditor+, the thesis demonstrates that an explicit representation of application conditions, together with the corresponding constraints and the domain ontology, can be used to detect inconsistencies, redundancy, subsumption, and fusion between pairs of constraints; to reduce the number of spurious inconsistencies; and to prevent the identification of inappropriate refinements of redundancy, subsumption, and fusion.
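    The abstract does not spell out the refinement rules, but the pairwise checks it names (inconsistency, redundancy, subsumption) can be illustrated on a toy encoding. The sketch below is a hypothetical simplification, not ConEditor+'s algorithm: application conditions become sets of propositions, and each constraint pins one parameter to one value.

```python
# Minimal sketch, not ConEditor+'s actual rules: each constraint carries an
# explicit application condition, and pairwise checks flag redundancy,
# inconsistency, and subsumption under a deliberately simplified encoding.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    applies_when: frozenset      # application condition (set of propositions)
    restriction: tuple           # (parameter, required_value)

def compare(a: Constraint, b: Constraint) -> str:
    same_param = a.restriction[0] == b.restriction[0]
    overlap = (a.applies_when <= b.applies_when
               or b.applies_when <= a.applies_when)
    if not (same_param and overlap):
        return "independent"
    if a == b:
        return "redundant"               # identical condition and restriction
    if a.restriction[1] != b.restriction[1]:
        return "inconsistent"            # same parameter forced to two values
    return "subsumption"                 # same restriction, one condition more general

c1 = Constraint(frozenset({"jet_engine", "composite_blade"}), ("max_temp_C", 650))
c2 = Constraint(frozenset({"jet_engine"}), ("max_temp_C", 700))
print(compare(c1, c2))                   # -> inconsistent (where both apply)
```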

    Requirements and use cases

    In this report, we introduce our initial vision of the Corporate Semantic Web as the next step in the broad field of Semantic Web research. We identify requirements of the corporate environment and gaps in current approaches to tackling problems in ontology engineering, semantic collaboration, and semantic search. Each of these pillars will yield innovative methods and tools during the project runtime until 2013. Corporate ontology engineering will improve the facilitation of agile ontology engineering to lessen the costs of ontology development and, especially, maintenance. Corporate semantic collaboration focuses on the human-centered aspects of knowledge management in corporate contexts. Corporate semantic search sits at the highest application level of the three research areas and is representative of applications that work on and with appropriately represented and delivered background knowledge. We propose an initial layout for an integrative architecture of a Corporate Semantic Web built on these three core pillars.

    Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities

    One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address these limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, and the expertise of users and their evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. Online communities are dynamic: users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on the Hidden Markov Model, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language model over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. It also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information.
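    As an illustration of the temporal part of this approach, the sketch below runs the forward algorithm of a tiny two-state Hidden Markov Model to trace a user's latent expertise from the observed quality of their posts. All probabilities and labels are invented; the thesis's actual models (which combine HMMs with Latent Dirichlet Allocation and Brownian Motion) are not reproduced.

```python
# Toy two-state HMM with invented parameters: filtering a user's latent
# expertise level from the observed quality of their posts over time.
import numpy as np

states = ["novice", "expert"]
T = np.array([[0.90, 0.10],          # novices tend to stay novices...
              [0.05, 0.95]])         # ...experts tend to stay experts
E = np.array([[0.7, 0.3],            # P(post quality | novice): low, high
              [0.2, 0.8]])           # P(post quality | expert)
pi = np.array([0.8, 0.2])            # most new users start as novices

def forward(observations):
    """Filtered P(state_t | obs_1..t) per step (0 = low-quality post,
    1 = high-quality post)."""
    belief = pi * E[:, observations[0]]
    belief /= belief.sum()
    trajectory = [belief]
    for o in observations[1:]:
        belief = (belief @ T) * E[:, o]   # predict, then correct
        belief /= belief.sum()
        trajectory.append(belief)
    return np.array(trajectory)

posts = [0, 0, 1, 1, 1]              # a user's posts improving over time
for t, b in enumerate(forward(posts)):
    print(t, dict(zip(states, b.round(3))))
```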

    Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense Reasoning for High-level Analyses in Geographic Information Systems

    The modelling, analysis, and visualisation of dynamic geospatial phenomena have been identified as a key developmental challenge for next-generation Geographic Information Systems (GIS). In this context, the envisaged paradigmatic extensions to contemporary foundational GIS technology raise fundamental questions concerning the ontological, formal representational, and (analytical) computational methods that would underlie their spatial information theoretic underpinnings. We present the conceptual overview and architecture for the development of high-level semantic and qualitative analytical capabilities for dynamic geospatial domains. Building on formal methods in the areas of commonsense reasoning, qualitative reasoning, spatial and temporal representation and reasoning, reasoning about actions and change, and computational models of narrative, we identify concrete theoretical and practical challenges that accrue in the context of formal reasoning about `space, events, actions, and change'. With this as a basis, and against the backdrop of an illustrated scenario involving the spatio-temporal dynamics of urban narratives, we address specific problems and solution techniques chiefly involving `qualitative abstraction', `data integration and spatial consistency', and `practical geospatial abduction'. From a broad topical viewpoint, we propose that next-generation dynamic GIS technology demands a transdisciplinary scientific perspective that brings together Geography, Artificial Intelligence, and Cognitive Science. Keywords: artificial intelligence; cognitive systems; human-computer interaction; geographic information systems; spatio-temporal dynamics; computational models of narrative; geospatial analysis; geospatial modelling; ontology; qualitative spatial modelling and reasoning; spatial assistance systems.
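    To make `data integration and spatial consistency' concrete, the sketch below checks asserted qualitative relations between urban entities against a toy composition table over three region relations. The table is a hand-picked fragment in the spirit of RCC-style calculi, not a calculus taken from the article.

```python
# Toy fragment in the spirit of RCC-style qualitative spatial reasoning:
# three region relations and a hand-picked composition table, used to test
# asserted relations between urban entities for mutual consistency.
DISJOINT, OVERLAPS, INSIDE = "DC", "PO", "NTPP"
ALL = {DISJOINT, OVERLAPS, INSIDE}

# composition[r1][r2] = relations possible between A and C,
# given A r1 B and B r2 C
composition = {
    INSIDE:   {INSIDE: {INSIDE}, OVERLAPS: ALL, DISJOINT: {DISJOINT}},
    OVERLAPS: {INSIDE: {INSIDE, OVERLAPS}, OVERLAPS: ALL,
               DISJOINT: {DISJOINT, OVERLAPS}},
    DISJOINT: {INSIDE: ALL, OVERLAPS: ALL, DISJOINT: ALL},
}

def consistent(ab, bc, ac):
    """Is the asserted A-C relation compatible with composing A-B and B-C?"""
    return ac in composition[ab][bc]

# A park inside a district that is disjoint from the river cannot overlap it:
print(consistent(INSIDE, DISJOINT, OVERLAPS))   # False
print(consistent(INSIDE, DISJOINT, DISJOINT))   # True
```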

    The impact of inconsistent human annotations on AI-driven clinical decision making

    In supervised learning model development, domain experts are often used to provide the class labels (annotations). Annotation inconsistencies commonly occur when even highly experienced clinical experts annotate the same phenomenon (e.g., medical image, diagnostics, or prognostic status), due to inherent expert bias, judgments, and slips, among other factors. While their existence is relatively well known, the implications of such inconsistencies are largely understudied in real-world settings where supervised learning is applied to such ‘noisy’ labelled data. To shed light on these issues, we conducted extensive experiments and analyses on three real-world Intensive Care Unit (ICU) datasets. Specifically, individual models were built from a common dataset annotated independently by 11 Glasgow Queen Elizabeth University Hospital ICU consultants, and model performance estimates were compared through internal validation (Fleiss’ κ = 0.383, i.e., fair agreement). Further, broad external validation (on both static and time-series datasets) of these 11 classifiers was carried out on the external HiRID dataset, where the models’ classifications were found to have low pairwise agreement (average Cohen’s κ = 0.255, i.e., minimal agreement). Moreover, they tended to disagree more on making discharge decisions (Fleiss’ κ = 0.174) than on predicting mortality (Fleiss’ κ = 0.267). Given these inconsistencies, further analyses were conducted to evaluate current best practices in obtaining gold-standard models and determining consensus. The results suggest that: (a) there may not always be a “super expert” in acute clinical settings (using internal and external validation model performances as a proxy); and (b) standard consensus seeking (such as majority voting) consistently leads to suboptimal models. Further analysis, however, suggests that assessing annotation learnability and using only ‘learnable’ annotated datasets for determining consensus achieves optimal models in most cases.
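    The agreement statistics and the majority-vote baseline discussed above are straightforward to reproduce in outline. The sketch below computes Fleiss' κ over invented annotator labels and builds a majority-vote consensus; the data, the number of annotators, and the binary labelling are illustrative assumptions, not the paper's.

```python
# Illustrative sketch with invented data: inter-annotator agreement via
# Fleiss' kappa, plus the majority-vote consensus the paper evaluates.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows: ICU cases; columns: 5 hypothetical annotators (0 = discharge, 1 = not)
labels = np.array([
    [0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
])

counts, _ = aggregate_raters(labels)        # subjects x categories count matrix
print("Fleiss' kappa:", round(fleiss_kappa(counts), 3))

# Majority-vote consensus, the baseline the paper finds suboptimal
consensus = (labels.sum(axis=1) > labels.shape[1] / 2).astype(int)
print("consensus labels:", consensus)
```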