    The construction of a linguistic linked data framework for bilingual lexicographic resources

    Little-known lexicographic resources can be of tremendous value to users once digitised. By extending the digitisation of a lexicographic resource so that the human-readable digital object also becomes machine-readable, structured data can be created that is semantically interoperable, thereby enabling the lexicographic resource to access, and be accessed by, other semantically interoperable resources. The purpose of this study is to formulate a process for converting a lexicographic resource in print form into a machine-readable bilingual lexicographic resource by applying linguistic linked data principles, using the English-Xhosa Dictionary for Nurses as a case study. This is accomplished by creating a linked data framework, in which data are expressed in the form of RDF triples and URIs, in a manner that allows for extensibility to a multilingual resource. Click languages with characters not typically represented by the Roman alphabet are also considered. The purpose of this linked data framework is to define each lexical entry as “historically dynamic”, instead of “ontologically static” (Rafferty, 2016:5). For a framework whose instances are in constant evolution, focus is thus given to the management of provenance and the generation of linked data therefrom. The output is an implementation framework which provides methodological guidelines for similar language resources in the interdisciplinary field of Library and Information Science.
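
    A minimal sketch of what one such RDF triple representation might look like, written with Python's rdflib and the OntoLex-Lemon vocabulary; the entry URIs, the isiXhosa equivalent and the translation property below are hypothetical illustrations, not the dictionary's published data model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespaces; the converted dictionary's real URIs are not given in the abstract.
ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/english-xhosa-dictionary/")

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("ex", EX)

# One English lexical entry and one isiXhosa entry, linked as translation equivalents.
nurse_en = EX["entry/nurse-en"]
nurse_xh = EX["entry/umongikazi-xh"]

g.add((nurse_en, RDF.type, ONTOLEX.LexicalEntry))
g.add((nurse_en, RDFS.label, Literal("nurse", lang="en")))
g.add((nurse_xh, RDF.type, ONTOLEX.LexicalEntry))
g.add((nurse_xh, RDFS.label, Literal("umongikazi", lang="xh")))
g.add((nurse_en, EX.translationEquivalent, nurse_xh))  # hypothetical linking property

print(g.serialize(format="turtle"))
```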

    “Qu’est-ce Qu’elle Dit? What she say, what she say?” Translating the Resisting Other in Contemporary Caribbean Women’s Writing

    I focus my discussion of Amryl Johnson’s poem “Qu’est-ce Qu’elle Dit”, Erna Brodber’s second novel Myal, and Merle Collins’s The Colour of Forgetting on the texts’ representations of cultural difference and cultural transformation. The poem and the novels, I argue, present a version of Caribbean history that resists colonial discourse and that effects a process of healing and recovery from the epistemic violence of colonial historiography and the continued imposition of its cultural norms. At the same time, I suggest that part of the process of resistance involves a radical reconceptualising and transformation of the Other. In these texts, what Nathaniel Mackey defines as “artistic othering” (55) is, as I wish to demonstrate in this article, a mode of resistance, a textual strategy that confronts, resists and refuses a too easy reappropriation of meaning, and yet insists on possibility. I approach the three texts as examples of counterdiscursive praxis, as texts which make “an intervention into postcolonial theoretical discourse” (O’Callaghan, “Play It Back” 67). Amryl Johnson’s poem, from which the title of this paper comes, is emblematic of the tensions that arise in seemingly paradoxical processes of othering, reintegration and recovery in a creolized Caribbean context.

    Generic adaptation framework for unifying adaptive web-based systems

    The Generic Adaptation Framework (GAF) research project first and foremost creates a common formal framework for describing current and future adaptive hypermedia systems (AHS) and adaptive web-based systems in general. It provides a commonly agreed-upon taxonomy and a reference model that encompasses the most general architectures of the present and future, including conventional AHS and different types of personalization-enabling systems and applications such as recommender systems (RS), personalized web search, semantic-web-enabled applications used in personalized information delivery, adaptive e-learning applications, and many more. At the same time, GAF tries to bring together two (seemingly non-intersecting) views on adaptation: the classical pre-authored type, with conventional domain and overlay user models, and data-driven adaptation, which draws on a set of data mining, machine learning and information retrieval tools. To bring these research fields together, we conducted a number of GAF compliance studies covering RS, AHS, and other applications combining adaptation, recommendation and search. We also performed a number of case studies of real systems to prove the point and to carry out a detailed analysis and evaluation of the framework. Secondly, GAF introduces a number of new ideas in the field of adaptive hypermedia, such as the Generic Adaptation Process (GAP), which aligns with a layered (data-oriented) architecture and serves as a reference adaptation process; this also helps to understand the compliance features mentioned earlier. Besides that, GAF deals with important and novel aspects of adaptation-enabling and leveraging technologies such as provenance and versioning. The existence of such a reference basis should stimulate AHS research and enable researchers to demonstrate ideas for new adaptation methods much more quickly than if they had to start from scratch. GAF will thus help bootstrap any adaptive web-based system research, design, analysis and evaluation.
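
    As a rough illustration of the "classical pre-authored" adaptation style mentioned above (a domain model plus an overlay user model), the Python sketch below recommends concepts whose prerequisites the user already knows well enough; the concept names, knowledge scores and threshold are hypothetical and are not part of GAF itself.

```python
# Domain model: concept -> prerequisite concepts (hypothetical course content).
domain_model = {
    "html-basics": [],
    "css-selectors": ["html-basics"],
    "responsive-layout": ["css-selectors"],
}

# Overlay user model: one knowledge estimate (0.0-1.0) per domain concept.
user_model = {"html-basics": 0.9, "css-selectors": 0.4, "responsive-layout": 0.0}

def recommend(domain, user, threshold=0.6):
    """Suggest not-yet-mastered concepts whose prerequisites are already mastered."""
    return [
        concept
        for concept, prereqs in domain.items()
        if user.get(concept, 0.0) < threshold
        and all(user.get(p, 0.0) >= threshold for p in prereqs)
    ]

print(recommend(domain_model, user_model))  # ['css-selectors']
```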

    Alternate Means of Digital Design Communication

    This thesis reconceptualises communication in digital design as an integrated social and technical process. The friction in the communicative processes pertaining to digital design can be traced to the fact that current research and practice emphasise technical concerns at the expense of the social aspects of design communication. With the advent of BIM (Building Information Modelling), a code model of communication (machine-to-machine) is inadequately applied to design communication. This imbalance is addressed in this thesis by using inferential models of communication to capture and frame the psychological and social aspects behind the communicative contracts between people. Three critical aspects of the communicative act have been analysed, namely (1) data representation, (2) data classification and (3) data transaction, with the help of a new digital design communication platform, Speckle, which was developed during this research project for this purpose. By virtue of an applied living-laboratory context, Speckle facilitated both qualitative and quantitative comparisons against existing methodologies with data from real-world settings. Regarding data representation (1), this research finds that the communicative performance of a low-level composable object model is better than that of a complete and universal one, as it enables a more dynamic process of ontological revision. This implies that current practice and research operate at an inappropriate level of abstraction. On data classification (2), this thesis shows that a curatorial object-based data-sharing methodology, as opposed to the current file-based approaches, leads to increased relevancy and a reduction in noise (information without intent or meaning). Finally, on data transaction (3), the analysis shows that an object-based data-sharing methodology is technically better suited to enabling communicative contracts between stakeholders. It allows for faster and more meaningful change-dependent transactions, as well as allowing for the emergence of traceable communicative networks outside the predefined exchanges of current practice.
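
    A toy sketch of the object-based (rather than file-based) data-sharing idea, assuming content-addressed objects; this is not the actual Speckle SDK, only an illustration of why change-dependent transactions can stay small when only the changed objects are re-sent.

```python
import hashlib
import json

def object_id(obj: dict) -> str:
    """Content-address a design object by hashing its canonical JSON form."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

store = {}  # object id -> object; stands in for a shared server

def send(objects):
    """Upload only objects the store has not seen before; return their ids."""
    ids = []
    for obj in objects:
        oid = object_id(obj)
        store.setdefault(oid, obj)
        ids.append(oid)
    return ids

wall_v1 = {"type": "Wall", "height": 3.0}
wall_v2 = {"type": "Wall", "height": 3.2}   # only this object changed
door = {"type": "Door", "width": 0.9}

send([wall_v1, door])
send([wall_v2, door])   # the unchanged door is deduplicated by its id
print(len(store))       # 3 distinct objects; only the modified wall was re-added
```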

    A Unified Metamodel for Assessing and Predicting Software Evolvability Quality

    Software quality is a key assessment factor for organizations to determine the ability of software ecosystems to meet constantly changing requirements. Many quality models exist that capture and assess the changing factors affecting the quality of a software product. Common to these models is that they, contrary to the software ecosystems they are assessing, are not evolvable or reusable. The thesis first defines what constitutes a unified, evolvable, and reusable quality metamodel. We then introduce SE-EQUAM, a novel, ontological, quality assessment metamodel that was designed from the ground up to support quality unification, reuse, and evolvability. We then validate the reusability of our metamodel by instantiating a domain-specific quality assessment model called OntEQAM, which assesses evolvability as a non-functional software quality based on product and community dimensions. A fuzzy-logic-based assessment process that addresses uncertainties around score boundaries supports the evolvability quality assessment. The presented assessment process also uses the unified representation of the input knowledge artifacts, the metamodel, and the model to provide a fuzzy assessment score. Finally, we further interpret and predict the evolvability assessment scores using a novel, cross-disciplinary approach that re-applies financial technical analysis: indicators and patterns typically used for price analysis and the forecasting of stocks in financial markets. We performed several case studies to illustrate and evaluate the applicability of our proposed evolvability score prediction approach.
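
    A toy illustration of the fuzzy treatment of score boundaries, using a hypothetical 0-100 scale and made-up category cut-offs rather than SE-EQUAM's or OntEQAM's actual assessment process: a raw evolvability score near a boundary belongs partially to the categories on both sides of it.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rating(score):
    """Membership degrees of a 0-100 evolvability score in three fuzzy categories."""
    return {
        "low":    triangular(score, -1, 0, 50),
        "medium": triangular(score, 25, 50, 75),
        "high":   triangular(score, 50, 100, 101),
    }

# A score of 55 sits near the medium/high boundary, so it is mostly "medium"
# with a small degree of "high" rather than falling into a single crisp bucket.
print(fuzzy_rating(55))  # {'low': 0.0, 'medium': 0.8, 'high': 0.1}
```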

    Innovations for Requirements Analysis, From Stakeholders' Needs to Formal Designs

    14th Monterey Workshop 2007, Monterey, CA, USA, September 10-13, 2007. Revised Selected Papers. We are pleased to present the proceedings of the 14th Monterey Workshop, which took place September 10-13, 2007 in Monterey, CA, USA. In this preface, we give the reader an overview of what took place at the workshop and introduce the contributions in this Lecture Notes in Computer Science volume. A complete introduction to the theme of the workshop, as well as to the history of the Monterey Workshop series, can be found in Luqi and Kordon’s “Advances in Requirements Engineering: Bridging the Gap between Stakeholders’ Needs and Formal Designs” in this volume. This paper also contains the case study that many participants used as a problem to frame their analyses, and a summary of the workshop’s results.

    Ontology-based knowledge representation and semantic search information retrieval: case study of the underutilized crops domain

    The aim of using semantic technologies in domain knowledge modeling is to introduce the semantic meaning of concepts in knowledge bases, such that they are both human-readable and machine-understandable. Owing to their powerful knowledge representation formalism and associated inference mechanisms, ontology-based approaches have been increasingly adopted to formally represent domain knowledge. The primary objective of this thesis work has been to use semantic technologies to advance knowledge-sharing in the domain of underutilized crops and to investigate the integration of the underlying ontologies, developed in OWL (Web Ontology Language), with augmented SWRL (Semantic Web Rule Language) rules for added expressiveness. The work further investigated generating ontologies from existing data sources and proposed a reverse-engineering approach that derives domain-specific conceptualizations from competency questions posed by prospective ontology users and domain experts. For utilization, a semantic search engine (Onto-CropBase) has been developed to serve as a web-based access point for the underutilized crops ontology model. Relevant linked data in Resource Description Framework Schema (RDFS) were added for comprehensiveness in generating federated queries. While the OWL/SWRL combination offers a highly expressive ontology language for modeling knowledge domains, it is found to lack supplementary descriptive constructs for modeling complex real-life scenarios, a necessary requirement for a successful Semantic Web application. To this end, the common logic programming formalisms for extending Description Logic (DL)-based ontologies were explored and the state of the art in SWRL expressiveness extensions determined, with a view to extending the SWRL formalism. Subsequently, a novel fuzzy temporal extension to the Semantic Web Rule Language (FT-SWRL), which combines SWRL with fuzzy logic theories based on the valid-time temporal model, has been proposed to allow modeling imprecise temporal expressions in domain ontologies.
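
    A minimal sketch of the kind of query a semantic search front end such as Onto-CropBase might issue, written with Python's rdflib; the mini crop ontology and the class and property names below are hypothetical and do not come from the actual Onto-CropBase schema.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/crops#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.add((EX.BambaraGroundnut, RDF.type, EX.UnderutilizedCrop))
g.add((EX.BambaraGroundnut, RDFS.label, Literal("Bambara groundnut", lang="en")))
g.add((EX.BambaraGroundnut, EX.droughtTolerance, Literal("high")))

# SPARQL query: find labels of underutilized crops with high drought tolerance.
query = """
    PREFIX ex: <http://example.org/crops#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?crop a ex:UnderutilizedCrop ;
              ex:droughtTolerance "high" ;
              rdfs:label ?label .
    }
"""
for row in g.query(query):
    print(row.label)  # Bambara groundnut
```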

    A framework for analyzing changes in health care lexicons and nomenclatures

    Ontologies play a crucial role in current web-based biomedical applications for capturing contextual knowledge in the domain of life sciences. Many of the so-called bio-ontologies and controlled vocabularies are known to be seriously defective from both terminological and ontological perspectives, and do not sufficiently comply with the standards to be considered formal ontologies. Therefore, they are continuously evolving in order to fix the problems and provide valid knowledge. Moreover, many problems in ontology evolution often originate from incomplete knowledge about the given domain. As our knowledge improves, the related definitions in the ontologies will be altered. This problem is inadequately addressed by available tools and algorithms, mostly due to the lack of suitable knowledge representation formalisms for dealing with abstract temporal notations, and to the overreliance on human factors. Also, most current approaches have focused on changes within the internal structure of ontologies, while interactions with other existing ontologies have been widely neglected. In this research, after revealing and classifying some of the common alterations in a number of popular biomedical ontologies, we present a novel agent-based framework, RLR (Represent, Legitimate, and Reproduce), to semi-automatically manage the evolution of bio-ontologies with minimal human intervention, with emphasis on the FungalWeb Ontology. RLR assists and guides ontology engineers through the change management process in general, and aids in tracking and representing the changes, particularly through the use of category theory. Category theory has been used as a mathematical vehicle for modeling changes in ontologies and representing agents’ interactions, independent of any specific choice of ontology language or particular implementation. We have also employed rule-based hierarchical graph transformation techniques to propose a more specific semantics for analyzing ontological changes and transformations between different versions of an ontology, as well as for tracking the effects of a change at different levels of abstraction. Thus, the RLR framework enables one to manage changes in ontologies, not as standalone artifacts in isolation, but in contact with other ontologies in an openly distributed semantic web environment. The emphasis upon generality and abstractness makes RLR more feasible in the multi-disciplinary domain of biomedical ontology change management.
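
    A toy sketch of version-to-version change tracking, treating each ontology version as a set of edges and diffing them; this is a deliberate simplification for illustration (the class names are made up) and does not reflect RLR's category-theoretic or graph-transformation machinery.

```python
# Two hypothetical versions of a small fragment of a fungal ontology,
# each represented as a set of (subject, relation, object) edges.
v1 = {
    ("FungalSpecies", "subClassOf", "Organism"),
    ("Aspergillus", "subClassOf", "FungalSpecies"),
}
v2 = {
    ("FungalSpecies", "subClassOf", "Eukaryote"),    # definition revised
    ("Aspergillus", "subClassOf", "FungalSpecies"),
    ("Penicillium", "subClassOf", "FungalSpecies"),  # new knowledge added
}

def diff(old, new):
    """Classify edges as kept, added, or removed between two versions."""
    return {"kept": old & new, "added": new - old, "removed": old - new}

for kind, edges in diff(v1, v2).items():
    for edge in sorted(edges):
        print(kind, edge)
```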