
    Ontology modularization: principles and practice

    Technological advances have given us the capability to build large intelligent systems that use knowledge, which relies on representing that knowledge in a way machines can process and interpret. This is achieved with ontologies, that is, logical theories that capture the knowledge of a domain. It is widely accepted that ontology development is a non-trivial task and can be expedited through the reuse of existing ontologies. However, a developer is likely to require only part of the original ontology; obtaining this part is the purpose of ontology modularization. In this thesis a graph-traversal-based technique for performing ontology module extraction is presented. We present an extensive evaluation of the ontology modularization techniques in the literature, including a proposal for an entropy-inspired measure. A task-based evaluation is included, which demonstrates that traversal-based ontology module extraction techniques have performance comparable to the logic-based techniques. Agents, autonomous software components, use ontologies in complex systems, with each agent having its own, possibly different, ontology. In such systems agents need to communicate, and successful communication relies on the agents' ability to reach agreement on the terms they will use. Ontology modularization allows the agents to agree on only those terms relevant to the purpose of the communication. Thus, this thesis presents a novel application of ontology modularization as a space-reduction mechanism for the dynamic selection of ontology alignments in multi-agent systems. The evaluation of this novel application shows that ontology modularization can reduce the search space without adversely affecting the quality of the agreed ontology alignment.
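    The thesis's exact algorithm is not reproduced here, but the core idea of traversal-based module extraction can be sketched as a breadth-first walk over the ontology's dependency graph, starting from the signature terms the user cares about. The graph encoding, function name, and `max_depth` cut-off below are illustrative assumptions, not the thesis's method:

```python
from collections import deque

def extract_module(edges, signature, max_depth=None):
    """Traversal-based module extraction sketch.

    edges: dict mapping each term to the terms its axioms reference.
    signature: iterable of seed terms the user cares about.
    Returns the set of terms reachable from the signature.
    """
    module = set(signature)
    frontier = deque((term, 0) for term in signature)
    while frontier:
        term, depth = frontier.popleft()
        if max_depth is not None and depth >= max_depth:
            continue  # optional bound to keep the module small
        for neighbour in edges.get(term, ()):
            if neighbour not in module:
                module.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return module

# Toy ontology: Dog -> Animal -> LivingThing; Car -> Vehicle.
ontology = {
    "Dog": ["Animal"],
    "Animal": ["LivingThing"],
    "Car": ["Vehicle"],
}
print(sorted(extract_module(ontology, {"Dog"})))
# ['Animal', 'Dog', 'LivingThing'] -- the Car branch is excluded
```

    Logic-based extractors instead compute modules with semantic guarantees; the task-based evaluation reported in the abstract suggests a traversal of this kind can perform comparably in practice.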

    Structuring Abstraction to Achieve Ontology Modularisation

    Large and complex ontologies lead to usage difficulties, thereby hampering ontology developers' tasks. Ontology modules have been proposed as a possible solution, supported by some algorithms and tools. However, the majority of module types, including those based on abstraction, still rely on manual methods for modularisation. Toward filling this gap in modularisation techniques, we systematised abstractions and selected five types of abstraction relevant for modularisation, for which we created novel algorithms, implemented them, and wrapped them in a GUI, called NOMSA, to facilitate their use by ontology developers. The algorithms were evaluated quantitatively by assessing the quality of the generated modules. The quality of a module is measured by comparing it to the benchmark metrics of an existing framework for ontology modularisation. The results show that module quality ranges from average to good, whilst also eliminating manual intervention.
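    NOMSA's five abstraction algorithms are specified in the work itself; purely as an illustrative sketch, one plausible abstraction operator, pruning the class hierarchy below a chosen depth, could look like the following. The dictionary encoding of the hierarchy and the function name are assumptions for the example:

```python
def abstract_by_depth(subclass_of, max_depth):
    """Illustrative abstraction: keep only classes within max_depth of a root.

    subclass_of: dict mapping child class to parent class; classes that
    never appear as keys are treated as roots (depth 0).
    Returns the set of classes retained in the abstracted hierarchy.
    """
    def depth(cls):
        d = 0
        while cls in subclass_of:  # walk up to the root
            cls = subclass_of[cls]
            d += 1
        return d

    all_classes = set(subclass_of) | set(subclass_of.values())
    return {c for c in all_classes if depth(c) <= max_depth}

# Animal (0) <- Dog (1) <- Poodle (2); Animal <- Cat (1)
hierarchy = {"Poodle": "Dog", "Dog": "Animal", "Cat": "Animal"}
print(sorted(abstract_by_depth(hierarchy, 1)))
# ['Animal', 'Cat', 'Dog'] -- Poodle is abstracted away
```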

    An Automated System for the Assessment and Ranking of Domain Ontologies

    As the number of intelligent software applications and semantic websites continues to expand, ontologies are needed to formalize shared terms. Often it is necessary either to find a previously used ontology for a particular purpose or to develop a new one to meet a specific need. Because of the challenge involved in creating a new ontology from scratch, the former option is often preferable. The ability of a user to select an appropriate, high-quality domain ontology from a set of available options would be most useful in knowledge engineering and in developing intelligent applications. Being able to assess an ontology's quality and suitability is also important when an ontology is developed from scratch. These capabilities, however, require good quality-assessment mechanisms as well as automated support when there are a large number of ontologies from which to select. This thesis provides an in-depth analysis of current research in domain ontology evaluation, including the development of a taxonomy to categorize the numerous directions the research has taken. Based on the lessons learned from the literature review, an approach to the automatic assessment of domain ontologies is selected and a suite of ontology quality-assessment metrics grounded in semiotic theory is presented. The metrics are implemented in a Domain Ontology Rating System (DoORS), which is made available as an open-source web application. An additional framework is developed that would incorporate this rating system as part of a larger system to find ontology libraries on the web, retrieve ontologies from them, and assess them to select the best ontology for a particular task. An empirical evaluation in four phases shows the usefulness of the work, including a more stringent evaluation of the metrics that assess how well an ontology fits its domain and how well it is regarded within its community of users.
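    DoORS implements its own semiotic-theory-based suite; purely as an illustration of what automated ontology metrics look like, the following computes two widely used structural measures in the style of the OntoQA family. The function names and inputs are assumptions for this sketch, not DoORS's API:

```python
def relationship_richness(num_subclass_axioms, num_other_relations):
    """Share of relations that are not plain subclass links.

    A higher value suggests the ontology encodes richer domain
    knowledge than a bare taxonomy would.
    """
    total = num_subclass_axioms + num_other_relations
    return num_other_relations / total if total else 0.0

def attribute_richness(num_attributes, num_classes):
    """Average number of attributes (data properties) per class."""
    return num_attributes / num_classes if num_classes else 0.0

# An ontology with 30 subclass axioms, 10 other relations,
# 40 attributes, and 20 classes:
print(relationship_richness(30, 10))  # 0.25
print(attribute_richness(40, 20))     # 2.0
```

    Scores like these can be computed for every candidate ontology and combined into a ranking, which is the kind of automated support the abstract argues for.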

    Measuring affective states from technical debt: A psychoempirical software engineering experiment

    Software engineering is a human activity. Despite this, human aspects are under-represented in technical debt research, perhaps because they are challenging to evaluate. This study's objective was to investigate the relationship between technical debt and the affective states (feelings, emotions, and moods) of software practitioners. Forty participants (N = 40) from twelve companies took part in a mixed-methods design, consisting of a repeated-measures (r = 5) experiment (n = 200), a survey employing a questionnaire, and semi-structured interviews. The statistical analysis shows that different design smells negatively or positively impact affective states. From the qualitative data, it is clear that technical debt activates a substantial portion of the emotional spectrum and is psychologically taxing. Further, practitioners' reactions to technical debt appear to fall into different levels of maturity. We argue that human aspects of technical debt are important factors to consider, as they may result in, e.g., procrastination, apprehension, and burnout. Comment: 48 pages, 11 figures, submitted to Empirical Software Engineering.

    Service innovation - lessons from modularization and open innovation - a new service value

    The traditional manufacturing model of volume-variety influencing the conduct of business is not entirely representative of service-centric business. The latter has two key differences: it is much more end-user-centric and individualistic in experience. The complex nature of service attributes also makes it much more convoluted. The notion of the product being the centre of interaction is being replaced with service processes involving participants, generally defined between a service provider and a service consumer. The aim of this paper is to validate the service innovation hypotheses put forward, based on recent significant developments in value networks, open interfaces, and business models. In doing so, this theoretical paper substantiates the claim that prescriptive volume-variety relationships have little meaning in a service-delivery environment.

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown large and complex, to the point where they cause cognitive overload for humans, in understanding and maintaining them, and for machines, in processing and reasoning. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider using existing ontologies of good quality. However, an entire large ontology is not always required for a particular application; only a subset of the knowledge may be relevant. Modularity deals with simplifying an ontology, for a particular context or by structure, into smaller ontologies, thereby preserving the contextual knowledge. There are a number of benefits to modularising an ontology, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and manage complexity. However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context. Partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from a number of issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. In order to solve the problem, a number of theoretical aspects have to be investigated. It is important to determine which ontology module types are the most widely used and to characterise each such type by distinguishing properties. One must identify the properties that a 'good' or 'usable' module meets.
In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity to define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically based framework for modularity by classifying a set of ontologies with them, which results in dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on the current tools and techniques and experimentally evaluated them. The algorithms of our tool, NOMSA, perform as well as other tools on most performance criteria. Among NOMSA's generated modules, those of two of its algorithms are of good quality when compared to the expected dependencies of the framework. The remaining three algorithms' modules correspond to some of the expected metric values for the ontology set in question.
Solving these problems with modularity resulted in a formal foundation for modularity which comprises: an exhaustive set of modularity dimensions with dependencies between them; a framework for guiding the modularisation process and annotating modules; a way to measure the quality of modules using the novel TOMM tool, which has new and existing evaluation metrics; the SUGOI tool for module management, which has been investigated for module interchangeability; and an implementation of new algorithms to fill the gaps left by insufficient tools and techniques.
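    TOMM's actual metric set is defined in the thesis; as a minimal sketch of how module-quality metrics of this kind can be computed, the following measures a module's size, its size relative to the source ontology, and a simple cohesion ratio of internal to boundary links. All names and the edge encoding are assumptions for the example:

```python
def module_metrics(module, ontology_size, edges):
    """Sketch of simple module-quality metrics (not TOMM's exact set).

    module: set of entities in the module.
    ontology_size: entity count of the source ontology.
    edges: set of (subject, object) axiom links in the source ontology.
    """
    internal = sum(1 for s, o in edges if s in module and o in module)
    boundary = sum(1 for s, o in edges
                   if (s in module) != (o in module))
    return {
        "size": len(module),
        "relative_size": len(module) / ontology_size,
        # High cohesion (few links crossing the module boundary)
        # suggests a self-contained module.
        "cohesion": internal / max(internal + boundary, 1),
    }

m = module_metrics({"A", "B"}, 4, {("A", "B"), ("B", "C"), ("C", "D")})
print(m)  # {'size': 2, 'relative_size': 0.5, 'cohesion': 0.5}
```

    Comparing such values against the expected ranges for a given module type is the kind of dependency check the framework above describes.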

    Application of service composition mechanisms to Future Networks architectures and Smart Grids

    This thesis revolves around service composition methodology and mechanisms and how they can be applied to different fields of application in order to efficiently orchestrate flexible and context-aware communications and processes.
More concretely, it focuses on two fields of application: context-aware media distribution, and smart grid services and infrastructure management, towards a definition of a Software-Defined Utility (SDU), which proposes a new way of managing the Smart Grid following a software-based approach that enables a much more flexible operation of the power infrastructure. Hence, it reviews the context, requirements and challenges of these fields, as well as the service composition approaches. It places special emphasis on the combination of service composition with Future Network (FN) architectures, presenting a service-oriented FN proposal for creating context-aware, on-demand communication services. Service composition methodology and mechanisms are also presented to operate over this architecture and are afterwards proposed for use (in conjunction or not with the FN architecture) in the deployment of context-aware media distribution and Smart Grids. Finally, the research and development done in the field of Smart Grids is depicted, proposing several parts of the SDU infrastructure, with examples of service composition applied to designing dynamic and flexible security for smart metering or the orchestration and management of services and data resources within the utility infrastructure.

    Evaluation of construction contract documents to be applied in modular construction focusing on ambiguities: a text processing approach

    Modular coordination in building construction has become increasingly popular, particularly in Northern Europe and North America. In Canada, modular construction has attracted considerable attention over the last decade due to its valuable effect on project constraints and safety, and its role in preventing construction and demolition waste. However, the modular construction industry still adopts the same administrative procedures designed for the conventional construction industry, even though modular and conventional construction differ in their processes and methods. As a result, ambiguities in administrative documents occur widely and are one of the main causes of conflict, disputes, and claims between owners and modular suppliers acting as general contractors. As a first step in this research to overcome this challenge, the research team focuses on investigating the contents and structures of current standard contracts and modular RFPs, which are among the major sources of confusion in modular construction, in order to mitigate and/or remove the ambiguities, considering the specifics of off-site construction procedures and systems. This research illustrates a conceptual framework with two parts: first, classification of the main sources of ambiguities in construction contracts (both conventional and modular); and second, identification of the similarities and differences between Canadian documents (standard contracts and modular RFPs) and those of benchmark countries through text processing and readability analysis. We applied text processing to find top terms, including terms with high term frequency (TF) in each document as well as high TF-IDF terms, which occur in one document and not in others. We then manually examined the three standard contracts and four RFPs and compared them with the output of the literature review to identify the major common issues.
The readability analysis shows the textual complexity of a document and the extent to which the documents are difficult to read. The main findings indicate that the modular industry in Canada suffers from a lack of standard contract documents specific to modular construction.
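    As a minimal sketch of the text-processing step described above, the following computes TF-IDF scores over a tiny corpus of contract snippets, so that terms concentrated in one document stand out. The tokeniser and toy documents are assumptions for the example, not the paper's corpus:

```python
import math
import re

def tf_idf(docs):
    """TF-IDF scores: highlights terms concentrated in few documents."""
    tokenised = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = {}  # document frequency of each term
    for tokens in tokenised:
        for term in set(tokens):
            df[term] = df.get(term, 0) + 1
    scores = []
    for tokens in tokenised:
        tf = {t: tokens.count(t) / len(tokens) for t in set(tokens)}
        scores.append({t: tf[t] * math.log(len(docs) / df[t]) for t in tf})
    return scores

contracts = ["the contractor shall deliver modules",
             "the owner shall pay the contractor"]
scores = tf_idf(contracts)
# "modules" occurs in only one document, so it outranks shared terms;
# "the" occurs in every document, so its IDF (and score) is zero.
print(scores[0]["modules"] > scores[0]["the"])  # True
```

    Terms surfaced this way can then be inspected manually, as the study did for its three standard contracts and four RFPs.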