152 research outputs found

    An Ontological Framework for Context-Aware Collaborative Business Process Formulation

    In cross-enterprise collaborative environments, businesses face challenges in integrating their processes towards common business goals. Research directions in this domain range from business-to-business integration (B2Bi) to service-oriented augmentation. Ontologies are used in Business Process Management (BPM) to narrow the gap between the business world and information technology (IT), especially in the context of cross-enterprise collaboration. For dynamic collaboration, virtual enterprises need to establish collaborative processes in which tasks are matched at appropriate levels. However, the problem of resolving semantic mismatches remains open, and becomes even harder when querying across different enterprise profiles treated as ontologies. This article presents a framework based on ontologies and context awareness for task integration and matching, in order to form collaborative processes for cross-enterprise collaboration.

    Knowledge representation and exploitation for interactive and cognitive robots

    As robots enter our daily lives, they need advanced knowledge representations and associated reasoning capabilities to understand and model their environments. The presence of humans in those environments, and thus the need to interact with them, brings additional requirements: knowledge is no longer used by the robot solely to act physically on its environment, but also to communicate and share information with humans. Knowledge must therefore not only be understandable by the robot itself, but must also be expressible. In the first part of this thesis, we present our first contribution, Ontologenius, a software system that maintains knowledge bases in the form of ontologies, reasons over them and manages them dynamically. We explain how this software is suitable for human-robot interaction (HRI) applications: for example, to implement theory-of-mind abilities, it can represent the robot's own knowledge base as well as an estimate of the knowledge bases of its human partners. We continue with a presentation of its interfaces, and close this part with a performance analysis demonstrating its online usability. In the second part, we present our contributions to two knowledge exploration problems around the general topic of spatial referring and the use of semantic knowledge. We start with the route description task, which aims to propose a set of possible routes leading to a target destination within a guiding task. To achieve this, we propose an ontology for describing the topology of indoor environments and two route-search algorithms. The second problem we tackle is Referring Expression Generation (REG): selecting the optimal set of pieces of information to communicate so that a hearer can identify the referred entity in a given context. This contribution is then refined to use past activities arising from joint action between a robot and a human, in order to generate new kinds of referring expressions, and it is linked with a symbolic task planner to estimate the feasibility and cost of future communications. We conclude the thesis with two cognitive architectures, the first using our route description contribution and the second our Referring Expression Generation contribution. Both use Ontologenius to manage the semantic knowledge base. Through these two architectures, we show how our contributions enable the knowledge base to gradually take a central role, providing knowledge to all the components of the architectures.
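The REG step described here can be sketched with the classic incremental strategy: keep adding properties of the target until no distractor in the context matches them all. The entities, attributes and preference order below are illustrative assumptions, not the thesis' ontology-backed algorithm.

```python
# Minimal sketch of Referring Expression Generation in the spirit of the
# incremental strategy: add properties of the target until every distractor
# in the context is ruled out. All entities and attributes are hypothetical.

def generate_referring_expression(target, context, preference_order):
    """Return a list of (attribute, value) pairs distinguishing `target`
    from the other entities in `context`, or None if impossible."""
    distractors = [e for e in context if e is not target]
    expression = []
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        # Keep the property only if it rules out at least one distractor.
        if any(d.get(attr) != value for d in distractors):
            expression.append((attr, value))
            distractors = [d for d in distractors if d.get(attr) == value]
        if not distractors:
            return expression  # the target is now uniquely identified
    return None


cup1 = {"type": "cup", "color": "red", "location": "table"}
cup2 = {"type": "cup", "color": "blue", "location": "table"}
book = {"type": "book", "color": "red", "location": "shelf"}

# "the red cup" suffices: type rules out the book, color rules out cup2
print(generate_referring_expression(cup1, [cup1, cup2, book],
                                    ["type", "color", "location"]))
```

The preference order encodes which attributes a speaker would naturally try first; the thesis refines this idea with past joint-action history to generate richer expressions.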

    The construction of a linguistic linked data framework for bilingual lexicographic resources

    Little-known lexicographic resources can be of tremendous value to users once digitised. By extending the digitisation effort for a lexicographic resource and converting the human-readable digital object into a state that is also machine-readable, structured data can be created that is semantically interoperable, enabling the lexicographic resource to access, and be accessed by, other semantically interoperable resources. The purpose of this study is to formulate a process for converting a lexicographic resource in print form into a machine-readable bilingual lexicographic resource by applying linguistic linked data principles, using the English-Xhosa Dictionary for Nurses as a case study. This is accomplished by creating a linked data framework in which data are expressed in the form of RDF triples and URIs, in a manner that allows for extensibility to a multilingual resource. Click languages with characters not typically represented by the Roman alphabet are also considered. The purpose of this linked data framework is to define each lexical entry as "historically dynamic" instead of "ontologically static" (Rafferty, 2016:5). For a framework whose instances are in constant evolution, focus is thus given to the management of provenance and the generation of linked data. The output is an implementation framework which provides methodological guidelines for similar language resources in the interdisciplinary field of Library and Information Science.
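The core conversion step, expressing entries as RDF triples and URIs, can be sketched as follows. To stay dependency-free, the triples are serialized by hand in N-Triples syntax; the URIs, the OntoLex-style property names and the Xhosa lemma are illustrative assumptions, not the dictionary's actual data.

```python
# Sketch of a bilingual dictionary entry as RDF triples, hand-serialized as
# N-Triples to avoid external dependencies. URIs, property names
# (OntoLex-style) and the Xhosa lemma are illustrative assumptions.

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
ONTOLEX = "http://www.w3.org/ns/lemon/ontolex#"
BASE = "http://example.org/lexicon/"

def triple(s, p, o):
    """Format one N-Triples line; `o` is passed pre-formatted so it can be
    either a <URI> or a "literal"@lang."""
    return f"<{s}> <{p}> {o} ."

entry = BASE + "entry/nurse"
form = entry + "/form"

triples = [
    triple(entry, RDF_TYPE, f"<{ONTOLEX}LexicalEntry>"),
    triple(entry, ONTOLEX + "canonicalForm", f"<{form}>"),
    # language-tagged written representations link the two languages
    triple(form, ONTOLEX + "writtenRep", '"nurse"@en'),
    triple(form, ONTOLEX + "writtenRep", '"umongikazi"@xh'),
]

print("\n".join(triples))
```

In practice an RDF library (e.g. rdflib) would manage the graph and serialization; the point here is only how one print entry becomes a small set of interoperable, language-tagged triples.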

    Semantic validation in spatio-temporal schema integration

    This thesis proposes to address the well-known database integration problem with a new method that combines functionality from database conceptual modeling techniques with functionality from logic-based reasoners. We elaborate a hybrid (modeling + validation) integration approach for spatio-temporal information integration at the schema level. The modeling part of our methodology is supported by the spatio-temporal conceptual model MADS, whereas the validation part of the integration process is delegated to description logic validation services. We therefore adhere to the principle that, rather than extending either formalism to cover all desirable functionality, a hybrid system in which the database component and the logic component cooperate, each performing the tasks for which it is best suited, is a viable solution for semantically rich information management. First, we develop a MADS-based flexible integration approach in which the integrated-schema designer has several viable ways to construct a final integrated schema. For different related schema elements we provide the designer with four general policies and with a set of structural solutions, or structural patterns, within each policy. To always guarantee an integrated solution, we provide a preservation policy with a multi-representation structural pattern. To state the inter-schema mappings, we elaborate a correspondence language with explicit spatial and temporal operators. Our correspondence language thus has three facets, structural, spatial and temporal, allowing the designer to relate the thematic representation as well as the spatial and temporal features. With the inter-schema mappings, the designer can state correspondences between related populations and define the conditions that rule the matching at the instance level. These matching rules can then be used in query rewriting procedures or to match instances within the data integration process.
    We associate a set of putative structural patterns with each type of population correspondence, providing the designer with a selection of patterns for flexible integrated-schema construction. Second, we enhance our integration method by employing the validation services of the description logic formalism. It is not guaranteed that the designer can state all inter-schema mappings manually, nor that they are all correct. We add a validation phase to ensure the validity and completeness of the set of inter-schema mappings. Inter-schema mappings cannot be validated autonomously; they are validated against the data model and the schemas they link. Thus, to implement our validation approach, we translate the data model, the source schemas and the inter-schema mappings into a description logic formalism, preserving the spatial and temporal semantics of the MADS data model. Our modeling approach in description logic thereby ensures that the designer will correctly define spatial and temporal schema elements and inter-schema mappings. The added value of the complete translation (i.e., including the data model and the source schemas) is that we validate not only the inter-schema mappings, but also the compliance of the source schemas with the data model, and infer implicit relationships within them. As the result of the validation procedure, the schema designer obtains a complete and valid set of inter-schema mappings and a set of valid (flexible) schematic patterns to apply in constructing an integrated schema that meets the application requirements. To further our work, we model a framework in which a schema designer is able to follow our integration method and carry out the schema integration task in an assisted way. We design two models, in UML and SEAM, of a system that provides the integration functionalities. The models describe a framework in which several tools are employed together, each involved in the service it is best suited for. We define the functionalities and the cooperation between the composing elements of the framework, and detail the logic of the integration process in a UML activity diagram and in a SEAM operation model.

    Exploiting transitivity in probabilistic models for ontology learning

    Capturing word meaning is one of the challenges of natural language processing (NLP). Formal models of meaning, such as semantic networks of words or concepts, are knowledge repositories used in a variety of applications. To be effectively used, these networks have to be large or, at least, adapted to specific domains. Our main goal is to contribute practically to research on models for learning semantic networks by covering different aspects of the task. We propose a novel probabilistic model for learning semantic networks that expands existing networks by taking into account both corpus-extracted evidence and the structure of the generated network. The model exploits structural properties of the target relations, such as transitivity, during learning. The probability that a given relation instance belongs to the semantic network depends both on its direct probability, estimated from corpus evidence, and on the induced probability derived from the structural properties of the target relation. Our model introduces some innovations in estimating these probabilities. We also propose a model that can be used in different specific knowledge domains with little adaptation effort: a model is learned from a generic domain and then exploited to extract new information in a specific domain. Finally, we propose an incremental ontology learning system, Semantic Turkey Ontology Learner (ST-OL), which addresses two principal issues. The first is an efficient way to interact with final users and put their decisions in the learning loop, which we obtain through an ontology editor. The second is a probabilistic model for learning semantic networks of words that exploits transitive relations to induce better extraction models. ST-OL provides a graphical user interface and a human-computer interaction workflow supporting the incremental learning loop. Our experiments show that all the proposed models make a real contribution to the tasks we consider, improving performance.
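The central idea, combining a direct corpus-estimated probability with a probability induced by transitivity, can be illustrated on a toy is-a network. The noisy-OR combination rule and all numbers below are assumptions for illustration, not the thesis' exact estimators.

```python
# Toy sketch: the probability that a relation instance (a is-a c) belongs to
# the network combines corpus evidence with evidence induced by transitivity.
# The noisy-OR combination and all numbers are illustrative assumptions.

def induced_probability(a, c, direct):
    """Probability that (a, c) is induced through some chain (a, b), (b, c),
    treating chains as independent (noisy-OR over intermediates)."""
    p_no_chain = 1.0
    for (x, b), p_ab in direct.items():
        if x == a and b != c:
            p_bc = direct.get((b, c), 0.0)
            p_no_chain *= 1.0 - p_ab * p_bc
    return 1.0 - p_no_chain

def combined_probability(a, c, direct):
    """Noisy-OR of the direct corpus estimate and the induced probability."""
    p_direct = direct.get((a, c), 0.0)
    return 1.0 - (1.0 - p_direct) * (1.0 - induced_probability(a, c, direct))

# corpus-estimated probabilities for candidate is-a pairs (illustrative)
direct = {
    ("dog", "mammal"): 0.9,
    ("mammal", "animal"): 0.8,
    ("dog", "animal"): 0.3,   # weak direct evidence...
}

# ...but transitivity through "mammal" raises the overall belief
print(round(combined_probability("dog", "animal", direct), 3))  # 0.804
```

The design choice mirrors the abstract: neither source of evidence alone decides membership; a weakly attested pair can still enter the network when the structure of the relation supports it.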

    Capability-based adaptation of production systems in a changing environment

    Today’s production systems have to cope with volatile production environments characterized by frequently changing customer requirements, an increasing number of product variants, small batch sizes, short product life-cycles, the rapid emergence of new technical solutions and increasing regulatory requirements aimed at sustainable manufacturing. These constantly changing requirements call for adaptive and rapidly responding production systems that can adjust to the required changes in processing functions, production capacity and the distribution of the orders. This adaptation is required on the physical, logical and parametric levels. Such adaptivity cannot be achieved without intelligent methodologies, information models and tools to facilitate the adaptation planning and reactive adaptation of the systems. In industry it has been recognized that, because of the often expensive and inefficient adaptation process, companies rarely decide to adapt their production lines. This is mainly due to a lack of sufficient information and documentation about the capabilities of the current system and its lifecycle, as well as a lack of detailed methods for planning the adaptation, which makes it impossible to accurately estimate its scale and cost. Currently, the adaptation of production systems is in practice a human driven process, which relies strongly on the expertise and tacit knowledge of the system integrators or the end-user of the system. This thesis develops a capability-based, computer-aided adaptation methodology, which supports both the human-controlled adaptation planning and the dynamic reactive adaptation of production systems. The methodology consists of three main elements. The first element is the adaptation schema, which illustrates the activities and information flows involved in the overall adaptation planning process and the resources used to support the planning. 
    The adaptation schema forms the backbone of the methodology, guiding the use of the other developed elements during both adaptation planning and reactive adaptation. The second element, the core of the developed methodology, is the formal ontological resource description used to describe resources based on their capabilities. The overall resource description utilizes a capability model which divides capabilities into simple and combined capabilities. Resources are assigned the simple capabilities they possess; when multiple resources co-operate, their combined capability can be reasoned out based on the associations defined in the capability model. The adaptation methodology is based on the capability-based matching of product requirements against available system capabilities in the context of the adaptation process. Thus, the third main element developed in this thesis is the framework and rules for performing this capability matching. The approach allows automatic information filtering and the generation of system configuration scenarios for the given requirements, thus facilitating the rapid allocation of resources and the adaptation of systems. Human intelligence is used to validate the automatically generated scenarios and to select the best one, based on the desired criteria. Based on these results, an approach to evaluating the compatibility of an existing production system with different product requirements has been formulated. This approach evaluates the impact any changes in these requirements may have on the production system. The impact of the changes is illustrated in the form of compatibility graphs, which enable comparison between different product scenarios in terms of the effort required to implement the system adaptation and the extent to which the current system can be utilized to meet the new requirements. It thus aids decision-making regarding product and production strategies and adaptation.
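The capability model and the matching step can be sketched as follows: resources carry simple capabilities, combination rules derive combined capabilities from co-operating resources, and a product requirement matches when the derived capability set covers it. The capability names and combination rules below are hypothetical; the thesis encodes them in an ontology.

```python
# Sketch of capability-based matching: resources hold simple capabilities,
# combination rules derive combined capabilities when the required simple
# ones are jointly present, and a product requirement matches if covered.
# Capability names and rules are hypothetical.

# each rule: the set of simple capabilities -> the combined capability it yields
COMBINATION_RULES = {
    frozenset({"grasping", "moving"}): "pick-and-place",
    frozenset({"clamping", "screwing"}): "fastening",
}

def system_capabilities(resources):
    """All simple capabilities of the resources plus every combined
    capability whose prerequisites are jointly available."""
    caps = set()
    for resource in resources:
        caps |= resource["capabilities"]
    for required, combined in COMBINATION_RULES.items():
        if required <= caps:
            caps.add(combined)
    return caps

def matches(requirements, resources):
    """True if the resource set covers every required capability."""
    return requirements <= system_capabilities(resources)

gripper = {"name": "gripper", "capabilities": {"grasping"}}
arm = {"name": "robot arm", "capabilities": {"moving"}}

print(matches({"pick-and-place"}, [gripper, arm]))  # True: combined capability
print(matches({"fastening"}, [gripper, arm]))       # False: no clamping/screwing
```

Enumerating which resource subsets satisfy a requirement set is then the basis for generating the system configuration scenarios that a human planner validates.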

    Exploiting the conceptual space in hybrid recommender systems: a semantic-based approach

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, October 200

    A methodology for the distributed and collaborative management of engineering knowledge

    The problem of collaborative engineering design and management at the conceptual stage in a large network of dissimilar enterprises was investigated. This issue in engineering design is a result of the supply chain and virtual enterprise (VE) oriented industry, which demands faster time to market and accurate cost/manufacturing analysis from conception. Current tools and techniques do not completely fulfil this requirement due to a lack of coherent inter-enterprise collaboration and a dearth of manufacturing knowledge available at the concept stage. Client-server and peer-to-peer systems were tested for communication, as well as various techniques for knowledge management and propagation, including Product Lifecycle Management (PLM) and expert systems. As a result of system testing and an extensive literature review, several novel techniques were proposed and tested to improve the coherent management of knowledge and enable inter-enterprise collaboration. The techniques were trialled on two engineering project examples. An automotive Tier-1 supplier, which designs products whose components are subcontracted to a large supply chain and assembled for an Original Equipment Manufacturer (OEM), was used as a test scenario. The utility of the systems for integrating large VEs into a coherent project with unified specifications was demonstrated in a simple example, and problems associated with engineering document management were overcome via re-usable, configurable, object-oriented ontologies propagated throughout the VE, imposing a coherent nomenclature and engineering product definition. All knowledge within the system maintains links from specification, through concept, design and testing, to the manufacturing stages, aiding the participating enterprises in maintaining their knowledge and experience for future projects.
    This potentially speeds the process of innovation by enabling companies to concentrate on value-added aspects of designs whilst 'bread-and-butter' expertise is reused. The second example, a manufacturer of rapid-construction steel bridges, demonstrated the manufacturing dimension of the methodology: the early stage of design, and the generation of new concepts by reusing existing manufacturing knowledge bases. The solution consisted of a decentralised super-peer network architecture to establish and maintain communications between enterprises in a VE. The enterprises are able to share knowledge in a common format and nomenclature via a building-block shareable super-ontology that can be tailored on a project-by-project basis, whilst maintaining the common nomenclature of the 'super-ontology', eliminating knowledge interpretation issues. The two-tier architecture developed as part of the solution glues together the peer ontologies and super-ontologies to form a coherent system for internal knowledge management and product development, as well as for external virtual enterprise product development and knowledge management. In conclusion, the methodology developed for collaboration and knowledge management was shown to be more appropriate than PLM technology for smaller enterprises collaborating in a large Virtual Enterprise, in terms of usability, configurability, system cost and individual control over intellectual property rights.

    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person's daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centers on visual lifelog data, such as that taken from wearable cameras. Such cameras generate an archive of a person's day from a first-person viewpoint, but one of the problems with this is the sheer volume of information that can be generated. To make this potentially very large volume of information more manageable, our analysis segments each day's lifelog data into discrete and non-overlapping events corresponding to activities in the wearer's day. To manage lifelog data at the event level, we define a set of concepts using an ontology appropriate to the wearer, apply automatic detection of these concepts to the events, and then semantically enrich each detected lifelog event, making the concepts an index into the events. Once this enrichment is complete, the lifelog can support semantic search for everyday media management, serve as a memory aid, or form part of medical analysis of the activities of daily living (ADL). In the thesis, we address the problem of how to select the concepts to be used for indexing events, and we propose a semantic, density-based algorithm to cope with concept selection for lifelogging. We then apply activity detection to classify everyday activities, employing the selected concepts as high-level semantic features. Finally, the activity is modeled by multi-context representations and enriched using Semantic Web technologies. The thesis includes an experimental evaluation using real data from users, and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
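The use of detected concepts as high-level semantic features for activity classification can be sketched as a simple overlap score between an event's concepts and per-activity concept profiles. The concepts, activities and profiles below are illustrative assumptions; the thesis uses trained concept detectors and richer multi-context models.

```python
# Toy sketch: classify a lifelog event's activity by the overlap between its
# detected concepts and per-activity concept profiles. Concepts, activities
# and profiles are illustrative, not the thesis' trained detectors.

ACTIVITY_PROFILES = {
    "cooking": {"kitchen", "food", "hands", "stove"},
    "commuting": {"road", "vehicle", "outdoor"},
    "working": {"screen", "keyboard", "indoor", "desk"},
}

def classify_event(detected_concepts):
    """Pick the activity whose concept profile best overlaps the event."""
    def score(activity):
        profile = ACTIVITY_PROFILES[activity]
        return len(profile & detected_concepts) / len(profile)
    return max(ACTIVITY_PROFILES, key=score)

event_concepts = {"screen", "keyboard", "indoor", "coffee"}
print(classify_event(event_concepts))  # working
```

The same event-level concept sets that drive classification also serve as the semantic index for search and enrichment described in the abstract.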

    Intelligent Systems

    This book is dedicated to intelligent systems with broad-spectrum applications, such as personal and social biosafety, or the use of intelligent sensory micro- and nanosystems such as the "e-nose", "e-tongue" and "e-eye". The systems covered in this book also support effective information acquisition, knowledge management and improved knowledge transfer in any medium, as well as the modeling of information content using meta- and hyper-heuristics and semantic reasoning. Intelligent systems can also be applied in education, generating intelligent distributed eLearning architectures, and in a large number of technical fields, such as industrial design, manufacturing and utilization, e.g. in precision agriculture, cartography, electric power distribution systems, intelligent building management systems, and drilling operations. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty, the joint synthesis of goals and means of intelligent behavior in biosystems, as well as diagnostics and human support in the healthcare environment, have also been made easier.