431 research outputs found

    Ontology Localization

    Get PDF
    Our main goal in this thesis is to propose a solution for building a multilingual ontology through the automatic localization of an ontology. The notion of localization comes from the area of Software Development, where it refers to the adaptation of a software product to a non-native environment. In Ontological Engineering, ontology localization can be considered a subtype of software localization in which the product is a shared model of a particular domain, i.e. an ontology, to be used by a given application. Specifically, our work introduces a new proposal for the problem of multilingualism, describing the methods, techniques, and tools for the localization of ontological resources and how multilingualism can be represented in ontologies. The goal of this work is not to advocate a single approach to ontology localization, but rather to show the variety of methods and techniques that can be adapted from other areas of knowledge to reduce the cost and effort of enriching an ontology with multilingual information. We are convinced that there is no single method for ontology localization; nevertheless, we concentrate on automatic solutions for localizing these resources. The proposal presented in this thesis provides ontology practitioners with global coverage of the localization activity. In particular, this work offers a formal account of our general localization process, defining its inputs, outputs, and main steps. In addition, the proposal considers several dimensions along which an ontology may be localized. These dimensions allow us to establish a classification of translation techniques based on methods taken from the machine translation discipline. To facilitate the analysis of these translation techniques, we introduce an evaluation framework covering their main aspects. Finally, we offer an intuitive view of the entire ontology localization life cycle and outline our approach to defining a system architecture that supports this activity. The proposed model comprises the system components, the visible properties of those components, and the relationships among them, and it also provides a basis from which ontology localization systems can be developed. The main contributions of this work are summarized as follows:
    - A characterization and definition of the ontology localization problems, based on problems found in related areas. The proposed characterization takes into account three different localization problems: translation, information management, and representation of multilingual information.
    - A prescriptive methodology to support the ontology localization activity, based on the localization methodologies used in Software Engineering and Knowledge Engineering, kept as general as possible so that it can cover a wide range of scenarios.
    - A classification of ontology localization techniques, which can be used to compare (analytically) different ontology localization systems, as well as to design new systems by taking advantage of state-of-the-art solutions.
    - An integrated method for building ontology localization systems in a distributed and collaborative environment, taking into account the most appropriate methods and techniques depending on: i) the domain of the ontology to be localized, and ii) the amount of linguistic information required for the final ontology.
    - A modular component to support the storage of the multilingual information associated with each ontology term. Our proposal follows the current trend in integrating multilingual information into ontologies, which suggests that the ontological knowledge and the (multilingual) linguistic information be kept separate and independent (see the sketch after this list).
    - A model based on collaborative workflows to represent the process normally followed in different organizations to coordinate the localization activity across different natural languages.
    - An integrated infrastructure implemented within the NeOn Toolkit as a set of plug-ins and extensions that support the collaborative ontology localization process
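    The separation of ontological knowledge from multilingual linguistic information mentioned in the list above can be illustrated with a minimal, hypothetical Python sketch. The class and function names, the toy dictionary, and the pluggable translate callable are assumptions for illustration, not part of the thesis:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class MultilingualLexicon:
        # term IRI -> language code -> list of labels, kept outside the ontology itself
        labels: Dict[str, Dict[str, List[str]]] = field(default_factory=dict)

        def add_label(self, term: str, lang: str, label: str) -> None:
            self.labels.setdefault(term, {}).setdefault(lang, []).append(label)

    def localize(source_labels: Dict[str, str],
                 target_lang: str,
                 translate: Callable[[str], str],
                 lexicon: MultilingualLexicon) -> MultilingualLexicon:
        # Generic localization step: input is one source label per ontology term plus a
        # pluggable translation technique; output is the enriched multilingual lexicon.
        for term, label in source_labels.items():
            lexicon.add_label(term, target_lang, translate(label))
        return lexicon

    if __name__ == "__main__":
        ontology_terms = {"ex:River": "river", "ex:Mountain": "mountain"}
        toy_dictionary = {"river": "río", "mountain": "montaña"}  # stand-in for a real MT service
        lexicon = localize(ontology_terms, "es", lambda s: toy_dictionary.get(s, s), MultilingualLexicon())
        print(lexicon.labels)

    Because the translation technique is passed in as a function, different localization methods (dictionary lookup, statistical or neural machine translation, human post-editing) can be swapped without touching the ontology or the lexicon structure.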

    A Causality-Aware Pattern Mining Scheme for Group Activity Recognition in a Pervasive Sensor Space

    Full text link
    Human activity recognition (HAR) is a key challenge in pervasive computing, and solutions have been proposed based on various disciplines. Specifically, for HAR in a smart space without privacy and accessibility issues, data streams generated by deployed pervasive sensors are leveraged. In this paper, we focus on group activities in which a group of users performs a collaborative task without user identification, and propose an efficient group activity recognition scheme that extracts causality patterns from the pervasive sensor event sequences generated by a group of users, achieving recognition accuracy as good as the state-of-the-art graphical model. To filter out irrelevant noise events from a given data stream, a set of rules is leveraged to highlight causally related events. Then, a pattern-tree algorithm extracts frequent causal patterns by means of a growing tree structure. Based on the extracted patterns, a weighted sum-based pattern matching algorithm computes the likelihoods of the stored group activities for a given test event sequence from the counts of matched event patterns. We evaluate the proposed scheme using data collected from our testbed and from the CASAS datasets, where users perform their tasks on a daily basis, and validate its effectiveness in a real environment. Experimental results show that the proposed scheme achieves higher recognition accuracy than the existing schemes with only a small amount of runtime overhead
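    A minimal sketch of the weighted-sum matching step described above, assuming made-up event names, activity labels, and pattern weights; the rule-based noise filtering and pattern-tree mining stages are not reproduced here:

    from typing import Dict, List, Sequence, Tuple

    def occurs_in_order(pattern: Sequence[str], events: Sequence[str]) -> bool:
        # True if `pattern` appears as a (not necessarily contiguous) ordered subsequence.
        it = iter(events)
        return all(any(e == p for e in it) for p in pattern)

    def score_activities(events: Sequence[str],
                         activity_patterns: Dict[str, List[Tuple[Sequence[str], float]]]
                         ) -> Dict[str, float]:
        # Likelihood of each stored activity = weighted count of its causal patterns
        # that are matched by the test event sequence.
        return {activity: sum(w for pat, w in patterns if occurs_in_order(pat, events))
                for activity, patterns in activity_patterns.items()}

    if __name__ == "__main__":
        patterns = {
            "meeting": [(("door_open", "chair_A", "chair_B"), 2.0), (("projector_on",), 1.0)],
            "cleaning": [(("door_open", "cabinet"), 1.5)],
        }
        test_sequence = ["door_open", "chair_A", "projector_on", "chair_B"]
        print(score_activities(test_sequence, patterns))  # the highest score wins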

    Aesthetic Virtues: Traits and Faculties

    Get PDF
    Two varieties of aesthetic virtue are distinguished. Trait virtues are features of the agent’s character, and reflect an overarching concern for aesthetic goods such as beauty and novelty, while faculty virtues are excellences of artistic execution that permit the agent to succeed in her chosen domain. The distinction makes possible a fuller account of why art matters to us—it matters not only insofar as it is aesthetically good, but also in its capacity as an achievement that is creditable to an individual, and as a reflection or embodiment of virtuous motives

    Explaining Semantic Reasoning Using Argumentation

    Get PDF
    Multi-Agent Systems (MAS) are popular because they provide a paradigm that naturally meets the current demand to design and implement distributed intelligent systems. When developing a multi-agent application, it is common to use ontologies to provide the domain-specific knowledge and vocabulary necessary for agents to achieve the system goals. In this paper, we propose an approach in which agents can query semantic reasoners and use the received inferences to build explanations for such reasoning. Moreover, thanks to an internal representation of the inference rules used to build explanations, expressed as argumentation schemes, agents are able to reason and make decisions based on the answers from the semantic reasoner. Furthermore, agents can communicate the built explanations to other agents and humans, using computational or natural-language representations of arguments. Our approach paves the way towards multi-agent systems able to provide explanations of the reasoning carried out by semantic reasoners
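    A hypothetical sketch of the core idea: an argumentation scheme is represented as premise and conclusion templates that an agent could instantiate from a reasoner's answer. The scheme, bindings, and wording below are illustrative assumptions, not the paper's implementation:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ArgumentationScheme:
        name: str
        premises: List[str]   # templates with {placeholders}
        conclusion: str

        def instantiate(self, bindings: Dict[str, str]) -> str:
            # Render the scheme as a natural-language explanation.
            premise_text = "; ".join(p.format(**bindings) for p in self.premises)
            return f"[{self.name}] {premise_text}; therefore, {self.conclusion.format(**bindings)}."

    # Scheme mirroring a subclass-subsumption inference an OWL reasoner might return.
    SUBSUMPTION = ArgumentationScheme(
        name="class-subsumption",
        premises=["{individual} is a {subclass}", "every {subclass} is a {superclass}"],
        conclusion="{individual} is a {superclass}",
    )

    if __name__ == "__main__":
        # In the paper's setting the bindings would come from querying a semantic
        # reasoner; here they are supplied directly for illustration.
        bindings = {"individual": "tweety", "subclass": "Penguin", "superclass": "Bird"}
        print(SUBSUMPTION.instantiate(bindings))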

    Total Constraint Management for Improving Construction Work Flow in Liquefied Natural Gas Industry

    Get PDF
    Australia has benefited and will continue to benefit significantly from Liquefied Natural Gas (LNG) investments underway. Managing these LNG projects is challenging as they become increasingly complex and technologically demanding. The primary goal of this thesis is to develop a Total Constraint Management (TCM) method to improve construction work flow during LNG construction. Five controlled experiments were conducted and results show that successful implementation of TCM can significantly improve construction productivity and reduce schedule overruns

    A process-based control for evolvable production systems

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering. Nowadays, companies in a challenging environment are compelled to adapt to the rapid changes in the manufacturing business. The search for new processes to create products with short life cycles at low cost, while keeping the same levels of productivity and quality, is greater than ever. This has generated the need to create even more agile manufacturing systems, which can easily adapt to market changes at a low cost. Advances in information technologies have allowed manufacturing systems to achieve new levels of agility, opening the doors to new approaches. These same advances have helped companies in several sectors other than manufacturing to gain effectiveness through the synchronization of the processes of their several departments by using Business Process Management tools. This thesis proposes a system that reacts and adapts itself to different production orders by means of reconfiguration. To reach this goal, the concept of Business Process Management was used. This concept, already used in many companies, allows them to model their inner behaviours with processes that can be changed according to their needs. A manufacturing system using this approach may become equally agile and alter its functioning in accordance with the needs of other departments of the same company. To create the system presented in this thesis, a multi-agent architecture based on process execution was used. Each agent contains a knowledge base, used by its processes, that stores internal or external information. This system may be used not only on the manufacturing shop floor, but also in any other areas within a company. This thesis also presents an application of the system to the shop floor, based on the Evolvable Production Systems concept, in which each agent represents a manufacturing resource that offers a given set of services useful to the production process. The resources, by means of the agents, may aggregate among themselves to execute services together. Keywords: Manufacturing system, multi-agent system, ontology, process, BPM, EPS
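    A minimal, hypothetical sketch of the resource-agent idea: each agent holds a knowledge base and offers services, and a coalition of agents is aggregated to cover the services a production order requires. The agent names, services, and the naive greedy aggregation are assumptions; the thesis's BPM-driven process execution is not reproduced:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ResourceAgent:
        name: str
        services: List[str]
        knowledge_base: Dict[str, str] = field(default_factory=dict)  # internal/external facts

        def execute(self, service: str) -> str:
            return f"{self.name} executed {service}"

    def aggregate(order: List[str], agents: List[ResourceAgent]) -> List[ResourceAgent]:
        # Greedy aggregation: pick one capable agent per required service.
        coalition = []
        for service in order:
            agent = next((a for a in agents if service in a.services), None)
            if agent is None:
                raise ValueError(f"no agent offers {service}")
            coalition.append(agent)
        return coalition

    if __name__ == "__main__":
        agents = [ResourceAgent("gripper1", ["pick", "place"]),
                  ResourceAgent("drill1", ["drill"])]
        order = ["pick", "drill", "place"]
        for step, agent in zip(order, aggregate(order, agents)):
            print(agent.execute(step))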

    Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing

    Get PDF
    This thesis presents several state-of-the-art approaches constructed for the purpose of (i) studying the trustworthiness of users in Online Social Network platforms, (ii) deriving concealed knowledge from their textual content, and (iii) classifying and predicting the domain knowledge of users and their content. The developed approaches are refined through proof-of-concept experiments, several benchmark comparisons, and appropriate and rigorous evaluation metrics to verify and validate their effectiveness and efficiency, and hence, those of the applied frameworks

    The impact of semantic knowledge management system on firms' innovation and competitiveness

    Get PDF
    D.B.A. Thesis. In the knowledge economy, knowledge is increasingly becoming the primary factor of production and a foundational component of innovation. Firms must improve their capabilities of handling knowledge in line with its recent explosive growth to stay competitive. This research addresses the effects a semantic technology-based knowledge management system (Semantic KMS) can have on firms’ performance. Based on existing literature, a conceptual model covering Semantic KMS, KM, innovation, and competitiveness was designed to test the validity of the hypotheses. A total of 640 survey questionnaires were sent to companies that practice KM actively, and 178 usable responses were received. Pearson’s correlation, exploratory and confirmatory factor analyses, and structural equation modeling were used to analyze the data. The results indicate that Semantic KMS is positively related to KM effectiveness. Organizational KM is directly and positively linked to innovation and competitiveness. In the context of KM, the effect of innovation on competitiveness is not convincing. Moreover, the study could not establish a strong relationship between KM and organizational competitiveness mediated through innovation. Being one of the first significant studies of Semantic KMS and its impact, the study adds to the growing literature on the use of semantic technology in various fields. It develops a new theoretical model which has never been tested before. The study used data collected from a single respondent per firm at a single point in time and did not consider feedback effects. It examined Semantic KMS as a holistic system, but in many cases companies only deploy certain KM-related tools supported by semantic technology; a different research approach could investigate the impacts of those tools on the relevant business processes. This study demonstrates that deployment of semantic technology is beneficial for companies and allows them to take advantage of advanced technologies in their KM quest. It brings significant benefits to the firm thanks to the improved capabilities of the new KMS in knowledge discovery, aggregation, use, and sharing. The study also confirms that for a successful KM initiative, KM processes need to be optimized and supported by a KMS. Semantic technology is a set of advanced tools used lately in many information systems, and this study is one of the first in-depth investigations of their impact on KMS. It will guide KM managers in their decision-making process when they consider developing or integrating new KMS tools. For academics, this research highlights the importance of investigating KM from the new technology perspective.
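    As a small illustration of the kind of correlation analysis reported (Pearson’s r between construct scores), the following sketch uses made-up Likert-scale averages rather than the study's data; the factor analyses and structural equation model are not reproduced here:

    from scipy.stats import pearsonr

    # Hypothetical per-respondent construct scores (illustrative, not the study's data).
    semantic_kms_use = [3.8, 4.2, 2.9, 4.5, 3.1, 4.0, 3.6, 2.7]
    km_effectiveness = [3.9, 4.4, 3.0, 4.6, 3.3, 3.8, 3.7, 2.9]

    r, p_value = pearsonr(semantic_kms_use, km_effectiveness)
    print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")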

    Leveraging human-computer interaction and crowdsourcing for scholarly knowledge graph creation

    Get PDF
    The number of scholarly publications continues to grow each year, as do the number of journals and active researchers. Therefore, methods and tools to organize scholarly knowledge are becoming increasingly important. Without such tools, it becomes increasingly difficult to conduct research in an efficient and effective manner. One of the fundamental issues scholarly communication is facing relates to the format in which knowledge is shared. Scholarly communication relies primarily on narrative, document-based formats that are specifically designed for human consumption. Machines cannot easily access and interpret such knowledge, leaving them unable to provide powerful tools to organize scholarly knowledge effectively. In this thesis, we propose to leverage knowledge graphs to represent, curate, and use scholarly knowledge. The systematic knowledge representation leads to machine-actionable knowledge, which enables machines to process scholarly knowledge with minimal human intervention. To generate and curate the knowledge graph, we propose a machine-learning-assisted crowdsourcing approach, in particular using Natural Language Processing (NLP). Currently, NLP techniques are not able to satisfactorily extract high-quality scholarly knowledge in an autonomous manner. With our proposed approach, we intertwine human and machine intelligence, thus exploiting the strengths of both. First, we discuss structured scholarly knowledge, where we present the Open Research Knowledge Graph (ORKG). Specifically, we focus on the design and development of the ORKG user interface (i.e., the frontend). One of the key challenges is to provide an interface that is powerful enough to create rich knowledge descriptions yet intuitive enough for researchers without a technical background to create such descriptions. The ORKG serves as the technical foundation for the rest of the work. Second, we focus on comparable scholarly knowledge, where we introduce the concept of ORKG comparisons. ORKG comparisons provide machine-actionable overviews of related literature in tabular form. Also, we present a methodology to leverage existing literature reviews to populate ORKG comparisons via a human-in-the-loop approach. Additionally, we show how ORKG comparisons can be used to form ORKG SmartReviews. The SmartReviews provide dynamic literature reviews in the form of living documents. They are an attempt to address the main weaknesses of the current literature review practice and to outline what the future of review publishing could look like. Third, we focus on designing suitable tasks to generate scholarly knowledge in a crowdsourced setting. We present an intelligent user interface that enables researchers to annotate key sentences in scholarly publications with a set of discourse classes. During this process, researchers are assisted by suggestions coming from NLP tools. In addition, we present an approach to validate NLP-generated statements using microtasks in a crowdsourced setting. With this approach, we lower the barrier to entering data into the ORKG and transform content consumers into content creators. With the work presented, we strive to transform scholarly communication to improve the machine-actionability of scholarly knowledge. The approaches and tools are deployed in a production environment.
As a result, the majority of the presented approaches and tools are currently in active use by various research communities and already have an impact on scholarly communication.
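    A small, hypothetical sketch of the idea behind an ORKG comparison: contributions described as property-value pairs are pivoted into a tabular, machine-actionable overview. The papers, properties, and values below are invented, and the function is an illustration rather than the ORKG implementation:

    from typing import Dict, List

    def build_comparison(contributions: Dict[str, Dict[str, str]],
                         properties: List[str]) -> List[List[str]]:
        # One row per property, one column per paper; missing values become "n/a".
        papers = list(contributions)
        rows = [["property"] + papers]
        for prop in properties:
            rows.append([prop] + [contributions[p].get(prop, "n/a") for p in papers])
        return rows

    if __name__ == "__main__":
        contributions = {
            "Paper A": {"method": "CNN", "dataset": "CIFAR-10", "accuracy": "91%"},
            "Paper B": {"method": "SVM", "dataset": "CIFAR-10"},
        }
        for row in build_comparison(contributions, ["method", "dataset", "accuracy"]):
            print("\t".join(row))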