
    Using Semantic Web technologies in the development of data warehouses: A systematic mapping

    The exploration and use of Semantic Web technologies have attracted considerable attention from researchers examining data warehouse (DW) development. However, the impact of this research and the maturity level of its results are still unclear. The objective of this study is to examine recently published research articles that take into account the use of Semantic Web technologies in the DW arena, with the intention of summarizing their results, classifying their contributions to the field according to publication type, evaluating the maturity level of the results, and identifying future research challenges. Three main conclusions were derived from this study: (a) there is a major technological gap that inhibits the wide adoption of Semantic Web technologies in the business domain; (b) there is limited evidence that the results of the analyzed studies are applicable and transferable to industrial use; and (c) interest in researching the relationship between DWs and the Semantic Web has decreased because new paradigms, such as linked open data, have attracted the interest of researchers. This study was supported by the Universidad de La Frontera, Chile (Grant Numbers DI15-0020 and DI17-0043).

    Ontology Pattern-Based Data Integration

    Data integration is concerned with providing unified access to data residing at multiple sources. Such unified access is realized by having a global schema and a set of mappings between the global schema and the local schemas of each data source, which specify how user queries at the global schema can be translated into queries at the local schemas. Data sources are typically developed and maintained independently, and are thus highly heterogeneous. This causes difficulties in integration because of the lack of interoperability in terms of architecture, data format, and the syntax and semantics of the data. This dissertation presents a study on how small, self-contained ontologies, called ontology design patterns, can be employed to provide semantic interoperability in a cross-repository data integration system. The idea of this so-called ontology pattern-based data integration is that a collection of ontology design patterns can act as the global schema that still contains sufficient semantics, but is also flexible and simple enough to be used by linked data providers. On the one hand, this differs from existing ontology-based solutions, which are based on large, monolithic ontologies that provide very rich semantics but enforce overly restrictive ontological choices, and hence are shunned by many data providers. On the other hand, this also differs from purely linked-data-based solutions, which offer simplicity and flexibility in data publishing but too little in terms of semantic interoperability. We demonstrate the feasibility of this idea through the actual development of a large-scale data integration project involving seven ocean science data repositories from five institutions in the U.S. In addition, we make two contributions as part of this dissertation work, which also play crucial roles in the aforementioned data integration project. First, we develop a collection of more than a dozen ontology design patterns that capture the key notions of ocean science occurring in the participating data repositories. These patterns contain axiomatizations of the key notions and were developed with intensive involvement from domain experts. Modeling of the patterns was done in a systematic workflow to ensure modularity, reusability, and flexibility of the whole pattern collection. Second, we propose so-called pattern views, which allow data providers to publish their data in very simple intermediate schemas, and show that these views greatly assist data providers in publishing their data without requiring a thorough understanding of the patterns' axiomatization.
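    To make the global-schema/mapping idea concrete, here is a minimal sketch (not the dissertation's actual system) using the rdflib Python library, where a SPARQL CONSTRUCT query acts as the mapping that rewrites data from a hypothetical local repository schema into a hypothetical pattern-based global vocabulary. All namespaces, terms, and data are illustrative.

```python
# Sketch: mapping a local schema into a pattern-based global schema
# via SPARQL CONSTRUCT. All names and data are invented for illustration.
from rdflib import Graph, Namespace, Literal

LOCAL = Namespace("http://example.org/local#")     # hypothetical local schema
GLOBAL = Namespace("http://example.org/pattern#")  # hypothetical ODP-based global schema

g = Graph()
# A record as one local repository might publish it.
g.add((LOCAL["cruise42"], LOCAL.vesselName, Literal("R/V Example")))

# The mapping: rewrite local terms into the global, pattern-based vocabulary.
mapping = """
CONSTRUCT { ?c pattern:hasVessel ?v . }
WHERE     { ?c local:vesselName ?v . }
"""

# Queries at the global schema can now run over the translated triples.
for triple in g.query(mapping, initNs={"local": LOCAL, "pattern": GLOBAL}):
    print(triple)
```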

    Pattern-based design applied to cultural heritage knowledge graphs

    Ontology Design Patterns (ODPs) have become an established and recognised practice for guaranteeing good-quality ontology engineering. There are several ODP repositories where ODPs are shared, as well as ontology design methodologies recommending their reuse. Performing rigorous testing is recommended as well, for supporting ontology maintenance and validating the resulting resource against its motivating requirements. Nevertheless, it is less than straightforward to find guidelines on how to apply such methodologies for developing domain-specific knowledge graphs. ArCo is the knowledge graph of Italian Cultural Heritage (CH) and has been developed using eXtreme Design (XD), an ODP- and test-driven methodology. During its development, XD has been adapted to the needs of the CH domain: for example, requirements were gathered from an open, diverse community of consumers, a new ODP was defined, and many existing ODPs were specialised to address specific CH requirements. This paper presents ArCo and describes how to apply XD to the development and validation of a CH knowledge graph, also detailing the (intellectual) process implemented for matching the encountered modelling problems to ODPs. Relevant contributions also include a novel web tool for supporting unit-testing of knowledge graphs, a rigorous evaluation of ArCo, and a discussion of methodological lessons learned during ArCo's development.
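    The paper's web tool is not reproduced here, but the underlying idea of test-driven knowledge graph validation can be sketched as follows: encode a competency question as a SPARQL ASK query and assert the expected answer against the graph. The data, vocabulary, and query below are invented for illustration and are not ArCo's.

```python
# Sketch: a competency-question unit test over a toy knowledge graph
# (illustrative data and vocabulary, not ArCo's actual resources).
import unittest
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/ch#> .
ex:venus_statue a ex:CulturalProperty ;
    ex:locatedIn ex:museum_of_naples .
"""

# Competency question: "Does every cultural property have a location?"
CQ = """ASK { ?p a <http://example.org/ch#CulturalProperty> ;
               <http://example.org/ch#locatedIn> ?place . }"""

class TestKnowledgeGraph(unittest.TestCase):
    def setUp(self):
        self.g = Graph()
        self.g.parse(data=TURTLE, format="turtle")

    def test_property_has_location(self):
        # The requirement passes if the ASK query answers true.
        self.assertTrue(self.g.query(CQ).askAnswer)

if __name__ == "__main__":
    unittest.main()
```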

    Integrating Protein Data Resources through Semantic Web Services

    Understanding the function of every protein is a major objective of bioinformatics. Currently, a large amount of information associated with protein function (e.g., sequence, structure and dynamics) is being produced by experiments and predictions. Integrating these diverse data about protein sequence, structure, dynamics and other protein features allows further exploration and establishment of the relationships between protein sequence, structure, dynamics and function, and thereby enables control of the function of target proteins. However, information integration in protein data resources faces challenges at the technology level, for interfacing heterogeneous data formats and standards, and at the application level, for the semantic interpretation of dissimilar data and queries. In this research, a semantic web services infrastructure for flexible and user-oriented integration of protein data resources, called Web Services for Protein data resources (WSP), is proposed. This infrastructure includes a method for modeling protein web services, a service publication algorithm, an efficient service discovery (matching) algorithm, and an optimal service chaining algorithm. Rather than relying on syntactic matching, the matching algorithm discovers services based on their similarity to the requested service. Therefore, users can locate services that semantically match their data requirements even if they are syntactically distinct. Furthermore, WSP supports a workflow-based approach to service integration. The chaining algorithm selects and chains services based on the criteria of service accuracy and data interoperability, and generates a web services workflow that automatically integrates the results from individual services. A number of experiments are conducted to evaluate the performance of the matching algorithm. The results reveal that the algorithm can discover services with reasonable performance. Also, a composite service, which integrates protein dynamics and conservation, is demonstrated using the WSP infrastructure.
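    As a rough illustration of similarity-based (rather than purely syntactic) service discovery, the sketch below ranks candidate services by the overlap between the ontology concepts they consume and produce and those the request asks for. The scoring function, service descriptions, and concept names are invented; this is not WSP's actual matching algorithm.

```python
# Sketch: semantic service matching by concept-set overlap
# (invented scoring and registry, not WSP's actual algorithm).
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    inputs: frozenset   # ontology concepts the service consumes
    outputs: frozenset  # ontology concepts the service produces

def match_score(request: Service, candidate: Service) -> float:
    """Jaccard-style similarity averaged over input and output concept sets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0
    return 0.5 * jaccard(request.inputs, candidate.inputs) + \
           0.5 * jaccard(request.outputs, candidate.outputs)

registry = [
    Service("seq2structure", frozenset({"ProteinSequence"}), frozenset({"ProteinStructure"})),
    Service("structure2dynamics", frozenset({"ProteinStructure"}), frozenset({"ProteinDynamics"})),
]
request = Service("query", frozenset({"ProteinSequence"}), frozenset({"ProteinStructure"}))

# Rank registered services by semantic similarity to the request.
for svc in sorted(registry, key=lambda s: match_score(request, s), reverse=True):
    print(f"{svc.name}: {match_score(request, svc):.2f}")
```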

    Use of ontologies for metadata records analysis in big data

    Big Data deals with sets of information (structured, unstructured, or semi-structured) so large that traditional approaches (based on business intelligence solutions and database management systems) cannot be applied to them. Big Data is characterized by the phenomenal acceleration of data accumulation and by its growing complexity. In different contexts, Big Data often means both data of large volume and the set of tools and methods for processing them. Big Data sets are accompanied by metadata which contain a large amount of information about the data, including significant descriptive text whose understanding by machines leads to better results in Big Data processing. Methods of artificial intelligence and intelligent Web technologies improve the efficiency of all stages of Big Data processing. Most often this integration concerns the use of machine learning, which provides knowledge acquisition from Big Data, and ontological analysis, which formalizes domain knowledge for Big Data analysis. In this paper, the authors present a method for analyzing Big Data metadata which allows selecting, from heterogeneous sources and data repositories, those blocks of information that are pertinent to the customer's task. Much attention is paid to matching the textual part of the metadata (metadata annotations) with the text describing the task. We suggest using for this purpose the methods and instruments of natural language analysis, together with a Big Data ontology that contains knowledge about the specifics of this domain.
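    A minimal sketch of the annotation-to-task matching step is given below, assuming a bag-of-words cosine similarity after surface terms are normalised through a (here hard-coded) ontology synonym map. The real method relies on full natural language analysis and a proper Big Data ontology; everything below is a toy stand-in.

```python
# Sketch: match metadata annotations to a task description via cosine
# similarity, after mapping surface terms to ontology concepts (toy map).
import math
from collections import Counter

ONTOLOGY = {"logs": "event_record", "events": "event_record", "clients": "customer"}

def concepts(text: str) -> Counter:
    """Tokenise and normalise terms to ontology concepts."""
    tokens = [t.strip(".,").lower() for t in text.split()]
    return Counter(ONTOLOGY.get(t, t) for t in tokens)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

task = "analyse customer events"
annotations = {
    "sales_db": "monthly clients purchase logs",
    "hr_files": "employee vacation records",
}

# Rank repositories by how pertinent their annotations are to the task.
for name, text in sorted(annotations.items(),
                         key=lambda kv: cosine(concepts(task), concepts(kv[1])),
                         reverse=True):
    print(name, round(cosine(concepts(task), concepts(text)), 2))
```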

    Movement Analytics: Current Status, Application to Manufacturing, and Future Prospects from an AI Perspective

    Data-driven decision making is becoming an integral part of manufacturing companies. Data is collected and commonly used to improve efficiency and produce high-quality items for customers. IoT-based and other forms of object tracking are emerging tools for collecting movement data of objects/entities (e.g. human workers, moving vehicles, trolleys, etc.) over space and time. Movement data can provide valuable insights, such as process bottlenecks, resource utilization and effective working time, that can be used for decision making and improving efficiency. Turning movement data into valuable information for industrial management and decision making requires analysis methods; we refer to this process as movement analytics. The purpose of this document is to review the current state of work on movement analytics, both in manufacturing and more broadly. We survey relevant work from both a theoretical perspective and an application perspective. From the theoretical perspective, we put an emphasis on useful methods from two research areas: machine learning, and logic-based knowledge representation. We also review their combinations in view of movement analytics, and we discuss promising areas for future development and application. Furthermore, we touch on constraint optimization. From an application perspective, we review applications of these methods to movement analytics in a general sense and across various industries. We also describe currently available commercial off-the-shelf products for tracking in manufacturing, and we give an overview of the main concepts of digital twins and their applications.
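    As a toy example of turning raw movement data into one of the utilization figures the survey mentions, the sketch below derives per-zone dwell time from timestamped position records. The zoning rule, records, and field layout are all invented for illustration.

```python
# Sketch: per-zone dwell time from timestamped positions (all data invented).
from collections import defaultdict

def zone_of(x: float, y: float) -> str:
    """Toy zoning: split the shop floor into two halves."""
    return "assembly" if x < 50 else "packing"

# (entity, timestamp_seconds, x, y) records from some tracking system.
records = [
    ("trolley_1", 0, 10, 5), ("trolley_1", 60, 30, 5),
    ("trolley_1", 120, 80, 5), ("trolley_1", 300, 90, 5),
]

dwell = defaultdict(float)
for (e1, t1, x1, y1), (e2, t2, _, _) in zip(records, records[1:]):
    if e1 == e2:  # attribute each interval to the zone where it started
        dwell[(e1, zone_of(x1, y1))] += t2 - t1

for (entity, zone), seconds in dwell.items():
    print(f"{entity} spent {seconds:.0f}s in {zone}")
```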

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.

    Context-aware Plan Repair in Environments shared by Multiple Agents

    Execution monitoring is crucial for the success of an autonomous agent executing a plan in a dynamic environment, as it influences the agent's ability to react to changes. While executing its plan in a dynamic world, the agent may suffer a failure and, in its endeavour to fix the problem, may unknowingly disrupt other agents operating in the same environment. Additionally, being rational requires the agent to be context-aware, gather information and extend what is known from what is perceived, in order to compensate for partial or incorrect prior knowledge and achieve the best possible outcome in novel situations. The work carried out in this PhD thesis allows autonomous agents executing a plan in a dynamic environment to adapt to unexpected events and unfamiliar circumstances, utilise their perception of context, and provide context-aware deliberative responses for seizing an opportunity or repairing a failure without disrupting other agents. This work focuses on developing a domain-independent architecture capable of handling the requirements of such autonomous behaviour. The architecture's pillars are the intelligent system for execution simulation in a dynamic environment, context-aware knowledge acquisition for planning applications, and plan commitment repair. The intelligent system for execution simulation allows the agent to transform the plan into a timeline, periodically update its internal state with real-world information, and create timed events. Events are processed in the context of the plan; if a failure occurs, the simulator reformulates the planning problem, reinvokes a planner and resumes the execution. The simulator is a console application and has a GUI designed specifically for smart tourism. The context-aware knowledge acquisition module utilises semantic operations to dynamically augment the predefined list of object types of the planning task with relevant new object types. This allows the agent to be aware of its environment and task and to reason with incomplete knowledge, boosting the system's autonomy and context-awareness. The novel plan commitment repair strategy, for multiple agents sharing the same execution environment, allows the agent to repair its plan responsibly when a failure is detected. The agent utilises a new metric, plan commitment, as a heuristic to guide the search towards the repair plan most committed to the original plan, in the sense of respecting the commitments made to other agents whilst still achieving the original goals. Consequently, the community of agents will suffer fewer failures due to sudden changes in the environment, or will lose less time executing corrective actions if a failure is inevitable. All three modules were developed and evaluated in several applications, such as a tourist assistant, a kitchen appliance repair agency and a home assistant. Babli, M. (2023). Context-aware Plan Repair in Environments shared by Multiple Agents [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19868
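    The thesis defines the plan commitment metric formally; as a loose illustration only, the sketch below ranks candidate repair plans by how many of the original plan's commitments to other agents they preserve, breaking ties by plan length. Commitments and actions are modelled as plain strings; the actual metric and search procedure are the thesis's own.

```python
# Loose illustration of commitment-aware repair-plan ranking
# (invented representation, not the thesis's actual metric or search).
def commitment_score(repair_plan: list, original_commitments: set) -> tuple:
    """Prefer plans that keep more promises to other agents; shorter plans break ties."""
    kept = sum(1 for c in original_commitments if c in repair_plan)
    return (-kept, len(repair_plan))  # minimised by the selection below

original_commitments = {"deliver_part_to_agent2", "vacate_corridor_by_t10"}
candidates = [
    ["replan_route", "deliver_part_to_agent2", "vacate_corridor_by_t10"],
    ["replan_route", "deliver_part_to_agent2"],
    ["shortcut"],
]

# Select the repair plan most committed to the original plan.
best = min(candidates, key=lambda p: commitment_score(p, original_commitments))
print("most committed repair plan:", best)
```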