681 research outputs found

    Documenting Knowledge Graph Embedding and Link Prediction using Knowledge Graphs

    In recent years, sub-symbolic learning, i.e., Knowledge Graph Embedding (KGE) over Knowledge Graphs (KGs), has gained significant attention in various downstream tasks (e.g., Link Prediction (LP)). These techniques learn a latent vector representation of a KG's semantic structure to infer missing links. Nonetheless, KGE models remain black boxes, and the decision-making process behind them is unclear, so the trustworthiness and reliability of their outcomes have been challenged. While many state-of-the-art approaches provide data-driven frameworks to address these issues, they do not always yield a complete understanding, and their interpretations are not machine-readable. In this work, we therefore extend a hybrid interpretable framework, InterpretME, to KGE models, especially translation-distance models, including TransE, TransH, TransR, and TransD. The experimental evaluation on various benchmark KGs supports the validity of this approach, which we term Trace KGE. In particular, Trace KGE contributes to increased interpretability and understanding of otherwise opaque KGE model behavior.
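    The translation-distance family named above shares one core idea, which a minimal sketch can make concrete: TransE scores a triple (h, r, t) by the distance ||h + r − t||, lower meaning more plausible. The embeddings below are toy hand-picked vectors for illustration, not learned parameters.

```python
# Minimal sketch of the TransE scoring function: a triple (h, r, t) is
# plausible when the relation vector r "translates" h close to t.
from math import sqrt

def transe_score(h, r, t):
    """L2 distance ||h + r - t||: lower means the triple is more plausible."""
    return sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy embeddings: (Berlin, capital_of, Germany) should score near zero,
# while the corrupted triple with France should score higher.
berlin, capital_of = [0.2, 0.5], [0.3, -0.1]
germany, france = [0.5, 0.4], [0.9, 0.9]

true_score = transe_score(berlin, capital_of, germany)   # ~0.0
false_score = transe_score(berlin, capital_of, france)   # larger distance
```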

    Metadata as a Methodological Commons: From Aboutness Description to Cognitive Modeling

    Metadata is data about data, generated mainly for resource organization and description, facilitating finding, identifying, selecting, and obtaining information. With the advancement of technologies, the acquisition of metadata has gradually become a critical step in data modeling and system operation, leading to the formation of a methodological commons. A series of general operations has been developed to achieve structured description, semantic encoding, and machine-understandable information, including entity definition, relation description, object analysis, attribute extraction, ontology modeling, data cleaning, disambiguation, alignment, mapping, relating, enriching, importing, exporting, service implementation, registry and discovery, monitoring, etc. These operations are not only necessary elements of semantic technologies (including linked data) and knowledge graph technology, but have also become common operations and a primary strategy in building independent, knowledge-based information systems. In this paper, this family of metadata-related methods is collectively referred to as the 'metadata methodological commons', whose best practices are reflected in the various standard specifications of the Semantic Web. In the future construction of a multi-modal metaverse based on Web 3.0, it will play an important role, for example, in building digital twins through knowledge models or in supporting the modeling of an entire virtual world. Manual description and coding obviously cannot adapt to UGC (User-Generated Content) and AIGC (AI-Generated Content) production in the metaverse era; the automatic processing of semantic formalization must be considered a sure way to adapt the metadata methodological commons to the future needs of the AI era.
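    The structured-description operations listed above typically act on subject-predicate-object triples. As a minimal sketch, assuming a plain in-memory triple store and Dublin Core-style property names (the record contents are invented):

```python
# Illustrative sketch (not from the paper): "aboutness" metadata expressed
# as subject-predicate-object triples, the structured form that description,
# mapping, and enrichment operations work over.
triples = set()

def describe(subject, predicate, obj):
    """Record one metadata statement about a resource."""
    triples.add((subject, predicate, obj))

# Dublin Core-style description of a hypothetical document.
describe("doc:42", "dc:title", "Smart City Sensor Survey")
describe("doc:42", "dc:subject", "Internet of Things")
describe("doc:42", "dc:creator", "person:alice")

def values_of(subject, predicate):
    """Query the store: all objects asserted for a (subject, predicate) pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}
```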

    Development of an Event Management Web Application For Students: A Focus on Back-end

    Managing schedules can be challenging for students, as different calendars on various platforms lead to confusion and missed events. To address this problem, this thesis presents the development of an event management website designed to help students stay organized and motivated. Focusing on the application's back-end, the thesis explores the technology stack used to build the website and the implementation details of each chosen technology. By providing a detailed case study of the website development process, it serves as a helpful resource for future developers looking to build their own web applications.
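    The thesis itself details its stack; purely as an illustrative sketch of the consolidation problem it motivates, here is a hypothetical in-memory back-end model that merges events from several platforms into one chronologically sorted agenda (all names and fields are assumptions, not taken from the thesis):

```python
# Hypothetical sketch of a back-end data model that unifies events from
# multiple source platforms into a single sorted view for the student.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(order=True)
class Event:
    start: datetime                      # only the start time drives ordering
    title: str = field(compare=False)
    source: str = field(compare=False)   # which platform the event came from

class Calendar:
    def __init__(self):
        self._events: list[Event] = []

    def add(self, event: Event) -> None:
        self._events.append(event)

    def upcoming(self, now: datetime) -> list[Event]:
        """Merged, chronologically sorted agenda across all sources."""
        return sorted(e for e in self._events if e.start >= now)

cal = Calendar()
cal.add(Event(datetime(2024, 5, 2, 9, 0), "Algorithms exam", "university portal"))
cal.add(Event(datetime(2024, 5, 1, 18, 0), "Study group", "messaging app"))
agenda = cal.upcoming(datetime(2024, 5, 1, 0, 0))
```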

    Defining Safe Training Datasets for Machine Learning Models Using Ontologies

    Machine Learning (ML) models have been gaining popularity in recent years in a wide variety of domains, including safety-critical domains. While ML models have shown high accuracy in their predictions, they are still considered black boxes, meaning that developers and users do not know how the models make their decisions. While this is simply a nuisance in some domains, in safety-critical domains it makes ML models difficult to trust. To fully utilize ML models in safety-critical domains, there needs to be a method to improve trust in their safety and accuracy without human experts checking each decision. This research proposes a method to increase trust in ML models used in safety-critical domains by ensuring the safety and completeness of the model's training dataset. Since most of the complexity of the model is built through training, ensuring the safety of the training dataset could help to increase trust in the safety of the model. The method proposed in this research uses a domain ontology and an image quality characteristic ontology to validate the domain completeness and image quality robustness of a training dataset. This research also presents an experiment as a proof of concept for this method, in which ontologies are built for the emergency road vehicle domain.
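    One way the domain-completeness check described above could look, as a rough sketch: the domain ontology is reduced to the set of classes a safe training set must cover, and the dataset passes only if its labels cover all of them. The ontology contents below are invented for illustration, not taken from the paper.

```python
# Sketch of ontology-driven dataset validation: flag ontology classes that
# the training labels fail to cover (an empty result means domain-complete).
EMERGENCY_VEHICLE_ONTOLOGY = {"ambulance", "fire_truck", "police_car"}

def missing_classes(dataset_labels, required=EMERGENCY_VEHICLE_ONTOLOGY):
    """Return the ontology classes absent from the dataset's labels."""
    return required - set(dataset_labels)

# This hypothetical dataset never shows a fire truck, so it fails the check.
gaps = missing_classes(["ambulance", "ambulance", "police_car"])
```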

    Hybrid human-AI driven open personalized education

    Attaining the skills that match labor market demand is getting increasingly complicated, as prerequisite knowledge, skills, and abilities evolve dynamically through an uncontrollable and seemingly unpredictable process. Furthermore, people's interest in gaining knowledge pertaining to their personal lives (e.g., hobbies and life-hacks) has also increased dramatically in recent decades. In this situation, anticipating and addressing learning needs are fundamental challenges for twenty-first century education. The need for such technologies has escalated due to the COVID-19 pandemic, during which online education became a key player in all types of training programs. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open/free educational resources), coupled with smart technologies, may provide fertile ground for addressing this challenge. Therefore, this thesis aims to contribute to the literature on the utilization of open and free online educational resources for goal-driven personalized informal learning by developing a novel human-AI based system called eDoer. In this thesis, we discuss all the new knowledge that was created in order to complete the system development, which includes 1) prototype development and qualitative user validation, 2) decomposing the preliminary requirements into meaningful components, 3) implementation and validation of each component, and 4) a final requirement analysis, followed by combining the implemented components in order to develop and validate the planned system (eDoer).
    All in all, our proposed system 1) derives the skill requirements for a wide range of occupations (as skills and jobs are typical goals in informal learning) through an analysis of online job vacancy announcements, 2) decomposes skills into learning topics, 3) collects a variety of open/free online educational resources that address those topics, 4) checks the quality of those resources and their topic relevance using our developed intelligent prediction models, 5) helps learners to set their learning goals, 6) recommends personalized learning pathways and learning content based on individual learning goals, and 7) provides assessment services for learners to monitor their progress towards their desired learning objectives. Accordingly, we created a learning dashboard focusing on three Data Science related jobs and conducted an initial validation of eDoer through a randomized experiment. Controlling for the effects of prior knowledge as assessed by the pretest, the randomized experiment provided tentative support for the hypothesis that learners who engaged with personalized eDoer recommendations attain higher scores on the posttest than those who did not. The hypothesis that learners who received personalized content in terms of format, length, level of detail, and content type would achieve higher scores than those receiving non-personalized content was not supported by a statistically significant result.
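    The seven steps above can be sketched as a pipeline skeleton. Every mapping and the quality check below are invented placeholders standing in for the system's actual job-market analysis and prediction models:

```python
# Skeleton of the goal -> skills -> topics -> resources pipeline the
# abstract enumerates; all data here is a toy stand-in.
JOB_SKILLS = {"data analyst": ["SQL", "statistics"]}          # step 1 (stub)
SKILL_TOPICS = {"SQL": ["joins"], "statistics": ["regression"]}  # step 2 (stub)
TOPIC_RESOURCES = {                                            # step 3 (stub)
    "joins": ["res:sql-joins-101"],
    "regression": ["res:ols-intro"],
}

def quality_ok(resource_id):
    # Step 4: stand-in for the intelligent quality/relevance prediction models.
    return True

def learning_pathway(goal_job, known_skills=()):
    """Steps 5-6: turn a learning goal into an ordered list of vetted resources,
    skipping skills the learner already has."""
    pathway = []
    for skill in JOB_SKILLS.get(goal_job, []):
        if skill in known_skills:
            continue
        for topic in SKILL_TOPICS.get(skill, []):
            pathway += [r for r in TOPIC_RESOURCES.get(topic, []) if quality_ok(r)]
    return pathway
```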

    Development of a context knowledge system for mobile conversational agents

    A mobile conversational agent, or chatbot, is software that can perform tasks or services for a particular user or group. The main goal of this Final Degree Project is to develop a context knowledge system for mobile agents, as well as to provide it with tools that allow it to adapt dynamically. This system will allow the user to receive personalised suggestions of actions based on their context and preferences. This project is developed in the A modality, which means it is associated with a university department; in this case, the project is linked to the Software and Service Engineering Group (GESSI) of the Barcelona School of Informatics, Universitat Politècnica de Catalunya. The system will expose feature integrations between different applications of a mobile device, allowing the user to perform an action in one application and receive suggestions of possible follow-up actions to be executed in another, letting them complete that suggestion without having to explicitly open the other application.
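    A minimal sketch of the cross-application suggestion idea, assuming a simple rule table that maps an action observed in one app to a suggested action in another (the rule contents are invented, not taken from the project):

```python
# Illustrative context-rule table: (observed app, observed action) pairs
# trigger suggested (app, action) pairs in a different application.
RULES = [
    (("calendar", "event_created"), ("maps", "plan_route")),
    (("email", "flight_confirmation"), ("calendar", "add_event")),
]

def suggest(app, action):
    """Return all cross-app suggestions triggered by the observed action."""
    return [target for trigger, target in RULES if trigger == (app, action)]

# Creating a calendar event suggests planning the route in the maps app.
suggestions = suggest("calendar", "event_created")
```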

    IoT Data Processing for Smart City and Semantic Web Applications

    The world has been experiencing rapid urbanization over the last few decades, putting a strain on existing city infrastructure such as waste management, water supply, public transport, and electricity consumption. Cities are also seeing increasing pollution levels that threaten the environment, natural resources, and public health. At the same time, real growth lies in urbanization, as it provides individuals with opportunities for better employment, healthcare, and education; it is therefore imperative to limit the ill effects of rapid urbanization through integrated action plans that enable growing cities to develop. This gave rise to the concept of a smart city, in which all available information associated with a city is utilized systematically for better city management. The proposed system architecture is divided into subsystems, each discussed in its own chapter. The first chapter introduces the complete system architecture and gives the reader an overview. The second chapter discusses the Data Monitoring System (DMS) and Data Lake System (DLS), based on the oneM2M standards: the DMS employs oneM2M as a middleware layer to achieve interoperability, and the DLS uses a multi-tenant architecture with multiple logical databases, enabling efficient and reliable data management. The third chapter discusses the energy monitoring and electric vehicle charging systems developed to illustrate the applicability of the oneM2M standards. The fourth chapter discusses the Data Exchange System (DES), based on the Indian Urban Data Exchange (IUDX) framework; the DES uses the IUDX standard data schema and open APIs to avoid data silos and enable secure data sharing. The fifth chapter discusses the 5D-IoT framework, which provides uniform data quality assessment of sensor data with meaningful data descriptions.
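    The uniform data quality assessment mentioned for the 5D-IoT framework might, as a rough sketch, score each sensor reading against a few quality dimensions; the dimensions, field names, and thresholds below are illustrative assumptions, not the framework's actual definitions:

```python
# Sketch of a uniform sensor-data quality check: each reading is scored on
# two example dimensions, validity (in-range value) and timeliness (fresh
# enough timestamp).
from datetime import datetime, timedelta

def assess(reading, now, valid_range=(0.0, 60.0), max_age=timedelta(minutes=5)):
    """Return a per-dimension quality report for one sensor reading."""
    value, timestamp = reading["value"], reading["timestamp"]
    return {
        "valid": valid_range[0] <= value <= valid_range[1],
        "timely": (now - timestamp) <= max_age,
    }

now = datetime(2024, 1, 1, 12, 0)
fresh = {"value": 21.5, "timestamp": now - timedelta(minutes=2)}
stale = {"value": 99.0, "timestamp": now - timedelta(hours=1)}
```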