
    A Semantic Web of Know-How: Linked Data for Community-Centric Tasks

    This paper proposes a novel framework for representing community know-how on the Semantic Web. Procedural knowledge generated by web communities typically takes the form of natural language instructions or videos and is largely unstructured. The absence of semantic structure impedes the deployment of many useful applications, in particular the ability to discover and integrate know-how automatically. We discuss the characteristics of community know-how and argue that existing knowledge representation frameworks fail to represent it adequately. We present a novel framework for representing the semantic structure of community know-how and demonstrate the feasibility of our approach by providing a concrete implementation which includes a method for automatically acquiring procedural knowledge for real-world tasks. Comment: 6th International Workshop on Web Intelligence & Communities (WIC14), Proceedings of the companion publication of the 23rd International Conference on World Wide Web (WWW 2014).
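
    A minimal sketch of the general idea, assuming a hypothetical kh: vocabulary rather than the paper's actual schema: a community how-to task and its ordered steps expressed as linked data with Python's rdflib.

# Minimal sketch: a community "how-to" task and its steps as linked data.
# The kh: vocabulary below is illustrative only, not the paper's schema.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

KH = Namespace("http://example.org/know-how#")   # hypothetical vocabulary
EX = Namespace("http://example.org/tasks/")

g = Graph()
g.bind("kh", KH)

task = EX["change-a-bicycle-tyre"]
g.add((task, RDF.type, KH.Task))
g.add((task, RDFS.label, Literal("Change a bicycle tyre", lang="en")))

steps = [
    "Remove the wheel from the frame",
    "Lever the tyre off the rim",
    "Fit and inflate the new inner tube",
]
for i, text in enumerate(steps, start=1):
    step = EX[f"change-a-bicycle-tyre/step-{i}"]
    g.add((step, RDF.type, KH.Step))
    g.add((step, KH.position, Literal(i)))
    g.add((step, RDFS.label, Literal(text, lang="en")))
    g.add((task, KH.hasStep, step))

print(g.serialize(format="turtle"))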

    Foundational Ontologies meet Ontology Matching: A Survey

    Ontology matching is a research area aimed at finding ways to make different ontologies interoperable. Solutions to the problem have been proposed from different disciplines, including databases, natural language processing, and machine learning. Foundational ontologies play an important role in ontology matching; this role is multifaceted and leaves room for further development. This paper presents an overview of the different ontology matching tasks that involve foundational ontologies. We discuss the strengths and weaknesses of existing proposals and highlight the challenges to be addressed in the future.
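
    As an illustration only (the entity names, category anchorings and similarity measure below are hypothetical, not taken from the survey), one common use of a foundational ontology in matching is as a sanity filter: candidate correspondences whose classes fall under incompatible top-level categories, such as DOLCE's Endurant and Perdurant, are discarded before lexical similarity is even considered.

# Illustrative sketch: top-level categories from a foundational ontology used
# as a filter during ontology matching. All names and scores are hypothetical.
from difflib import SequenceMatcher

# Each class is assumed to be anchored (manually or by inference) under a
# foundational category such as Endurant (objects) or Perdurant (processes).
anchor_onto_a = {"Pump": "Endurant", "Maintenance": "Perdurant"}
anchor_onto_b = {"PumpDevice": "Endurant", "MaintenanceActivity": "Perdurant",
                 "PumpingProcess": "Perdurant"}

def lexical_similarity(a: str, b: str) -> float:
    """Cheap placeholder for a real lexical/structural matcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

candidates = []
for a, cat_a in anchor_onto_a.items():
    for b, cat_b in anchor_onto_b.items():
        if cat_a != cat_b:          # foundational categories clash: reject early
            continue
        score = lexical_similarity(a, b)
        if score > 0.4:
            candidates.append((a, b, round(score, 2)))

# Without the category filter, "Pump" vs "PumpingProcess" would pass the
# lexical threshold despite being an object/process mismatch.
print(sorted(candidates, key=lambda t: -t[2]))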

    The evolution of ontology in AEC: A two-decade synthesis, application domains, and future directions

    Ontologies play a pivotal role in knowledge representation and are particularly beneficial for the Architecture, Engineering, and Construction (AEC) sector because of its inherent data diversity and intricacy. Despite the growing interest in ontology and data integration research, especially with the advent of knowledge graphs and digital twins, there remains a noticeable lack of consolidated academic synthesis. This review paper aims to bridge that gap, meticulously analysing 142 journal articles from 2000 to 2021 on the application of ontologies in the AEC sector. Through systematic evaluation, the research is segmented into ten application domains within the construction realm: process, cost, operation/maintenance, health/safety, sustainability, monitoring/control, intelligent cities, heritage building information modelling (HBIM), compliance, and miscellaneous. This categorisation aids in pinpointing ontologies suitable for various research objectives. Furthermore, the paper highlights prevalent limitations within current ontology studies in the AEC sector and offers strategic recommendations, presenting a well-defined path for future research to address these gaps.

    A specification and discovery environment for the reuse of software components in distributed software development

    Our work aims to develop an effective solution for the discovery and reuse of software components in existing and commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional properties and the non-functional properties of software components, the latter expressed as QoS parameters. Our search process is based on a function that computes the semantic distance between the signature of a component and the signature of a given query, thus achieving a meaningful comparison. We also use the notion of "subsumption" to compare the inputs and outputs of the query with those of the components. After the suitable components have been selected, the non-functional properties are used as a distinguishing factor to refine the result. If no atomic component is found, we propose an approach, based on the shared ontology, for discovering composite components. To integrate the resulting component into the project under development, we developed an integration ontology and two services, "input/output convertor" and "output Matching".
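
    A toy sketch of the signature-matching idea, under illustrative assumptions (the concept taxonomy, component signatures and distance measure below are invented, not the ontology or function defined in this work): a component is compatible when its declared input subsumes the query input and its output is subsumed by the requested output, and compatible components are ranked by taxonomy distance; the QoS-based refinement step is omitted.

# Toy sketch of signature matching with subsumption; the concept taxonomy,
# component signatures and distance measure are invented for illustration.

# Hypothetical concept taxonomy: child -> parent.
PARENT = {
    "PDFDocument": "Document",
    "Invoice": "Document",
    "Document": "Resource",
}

def subsumes(general, specific):
    """True if `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def distance(general, specific):
    """Number of taxonomy edges from `specific` up to `general`."""
    steps, node = 0, specific
    while node != general:
        node = PARENT.get(node)
        steps += 1
    return steps

def score(component, query):
    """Semantic distance (lower is better), or None if incompatible."""
    if not subsumes(component["input"], query["input"]):
        return None                  # component cannot accept the query input
    if not subsumes(query["output"], component["output"]):
        return None                  # component output is not what was requested
    return (distance(component["input"], query["input"])
            + distance(query["output"], component["output"]))

query = {"input": "Invoice", "output": "Document"}
components = {
    "ArchiveService": {"input": "Document", "output": "PDFDocument"},
    "OCRService": {"input": "PDFDocument", "output": "Document"},
}
print({name: score(c, query) for name, c in components.items()})
# OCRService is rejected because it cannot accept an Invoice as input.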

    Fuzz, Penetration, and AI Testing for SoC Security Verification: Challenges and Solutions

    The ever-increasing usage and application of system-on-chips (SoCs) has resulted in the tremendous modernization of these architectures. A modern SoC design, with its numerous complex and heterogeneous intellectual properties (IPs) and its privacy-preserving declaration, contains a wide variety of highly sensitive assets. These assets must be protected from any unauthorized access and against a diverse set of attacks. Attacks aimed at obtaining such assets can originate from different sources, including malicious IPs, malicious or vulnerable firmware/software, unreliable and insecure interconnection and communication protocols, and side-channel vulnerabilities exposed through power/performance profiles. Any unauthorized access to such highly sensitive assets may result in either a breach of company secrets for original equipment manufacturers (OEMs) or identity theft for the end user. In contrast to the enormous advances in functional testing and verification of SoC architectures, security verification is still maturing, and comparatively little effort has been made by academia and industry. As a result, there is a huge gap between the modernization of SoC architectures and their security verification approaches. Given the lack of automated SoC security verification in modern electronic design automation (EDA) tools, this paper provides a comprehensive overview of the requirements that must be realized as the fundamentals of the SoC security verification process. By reviewing these requirements, including the creation of a unified language for SoC security verification, the definition of security policies, and the formulation of the security verification problem, among others, we argue for the use of self-refinement techniques, such as fuzz, penetration, and AI testing, for security verification. We evaluate the challenges and possible resolutions, and we outline potential approaches for realizing SoC security verification via these self-refinement techniques.
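
    To make the notion of self-refinement testing concrete, here is a deliberately simplified illustration, not an EDA flow and not this paper's methodology: a random fuzz loop drives register writes into a toy SoC model and checks a single security policy (the key must not leak without authentication) after every input sequence. The register map, the planted bug and the policy are all invented.

# Generic illustration of fuzz testing against a security policy. The toy SoC
# register model and the policy below are entirely hypothetical.
import random

class ToySoC:
    """Deliberately buggy model: a debug-unlock register that should only be
    writable after authentication, plus a protected key register."""
    def __init__(self):
        self.authenticated = False
        self.debug_unlocked = False
        self.secret_key = 0xDEADBEEF

    def write(self, addr: int, value: int) -> None:
        if addr == 0x00:                       # AUTH register
            self.authenticated = (value == 0x1234)
        elif addr == 0x04:                     # DEBUG_UNLOCK register
            # BUG: a magic value bypasses the authentication check.
            if self.authenticated or value == 0x7F:
                self.debug_unlocked = True

    def read(self, addr: int) -> int:
        if addr == 0x08:                       # KEY register
            return self.secret_key if self.debug_unlocked else 0
        return 0

def security_violated(soc: ToySoC) -> bool:
    # Policy: the key must never leak without prior authentication.
    return (not soc.authenticated) and soc.read(0x08) == soc.secret_key

random.seed(0)
for trial in range(10_000):
    soc = ToySoC()
    # Mutational input: a short random sequence of register writes.
    for _ in range(random.randint(1, 5)):
        soc.write(random.choice([0x00, 0x04, 0x08]), random.randrange(0, 256))
    if security_violated(soc):
        print(f"policy violation found after {trial + 1} trials")
        break
else:
    print("no violation found in 10,000 trials")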

    Semantic Web Services Provisioning

    Semantic Web Services constitute an important research area, where various underlying frameworks, such as WSMO and OWL-S, define Semantic Web ontologies to describe Web services so that they can be automatically discovered, composed, and invoked. Service discovery has traditionally been interpreted as a functional filter in current Semantic Web Services frameworks, frequently performed by Description Logics reasoners. However, semantic provisioning has to take Quality-of-Service (QoS) into account, defining user preferences that enable QoS-aware Semantic Web Service selection. The research focus is currently on QoS-aware processes, so current proposals are developing the field by providing QoS support to semantic provisioning, especially in selection processes. These processes lead to optimization problems, where the best service among a set of services has to be selected, so Description Logics cannot be used in this context. Furthermore, user preferences have to be semantically defined so that they can be used within selection processes. There are several proposals that extend Semantic Web Services frameworks to allow QoS-aware semantic provisioning. However, the proposed selection techniques are tightly coupled with their proposed extensions, most of them being implemented ad hoc. Thus, there is a semantic gap between functional descriptions (usually using WSMO or OWL-S) and user preferences, which are specific to each proposal, use different ontologies or even non-semantic descriptions, and depend on the corresponding ad hoc selection technique. In this report, we give an overview of the most important Semantic Web Services frameworks and compare them. Then, a thorough analysis of state-of-the-art proposals on QoS-aware semantic provisioning and user preference descriptions is presented, discussing their applicability, advantages, and drawbacks. The results of this analysis motivate our research work, which has already materialized in two early contributions.
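
    A minimal sketch of what QoS-aware selection after functional discovery can look like, under assumptions that are not tied to WSMO, OWL-S or any surveyed proposal: candidate services that passed the functional filter are ranked with a weighted additive utility over normalised QoS attributes encoding the user's preferences.

# Illustrative sketch of QoS-aware selection as an optimisation step applied
# after functional discovery. Attributes, weights and the additive utility
# function are assumptions, not part of WSMO/OWL-S or any specific proposal.

# Candidate services that already passed the functional (DL-based) filter.
candidates = {
    "ShipFast":  {"price": 12.0, "latency_ms": 120, "availability": 0.99},
    "ShipCheap": {"price":  4.0, "latency_ms": 480, "availability": 0.95},
    "ShipSafe":  {"price":  9.0, "latency_ms": 200, "availability": 0.999},
}

# User preferences: weights plus the direction of each attribute.
weights = {"price": 0.5, "latency_ms": 0.2, "availability": 0.3}
lower_is_better = {"price", "latency_ms"}

def normalise(attr: str, value: float) -> float:
    """Scale each attribute to [0, 1], where 1 is always the best value."""
    values = [c[attr] for c in candidates.values()]
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    x = (value - lo) / (hi - lo)
    return 1.0 - x if attr in lower_is_better else x

def utility(qos: dict) -> float:
    return sum(weights[a] * normalise(a, v) for a, v in qos.items())

best = max(candidates, key=lambda name: utility(candidates[name]))
print(best, {n: round(utility(q), 3) for n, q in candidates.items()})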

    A process model in platform independent and neutral formal representation for design engineering automation

    An engineering design process, as part of product development (PD), needs to satisfy ever-changing customer demands by striking a balance between time, cost and quality. In order to achieve faster lead times, improved quality and reduced PD costs for increased profits, automation methods have been developed with the help of virtual engineering. There are various methods of achieving Design Engineering Automation (DEA) with Computer-Aided (CAx) tools such as CAD/CAE/CAM, Product Lifecycle Management (PLM) and Knowledge Based Engineering (KBE). For example, Computer Aided Design (CAD) tools enable Geometry Automation (GA), and PLM systems allow the sharing and exchange of product knowledge throughout the PD lifecycle. Traditional automation methods are specific to individual products, hard-coded, and bound to proprietary tool formats. Moreover, existing CAx tools and PLM systems offer bespoke islands of automation, in contrast to KBE. KBE as a design method incorporates the complete design intent by including re-usable geometric and non-geometric product knowledge as well as engineering process knowledge for DEA, covering processes such as mechanical design, analysis and manufacturing. An extensive literature review has identified a research gap: there is no generic and structured method for knowledge modelling, both informal and formal, of the mechanical design process together with manufacturing knowledge (DFM/DFA), as part of model-based systems engineering (MBSE) for DEA with a KBE approach. In particular, a structured knowledge modelling technique is lacking that provides a standardised way to use platform-independent and neutral formal standards for DEA, with generative modelling of the mechanical product design process and DFM, and with preserved semantics. A neutral formal representation in a computer- or machine-understandable format enables the use of open standards. This thesis contributes to knowledge by addressing this gap in two steps:
    • In the first step, a coherent process model, GPM-DEA, is developed as part of MBSE which can be used for modelling mechanical design with manufacturing knowledge using a hybrid approach, based on the strengths of existing modelling standards such as IDEF0, UML and SysML and on additional constructs from the author's metamodel. The structured process model is highly granular, with complex interdependencies such as activity, object, function and rule associations, and captures the effect of the process model on the product at both the component and geometric-attribute levels.
    • In the second step, a method is provided to map the schema of the process model to equivalent platform-independent and neutral formal standards using an OWL/SWRL ontology, with system development in the Protégé tool. This enables machine interpretability with semantic clarity for DEA with generative modelling, by building queries and reasoning on a set of generic SWRL functions developed by the author.
    Model development has been performed with the aid of literature analysis and pilot use-cases. Experimental verification with test use-cases has confirmed that reasoning and querying over the formal axioms generate accurate results. Among its other key strengths, the knowledge base is generic, scalable and extensible, and hence provides re-usability and wider design-space exploration. The generative modelling capability allows the model to generate activities and objects based on the functional requirements of the mechanical design process with DFM/DFA and on logic-based rules. Through an application programming interface, a platform-specific DEA system, such as a KBE tool, a CAD tool enabling GA, or a web page incorporating engineering knowledge for decision support, can consume the relevant parts of the knowledge base.
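
    As a simplified stand-in for the OWL/SWRL knowledge base described above (the class and property names are illustrative, and rdflib with SPARQL is used here instead of Protégé and the author's SWRL functions), the sketch below encodes one design activity, its output object and an attached DFM rule as triples, and then queries which rules constrain the activities that produce a given object.

# Simplified stand-in for the OWL/SWRL knowledge base described above, using
# rdflib and SPARQL. Class and property names (dea:Activity,
# dea:producesObject, ...) are illustrative, not the thesis's actual schema.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

DEA = Namespace("http://example.org/dea#")
g = Graph()
g.bind("dea", DEA)

# A fragment of a mechanical design process: one activity, its output object,
# and a DFM rule attached to that activity.
g.add((DEA.DefineShaftGeometry, RDF.type, DEA.Activity))
g.add((DEA.DefineShaftGeometry, DEA.producesObject, DEA.ShaftModel))
g.add((DEA.ShaftModel, RDF.type, DEA.DesignObject))
g.add((DEA.MinFilletRadiusRule, RDF.type, DEA.DFMRule))
g.add((DEA.MinFilletRadiusRule, DEA.constrains, DEA.DefineShaftGeometry))
g.add((DEA.MinFilletRadiusRule, RDFS.comment,
       Literal("Fillet radius must not be below the machining minimum.")))

# Query: which DFM rules constrain activities that produce the ShaftModel?
q = """
PREFIX dea: <http://example.org/dea#>
SELECT ?rule ?activity WHERE {
    ?rule a dea:DFMRule ;
          dea:constrains ?activity .
    ?activity dea:producesObject dea:ShaftModel .
}
"""
for rule, activity in g.query(q):
    print(rule, "constrains", activity)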

    Towards semantics-driven modelling and simulation of context-aware manufacturing systems

    Systems modelling and simulation are two important facets of thoroughly and effectively analysing manufacturing processes. The ever-growing complexity of such processes, the increasing amount of knowledge, and the use of Semantic Web techniques that attach meaning to data have led researchers to explore and combine methodologies, exploiting their best features to support manufacturing system modelling and simulation applications. In the past two decades, the use of ontologies has proven highly effective for context modelling and knowledge management. Nevertheless, ontologies are not meant for model simulation. Simulation, instead, can be achieved with a well-known workflow-oriented mathematical modelling language such as Petri Nets (PN), which provide modelling and analytical features suitable for creating a digital copy of an industrial system (also known as a "digital twin"). The theoretical framework presented in this dissertation aims to exploit W3C standards, such as the Semantic Web Rule Language (SWRL) and the Web Ontology Language (OWL), to transform each piece of knowledge regarding a manufacturing system into Petri Net modelling primitives. In so doing, it supports the semantics-driven instantiation, analysis and simulation of what we call semantically-enriched PN-based manufacturing system digital twins. The approach proposed by this exploratory research is therefore based on exploiting the best features of state-of-the-art W3C standards for Linked Data, such as OWL and SWRL, together with Petri Nets as a multipurpose graphical and mathematical modelling tool. The former are used for gathering, classifying and properly storing industrial data and therefore enhance our PN-based digital copy of an industrial system with advanced reasoning features. This makes both the system modelling and analysis phases more effective and, above all, paves the way towards a completely new field in which semantically-enriched PN-based manufacturing system digital twins become one of the drivers of the digital transformation already under way in companies facing the industrial revolution. As a result, it has been possible to outline a list of indications that will guide future efforts in applying complex digital-twin support solutions based on semantically-enriched manufacturing information systems. Through the application cases, five key topics have been tackled: (i) semantic enrichment of industrial data using the most recent ontological models, in order to enhance its value and enable new uses; (ii) context-awareness, or context-adaptiveness, enabling the system to capture and use information about the context of operations; (iii) reusability, a core concept emphasising the importance of reusing existing assets in some form within the industrial modelling process, such as industrial process knowledge, process data, system modelling primitives, and the like; (iv) the ultimate goal of semantic interoperability, which can be accomplished by adding metadata and linking each data element to a controlled, shared vocabulary; and finally (v) the impact on modelling and simulation applications, showing how the translation of industrial knowledge into a digital manufacturing system could be automated and empowered with quantitative and qualitative analytical techniques.
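
    A minimal sketch of the transformation idea under stated assumptions (the knowledge snippet, the mapping rules and the resulting net are invented, not the dissertation's actual transformation): semantically described operations and buffers are mapped to Petri Net transitions and places, and the generated net is then stepped through a short firing sequence.

# Minimal sketch of turning semantically described manufacturing knowledge
# into Petri Net primitives. The knowledge snippet, mapping rules and net
# structure are illustrative assumptions, not the dissertation's method.

# Knowledge extracted from the ontology, e.g. via SPARQL/SWRL (hard-coded
# here): each operation consumes from one buffer and produces into another.
operations = [
    {"name": "Milling",  "consumes": "RawPartBuffer", "produces": "MilledBuffer"},
    {"name": "Assembly", "consumes": "MilledBuffer",  "produces": "FinishedBuffer"},
]
initial_tokens = {"RawPartBuffer": 2}

# Mapping rule: every buffer becomes a place, every operation a transition.
places = {name: initial_tokens.get(name, 0)
          for op in operations for name in (op["consumes"], op["produces"])}
transitions = [(op["name"], op["consumes"], op["produces"]) for op in operations]

def fire(name: str) -> bool:
    """Fire the named transition if its input place holds a token."""
    for t, pre, post in transitions:
        if t == name and places[pre] > 0:
            places[pre] -= 1
            places[post] += 1
            return True
    return False

# Simulate a short run of the generated net.
for step in ["Milling", "Milling", "Assembly"]:
    print(step, "fired" if fire(step) else "blocked", places)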