
    Save up to 99% of your time in mapping validation

    Identifying semantic correspondences between different vocabularies has been recognized as a fundamental step towards achieving interoperability. Several manual and automatic techniques have recently been proposed. Fully manual approaches are very precise, but extremely costly. Conversely, automatic approaches tend to fail when domain-specific background knowledge is needed, and consequently they typically require a manual validation step. Yet, when the number of computed correspondences is very large, the validation phase can be very expensive. To reduce these problems, we propose to compute the minimal set of correspondences, which we call the minimal mapping, that is sufficient to derive all the other ones. We show that by concentrating on such correspondences we can save up to 99% of the manual checks required for validation.
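
    As a rough illustration of the idea, the sketch below prunes subsumption correspondences that are entailed by a more general one via the two concept hierarchies, so only the remaining minimal mapping would need manual validation. The child-to-parent dictionaries, the "<=" relation encoding and the redundancy rule are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch: pruning redundant subsumption correspondences so that
# only a minimal mapping needs manual validation. Taxonomies are given as
# child -> parent dicts; all names and the redundancy rule are illustrative.

def ancestors(node, parent_of):
    """Return the set of ancestors of `node` (excluding the node itself)."""
    seen = set()
    while node in parent_of:
        node = parent_of[node]
        seen.add(node)
    return seen

def minimal_mapping(correspondences, parents_src, parents_tgt):
    """Keep only correspondences (s, '<=', t) not entailed by another one.

    (s, '<=', t) is redundant if some other (s2, '<=', t2) exists with
    s a descendant of (or equal to) s2 and t an ancestor of (or equal to) t2.
    """
    minimal = []
    for s, rel, t in correspondences:
        redundant = False
        for s2, rel2, t2 in correspondences:
            if (s2, rel2, t2) == (s, rel, t) or rel != "<=" or rel2 != "<=":
                continue
            s_below_s2 = s == s2 or s2 in ancestors(s, parents_src)
            t_above_t2 = t == t2 or t in ancestors(t2, parents_tgt)
            if s_below_s2 and t_above_t2:
                redundant = True
                break
        if not redundant:
            minimal.append((s, rel, t))
    return minimal

# Toy example: "car <= vehicle" makes "sports_car <= transport" redundant.
parents_src = {"sports_car": "car", "car": "vehicle"}
parents_tgt = {"vehicle": "transport"}
mappings = [("car", "<=", "vehicle"), ("sports_car", "<=", "transport")]
print(minimal_mapping(mappings, parents_src, parents_tgt))
# -> [('car', '<=', 'vehicle')]
```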

    Building product suggestions for a BIM model based on rule sets and a semantic reasoning engine

    The architecture, engineering and construction (AEC) industry today relies on different information systems and computational tools built to support and assist building design and construction. However, these systems and tools typically provide this support in isolation from each other. A good combination of these systems and tools is beneficial for better coordination and information management, and semantic web technologies and a Linked Data approach can be used to fulfil this aim. In this paper, we indicate how these technologies can be applied for one particular objective, namely to check a building information model (BIM) and make suggestions for that model regarding its building elements. These suggestions are based on information obtained from different data sources, including a BIM model, regulations and catalogues of locally available building components. We briefly discuss the results obtained by applying this approach in a case study based on structural safety requirements.
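
    A minimal sketch of the kind of Linked Data check described above, under invented assumptions: a tiny `ex:` vocabulary holds a BIM element, a regulation-style requirement and a product catalogue in one RDF graph, and a SPARQL query (via rdflib) suggests catalogue products that satisfy the requirement. The paper's actual data sources and rule sets are far richer than this.

```python
# Hypothetical sketch of a rule-based product suggestion over Linked Data.
# The ex: vocabulary and the structural-safety rule are invented for
# illustration; real inputs would be an ifcOWL model, regulations, catalogues.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/bim#")
g = Graph()

# BIM element with a required load-bearing capacity (model + regulation).
g.add((EX.wall_01, RDF.type, EX.LoadBearingWall))
g.add((EX.wall_01, EX.requiredCapacityKN, Literal(120)))

# Catalogue of locally available building components.
g.add((EX.brick_A, RDF.type, EX.WallProduct))
g.add((EX.brick_A, EX.capacityKN, Literal(150)))
g.add((EX.brick_B, RDF.type, EX.WallProduct))
g.add((EX.brick_B, EX.capacityKN, Literal(90)))

# Suggest catalogue products whose capacity satisfies the element's requirement.
query = """
PREFIX ex: <http://example.org/bim#>
SELECT ?element ?product WHERE {
    ?element a ex:LoadBearingWall ;
             ex:requiredCapacityKN ?required .
    ?product a ex:WallProduct ;
             ex:capacityKN ?capacity .
    FILTER (?capacity >= ?required)
}
"""
for element, product in g.query(query):
    print(f"{element} -> suggested product: {product}")
# Expected: wall_01 -> brick_A (brick_B is rejected, 90 kN < 120 kN)
```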

    Standardization of power system protection settings using IEC 61850 for improved interoperability

    One of the potential benefits of smart grid development is that data becomes more open and available for use by multiple applications. Many existing protection relays use proprietary formats for storing protection settings. This paper proposes to apply the IEC 61850 data model and System Configuration description Language (SCL), which are formally defined, to represent protection settings. Protection setting files in proprietary formats are parsed using rule-based reasoning, mapped to the IEC 61850 data model, and exported as SCL files. An important application of SCL-based protection setting files is to achieve protection setting interoperability, which could bring multiple compelling benefits, such as significantly streamlining the configuration process for intelligent electronic devices (IEDs) and freeing utilities from being “locked in” to one particular vendor. For this purpose, this paper proposes a uniform configuration process for future IEDs. The challenges involved in implementing the proposed approach are discussed and possible solutions are presented.
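
    The sketch below illustrates such a pipeline in miniature, under invented assumptions: a proprietary key=value settings file is parsed, a small rule table maps each setting to an IEC 61850-style data object name, and a simplified SCL-like XML fragment is emitted. The mapping table and XML layout are for illustration only and are not schema-valid SCL.

```python
# Hypothetical sketch of parsing proprietary protection settings and emitting
# a simplified SCL-style fragment. Setting names, mapping rules and XML layout
# are all illustrative assumptions.

import xml.etree.ElementTree as ET

PROPRIETARY_SETTINGS = """\
OC_PICKUP=1.2
OC_TIME_DIAL=0.3
"""

# Rule table: proprietary setting name -> (logical node, data object) guess.
MAPPING_RULES = {
    "OC_PICKUP": ("PTOC1", "StrVal"),
    "OC_TIME_DIAL": ("PTOC1", "TmMult"),
}

def parse_settings(text):
    """Parse a key=value settings file into a dict."""
    settings = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip()
    return settings

def to_scl(settings, ied_name="RELAY_1"):
    """Map parsed settings to a simplified SCL-like <IED> fragment."""
    ied = ET.Element("IED", name=ied_name)
    for key, value in settings.items():
        if key not in MAPPING_RULES:
            continue  # unknown settings would need a new mapping rule
        ln, do = MAPPING_RULES[key]
        doi = ET.SubElement(ied, "DOI", name=f"{ln}.{do}")
        ET.SubElement(doi, "DAI", name="setVal", val=value)
    return ET.tostring(ied, encoding="unicode")

print(to_scl(parse_settings(PROPRIETARY_SETTINGS)))
```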

    FVQA: Fact-based Visual Question Answering

    Visual Question Answering (VQA) has attracted a lot of attention in both the Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, with additional image-question-answer-supporting-fact tuples. The supporting fact is represented as a structural (subject, relation, object) triplet. We evaluate several baseline models on the FVQA dataset and describe a novel model which is capable of reasoning about an image on the basis of supporting facts.
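
    A minimal sketch of what the extended dataset records might look like, with a toy knowledge base and a naive keyword-based fact lookup; the field names, facts and matching heuristic are assumptions, and the paper's model reasons over detected visual concepts and large external knowledge bases rather than simple string overlap.

```python
# Hypothetical sketch of image-question-answer-supporting-fact tuples plus a
# naive fact retrieval step. Everything here is illustrative toy data.

from dataclasses import dataclass

@dataclass
class FVQAExample:
    image_id: str
    question: str
    answer: str
    supporting_fact: tuple  # (subject, relation, object)

KNOWLEDGE_BASE = [
    ("cat", "CapableOf", "climb trees"),
    ("umbrella", "UsedFor", "keeping dry in the rain"),
]

def retrieve_supporting_fact(detected_concepts, question):
    """Pick the fact whose subject appears among the detected visual concepts
    and whose object shares a word with the question (toy heuristic)."""
    q_words = set(question.lower().split())
    for subj, rel, obj in KNOWLEDGE_BASE:
        if subj in detected_concepts and q_words & set(obj.lower().split()):
            return (subj, rel, obj)
    return None

question = "Which object can keep you dry in the rain?"
fact = retrieve_supporting_fact({"umbrella", "person"}, question)
example = FVQAExample("img_0001.jpg", question, "umbrella", fact)
print(example)
```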

    Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

    Currently, many game artificial intelligences attempt to determine their next moves by using a simulator to predict the effect of actions in the world. However, writing such a simulator is time-consuming, and the simulator must be changed substantially whenever a detail in the game design is modified. This research project therefore set out to determine whether a version of the first order inductive learning algorithm could be used to learn rules that could then be used in place of a simulator. By eliminating the need to write a simulator for each game by hand, the entire Darmok 2 project could more easily adapt to additional real-time strategy games. Over time, Darmok 2 would also be able to provide better competition for human players by training the artificial intelligences to play against the style of a specific player. Most importantly, Darmok 2 might also be able to provide a general solution for creating game artificial intelligences, which could save game development companies a substantial amount of money, time, and effort.
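
    A minimal sketch of the idea of replacing a hand-written simulator with learned rules: each rule pairs a predicted effect with body conditions over the game state, and prediction simply checks which rule bodies hold. The rule format, predicates and game state are invented here; a FOIL-style learner would induce such rules from logged gameplay traces rather than having them hand-coded as below.

```python
# Hypothetical sketch of learned rules standing in for a forward simulator
# when a game AI predicts the effect of an action. All predicates and state
# fields are illustrative assumptions.

# A "learned" rule: if all body predicates hold, the head effect is predicted.
LEARNED_RULES = [
    {
        "head": ("destroys", "attacker", "target"),
        "body": [
            lambda s: s["attacker"]["damage"] >= s["target"]["hp"],
            lambda s: s["attacker"]["range"] >= s["distance"],
        ],
    },
]

def predict_effects(state):
    """Return the effects whose learned rule bodies are satisfied in `state`,
    playing the role a hand-written simulator would otherwise fill."""
    effects = []
    for rule in LEARNED_RULES:
        if all(condition(state) for condition in rule["body"]):
            effects.append(rule["head"])
    return effects

state = {
    "attacker": {"damage": 30, "range": 5},
    "target": {"hp": 25},
    "distance": 3,
}
print(predict_effects(state))  # -> [('destroys', 'attacker', 'target')]
```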

    An information retrieval approach to ontology mapping

    In this paper, we present a heuristic mapping method and a prototype mapping system that support the process of semi-automatic ontology mapping for the purpose of improving semantic interoperability in heterogeneous systems. The approach is based on the idea of semantic enrichment, i.e., using instance information of the ontology to enrich the original ontology and calculate similarities between concepts in two ontologies. The functional settings for the mapping system are discussed and the evaluation of the prototype implementation of the approach is reported.
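
    A minimal sketch of the semantic-enrichment idea, assuming each concept can be represented by the text of its instances: concepts from the two ontologies are compared with a standard IR similarity (TF-IDF plus cosine), and pairs above a threshold become candidate mappings for a human validator. The toy concepts, instance texts and threshold are illustrative only.

```python
# Hypothetical sketch: instance-based concept similarity for ontology mapping
# using TF-IDF and cosine similarity. Concepts and instance texts are toy data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each concept is "enriched" with text drawn from its instances.
ontology_a = {
    "Automobile": "sedan hatchback engine four wheels road vehicle",
    "Bicycle": "pedals two wheels handlebar chain human powered",
}
ontology_b = {
    "Car": "engine road vehicle sedan fuel four wheels",
    "Boat": "hull water sail deck harbour",
}

concepts_a, docs_a = zip(*ontology_a.items())
concepts_b, docs_b = zip(*ontology_b.items())

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(docs_a) + list(docs_b))
sims = cosine_similarity(matrix[: len(docs_a)], matrix[len(docs_a):])

THRESHOLD = 0.3  # candidate mappings above this go to the human validator
for i, concept_a in enumerate(concepts_a):
    for j, concept_b in enumerate(concepts_b):
        if sims[i, j] >= THRESHOLD:
            print(f"candidate mapping: {concept_a} <-> {concept_b} "
                  f"(score {sims[i, j]:.2f})")
```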