1,122 research outputs found

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient information management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands; small notes do not fit into traditional application categories; navigating the data is different for each kind of data; and data is either available on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging. The heterogeneity of the data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with metadata. Generic means of organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
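
    A minimal sketch of the central-repository idea in Python with rdflib (the namespace and property names are illustrative assumptions, not the HYENA API): a note and a file reference are captured as RDF in one repository, and a single generic tag query navigates both kinds of data.

        # Sketch: one RDF repository for heterogeneous items, organized by generic tags.
        # The namespace and property names are assumptions for illustration only.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/coim/")
        repo = Graph()  # the single central repository

        # Capture a quick note and annotate it with metadata (tags).
        repo.add((EX.note1, RDF.type, EX.Note))
        repo.add((EX.note1, EX.text, Literal("Call the library about the ILL request")))
        repo.add((EX.note1, EX.tag, Literal("todo")))

        # A file reference lives in the same repository; the file itself stays external.
        repo.add((EX.report, RDF.type, EX.File))
        repo.add((EX.report, EX.path, Literal("/home/user/report.pdf")))
        repo.add((EX.report, EX.tag, Literal("todo")))

        # Generic navigation: one query finds every kind of data carrying a tag.
        for subject in repo.subjects(EX.tag, Literal("todo")):
            print(subject)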

    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open source knowledge graph framework that is based on knowledge graph building blocks (KGBBs). KGBBs are small information modules for knowledge processing, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from the data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, an editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graph without having to think about semantic modelling. We conclude by discussing the nine challenges and how the KGBB Framework provides solutions for the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the feasibility in principle of using semantic units and KGBBs to manage and structure knowledge graphs.
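
    Since the framework is largely conceptual, the following is only a hypothetical sketch of the stated decoupling, with all names invented for illustration: a small in-memory building block whose storage and display concerns are separate functions rather than part of the data model.

        # Hypothetical sketch of a KGBB-style module: the in-memory data model
        # (a semantic unit as subject-predicate-object statements) is decoupled
        # from storage and display, each handled by a separate function.
        from dataclasses import dataclass, field

        @dataclass
        class SemanticUnit:
            unit_id: str
            statements: list = field(default_factory=list)  # (subject, predicate, object)

        def store(unit: SemanticUnit) -> dict:
            """Storage model: serialize for a backing database (illustrative)."""
            return {"id": unit.unit_id, "triples": unit.statements}

        def display(unit: SemanticUnit) -> str:
            """Display model: human-readable rendering, independent of storage."""
            return "\n".join(f"{s} {p} {o}" for s, p, o in unit.statements)

        unit = SemanticUnit("weight-measurement-1",
                            [("apple-1", "hasWeight", "212 g")])
        print(store(unit))
        print(display(unit))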

    PROBLEMS OF KNOWLEDGE REUSE IN THE PROCESS OF DESIGNING SOFTWARE SYSTEMS

    Recently, considerable attention has been devoted to building knowledge bases that contain millions of facts about various real-world objects. One of the key aspects of knowledge management is the reuse of previously acquired knowledge. The subject of this research is the processes of reusing knowledge and building software systems on top of knowledge bases. Knowledge interpretation is one approach to reuse: new knowledge is inferred from the facts already present in the knowledge base. The aim of the research is to increase the effectiveness of knowledge reuse in software systems built on knowledge bases by automatically extracting rules. To achieve this aim, the following tasks were carried out: approaches to structuring the facts available in a database were studied; a qualitative analysis of the applicability of automatic methods for rule construction and inference was performed; the problem of predicting a link between a pair of entities, which determines whether a relation holds for a fact, was considered; and a generalized approach to representing facts was proposed that makes it possible to apply efficient rule-mining algorithms. To solve these tasks, the following methods were used: the algebra of finite predicates and predicate operations for knowledge representation, and representation-learning methods for predicting links between entity pairs for automatic rule extraction. The following results were obtained: an approach to forming rules was examined that structures the available facts as a collection of binary predicates and allows automatic methods of rule construction and inference to be applied; it was concluded that knowledge reuse is limited by the structure of the knowledge base and by the software used to maintain it; and principles were formulated for building special hub predicates to represent attributes, which generalizes the predicate representation of facts and enables automatic rule-extraction methods, thereby improving the effectiveness of knowledge reuse. Conclusions: applying an identification method and mechanism based on predicate operations and special predicates that automatically extract attributes from the knowledge base, together with an assessment of the quality of the inferred rules, made it possible to propose a generalized approach to representing facts and to use efficient rule-mining algorithms, which will help increase the effectiveness of knowledge reuse in software systems.
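
    As a hedged illustration of the core idea, not the paper's actual algorithm: facts can be structured as binary predicates, i.e. (subject, relation, object) triples, and a candidate rule can then be scored by simple counting. The relations, facts, and rule below are invented examples.

        # Illustrative sketch: facts as binary predicates and a confidence score
        # for a candidate rule worksIn(x, y) & locatedIn(y, z) -> livesIn(x, z).
        # The facts and relations are made up for demonstration.
        facts = {
            ("anna", "worksIn", "lab1"), ("lab1", "locatedIn", "kyiv"),
            ("anna", "livesIn", "kyiv"),
            ("boris", "worksIn", "lab2"), ("lab2", "locatedIn", "lviv"),
        }

        def rule_confidence(facts):
            """Fraction of rule-body matches whose head is also a known fact."""
            body, correct = 0, 0
            for x, r1, y in facts:
                if r1 != "worksIn":
                    continue
                for y2, r2, z in facts:
                    if r2 == "locatedIn" and y2 == y:
                        body += 1
                        correct += (x, "livesIn", z) in facts
            return correct / body if body else 0.0

        print(rule_confidence(facts))  # 0.5: the rule holds for anna but not boris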

    The round trip problem: a solution for the Process Handbook

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 68-69). By Frank Yeean Chan, M.Eng.

    Interchanging Discrete Event Simulation Process Interaction Models Using the Web Ontology Language - OWL

    Discrete event simulation development requires significant investments in time and resources. Descriptions of discrete event simulation models are associated with world views, including the process interaction orientation. Historically, these models have been encoded using high-level programming languages or special purpose, typically vendor-specific, simulation languages. These approaches complicate simulation model reuse and interchange. The current document-centric World Wide Web is evolving into a Semantic Web that communicates information using ontologies. The Web Ontology Language (OWL) was used to encode a Process Interaction Modeling Ontology for Discrete Event Simulations (PIMODES). The PIMODES ontology was developed using ontology engineering processes. Software was developed to demonstrate the feasibility of interchanging models from commercial simulation packages using PIMODES as an intermediate representation. The purpose of PIMODES is to provide a vendor-neutral open representation to support model interchange. Model interchange enables reuse and provides an opportunity to improve simulation quality, reduce development costs, and reduce development times.
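
    To make the idea of a vendor-neutral intermediate representation concrete, here is a minimal sketch in Python with rdflib. The class and property names are assumptions in the spirit of PIMODES, not its actual vocabulary.

        # Sketch of encoding a process-interaction model as ontology individuals
        # (vendor-neutral triples). Class and property names are illustrative
        # only, not the actual PIMODES vocabulary.
        from rdflib import Graph, Namespace, RDF

        PIM = Namespace("http://example.org/pimodes#")
        g = Graph()

        # A tiny source-queue-server-sink process chain, as one simulation tool
        # might export it and another might import it.
        for name, cls in [("arrivals", PIM.Source), ("buffer", PIM.Queue),
                          ("machine", PIM.Server), ("exit", PIM.Sink)]:
            g.add((PIM[name], RDF.type, cls))

        g.add((PIM.arrivals, PIM.connectsTo, PIM.buffer))
        g.add((PIM.buffer, PIM.connectsTo, PIM.machine))
        g.add((PIM.machine, PIM.connectsTo, PIM.exit))

        print(g.serialize(format="turtle"))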

    Model morphisms (MoMo) to enable language-independent information models and interoperable business networks

    MSc. Dissertation presented at Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa to obtain the Master degree in Electrical and Computer Engineering. With the advent of globalisation, the opportunities for collaboration became more evident, with the effect of enlarging business networks. In such conditions, a key to enterprise success is reliable communication with all partners. Therefore, organisations have been searching for flexible integrated environments to better manage their services and product life cycle, where their software applications could be easily integrated independently of the platform in use. However, with so many different information models and implementation standards in use, interoperability problems arise. Moreover, organisations are themselves at different technological maturity levels, and the solution that might be good for one can be too advanced for another, or vice-versa. This dissertation responds to the above needs, proposing a high-level meta-model to be used across the entire business network, enabling individual models to be abstracted from their specificities and increasing language independence and interoperability, while keeping the integrity of all enterprise legacy software intact. The strategy presented allows an incremental mapping construction, to achieve a gradual integration. To accomplish this, the author proposes Model Driven Architecture (MDA) based technologies for the development of traceable transformations and the execution of automatic Model Morphisms.
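
    As a rough, hypothetical sketch of the model-morphism idea (not the dissertation's MDA tooling): a mapping between a partner's model and a shared high-level meta-model is captured declaratively, and a morphism function applies it, so mappings can be built incrementally while the source model stays untouched. All field names are invented.

        # Hypothetical sketch: a declarative mapping from one partner's model
        # to a shared meta-model, and a morphism that translates instances.
        mapping = {
            # partner A's model -> shared high-level meta-model
            "prodRef": "productId",
            "qty": "quantity",
        }

        def morphism(instance: dict, mapping: dict) -> dict:
            """Translate an instance, keeping unmapped fields for later increments."""
            return {mapping.get(k, k): v for k, v in instance.items()}

        order_a = {"prodRef": "P-1001", "qty": 5, "note": "rush"}
        print(morphism(order_a, mapping))
        # {'productId': 'P-1001', 'quantity': 5, 'note': 'rush'}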

    Piazza: Data Management Infrastructure for Semantic Web Applications

    The Semantic Web envisions a World Wide Web in which data is described with rich semantics and applications can pose complex queries. To this point, researchers have defined new languages for specifying meanings for concepts and developed techniques for reasoning about them, using RDF as the data model. To flourish, the Semantic Web needs to be able to accommodate the huge amounts of existing data and the applications operating on them. To achieve this, we are faced with two problems. First, most of the world's data is available not in RDF but in XML; XML and the applications consuming it rely not only on the domain structure of the data, but also on its document structure. Hence, to provide interoperability between such sources, we must map between both their domain structures and their document structures. Second, data management practitioners often prefer to exchange data through local point-to-point data translations, rather than mapping to common mediated schemas or ontologies. This paper describes the Piazza system, which addresses these challenges. Piazza offers a language for mediating between data sources on the Semantic Web, which maps both the domain structure and the document structure. Piazza also enables interoperation of XML data with RDF data that is accompanied by rich OWL ontologies. Mappings in Piazza are provided at a local scale between small sets of nodes, and our query answering algorithm is able to chain sets of mappings together to obtain relevant data from across the Piazza network. We also describe an implemented scenario in Piazza and the lessons we learned from it.
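
    A toy sketch of the chaining idea described above (not Piazza's actual mapping language): local point-to-point mappings are composed so that a query posed against one node's schema can reach data two hops away. All schema terms are invented.

        # Toy sketch: composing local point-to-point schema mappings so data at
        # node C can answer a query posed in node A's vocabulary. Names invented.
        map_a_to_b = {"author": "creator"}
        map_b_to_c = {"creator": "dc_creator"}

        def compose(m1: dict, m2: dict) -> dict:
            """Chain two mappings: follow m1, then m2 where it applies."""
            return {k: m2.get(v, v) for k, v in m1.items()}

        map_a_to_c = compose(map_a_to_b, map_b_to_c)
        print(map_a_to_c)  # {'author': 'dc_creator'}: A's term rewritten for node C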

    A framework for semantic checking of information systems

    Dissertation submitted to obtain the degree of Master in Engenharia Electrotécnica e de Computadores. In this day and age, enterprises often find that their business benefits greatly if they collaborate with others in order to be more competitive and productive. However, these collaborations often come with costs, since the worldwide diversity of communities has led to the development of various knowledge representation elements, namely ontologies, that in most cases are not semantically equivalent. Consequently, even though some enterprises may operate in the same domain, they can have different representations of that same knowledge. Moreover, even after solving this issue and establishing a semantic alignment with other systems, the systems do not remain unchanged, so a regular check of the semantic alignment is needed. To aid in the resolution of this semantic interoperability problem, the author proposes a framework that provides generic solutions and a means of validating the semantic consistency of ontologies in various scenarios, thus maintaining the interoperability state between the systems involved.
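
    A minimal sketch of the "regular check" idea, under the assumption that an alignment is recorded as pairs of concepts: the check flags mappings whose target concepts have since disappeared from the partner's ontology. Everything below is illustrative, not the framework's API.

        # Illustrative sketch: re-validating a recorded semantic alignment after
        # one partner's ontology has evolved. Concept names are invented.
        alignment = {"Client": "Customer", "Invoice": "Bill"}  # ours -> theirs
        partner_concepts = {"Customer", "Receipt"}  # "Bill" was renamed or removed

        def broken_mappings(alignment: dict, partner_concepts: set) -> list:
            """Return mappings whose target no longer exists in the partner ontology."""
            return [(src, tgt) for src, tgt in alignment.items()
                    if tgt not in partner_concepts]

        print(broken_mappings(alignment, partner_concepts))  # [('Invoice', 'Bill')]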

    APPLICATIONS OF GRAPH THEORY FOR REUSE OF MODEL BASED SYSTEMS ENGINEERING DESIGN DATA

    This dissertation contributes to systems engineering (SE) by introducing and demonstrating a novel graph-based design repository (GBDR) tool. GBDR enables engineers to leverage system design information from a heterogeneous set of system models created using multiple model based systems engineering (MBSE) software tools as an integrated body of knowledge. Specifically, the research provides a set of approaches that allow the use of system models described in Systems Modeling Language and Lifecycle Modeling Language as an integrated body of design information. The coalesced body of system design information serves to support concept ideation and analysis within SE. The research accomplishes this by using a graph database to store system model information imported from digital artifacts created by MBSE tools and applying principles from graph theory and semantic web technologies to identify likely connections and equivalent concepts across system models, modeling languages, and metamodels. The research demonstrates that the presented tool can import, store, synthesize, search, display, distribute, and export information from multiple MBSE tools. As a practical demonstration, feasible subsystem design alternatives for a small unmanned aircraft system government reference architecture are identified from within a set of existing system models. OSD CAPE. Civilian, Office of the Secretary of Defense. Approved for public release. Distribution is unlimited.
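
    A small sketch of the repository idea using networkx as a stand-in for a graph database (the model elements and the similarity heuristic are assumptions, not GBDR's algorithms): elements from two system models live in one graph, and candidate equivalences across models are proposed by naive name matching.

        # Sketch: elements from two MBSE models in one graph, with candidate
        # 'sameAs' edges proposed by a naive name-similarity heuristic.
        # Model contents and the heuristic are illustrative assumptions.
        import networkx as nx

        g = nx.MultiDiGraph()
        g.add_node("sysml:Battery", model="sysml", name="battery")
        g.add_node("lml:BatteryPack", model="lml", name="battery pack")
        g.add_node("lml:Autopilot", model="lml", name="autopilot")

        def propose_equivalences(g):
            """Link nodes from different models whose names share a word."""
            nodes = list(g.nodes(data=True))
            for u, du in nodes:
                for v, dv in nodes:
                    if du["model"] < dv["model"]:
                        if set(du["name"].split()) & set(dv["name"].split()):
                            g.add_edge(u, v, relation="candidateSameAs")

        propose_equivalences(g)
        print(list(g.edges(data=True)))  # links lml:BatteryPack to sysml:Battery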

    An approach to the integration of technical spaces based on mappings and model-driven engineering

    In order to automate the development of integration adapters in industrial settings, a model-driven approach to adapter specification is devised. In this approach, a domain-specific modeling language is created to allow the specification of mappings between the integrated technical spaces. Also proposed is a mapping automation engine that comprises reuse and alignment algorithms. Based on the mapping specifications, executable adapters are automatically generated and executed. Results of the evaluation of the approach indicate that it is possible to use a model-driven approach to successfully integrate technical spaces and to increase automation by reusing domain-specific mappings from previously created adapters.
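
    As a loose illustration of generating an executable adapter from a declarative mapping specification (the spec format and field names below are invented, not the paper's domain-specific language): the specification is turned into a callable adapter, and an existing specification can be reused when building the next one.

        # Loose sketch: 'generating' an executable adapter from a declarative
        # mapping specification, then reusing that specification later.
        # The field names and spec format are invented for illustration.
        spec = {"partNo": "part_number", "desc": "description"}

        def generate_adapter(spec: dict):
            """Return an executable adapter closed over the mapping spec."""
            def adapter(record: dict) -> dict:
                return {spec.get(k, k): v for k, v in record.items()}
            return adapter

        adapter = generate_adapter(spec)
        print(adapter({"partNo": "A-7", "desc": "bracket"}))
        # Reuse: extend the previous spec instead of starting from scratch.
        next_spec = {**spec, "uom": "unit_of_measure"}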