
    Information sharing performance management: a semantic interoperability assessment in the maritime surveillance domain

    Information Sharing (IS) is essential for organizations to obtain information in a cost-effective way. If existing information is not shared among the organizations that hold it, the alternative is to develop the capabilities to acquire, store, process and manage it, which leads to duplicated costs, especially unwanted where governmental organizations are concerned. The European Commission has made IS among public administrations a priority, has launched several IS initiatives, such as the EUCISE2020 project within the roadmap for developing the maritime Common Information Sharing Environment (CISE), and has defined the levels of interoperability essential for IS, which include Semantic Interoperability (SI). An open question is how IS performance can be managed: specifically, how the as-is and to-be states and targets of IS can be defined, and how organizations' progress can be monitored and controlled. In this paper, we propose 11 indicators for assessing SI that contribute to answering these questions. They have been demonstrated and evaluated with data collected through a questionnaire, based on the CISE information model proposed during the CoopP project, which was answered by five public authorities that require maritime surveillance information and are committed to sharing it with each other.
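
    A minimal illustrative sketch of how one indicator of this kind could be computed from questionnaire answers: the share of an authority's required information entities that the shared CISE data model can express. The indicator, the entity names and the function below are assumptions for illustration only, not the paper's actual 11 indicators.

```python
# Hypothetical sketch: a semantic-coverage indicator computed as the share of
# an authority's required information entities that a common data model can
# express. Names and values are invented for illustration.

def coverage_indicator(required_entities, model_entities):
    """Fraction of required entities that the shared model covers."""
    required = set(required_entities)
    if not required:
        return 1.0  # nothing required, trivially covered
    return len(required & set(model_entities)) / len(required)

# Example: one authority's (invented) questionnaire answers.
cise_model = {"Vessel", "Port", "Incident", "CargoManifest"}
authority_needs = {"Vessel", "Incident", "WeatherWarning"}

print(coverage_indicator(authority_needs, cise_model))  # ~0.67
```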

    A process model template for the support of IT-based logistics planning in the context of Chinese ports

    In recent years, Chinese harbor administrations have made great progress in IT development, but in the design, development and implementation of IT systems, China is still in its infancy compared with other industrialized countries. The main problem is that current information systems cannot provide sufficient information-sharing and communication capabilities. The situation is characterized by isolated information islands, redundant system structures, and inefficient and in some cases even failure-prone development. In this thesis, a tailored "design process model" is developed for the logical modeling of more holistic information systems in Chinese harbors. The "design process model" is envisaged not only as a standard process model for designing the IT system in the enterprise, but also as a collection of methods, patterns and rules that help designers apply the design concept; it therefore goes beyond a model and incorporates some components of a framework. The main purpose of the planned "design process model" is to automatically create more coherent, better-structured and better-documented system models for IT system development and to ensure logical relationships and coherence between the different models. Building the model includes the following steps: 1) identification of suitable models for developing IT systems; 2) specification of transformation rules between the different models; 3) formulation of semantics, syntax and notation for the process model; 4) development of a sustainable software development management approach.
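
    To make step 2 more concrete, the sketch below shows what a single transformation rule between two planning models could look like in code. The model elements, field names and the rule itself are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: a transformation rule mapping an element of one
# planning model (a process step) onto an element of another (an IT service).
# The classes, fields and rule are assumptions, not the thesis's models.

from dataclasses import dataclass

@dataclass
class ProcessStep:          # element of a business-process model
    name: str
    handles_cargo_data: bool

@dataclass
class ItService:            # element of an IT-system model
    name: str
    interfaces: list

def transform(step: ProcessStep) -> ItService:
    """One transformation rule: every process step becomes an IT service;
    steps that handle cargo data additionally get an EDI interface."""
    interfaces = ["port-community-portal"]
    if step.handles_cargo_data:
        interfaces.append("EDI")
    return ItService(name=f"{step.name}-service", interfaces=interfaces)

print(transform(ProcessStep("container-release", True)))
```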

    A Knowledge Enriched Computational Model to Support Lifecycle Activities of Computational Models in Smart Manufacturing

    To support lifecycle activities of computational models in Smart Manufacturing (SM), a Knowledge Enriched Computational Model (KECM) is proposed in this dissertation to capture domain knowledge and integrate it with standardized computational models. The KECM captures domain knowledge in information models, physics-based models, and rationales. To support model development in a distributed environment, the KECM can be used as the medium for formal information sharing between model developers; a case study demonstrates its use in supporting the construction of a Bayesian Network model. To support the deployment of computational models in SM systems, the KECM can be used for data integration between computational models and SM systems; a case study shows the deployment of a Constraint Programming optimization model into a Business To Manufacturing Markup Language (B2MML)-based system. Where multiple computational models need to be deployed together, the KECM can support their combination; a case study shows the combination of an Agent-based model and a Decision Tree model using the KECM. To support model retrieval, a semantics-based method is suggested in this dissertation; as an example, a dispatching-rule model retrieval problem is addressed with a semantics-based approach. The semantics-based approach has been verified and demonstrates good capability in using the KECM to retrieve computational models.
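
    The following sketch illustrates the general idea of enriching a computational model with machine-readable domain knowledge so that it can be retrieved by meaning, using the dispatching-rule retrieval example. The class layout, fields and matching rule are assumptions for illustration, not the KECM specification itself.

```python
# Hypothetical sketch: a computational model packaged with machine-readable
# domain knowledge so it can be retrieved by meaning rather than by name.
# Field names and the matching rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class KnowledgeEnrichedModel:
    name: str
    model_type: str                 # e.g. "dispatching-rule", "bayesian-network"
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    rationale: str = ""

def retrieve(models, available_inputs, wanted_output):
    """Return models whose declared semantics match the query."""
    return [m for m in models
            if wanted_output in m.outputs and m.inputs <= set(available_inputs)]

catalog = [
    KnowledgeEnrichedModel("EDD", "dispatching-rule",
                           {"due_date"}, {"job_order"},
                           "earliest due date first"),
    KnowledgeEnrichedModel("SPT", "dispatching-rule",
                           {"processing_time"}, {"job_order"},
                           "shortest processing time first"),
]

# Only the EDD rule matches the data this (invented) shop floor can supply.
print([m.name for m in retrieve(catalog, {"due_date", "release_date"}, "job_order")])
```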

    Forwarding and Control Element Separation (ForCES) Forwarding Element Model


    Configurable nD-visualization for complex Building Information Models

    With the ongoing development of building information modelling (BIM) towards a comprehensive coverage of all construction project information in a semantically explicit way, visual representations have become decoupled from the building information models. While traditional construction drawings implicitly contained the visual representation alongside the information, nowadays visual representations are generated on the fly, hard-coded in software applications dedicated to other tasks such as analysis, simulation, structural design or communication. Due to the abstract nature of information models and the increasing amount of digital information captured during construction projects, visual representations are essential for humans to access, understand and engage with the information. At the same time, digital media open up the new field of interactive visualizations. The full potential of BIM can only be unlocked with customized, task-specific visualizations, with engineers and architects actively involved in the design and development of these visualizations. The visualizations must be reusable and reliably reproducible during communication processes. Further, to support creative problem solving, it must be possible to modify and refine them. This thesis aims at reconnecting building information models and their visual representations: on a theoretical level, on the level of methods, and in terms of tool support. First, the research seeks to improve the knowledge about visualization generation in conjunction with current BIM developments such as the multimodel. The approach is based on the reference model of the visualization pipeline and addresses structural as well as quantitative aspects of visualization generation. Second, based on this theoretical foundation, a method is derived to construct visual representations from given visualization specifications; to this end, the idea of a domain-specific language (DSL) is employed. Finally, a software prototype proves the concept: using the visualization framework, visual representations can be generated from a specific building information model and a specific visualization description.
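
    As a rough illustration of such a declarative visualization description, the sketch below pairs a filter over model elements with a mapping to visual properties and applies it to a toy building model. The element structure and rule format are assumptions made for this example and do not reflect the thesis's DSL.

```python
# Illustrative sketch of a declarative visualization description: a filter
# over model elements plus a mapping to visual properties, applied to a toy
# building information model. Structure and rules are invented.

building_model = [
    {"type": "IfcWall",   "storey": 1, "fire_rating": "F90"},
    {"type": "IfcWall",   "storey": 2, "fire_rating": "F30"},
    {"type": "IfcWindow", "storey": 1, "fire_rating": None},
]

# A tiny "visualization description": which elements to show, and how.
viz_spec = {
    "filter": lambda e: e["type"] == "IfcWall",
    "color":  lambda e: "red" if e["fire_rating"] == "F30" else "grey",
}

def render(model, spec):
    """Map the filtered elements to visual primitives (here plain dicts)."""
    return [{"element": e["type"], "storey": e["storey"], "color": spec["color"](e)}
            for e in model if spec["filter"](e)]

for primitive in render(building_model, viz_spec):
    print(primitive)
```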

    Semantic Interoperability in Internet of Things

    With every passing day, we are connecting more devices to the Internet. These devices are of various types, ranging from personal devices (e.g., cell phones, computers, televisions, game consoles, home appliance controllers) to industrial devices (e.g., industrial robots, navigation equipment, medical equipment, self-driving vehicles, digitized monitoring of machines). The interaction of such devices over the Internet without human intervention introduced a new concept of connectivity named the Internet of Things. While the Internet of Things opens up possibilities for new services, it also brings problems with it. One of those problems is understanding the intended meaning and context of a communication. Successful communication comprises two parts: exchanging data between the communicating parties, and a common agreement on the meaning of the data. The whole process of exchanging data and perceiving its intended meaning is called interoperability; the latter part, where one needs to perceive the intended meaning of the data, is called Semantic Interoperability. The present communication methods in the Internet of Things are good enough to exchange data successfully, but they do not provide enough information to grasp the intended meaning of the data. Thus, we need a solution that provides Semantic Interoperability in the Internet of Things. A further difficulty is that the majority of IoT devices are resource constrained, so the required solution must address the Semantic Interoperability problem while respecting the limitations of constrained devices. This thesis describes and implements a solution to the Semantic Interoperability problem in the Internet of Things.
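
    A minimal sketch of the underlying idea, assuming a simple JSON payload: the sensor reading carries its unit and a reference to a shared vocabulary term, so a receiver can interpret it without out-of-band agreements. The vocabulary URI and payload layout are illustrative and are not the solution implemented in the thesis.

```python
# Hedged sketch: attaching explicit semantics (quantity kind, unit, ontology
# reference) to a sensor reading. The vocabulary URI is an invented example.

import json

reading = {
    "value": 21.5,
    "unit": "Cel",                                            # degrees Celsius
    "quantity": "https://example.org/vocab#AirTemperature",   # assumed ontology term
    "sensor": "urn:dev:mac:0024befffe804ff1",
}

def interpret(payload: str) -> str:
    """A consumer that uses the semantic annotations instead of guessing."""
    r = json.loads(payload)
    return f"{r['quantity'].rsplit('#', 1)[-1]} = {r['value']} {r['unit']}"

print(interpret(json.dumps(reading)))   # AirTemperature = 21.5 Cel
```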

    Personal Knowledge Models with Semantic Technologies

    Conceptual Data Structures (CDS) is a unified meta-model for representing knowledge cues in varying degrees of granularity, structuredness, and formality. CDS consists of: (1) a simple, expressive data model; (2) a relation ontology which unifies the relations found in the cognitive models of personal knowledge management tools, e.g., documents, mind-maps, hypertext, or semantic wikis; and (3) an interchange format for structured text. Implemented prototypes have been evaluated.
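
    A small sketch of what knowledge cues linked by a unified set of relations might look like in code; the relation names and classes below are assumptions for illustration, not the actual CDS data model.

```python
# Minimal sketch (assumed structure, not the actual CDS implementation) of
# knowledge cues as items linked by a small unified set of relation types.

RELATIONS = {"contains", "links_to", "annotates"}   # illustrative relation ontology

class Item:
    def __init__(self, text):
        self.text = text
        self.edges = []            # (relation, target) pairs

    def relate(self, relation, target):
        assert relation in RELATIONS, f"unknown relation: {relation}"
        self.edges.append((relation, target))

note = Item("Project kick-off")
todo = Item("Prepare agenda")
note.relate("contains", todo)      # mind-map style containment
todo.relate("links_to", note)      # hypertext-style cross reference

print([(rel, target.text) for rel, target in note.edges])
```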

    A Process Model for the Integrated Reasoning about Quantitative IT Infrastructure Attributes

    IT infrastructures can be quantitatively described by attributes such as performance or energy efficiency. Ever-changing user demands and economic pressures require varying short-term and long-term decisions to align an IT infrastructure, and particularly its attributes, with this dynamic environment. Potentially conflicting attribute goals and the central role of IT infrastructures call for decision making based upon reasoning, the process of forming inferences from facts or premises. Existing reasoning approaches are unsuitable for this purpose because they focus on specific parts of an IT infrastructure or on a fixed (small) set of attributes: they neither cover the (complex) interplay of all IT infrastructure components simultaneously, nor do they address inter- and intra-attribute correlations sufficiently. This thesis presents a process model for the integrated reasoning about quantitative IT infrastructure attributes. The process model's main idea is to formalize the compilation of an individual reasoning function, a mathematical mapping of parametric influencing factors and modifications onto an attribute vector. Compilation builds upon model integration in order to benefit from the multitude of existing specialized, elaborated, and well-established attribute models. The resulting reasoning function consumes an individual tuple of IT infrastructure components, attributes, and external influencing factors, which gives it broad applicability. The process model formalizes a reasoning intent in three phases. First, reasoning goals and parameters are collected in a reasoning suite and formalized in a reasoning function skeleton. Second, the skeleton is iteratively refined, guided by the reasoning suite. Third, the resulting reasoning function is employed for what-if analyses, optimization, or descriptive statistics to conduct the concrete reasoning. The process model provides five template classes that collectively formalize all phases in order to foster reproducibility and to reduce error-proneness. Validation is threefold: a controlled experiment reasons about a Raspberry Pi cluster's performance and energy efficiency to illustrate feasibility, while a requirements analysis on a world-class supercomputer and on the Europe-wide execution of hydro-meteorology simulations, together with an examination of related work, establishes the process model's level of innovation. Potential future work employs the prepared automation capabilities, integrates human factors, and uses reasoning results for the automatic generation of modification recommendations.
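
    As a toy illustration of a compiled reasoning function, the sketch below maps a cluster size and a utilization factor onto an attribute vector of throughput and energy efficiency and uses it for a small what-if comparison. All constants and formulas are invented and do not come from the thesis's models.

```python
# Hedged sketch of the idea of a compiled reasoning function: a mapping from
# infrastructure parameters and influencing factors onto an attribute vector
# (throughput, energy efficiency). Constants are invented toy values.

def reasoning_function(num_nodes: int, cpu_utilization: float) -> dict:
    throughput = num_nodes * 120.0 * cpu_utilization        # jobs/hour (toy model)
    power = num_nodes * (2.0 + 3.5 * cpu_utilization)        # watts (toy model)
    return {
        "throughput_jobs_per_h": throughput,
        "energy_efficiency_jobs_per_wh": throughput / power,
    }

# What-if analysis: compare two (hypothetical) cluster sizes at 80% utilization.
for nodes in (4, 8):
    print(nodes, reasoning_function(nodes, 0.8))
```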