6,700 research outputs found

    Towards a Conceptual Framework for Persistent Use: A Technical Plan to Achieve Semantic Interoperability within Electronic Health Record Systems

    Semantic interoperability within the health care sector requires that patient data be fully available and shared without ambiguity across participating health facilities. Ongoing discussions on achieving interoperability within the health care industry continue to emphasize the need for healthcare facilities to successfully adopt and implement Electronic Health Record (EHR) systems. The healthcare industry's reluctance to implement EHRs for the purpose of achieving interoperability motivates the research problem: there is no existing single data standardization structure that can effectively share and interpret patient data across heterogeneous systems. The proposed research introduces a master data standardization and translation (MDST) model, XDataRDF, which incorporates the Resource Description Framework (RDF) to allow the seamless exchange of healthcare data among multiple facilities. Using RDF allows multiple data models and vocabularies to be easily combined and interrelated within a single environment, thereby reducing data definition ambiguity.
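
    As a minimal sketch of the kind of merging RDF enables (not the XDataRDF model itself), the following Python snippet uses rdflib to combine records about one patient that two facilities describe with different vocabularies; the namespaces and the owl:sameAs mapping are illustrative assumptions.

        # Hypothetical example: two facilities, two vocabularies, one merged RDF graph.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import OWL, RDF, FOAF

        EX_A = Namespace("http://facility-a.example.org/ehr/")  # assumed vocabulary of facility A
        EX_B = Namespace("http://facility-b.example.org/ehr/")  # assumed vocabulary of facility B

        g = Graph()

        # Facility A records a diagnosis using its own terms.
        patient_a = EX_A["patient/123"]
        g.add((patient_a, RDF.type, FOAF.Person))
        g.add((patient_a, EX_A.diagnosis, Literal("Type 2 diabetes mellitus")))

        # Facility B records a lab value for what is in fact the same patient.
        patient_b = EX_B["subject/abc"]
        g.add((patient_b, EX_B.bloodGlucose, Literal(7.9)))

        # A translation/mapping layer asserts that both identifiers denote one patient,
        # so facts from both vocabularies can be queried from a single graph.
        g.add((patient_a, OWL.sameAs, patient_b))

        print(g.serialize(format="turtle"))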

    An ontological modelling of multi-attribute criticality analysis to guide Prognostics and Health Management program development

    Digital technologies are becoming more pervasive, and industrial companies are exploiting them to enhance the potential of Prognostics and Health Management (PHM). Indeed, PHM makes it possible to evaluate the health state of physical assets and to predict their future behaviour. To develop PHM programs effectively, the most critical assets should be identified so as to direct modelling efforts. Several techniques can be adopted to evaluate asset criticality; in industrial practice, criticality analysis is among the most utilised. Despite advances in artificial intelligence for data analysis and prediction, criticality analysis, which builds on both quantitative and qualitative data, has not improved accordingly. The goal of this work is to propose an ontological formalisation of multi-attribute criticality analysis in order to i) fix the semantics of the terms involved in the analysis, ii) standardise and make uniform the way criticality analysis is performed, and iii) exploit reasoning capabilities to automatically evaluate asset criticality and associate a suitable maintenance strategy. The developed ontology, called MOCA, is tested in a food company with a global footprint. The application shows that MOCA accomplishes the stated goals; in particular, it identifies the high-priority assets towards which PHM programs should be directed. In the long run, ontologies could serve as a single knowledge base that integrates data and information across facilities in a consistent way. As such, they will enable advanced analytics and a move towards cognitive Cyber-Physical Systems that enhance business performance for companies operating worldwide.
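
    As a rough illustration of the reasoning pattern such an ontology can encode (a sketch, not the MOCA ontology itself), the following Python snippet scores assets over several attributes and attaches a maintenance strategy; the attribute names, weights, and thresholds are assumptions made up for the example.

        from dataclasses import dataclass

        @dataclass
        class Asset:
            name: str
            safety_impact: int       # 1 (negligible) .. 5 (severe), assumed scale
            production_impact: int   # 1 .. 5
            failure_frequency: int   # 1 .. 5

        # Assumed weights for the multi-attribute criticality score.
        WEIGHTS = {"safety_impact": 0.5, "production_impact": 0.3, "failure_frequency": 0.2}

        def criticality(asset: Asset) -> float:
            """Weighted multi-attribute criticality score in [1, 5]."""
            return sum(getattr(asset, attr) * w for attr, w in WEIGHTS.items())

        def maintenance_strategy(score: float) -> str:
            """Rule of the kind used to associate a suitable maintenance strategy."""
            if score >= 4.0:
                return "predictive (candidate for a PHM program)"
            if score >= 2.5:
                return "preventive"
            return "run-to-failure"

        for a in [Asset("packaging line motor", 4, 5, 3), Asset("spare conveyor", 1, 2, 2)]:
            s = criticality(a)
            print(f"{a.name}: score={s:.2f}, strategy={maintenance_strategy(s)}")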

    HybridMDSD: Multi-Domain Engineering with Model-Driven Software Development using Ontological Foundations

    Software development is a complex task. Executable applications comprise a multitude of diverse components that are developed with various frameworks, libraries, or communication platforms. The technical complexity in development ties up resources, hampers efficient problem solving, and thus increases the overall cost of software production. Another significant challenge in market-driven software engineering is the variety of customer needs, which necessitates a maximum of flexibility in software implementations to facilitate the deployment of different products based on one single core. To reduce technical complexity, the paradigm of Model-Driven Software Development (MDSD) facilitates the abstract specification of software based on modeling languages. The corresponding models are used to generate actual programming code without the need for creating manually written, error-prone assets. Modeling languages that are tailored towards a particular domain are called domain-specific languages (DSLs). Domain-specific modeling (DSM) brings technical solutions closer to the problems they are intended to solve and fosters the development of specialized expertise. To cope with feature diversity in applications, the Software Product Line Engineering (SPLE) community provides means for the management of variability in software products, such as feature models and appropriate tools for mapping features to implementation assets. Model-driven development, domain-specific modeling, and the dedicated management of variability in SPLE are vital for the success of software enterprises. Yet these paradigms exist in isolation and need to be integrated in order to exhaust the advantages of every single approach. In this thesis, we propose a way to do so. We introduce the paradigm of Multi-Domain Engineering (MDE), which means model-driven development with multiple domain-specific languages in variability-intensive scenarios. MDE strongly emphasizes the advantages of MDSD with multiple DSLs as a necessity for efficiency in software development and treats the paradigm of SPLE as an indispensable means to achieve a maximum degree of reuse and flexibility. We present HybridMDSD as our solution approach to implement the MDE paradigm. The core idea of HybridMDSD is to capture the semantics of particular DSLs based on properly defined semantics for software models contained in a central upper ontology. The resulting semantic foundation can then be used to establish references between arbitrary domain-specific models (DSMs), while sophisticated instance-level reasoning ensures integrity and makes it possible to handle particular change adaptation scenarios. Moreover, we present an approach to automatically generate composition code that integrates generated assets from separate DSLs. All necessary development tasks are arranged in a comprehensive development process. Finally, we validate the introduced approach with a thorough prototypical implementation and an industrial-scale case study.
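
    A minimal sketch of the core idea described above, in plain Python rather than the thesis' tooling: elements of different DSL models are annotated with concepts from a shared upper ontology, and a simple instance-level check validates that cross-DSL references connect known, semantically annotated elements. The concept names and model contents are illustrative assumptions.

        # Assumed upper-ontology concepts shared by all DSLs.
        UPPER_ONTOLOGY = {
            "Service": {"is_a": "BehaviouralElement"},
            "Entity": {"is_a": "StructuralElement"},
            "Page": {"is_a": "PresentationElement"},
        }

        # Element -> upper-ontology concept, per domain-specific model (semantic annotation).
        data_dsl_model = {"Customer": "Entity", "Order": "Entity"}
        ui_dsl_model = {"CustomerPage": "Page"}

        # Cross-DSL references established via the shared semantic foundation.
        references = [("CustomerPage", "Customer"), ("CustomerPage", "Invoice")]

        def check_references(refs, *models):
            """Instance-level integrity check: every referenced element must exist in
            some DSL model and carry a known upper-ontology concept."""
            merged = {name: concept for model in models for name, concept in model.items()}
            for source, target in refs:
                if target not in merged:
                    print(f"Integrity violation: {source} -> {target} (unknown element)")
                elif merged[target] not in UPPER_ONTOLOGY:
                    print(f"Integrity violation: {target} has no upper-ontology concept")
                else:
                    print(f"OK: {source} -> {target} ({merged[target]})")

        check_references(references, data_dsl_model, ui_dsl_model)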

    Knowledge Representation in Engineering 4.0

    This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair examines methods of specification and cooperation in the automotive value chain from OEM to Tier 1 and Tier 2. The goal of the projects is to improve communication and collaborative planning, especially in early development stages. Besides SysML, the projects target the use of agreed vocabularies and ontologies for modeling requirements, the overall context, variants, and many other items. This thesis proposes a web database in which data from collaborative requirements elicitation is combined with an ontology-based approach that uses reasoning capabilities. For this purpose, state-of-the-art ontologies covering domains such as hardware/software, roadmapping, IoT, context, innovation, and others have been investigated and integrated. New ontologies have been designed, such as a hardware/software allocation ontology and a domain-specific "eFuse ontology", as well as some prototypes. The result is a modular ontology suite and the GENIAL! Basic Ontology, which allows us to model automotive and microelectronic functions, components, properties, and the dependencies among these elements based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions, such as future trends in legislation, society, and the environment, is included. These knowledge bases are integrated in a novel tool that allows for collaborative innovation planning and requirements communication along the automotive value chain. To start off the work of the project, an architecture and a prototype tool were developed. Designing ontologies and knowing how to use them proved to be a non-trivial task requiring a lot of context and background knowledge. Some of this background knowledge has been selected for presentation and was utilized either in designing models or for later immersion. Examples are basic foundations such as design guidelines for ontologies, ontology categories, and a continuum of expressiveness of languages, as well as advanced content such as multi-level theory, foundational ontologies, and reasoning. Finally, we demonstrate the overall framework and show the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, by way of example, we explore and solve roadmap constraints that are coupled with a car model through a constraint solver.
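
    A minimal sketch of the final step described above, assuming made-up component properties rather than the project's actual APPEL/SysMD knowledge base: roadmap knowledge about when component variants become available is coupled with a car model's requirements and checked by a small brute-force constraint solver in Python.

        # Assumed roadmap knowledge: candidate eFuse variants with availability year and rating.
        roadmap = {
            "efuse_gen1": {"available": 2023, "max_current_a": 20},
            "efuse_gen2": {"available": 2025, "max_current_a": 40},
        }

        # Assumed car-model requirements: start of production and required load current.
        car_model = {"start_of_production": 2025, "required_current_a": 30}

        def satisfies(component: dict, car: dict) -> bool:
            """Constraints: the part must be available before start of production
            and must carry the required load current."""
            return (component["available"] <= car["start_of_production"]
                    and component["max_current_a"] >= car["required_current_a"])

        feasible = [name for name, props in roadmap.items() if satisfies(props, car_model)]
        print("Feasible components:", feasible or "none -- roadmap conflicts with the car model")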

    A Hierarchical Core Reference Ontology for New Technology Insertion Design in Long Life Cycle, Complex Mission Critical Systems

    Organizations, including government, commercial, and others, face numerous challenges in maintaining and upgrading long life-cycle, complex, mission-critical systems. Maintaining and upgrading these systems requires the insertion and integration of new technology to avoid obsolescence of hardware, software, and human skills, to improve performance, to maintain and improve security, and to extend useful life. This is particularly true of information technology (IT) intensive systems. The lack of a coherent body of knowledge to organize new technology insertion theory and practice is a significant contributor to this difficulty. This research organized the existing design, technology road-mapping, obsolescence, and sustainability literature into an ontology of theory and application that serves as the foundation for a hierarchical core reference ontology for technology design and technology insertion design, and it laid the groundwork for a body of knowledge that better integrates the new technology insertion problem into the technology design architecture.
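
    A minimal sketch of the layering such a hierarchical core reference ontology implies, with class names invented for illustration (they are not the dissertation's actual concepts): a foundational layer, a core reference layer for technology design and insertion, and a domain-level extension.

        # Illustrative layering of a hierarchical core reference ontology.
        class FoundationalConcept:                                   # top-level foundational layer
            pass

        class DesignActivity(FoundationalConcept):                   # core reference layer: technology design
            pass

        class TechnologyInsertionDesign(DesignActivity):             # core reference layer: technology insertion design
            pass

        class ObsolescenceDrivenRefresh(TechnologyInsertionDesign):  # domain-level concept
            pass

        # Walking the hierarchy shows how a domain concept inherits from the
        # core reference and foundational layers.
        print([cls.__name__ for cls in ObsolescenceDrivenRefresh.__mro__[:-1]])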

    Human-Intelligence and Machine-Intelligence Decision Governance Formal Ontology

    Since the beginning of the human race, decision making and rational thinking have played a pivotal role in whether mankind exists and succeeds or fails and becomes extinct. Self-awareness, cognitive thinking, creativity, and emotional magnitude allowed us to advance civilization and to take further steps toward achieving previously unreachable goals. From the invention of the wheel to rockets, and from the telegraph to satellites, all technological ventures went through many upgrades and updates. Recently, increasing computer CPU power and memory capacity have contributed to smarter and faster computing appliances that, in turn, have accelerated the integration and use of artificial intelligence (AI) in organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational systems, including healthcare and medical diagnosis, automated stock trading, robotic production, telecommunications, space exploration, and homeland security. Self-driving cars and drones are just the latest extensions of AI. This thrust of AI into organizations and daily life rests on the AI community's unstated assumption of its ability to completely replicate human learning and intelligence in AI. Unfortunately, even today the AI community is not close to completely coding and emulating human intelligence in machines. Despite the digital and technological revolution at the application level, there has been little to no research addressing the question of decision-making governance in human-intelligent and machine-intelligent (HI-MI) systems. There also exist no foundational, core reference, or domain ontologies for HI-MI decision governance systems. Further, in the absence of an expert reference base or body of knowledge (BoK) integrated with an ontological framework, decision makers must rely on best practices or standards that differ from organization to organization and government to government, contributing to system failures in complex, mission-critical situations. It is still debatable whether and when human or machine decision capacity should govern, or when a joint HI-MI decision capacity is required, in any given decision situation. To address this deficiency, this research establishes a formal, top-level foundational ontology of HI-MI decision governance, in parallel with a grounded-theory-based body of knowledge, which together form the theoretical foundation of a systemic HI-MI decision governance framework.