
    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, to the point that data has become the lifeblood of the global economy. This data may come from heterogeneous equipment, components, sensors, systems and applications in many varieties (diversity of sources), velocities (high rate of change) and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage and filter data, the real value lies in the analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively gain a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. In practice, two distinct paradigms can be discerned within the field of analytics: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained. However, such models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employs machine learning (ML) to learn a model directly from the data with minimal human intervention. However, such models are tuned to the data and context they were trained on, which makes them difficult to adapt. Industries that want to create value from data must master both paradigms in combination. There is therefore a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows actionable insights to be extracted from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism that captures the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    • A new ontology language, Semantically defined Analytical Language (SAL), on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method to generate semantic workflows using our SAL language. It supports authoring, reusing and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, this thesis is one of the first works to introduce and investigate semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics over semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights. Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches in most application scenarios.
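The abstract does not give SAL's concrete syntax. As a rough illustration of the underlying idea, a DatalogMTL-style rule extended with an analytical function (here a windowed average) can be emulated in plain Python; the rule, the predicate names and the threshold below are hypothetical and not taken from the thesis.

```python
# Illustrative sketch only: emulates a DatalogMTL-style rule whose body applies
# an analytical function (a 10-minute sliding average) to temporal facts.
# Informal rule (hypothetical notation):
#   Overheating(c)@t  <-  avg over [t-10min, t] of Temperature(c) > 90.0
from datetime import datetime, timedelta
from typing import Dict, List, Tuple

WINDOW = timedelta(minutes=10)
THRESHOLD = 90.0

def derive_overheating(
    temperature: Dict[str, List[Tuple[datetime, float]]]
) -> List[Tuple[str, datetime]]:
    """Derive Overheating(component)@t facts from timestamped Temperature facts."""
    derived = []
    for component, series in temperature.items():
        series = sorted(series)
        for t, _ in series:
            window = [v for ts, v in series if t - WINDOW <= ts <= t]
            if window and sum(window) / len(window) > THRESHOLD:  # analytical function
                derived.append((component, t))
    return derived

if __name__ == "__main__":
    start = datetime(2020, 1, 1, 12, 0)
    readings = {"pump-1": [(start + timedelta(minutes=i), 85.0 + i) for i in range(12)]}
    print(derive_overheating(readings))
```

In the thesis, such rules would instead be authored declaratively in SAL over the TechOnto ontologies, so that analytical tasks and workflows can be generated, reused and maintained at the semantic level rather than hand-coded as above.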

    Automated Knowledge Base Quality Assessment and Validation based on Evolution Analysis

    In recent years, numerous efforts have been put towards sharing Knowledge Bases (KBs) in the Linked Open Data (LOD) cloud. These KBs are being used for various tasks, including performing data analytics or building question answering systems. Such KBs evolve continuously: their data (instances) and schemas can be updated, extended, revised and refactored. However, unlike in more controlled types of knowledge bases, the evolution of KBs exposed in the LOD cloud is usually unrestrained, which may cause the data to suffer from a variety of quality issues, both at a semantic level and at a pragmatic level. This situation negatively affects data stakeholders such as consumers and curators. Data quality is commonly understood as fitness for use for a certain application or use case. Therefore, ensuring the quality of the data of an evolving knowledge base is vital. Since the data is derived from autonomous, evolving, and increasingly large data providers, manual data curation is impractical, and at the same time a continuous automatic assessment of data quality is very challenging. Ensuring the quality of a KB is a non-trivial task, since KBs are based on a combination of structured information supported by models, ontologies, and vocabularies, as well as queryable endpoints, links, and mappings. Thus, in this thesis, we explored two main areas in assessing KB quality: (i) quality assessment using KB evolution analysis, and (ii) validation using machine learning models. The evolution of a KB can be analyzed using fine-grained "change" detection at a low level or using the "dynamics" of a dataset at a high level. In this thesis, we present a novel knowledge base quality assessment approach based on evolution analysis. The proposed approach uses data profiling on consecutive knowledge base releases to compute quality measures that allow quality issues to be detected. The first step in building the quality assessment approach was to identify the quality characteristics. Using high-level change detection as measurement functions, we present four quality characteristics: Persistency, Historical Persistency, Consistency and Completeness. The persistency and historical persistency measures concern the degree of change and the lifespan of entity types. The consistency and completeness measures identify properties with incomplete information and contradictory facts. The approach has been assessed both quantitatively and qualitatively on a series of releases from two knowledge bases: eleven releases of DBpedia and eight releases of 3cixty Nice. However, high-level changes, being coarse-grained, cannot capture all possible quality issues. In this context, we present a validation strategy whose rationale is twofold. First, manual validation based on qualitative analysis is used to identify the causes of quality issues. Then, RDF data profiling information is used to generate integrity constraints. The validation approach relies on the idea of inducing RDF shapes by exploiting SHACL constraint components. In particular, the approach learns which integrity constraints can be applied to a large KB by running a process of statistical analysis followed by a learning model. We illustrate the performance of our validation approach using five learning models over three sub-tasks, namely minimum cardinality, maximum cardinality, and range constraints. The quality assessment and validation techniques developed in this work are automatic and can be applied to different knowledge bases independently of their domain. Furthermore, the measures are based on simple statistical operations, which makes the solution both flexible and scalable.
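As a rough illustration of the profiling-based measures described above, the sketch below is a simplification rather than the thesis' actual formulas; the function name and the example counts are invented. It flags entity types whose instance counts shrink between two consecutive releases, which is the intuition behind the persistency characteristic.

```python
# Simplified sketch of a persistency-style check over two consecutive KB releases.
# Input: per-release profiling results mapping an entity type to its instance count.
from typing import Dict, List

def persistency_issues(previous: Dict[str, int], current: Dict[str, int]) -> List[str]:
    """Return entity types whose instance count dropped (or that disappeared)
    between two consecutive releases, a possible sign of quality problems."""
    issues = []
    for entity_type, prev_count in previous.items():
        if current.get(entity_type, 0) < prev_count:
            issues.append(entity_type)
    return issues

if __name__ == "__main__":
    # Invented example counts, not actual DBpedia figures.
    release_n = {"dbo:Place": 816_000, "dbo:Person": 1_450_000}
    release_n_plus_1 = {"dbo:Place": 802_000, "dbo:Person": 1_460_000}
    print(persistency_issues(release_n, release_n_plus_1))  # ['dbo:Place']
```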

    Computer-Aided Validation of Formal Conceptual Models

    Conceptual modelling is the phase of the software life cycle concerned with the identification and specification of requirements for the system to be built. The use of formal specification languages provides more precise and concise specifications. Nevertheless, there is still a need for techniques that support the validation of formal specifications against the informal user requirements. A limitation of formal specifications is that they cannot readily be understood by users unless they have been specially trained. However, user validation can be facilitated by exploiting the executable aspects of formal specification languages. This thesis presents a systematic approach and a workbench environment to support the construction of TROLL specifications and their validation through animation. Our approach is an iterative requirements definition process consisting of the formal specification of requirements, the automatic transformation of the specification into an executable form, and the interactive animation of the executable version to validate the user requirements. To provide objects with persistence in the animation environment, we analyse how the static structure of TROLL objects can be mapped onto relational tables. In order to execute the specification, we analyse the operational meaning of state transitions in TROLL, determine an execution model, and describe the transformation of the specifications into C++ code. We present a prototype implementation of the workbench environment.
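The mapping of the static structure of TROLL objects onto relational tables is not spelled out in the abstract; the following minimal Python sketch only illustrates the general idea of deriving one table per object type with one column per attribute (the type names, attribute names and SQL type mapping are invented, not the thesis' actual mapping rules).

```python
# Hypothetical sketch: derive a relational table from an object type's attribute
# signature so that object states can be stored persistently during animation.
from typing import Dict

SQL_TYPES = {"int": "INTEGER", "string": "VARCHAR(255)", "bool": "BOOLEAN"}

def table_for_object_type(name: str, attributes: Dict[str, str]) -> str:
    """Emit a CREATE TABLE statement with one column per attribute plus an
    object identifier column serving as the primary key."""
    columns = ["object_id VARCHAR(64) PRIMARY KEY"]
    columns += [f"{attr} {SQL_TYPES[ty]}" for attr, ty in attributes.items()]
    return f"CREATE TABLE {name} (\n  " + ",\n  ".join(columns) + "\n);"

if __name__ == "__main__":
    # Invented example object type, not taken from the thesis.
    print(table_for_object_type("Account", {"balance": "int", "owner": "string"}))
```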

    Environmental Information Systems and Community-Based Resource Management in Ghana: An Investigation of Institutional Policy and Implementation in Context

    This study employed a case-study approach and cross-case analysis to investigate the impact of Environmental Information Systems (EIS) and Local Knowledge Systems (LKS) on agro-forestry management and biodiversity conservation. Questionnaire-based interviews with service providers and resource managers, and focus group discussions with farmers associated with the United Nations Capacity 21, the Netherlands Tropenbos International (TBI) and the United Nations Project on People Land Management and Conservation (UNPLEC) projects, yielded in-depth information on agro-forestry practices in southern Ghana. The findings of the survey revealed that computer-based information systems have been used to identify areas of resource degradation. This has served as a sensitization tool to organize and intensify tree-planting exercises and agroforestry management activities in the affected areas. Evaluation of individual cases and cross-case analysis of EIS projects in Ghana showed parallels and divergences in the modus operandi of EIS implementation at national and district levels. The Capacity 21 project initiated the District Environmental Resource Information System (DERIS). The project procured datasets and equipment (e.g. satellite images, software, computers and printers) in 8 pilot districts, including the Sekyere West and Assin Fosu Districts, and offered training and skill development programmes under the auspices of the Centre for Environmental Remote Sensing and Geographic Information Services (CERSGIS) to equip focal district planning officers to use tools and datasets to analyze the state of the environment and the extent of resource degradation as well as other development-related activities. This fostered cooperation between the national coordinator of the project, district planners and local farmers to organize regular tree-planting exercises and workshops on alternative livelihood activities, which have helped to lessen pressure on the environment to some extent. This approach exhibits a greater degree of top-down planning and implementation. The field survey revealed that PLEC used computer-based information systems during the earlier stages of the project to demarcate demonstration sites and capture spatio-temporal variations in agro-ecological conditions. However, during the subsequent phases, the PLEC project relied predominantly on local agro-ecological knowledge from a diverse group of farmers to assess resource conditions, and promoted the use of various traditional and exotic agro-forestry and agro-diversity management techniques in the Manya Krobo and Suhum Kraboa Coaltar Districts. The PLEC approach was more bottom-up in philosophy and practice: natural and social scientists learned from farmers and in turn offered technical advice that enabled farmers to improve their local farming techniques and maximize their farm productivity, while enhancing the capacity of the biophysical environment to continually support conventional and alternative livelihood activities. The Tropenbos International (TBI) project exhibits elements of both top-down and bottom-up implementation approaches. It recognizes the significant role of tailor-made information (computer-based systems and socio-economic studies, mainly from the Forest Services Commission and the University of Ghana, respectively) and skills in forest management. The TBI GORTMAN project streamlined the capacity for information collection in the Goaso and Offinso districts.
The findings revealed that farmers associated with the three projects apply various knowledge systems and techniques in agroforestry management. These include mixed cultivation of domestic, economic and medicinal trees as well as food crops. Farmers cited reasons such as windbreaks, construction materials, medicine, food, fuelwood and nutrient enhancement for practicing agroforestry. Common food crops found on farms include cocoyam, okro, maize, plantain and yams, among others. These crops are the mainstay of family food and income sources. Other livelihood activities include beekeeping, snail rearing, grasscutter raising and livestock breeding. The diversity of agroforestry practices has engendered decades of farm management practices and resource conservation measures. Another challenge of agroforestry management common to all three projects is that farmers are victims of indiscriminate felling of trees on their farms by timber companies, which destroys their crops. Farmers repeatedly cited logistical challenges (tools, seedlings, etc.) and financial constraints as factors that hamper the effective application of knowledge systems in agroforestry management. This is a dominant problem faced by PLEC and TBI farmers. Capacity 21 farmers initially benefited from logistical supplies, but this support was short-lived. In view of these problems, the study recommended measures for improving the application of environmental information systems and local knowledge systems in agroforestry management and agrodiversity conservation in southern Ghana.

    Army-NASA aircrew/aircraft integration program (A3I) software detailed design document, phase 3

    The capabilities and design approach of the MIDAS (Man-machine Integration Design and Analysis System) computer-aided engineering (CAE) workstation under development by the Army-NASA Aircrew/Aircraft Integration Program are detailed. This workstation uses graphic, symbolic, and numeric prototyping tools and human performance models as part of an integrated design/analysis environment for crewstation human engineering. The workstation is being developed incrementally; the requirements and design for Phase 3 (Dec. 1987 to Jun. 1989) are described here. Software tools/models developed or significantly modified during this phase included: an interactive 3-D graphic cockpit design editor; multiple-perspective graphic views to observe simulation scenarios; symbolic methods to model the mission decomposition, equipment functions, pilot tasking and loading, as well as to control the simulation; a 3-D dynamic anthropometric model; an intermachine communications package; and a training assessment component. These components were successfully used during Phase 3 to demonstrate the complex interactions and human engineering findings involved with a proposed cockpit communications design change in a simulated AH-64A Apache helicopter/mission that maps to empirical data from a similar study and AH-1 Cobra flight test.

    Processes and Tools for Decision Support

    The change in description of Decision Support Systems (DSS) from "concept" through "movement" to "bandwagon" clearly illustrates the growing interest, in management practice as well as in research, in decision support systems in their different manifestations. One may expect that a conference on processes and tools for decision support brings together people from practice and research whose experiences give insight into the direction the bandwagon is likely to go. This brings us to the question of how expertise on processes and tools for decision support can be stored in such a way that one can get advice when developing future DSS. This calls for the definition and construction of a knowledge base into which the expertise can be brought. In section 2 of this book, the authors address possible frameworks for such a knowledge base. In section 3 they place the presented papers into the chosen framework. Finally, an attempt is made to draw inferences from the contributed papers.

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.