90 research outputs found

    Design Objectives for Evolvable Knowledge Graphs

    Knowledge graphs (KGs) structure knowledge to enable the development of intelligent systems across several application domains. In industrial maintenance, comprehensive knowledge of the factory, machinery, and components is indispensable. This study defines objectives for evolvable KGs, building upon our prior research, in which we initially identified the problem in industrial maintenance. Our contributions are twofold: a categorization of learning within the KG construction process, and a set of design objectives for a KG process that supports industrial maintenance. The categorization highlights the specific requirements for KG design, emphasizing the importance of planning for maintenance and reuse.
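
    As a generic illustration of what planning for maintenance and reuse can mean at the data level (the mini-schema below is invented for illustration and is not taken from the paper), each assertion in a maintenance KG can carry provenance and a validity flag, so that later construction steps can revise or retire knowledge without losing its history:

        # Invented mini-schema: KG assertions carry provenance so the graph
        # can evolve; not the schema proposed in the paper.
        from dataclasses import dataclass

        @dataclass
        class Assertion:
            subject: str
            predicate: str
            obj: str
            source: str       # who asserted it: sensor, manual, domain expert
            valid: bool = True

        kg = [Assertion("Pump_7", "hasComponent", "Bearing_3", source="asset_manual")]

        def retract(kg, subject, predicate, obj):
            """Mark an assertion invalid instead of deleting it (history kept)."""
            for a in kg:
                if (a.subject, a.predicate, a.obj) == (subject, predicate, obj):
                    a.valid = False

        retract(kg, "Pump_7", "hasComponent", "Bearing_3")  # bearing replaced
        print([a for a in kg if a.valid])                   # current view: []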

    Knowledge-centric autonomic systems

    Autonomic computing revolutionised the commonplace understanding of proactiveness in the digital world by introducing self-managing systems. Built on top of IBM's structural and functional recommendations for implementing intelligent control, autonomic systems are meant to pursue high-level goals while adequately responding to changes in the environment, with a minimum amount of human intervention. One of the lead challenges in implementing this type of behaviour in practical situations stems from the way autonomic systems manage their inner representation of the world. Specifically, all the components involved in the control loop have shared access to the system's knowledge, which, for seamless cooperation, needs to be kept consistent at all times.
    A possible solution lies with another popular technology of the 21st century, the Semantic Web, and the knowledge representation media it fosters: ontologies. These formal yet flexible descriptions of the problem domain are equipped with reasoners, inference tools that, among other functions, check knowledge consistency. The immediate application of reasoners in an autonomic context is to ensure that all components share and operate on a logically correct and coherent "view" of the world. At the same time, ontology change management is a difficult task to complete with semantic technologies alone, especially if little to no human supervision is available. This invites the idea of delegating change management to an autonomic manager, as the intelligent control loop it implements is engineered specifically for that purpose.
    Despite the inherent compatibility between autonomic computing and semantic technologies, their integration is non-trivial and insufficiently investigated in the literature. This gap represents the main motivation for this thesis. Moreover, existing attempts at provisioning autonomic architectures with semantic engines are bespoke solutions for specific problems (load balancing in autonomic networking, deconflicting high-level policies, and informing the correlation of diverse enterprise data are just a few examples). The main drawback of these efforts is that they provide only limited scope for reuse and cross-domain analysis: design guidelines, architectural models that scale well across different applications, and modular components that could be integrated in other systems are poorly represented.
    This work proposes KAS (Knowledge-centric Autonomic System), a hybrid architecture combining semantic tools, namely:
    • an ontology to capture domain knowledge,
    • a reasoner to keep domain knowledge consistent as well as infer new knowledge,
    • a semantic querying engine,
    • a tool for semantic annotation analysis,
    with a customised autonomic control loop featuring:
    • a novel algorithm for extracting knowledge authored by the domain expert,
    • "software sensors" to monitor user requests and environment changes,
    • a new algorithm for analysing the monitored changes, matching them against known patterns, and producing plans for taking the necessary actions,
    • "software effectors" to implement the planned changes and modify the ontology accordingly.
    The purpose of KAS is to act as a blueprint for the implementation of autonomic systems that harness semantic power to improve self-management. To this end, two KAS instances were built and deployed in two different problem domains: self-adaptive document rendering and autonomic decision support for career management. The former case study is a desktop application, whereas the latter is a large-scale, web-based system built to capture and manage knowledge sourced by an entire (relevant) community. The two problems are representative of their respective application classes – desktop tools required to respond in real time and online decision-support platforms expected to process large volumes of continuously changing data – and were therefore selected to demonstrate the cross-domain applicability (which state-of-the-art approaches tend to lack) of the proposed architecture. Moreover, analysing KAS behaviour in these two applications enabled the distillation of design guidelines and lessons learnt from practical implementation experience while building on and adapting state-of-the-art tools and methodologies from both fields.
    KAS is described and analysed from design through to implementation. The design is evaluated using ATAM (Architecture Tradeoff Analysis Method), whereas the performance of the two practical realisations is measured both globally and per component, in an attempt to isolate the impact of each autonomic and semantic element. This last type of evaluation employs state-of-the-art metrics for each of the two domains. The experimental findings show that both instances of the proposed hybrid architecture successfully meet the prescribed high-level goals and that the semantic components have a positive influence on the system's autonomic behaviour.
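
    A minimal sketch can make the knowledge-centric control loop concrete. The Python below is purely illustrative (the class, event, and rule names are invented, and a dictionary stands in for KAS's ontology, reasoner, and query engine): sensors queue observed changes, the analyse/plan steps match them against known patterns, and effectors apply the planned updates, with a consistency check after every change.

        # Illustrative MAPE-K-style loop (invented names; KAS itself uses an
        # OWL ontology, a reasoner, and semantic querying, not a dictionary).
        from collections import deque

        class KnowledgeBase:
            """Toy stand-in for an ontology plus reasoner."""
            def __init__(self):
                self.facts = {}

            def apply(self, key, value):
                self.facts[key] = value

            def is_consistent(self):
                # Stand-in for a reasoner's consistency check.
                return all(v is not None for v in self.facts.values())

        class AutonomicManager:
            def __init__(self, kb):
                self.kb = kb
                self.events = deque()                    # fed by "software sensors"
                self.patterns = {"window_resized": "renderer.font_size"}

            def monitor(self, event, value):
                self.events.append((event, value))

            def run_loop(self):
                while self.events:
                    event, value = self.events.popleft()  # Monitor
                    target = self.patterns.get(event)     # Analyse: match patterns
                    if target is None:
                        continue
                    plan = [(target, value)]              # Plan
                    for key, val in plan:                 # Execute via "effectors"
                        self.kb.apply(key, val)
                        assert self.kb.is_consistent()    # keep knowledge coherent

        kb = KnowledgeBase()
        manager = AutonomicManager(kb)
        manager.monitor("window_resized", 16)
        manager.run_loop()
        print(kb.facts)                                   # {'renderer.font_size': 16}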

    An Industrial Data Analysis and Supervision Framework for Predictive Manufacturing Systems

    Due to the advancements in the Information and Communication Technologies field in the modern interconnected world, the manufacturing industry is becoming a more and more data-rich environment, with large volumes of data being generated on a daily basis, thus presenting a new set of opportunities to be explored towards improving the efficiency and quality of production processes. This can be done through the development of so-called Predictive Manufacturing Systems. These systems aim to improve manufacturing processes through a combination of concepts such as Cyber-Physical Production Systems, Machine Learning and real-time Data Analytics in order to predict future states and events in production. This can be used in a wide array of applications, including predictive maintenance policies, improving quality control through the early detection of faults and defects, or optimizing energy consumption, to name a few. Therefore, the research efforts presented in this document focus on the design and development of a generic framework to guide the implementation of predictive manufacturing systems through a set of common requirements and components. This approach aims to enable manufacturers to extract, analyse, interpret and transform their data into actionable knowledge that can be leveraged into a business advantage. To this end, a list of goals and of functional and non-functional requirements for these systems is defined based on a thorough literature review and empirical knowledge. Subsequently, the Intelligent Data Analysis and Real-Time Supervision (IDARTS) framework is proposed, along with a detailed description of each of its main components. Finally, a pilot implementation is presented for each of these components, followed by the demonstration of the proposed framework in three different scenarios comprising several use cases in varied real-world industrial areas. In this way, the proposed work aims to provide a common foundation for the full realization of Predictive Manufacturing Systems.
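
    As a rough sketch of the kind of real-time data analysis such a framework coordinates (the sensor features, labels, and threshold below are invented for illustration and are not part of IDARTS), a classifier trained on historical readings can score streaming data and raise a maintenance alert when the predicted fault probability crosses a threshold:

        # Illustrative predictive-maintenance scoring loop (synthetic data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Train on historical (vibration, temperature) readings labelled
        # 1 = preceded a fault, 0 = healthy. Real systems use curated data.
        X_hist = rng.normal(size=(500, 2))
        y_hist = (X_hist.sum(axis=1) > 1.5).astype(int)   # synthetic labels
        model = RandomForestClassifier(n_estimators=50, random_state=0)
        model.fit(X_hist, y_hist)

        def supervise(reading, threshold=0.7):
            """Score one live reading and decide whether to alert."""
            p_fault = model.predict_proba(np.array([reading]))[0, 1]
            if p_fault >= threshold:
                print(f"ALERT: predicted fault probability {p_fault:.2f}")
            return p_fault

        supervise([1.2, 0.9])   # one simulated live sensor reading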

    Bounded generativity: contextualising interdependencies between architecture, ecosystem, and environment in digital product innovation

    Existing theorisation on digital product innovation remains predicated on a particular architectural form (modularity) and mode (unbounded generativity) of organising at-scale participation of heterogeneous actors in an ecosystem. Despite the widely accepted role of product architectures in organising digital product innovation, there has been limited academic engagement beyond the dynamics of modular design and its proximate context of the ecosystem. While contextualist research within information systems acknowledges the existence of wider systemic conditions underlying IS innovation, this has not received adequate attention within digital product innovation. This thesis builds on existing literature to understand the nature of interdependencies between the architecture, its proximate context of the ecosystem, and the distant context of the wider environment, with the aim of developing a contextualised theory of digital product innovation for an alternative architectural form. To augment and extend existing theory, this research studies the design and development of an agent-based simulation model for forced displacement. It uses Kleine's Choice Framework, adapted for this study, to understand how different conditions of possibility within the proximate and distant contexts shape operational and substantive choices within a digital product's ongoing development. It follows a process research approach to unpack the sequence of events, its constituent elements, and causal trajectories over time. It is based on an in-depth case study constructed through year-long fieldwork with the development team, along with the study of associated documents and reports. The research contributes to the theory on digital product innovation by unpacking how this trilateral interdependency creates opportunity structures at different stages of the development process which shape and bound the generative potential of digital products. This thesis demonstrates how this occurs through complementary resource-relationship configurations which negotiate the systemic conditions of multiple environmental drivers and the technical conditions of a hybrid digital architecture.

    Runtime Quantitative Verification of Self-Adaptive Systems

    Software systems used in mission- and business-critical applications in domains including defence, healthcare, and finance must comply with strict dependability, performance, and other Quality-of-Service (QoS) requirements. Self-adaptive systems achieve this compliance under changing environmental conditions, evolving requirements and system failures by using closed-loop control to modify their behaviour and structure in response to these events. Runtime quantitative verification (RQV) is a mathematically based approach that implements the closed-loop control of self-adaptive systems. Using runtime observations of a system and its environment, RQV updates stochastic models whose formal analysis underpins the adaptation decisions made within the control loop. The approach can identify and, under certain conditions, predict violations of QoS requirements, and can drive self-adaptation in ways guaranteed to restore or maintain compliance with these requirements. Despite its merits, RQV has significant computation and memory overheads, which restrict its applicability to small systems and to adaptations affecting only the configuration parameters of the system. In this thesis, we introduce RQV variants that improve the efficiency and scalability of the approach and extend its applicability to larger and more complex self-adaptive software systems, and to adaptations that modify the structure of a system. First, we integrate RQV with established efficiency improvement techniques from other software engineering areas. We use caching of recent analysis results, limited lookahead to precompute suitable adaptations for potential future changes, and nearly-optimal reconfiguration to eliminate the need for an exhaustive analysis of the entire reconfiguration space. Second, we introduce an RQV variant that incorporates evolutionary algorithms into the RQV process, facilitating the efficient search through large reconfiguration spaces and enabling adaptations that include structural changes. Third, we propose an RQV-driven approach that decentralises the control loops in distributed self-adaptive systems. Finally, we devise an RQV-based methodology for the engineering of trustworthy self-adaptive systems. We evaluate the proposed RQV variants using prototype self-adaptive systems from several application domains, including an embedded system for unmanned underwater vehicles and a foreign exchange service-based system. Our results, subject to the adaptation scenarios used in the evaluation, demonstrate the effectiveness and generality of the new RQV variants.
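
    A toy example can make the RQV idea, and one of the efficiency techniques above, concrete. The sketch below uses invented service names and thresholds; real RQV analyses stochastic models such as Markov chains with a probabilistic model checker. It re-estimates component success probabilities from runtime observations, verifies a reliability requirement over a serial composition, and caches analysis results for recently seen parameter values:

        # Toy runtime quantitative verification with caching of analysis
        # results. A serial composition succeeds only if every service
        # succeeds, so reliability is the product of per-service estimates.
        from functools import lru_cache

        observations = {"parse": (198, 200), "convert": (97, 100)}  # (ok, total)

        @lru_cache(maxsize=1024)            # cache of recent analysis results
        def composition_reliability(estimates):
            r = 1.0
            for p in estimates:
                r *= p
            return r

        def verify(requirement=0.95):
            # Update the model from runtime observations, then analyse it.
            estimates = tuple(round(ok / total, 3)   # discretise for cache hits
                              for ok, total in observations.values())
            r = composition_reliability(estimates)
            print(f"reliability {r:.3f} -> {'OK' if r >= requirement else 'ADAPT'}")
            return r >= requirement

        verify()   # 0.990 * 0.970 = 0.960 -> OK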

    Designing a Library of Components for Textual Scholarship

    This work addresses and describes topics related to the application of new technologies, computational methodologies, and software design aimed at developing innovative tools for the Digital Humanities (DH), an area of study characterised by strong interdisciplinarity and continuous evolution. In particular, this contribution defines specific requirements for the domain of Literary Computing and the field of Digital Textual Scholarship. Consequently, the main processing context concerns documents written in Latin, Greek, and Arabic, as well as texts in modern languages dealing with historical and philological themes. The research activity focuses on the design of a modular library (TSLib) able to operate on sources of high cultural value, in order to edit, process, compare, analyse, visualise, and search them. The thesis is organised in five chapters. Chapter 1 summarises the context of the application domain and provides an overview of the objectives and benefits of the research. Chapter 2 illustrates some important related works and initiatives, together with a brief overview of the most significant results obtained in the DH field. Chapter 3 carefully retraces and motivates the design process that was developed. It begins by describing the technical principles adopted and shows how they are applied to the domain of interest. The chapter continues by defining the requirements, the architecture, and the model of the proposed method. Aspects concerning design patterns and the design of the Application Programming Interfaces (APIs) are thus highlighted and discussed. The final part of the work (Chapter 4) illustrates the results obtained from concrete research projects that, on the one hand, contributed to the design of the library and, on the other, were able to exploit its developments. Several topics are discussed: (a) text acquisition and encoding, (b) alignment and management of textual variants, (c) multi-level annotations. The thesis concludes with some reflections and considerations, also indicating possible future lines of investigation (Chapter 5).
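
    TSLib's own API is not shown in the abstract, but one of the listed themes, the alignment of textual variants, can be illustrated generically with Python's standard difflib, which aligns two witnesses of the same line and reports where they diverge:

        # Generic illustration of textual-variant alignment (not TSLib's API):
        # align two witnesses of a verse and report their differences.
        from difflib import SequenceMatcher

        witness_a = "arma virumque cano troiae qui primus ab oris".split()
        witness_b = "arma virumque cano qui primus ab oris".split()

        matcher = SequenceMatcher(a=witness_a, b=witness_b)
        for op, a0, a1, b0, b1 in matcher.get_opcodes():
            if op != "equal":
                print(op, witness_a[a0:a1], "->", witness_b[b0:b1])
        # prints: delete ['troiae'] -> []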

    Dynamic Complex Event Processing for Industrial Monitoring Systems

    Using Complex Event Processing (CEP) as part of monitoring systems is a state-of-the-art approach in the manufacturing industry that still requires development. The industry is increasingly moving towards implementing Service-Oriented Architecture (SOA) based systems to respond to increasing demands for interoperability amongst other operations in a business organisation. Complex event processors are used as part of monitoring systems, but current complex event processors are usually system specific. This thesis aims to propose and demonstrate a more dynamic approach to implementing an industrial monitoring system using complex event processing. Service-Oriented Architecture uses event-based messaging to communicate between different devices and systems, which creates large amounts of data in the monitored system. In order to infer important information from this vast body of data, the CEP is used to query the events. These queries are predefined and cannot be changed during runtime, yet the CEP holds the main logic of the monitoring system and thus dictates what the system actually monitors. A monitoring system requires the ability to change its monitoring logic; this is why a method of dynamically adding queries is proposed in this thesis. In order for a SOA-based monitoring system to be dynamic, the CEP needs to be dynamic. This thesis proposes a CEP solution with a generic implementation, dynamic query definition during runtime, and the possibility to use recursive user-defined functions that allow reusing query templates in different solutions. The developed CEP is tested with two implementation use cases: the first, a simulated use case that tests the monitoring system's performance with large amounts of events; the second, a manufacturing line implementation that demonstrates the monitoring system in an actual manufacturing environment. Tests were run on both use cases to gain information on how the CEP performs and to demonstrate the functionality of the developed monitoring system. The developed CEP was also used as part of an oil lubrication use case for the IMC-AESOP project, an EU project researching how to apply state-of-the-art SOA-based systems to the industrial automation field.
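
    The core idea, registering and removing monitoring queries while the engine keeps running, can be sketched in a few lines. The engine, query names, and event fields below are invented for illustration and are not the thesis implementation:

        # Minimal CEP engine with queries added/removed at runtime
        # (illustrative only; not the thesis implementation).

        class DynamicCEP:
            def __init__(self):
                self.queries = {}   # name -> predicate over an event dict

            def add_query(self, name, predicate):
                """Register a new monitoring rule without restarting."""
                self.queries[name] = predicate

            def remove_query(self, name):
                self.queries.pop(name, None)

            def process(self, event):
                for name, predicate in self.queries.items():
                    if predicate(event):
                        print(f"[{name}] matched: {event}")

        cep = DynamicCEP()
        cep.process({"sensor": "oil_temp", "value": 95})    # no queries yet
        cep.add_query("overheat", lambda e: e.get("sensor") == "oil_temp"
                                            and e.get("value", 0) > 90)
        cep.process({"sensor": "oil_temp", "value": 95})    # now matched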

    Service-oriented design of environmental information systems

    Service-orientation has an increasing impact upon the design process and the architecture of environmental information systems. This thesis specifies the SERVUS design methodology for geospatial applications based upon standards of the Open Geospatial Consortium. SERVUS guides the system architect to rephrase use case requirements as a network of semantically-annotated requested resources and to iteratively match them with offered resources that mirror the capabilities of existing services.
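
    The resource-matching step at the heart of SERVUS can be illustrated generically. The annotations, concept names, and services below are invented; SERVUS itself matches semantically annotated requirements against the capabilities of OGC services. In this simplified reading, a requested resource is matched by an offer whose concepts cover the request's:

        # Generic illustration of semantic resource matching (invented data).

        requested = {"name": "flood_zones",
                     "concepts": {"hydrology:FloodRisk", "geo:Polygon"}}

        offered = [
            {"service": "WFS_A", "concepts": {"geo:Polygon", "time:Series",
                                              "hydrology:FloodRisk"}},
            {"service": "WMS_B", "concepts": {"geo:Raster", "meteo:Rainfall"}},
        ]

        def matches(request, offer):
            # An offer satisfies a request if it covers every requested concept.
            return request["concepts"] <= offer["concepts"]

        for offer in offered:
            if matches(requested, offer):
                print(f"{offer['service']} can serve '{requested['name']}'")
        # prints: WFS_A can serve 'flood_zones'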