
    Knowledge-centric autonomic systems

    Autonomic computing revolutionised the commonplace understanding of proactiveness in the digital world by introducing self-managing systems. Built on top of IBM’s structural and functional recommendations for implementing intelligent control, autonomic systems are meant to pursue high-level goals, while adequately responding to changes in the environment, with a minimum amount of human intervention. One of the leading challenges in implementing this type of behaviour in practical situations stems from the way autonomic systems manage their inner representation of the world. Specifically, all the components involved in the control loop have shared access to the system’s knowledge, which, for seamless cooperation, needs to be kept consistent at all times. A possible solution lies with another popular technology of the 21st century, the Semantic Web, and the knowledge representation media it fosters, ontologies. These formal yet flexible descriptions of the problem domain are equipped with reasoners, inference tools that, among other functions, check knowledge consistency. The immediate application of reasoners in an autonomic context is to ensure that all components share and operate on a logically correct and coherent “view” of the world. At the same time, ontology change management is a difficult task to complete with semantic technologies alone, especially if little to no human supervision is available. This invites the idea of delegating change management to an autonomic manager, as the intelligent control loop it implements is engineered specifically for that purpose. Despite the inherent compatibility between autonomic computing and semantic technologies, their integration is non-trivial and insufficiently investigated in the literature. This gap represents the main motivation for this thesis. Moreover, existing attempts at provisioning autonomic architectures with semantic engines represent bespoke solutions for specific problems (load balancing in autonomic networking, deconflicting high-level policies, and informing the process of correlating diverse enterprise data are just a few examples). The main drawback of these efforts is that they provide only limited scope for reuse and cross-domain analysis: design guidelines, architectural models that would scale well across different applications, and modular components that could be integrated into other systems are poorly represented. This work proposes KAS (Knowledge-centric Autonomic System), a hybrid architecture combining semantic tools such as:
    • an ontology to capture domain knowledge,
    • a reasoner to keep the domain knowledge consistent and to infer new knowledge,
    • a semantic querying engine,
    • a tool for semantic annotation analysis,
    with a customised autonomic control loop featuring:
    • a novel algorithm for extracting knowledge authored by the domain expert,
    • “software sensors” to monitor user requests and environment changes,
    • a new algorithm for analysing the monitored changes, matching them against known patterns and producing plans for taking the necessary actions,
    • “software effectors” to implement the planned changes and modify the ontology accordingly.
    The purpose of KAS is to act as a blueprint for the implementation of autonomic systems harnessing semantic power to improve self-management. To this end, two KAS instances were built and deployed in two different problem domains, namely self-adaptive document rendering and autonomic decision support for career management.
    The former case study is intended as a desktop application, whereas the latter is a large-scale, web-based system built to capture and manage knowledge sourced by an entire (relevant) community. The two problems are representative of their respective application classes, namely desktop tools required to respond in real time and online decision support platforms expected to process large volumes of data undergoing continuous transformation; they were therefore selected to demonstrate the cross-domain applicability (which state-of-the-art approaches tend to lack) of the proposed architecture. Moreover, analysing KAS behaviour in these two applications enabled the distillation of design guidelines and of lessons learnt from practical implementation experience while building on and adapting state-of-the-art tools and methodologies from both fields. KAS is described and analysed from design through to implementation. The design is evaluated using ATAM (Architecture Tradeoff Analysis Method), whereas the performance of the two practical realisations is measured both globally and in a deconstructed form, in an attempt to isolate the impact of each autonomic and semantic component. This last type of evaluation employs state-of-the-art metrics for each of the two domains. The experimental findings show that both instances of the proposed hybrid architecture successfully meet the prescribed high-level goals and that the semantic components have a positive influence on the system’s autonomic behaviour.
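    The control loop described above pairs semantic components (an ontology with an attached reasoner) with the classic monitor, analyse, plan and execute stages of an autonomic manager. The minimal Python sketch below illustrates that shape only; the Ontology class, the pattern table and all names are hypothetical placeholders, not the thesis's actual KAS implementation.

```python
# Illustrative sketch of a knowledge-centric autonomic loop (hypothetical API,
# not the KAS implementation). The shared ontology plays the role of the
# knowledge that every loop stage reads from and writes back to.

class Ontology:
    """Placeholder for a semantic knowledge base with a reasoner attached."""
    def __init__(self):
        self.facts = set()

    def is_consistent(self):
        # A real reasoner (e.g. an OWL DL reasoner) would check logical consistency here.
        return True

    def apply(self, changes):
        self.facts |= set(changes)


def monitor(sensors):
    """'Software sensors': collect user requests and environment changes."""
    return [event for sensor in sensors for event in sensor()]


def analyse(events, known_patterns):
    """Match monitored changes against known patterns."""
    return [known_patterns[e] for e in events if e in known_patterns]


def plan(matches):
    """Turn matched patterns into planned ontology change actions."""
    return [action for match in matches for action in match]


def execute(ontology, actions):
    """'Software effectors': apply planned changes, keeping the ontology consistent."""
    ontology.apply(actions)
    if not ontology.is_consistent():
        raise RuntimeError("planned changes broke knowledge consistency")


if __name__ == "__main__":
    kb = Ontology()
    sensors = [lambda: ["font_size_changed"]]              # e.g. a document-rendering event
    patterns = {"font_size_changed": [("re-render", "page_1")]}
    execute(kb, plan(analyse(monitor(sensors), patterns)))
```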

    Visual exploration of semantic-web-based knowledge structures

    Humans have a curious nature and seek a better understanding of the world. Data, information, and knowledge became assets of our modern society through the information technology revolution in the form of the internet. However, with the growing size of accumulated data, new challenges emerge, such as searching and navigating in these large collections of data, information, and knowledge. Current developments in academic and industrial contexts target the corresponding challenges using Semantic Web technologies. The Semantic Web is an extension of the Web and provides machine-readable representations of knowledge for various domains. These machine-readable representations allow intelligent machine agents to understand the meaning of the data and information and enable additional inference of new knowledge. Generally, the Semantic Web is designed for information exchange and processing and does not focus on presenting such semantically enriched data to humans. Visualizations support exploration, navigation, and understanding of data by exploiting humans’ ability to comprehend complex data through visual representations. In the context of Semantic-Web-based knowledge structures, various visualization methods and tools are available, and new ones are being developed every year. However, suitable visualizations are highly dependent on individual use cases and targeted user groups. In this thesis, we investigate visual exploration techniques for Semantic-Web-based knowledge structures by addressing the following challenges: i) how to engage various user groups in modeling such semantic representations; ii) how to facilitate understanding using customizable visual representations; and iii) how to ease the creation of visualizations for various data sources and different use cases. The achieved results indicate that visual modeling techniques facilitate the engagement of various user groups in ontology modeling. Customizable visualizations enable users to adjust visualizations to their current needs and provide different views on the data. Additionally, customizable visualization pipelines enable rapid visualization generation for various use cases, data sources, and user groups.
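    As a loose illustration of the kind of customizable visualization pipeline referred to above (my own minimal sketch using the rdflib and networkx libraries, not the tooling developed in the thesis), a Semantic-Web-based knowledge structure can be loaded and converted into a node-link graph, with the filtering step left as a swappable parameter:

```python
# Minimal sketch of a customizable ontology-to-visualization pipeline
# (illustrative only; uses rdflib and networkx, not the thesis's own tools).
from rdflib import Graph
import networkx as nx

TURTLE = """
@prefix ex: <http://example.org/> .
ex:Cat ex:subClassOf ex:Animal .
ex:Dog ex:subClassOf ex:Animal .
"""

def load(source):
    g = Graph()
    g.parse(data=source, format="turtle")
    return g

def to_node_link(g, keep=lambda s, p, o: True):
    """Customizable step: the 'keep' filter decides which triples become edges."""
    nxg = nx.DiGraph()
    for s, p, o in g:
        if keep(s, p, o):
            nxg.add_edge(str(s), str(o), label=str(p))
    return nxg

# Different use cases plug in different filters (or layouts, renderers, ...)
# without changing the rest of the pipeline.
graph = to_node_link(load(TURTLE))
print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```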

    ORE - A Tool for Repairing and Enriching Knowledge Bases


    Integrating specification and test requirements as constraints in verification strategies for 2D and 3D analog and mixed signal designs

    Analog and Mixed Signal (AMS) designs are essential components of today’s Integrated Circuits (ICs), used in the interface between real-world signals and the digital world. They present, however, significant verification challenges. Out-of-specification failures in these systems have steadily increased and have reached record highs in recent years. Increasing design complexity, incomplete or wrong specifications (responsible for 47% of all non-functional ICs), as well as additional challenges faced when testing these systems, are obvious reasons. A particular example is the escalating impact of realistic test conditions with respect to physical (interface between the device under test (DUT) and the test instruments, input-signal conditions, input impedance, etc.), functional (noise, jitter) and environmental (temperature) constraints. Unfortunately, the impact of such constraints could result in a significant loss of performance and design failure even if the design itself was flawless. Current industrial verification methodologies, each addressing specific verification challenges, have been shown to be useful for detecting and eliminating design failures. Nevertheless, decreases in first-pass silicon success rates illustrate the lack of cohesive, efficient techniques to allow a predictable verification process that leads to the highest possible confidence in the correctness of AMS designs. In this PhD thesis, we propose a constraint-driven verification methodology for monitoring specifications of AMS designs. The methodology is based on the early insertion of the test(s) associated with each design specification. It exploits specific constraints introduced by these planned tests as well as by the specifications themselves, as they are extracted and used during the verification process, thus reducing the risk of costly errors caused by incomplete, ambiguous or missing details in the specification documents. To fully analyze the impact of these constraints on the overall AMS design behavior, we developed a two-phase algorithm that automatically integrates them into the AMS design behavioral model and performs specification monitoring in a Matlab simulation environment. The effectiveness of this methodology is demonstrated for two-dimensional (2D) and three-dimensional (3D) ICs. Our results show that our approach can predict out-of-specification failures in corner cases that were not covered by previous verification methodologies. On the one hand, we show that specifications satisfied without specification- and test-related constraints fail in the presence of these additional constraints. On the other hand, we show that some specifications may degrade, or cannot even be verified, without adding specific specification- and test-related constraints.
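    To make the role of test-related constraints concrete, here is a small hypothetical sketch, written in Python rather than the Matlab environment used in the thesis; the toy behavioral model, the gain specification and all numbers are invented for illustration. It shows how a specification that passes under idealized conditions can fail once a realistic instrument load at the DUT interface is added as a constraint.

```python
# Hypothetical illustration of constraint-driven specification monitoring
# (Python stand-in for a Matlab-based flow; all values are invented).

SPEC_MIN_GAIN_DB = 20.0          # design specification: gain >= 20 dB

def amplifier_gain_db(load_ohms):
    """Toy behavioral model: gain degrades as the output load gets heavier."""
    nominal = 22.0
    return nominal - 3.0 * (50.0 / load_ohms)   # made-up loading effect

def check_spec(load_ohms):
    """Specification monitor: evaluate the model under a given test constraint."""
    gain = amplifier_gain_db(load_ohms)
    passed = gain >= SPEC_MIN_GAIN_DB
    print(f"load={load_ohms:>9.1f} ohm  gain={gain:5.2f} dB  "
          f"{'PASS' if passed else 'FAIL'}")
    return passed

check_spec(1e6)    # idealized, almost no loading: the specification passes
check_spec(50.0)   # realistic 50-ohm instrument load: the same design fails
```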

    Code generation for RESTful APIs in HEADREST

    Master’s thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2018. Web services with APIs that adhere to the REST architectural style, known as RESTful web services, have become very popular. These services follow a client-server style, with stateless interactions based on standard HTTP verbs. In an effort to formally specify the interaction between clients and providers of RESTful services, various interface definition languages (IDLs) have been proposed. However, for the most part, they limit themselves to the syntactic level of the interfaces and the description of the data structures and the interaction points. The HEADREST language was developed as an IDL that addresses these limitations, supporting the description of RESTful APIs also at the semantic level. Through the use of types and assertions, it is possible not only to define the structure of the data transmitted but also to relate the output to the input and the state of the server. One of the main advantages of having formal descriptions of RESTful APIs is the ability to generate a large amount of boilerplate code for both clients and servers. This work addresses the problem of code generation for RESTful APIs described in HEADREST and investigates how existing code generation techniques for the syntactic aspects of RESTful APIs can be extended to also take into account the behavioural properties that can be described in HEADREST. Given that HEADREST adopts many concepts from the Open API Specification (OAS), this work capitalised on the code generation tools available for OAS and encompassed the development of prototype code generators for clients and servers from HEADREST specifications.
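    As a hedged illustration of what generating code from a semantically richer specification can look like (my own sketch; the endpoint, field names and conditions are invented and are not taken from a real HEADREST specification or from the thesis's generator), a generated client stub can embed the specification's pre- and postconditions as runtime checks in addition to the usual request marshalling:

```python
# Hypothetical generated client stub: a HEADREST-style assertion
# ("every returned pet has an id and the requested status") becomes a runtime check.
# Endpoint, field names and conditions are invented for illustration.
import requests

BASE_URL = "https://api.example.org"

def list_pets(status: str) -> list:
    # Precondition from the (hypothetical) specification: status is one of a fixed set.
    assert status in {"available", "pending", "sold"}, "precondition violated"

    response = requests.get(f"{BASE_URL}/pets", params={"status": status})
    response.raise_for_status()
    pets = response.json()

    # Postcondition relating output to input: every pet carries the requested status.
    assert all("id" in p and p.get("status") == status for p in pets), \
        "postcondition violated: response does not match request"
    return pets
```

    A purely syntactic generator would stop at the request and response marshalling; the point of the sketch is that the assertion-based part of the specification can be carried over into the generated code as well.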

    Abstraction: a notion for reverse engineering.


    Dynamic Logic for an Intermediate Language: Verification, Interaction and Refinement

    This thesis is about ensuring that software behaves as it is supposed to behave. More precisely, it is concerned with the deductive verification of the compliance of software implementations with their formal specification. Two successful ideas in program verification are integrated into a new approach: dynamic logic and an intermediate verification language. The well-established technique of refinement is used to decompose the difficult task of program verification into two easier tasks.
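    For readers unfamiliar with the notation, a textbook dynamic logic example (not drawn from the thesis itself): the formula below reads "if x ≥ 0 holds in the initial state, then x > 0 holds in every state reachable by executing the assignment x := x + 1", which is the shape of proof obligation such a deductive verifier discharges.

```latex
% A standard dynamic-logic proof obligation (illustrative only, not from the thesis):
% if x >= 0 holds before the assignment, then x > 0 holds after it.
\[
  x \geq 0 \;\rightarrow\; [\,x := x + 1\,]\; x > 0
\]
```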