55 research outputs found

    A formal modeling approach to ontology engineering

    Ph.D. (Doctor of Philosophy)

    Developing a computational framework for explanation generation in knowledge-based systems and its application in automated feature recognition

    A Knowledge-Based System (KBS) is essentially an intelligent computer system which explicitly or tacitly possesses a knowledge repository that helps the system solve problems. Research on building KBSs for industrial applications, to improve design quality and shorten research cycles, is attracting increasing interest. For the early models, explainability was considered one of the major benefits of using KBSs, since most of them were rule-based systems whose explanations could be generated from the rule traces of their reasoning behaviour. As KBSs developed, the definition of a knowledge base became far more general than rules alone, and the techniques used to solve problems in a KBS extended well beyond rule-based reasoning. Many Artificial Intelligence (AI) techniques were introduced, such as neural networks and genetic algorithms. The effectiveness and efficiency of KBSs thus improved but, as a trade-off, their explainability weakened. More and more KBSs are conceived as black-box systems that are not transparent to users, resulting in a loss of trust in them. Developing an explanation model for modern KBSs has a positive impact on user acceptance of the KBSs and the advice they provide.
    This thesis proposes a novel computational framework for explanation generation in KBSs. Unlike existing models, which are usually built inside a KBS and generate explanations from the actual decision-making process, the explanation model in our framework stands outside the KBS and generates explanations by producing an alternative justification that is unrelated to the decision-making process actually used by the system. The knowledge and reasoning approaches in the explanation model can therefore be optimized specifically for explanation generation, improving the quality of the explanations. Another contribution of this study is that the system covers three types of explanation (where most existing models only cover the first two): 1) decision explanation, which helps users understand how a KBS reached its conclusion; 2) domain explanation, which provides detailed descriptions of the concepts and relationships within the domain; 3) software diagnosis, which diagnoses user observations of unexpected behaviour of the system or of relevant domain phenomena. The framework is demonstrated on a case of Automated Feature Recognition (AFR). The resulting explanatory system uses Semantic Web languages to implement a separate knowledge base solely for explanatory purposes, and integrates a novel reasoning approach for generating explanations. The system is tested on an industrial STEP file and delivers good-quality explanations for user queries about how a certain feature was recognized.
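    The abstract names Semantic Web languages and a separate, explanation-only knowledge base without giving concrete syntax. Purely as an illustrative sketch of that idea, the Python fragment below (using the rdflib library) stores hypothetical facts about a recognized feature in a small RDF graph held outside the KBS and answers a "how was this feature recognized?" query with SPARQL; every identifier (hole_17, recognizedFrom, and so on) is invented here, not taken from the thesis.

        # Illustrative only: a tiny explanatory KB, separate from the KBS,
        # queried with SPARQL. All identifiers are hypothetical.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        EX = Namespace("http://example.org/afr#")
        g = Graph()
        g.add((EX.hole_17, RDF.type, EX.ThroughHole))
        g.add((EX.hole_17, EX.recognizedFrom, EX.cylindrical_face_42))
        g.add((EX.ThroughHole, EX.definedBy,
               Literal("a cylindrical face whose axis pierces two planar faces")))

        q = """
            PREFIX ex: <http://example.org/afr#>
            SELECT ?type ?evidence ?definition WHERE {
              ex:hole_17 a ?type ; ex:recognizedFrom ?evidence .
              ?type ex:definedBy ?definition .
            }"""
        for row in g.query(q):
            # Decision explanation: conclusion, evidence and domain definition.
            print(f"hole_17 is a {row.type}, recognized from {row.evidence}; "
                  f"the type is defined by: {row.definition}")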

    Knowledge-centric autonomic systems

    Autonomic computing revolutionised the commonplace understanding of proactiveness in the digital world by introducing self-managing systems. Built on top of IBM's structural and functional recommendations for implementing intelligent control, autonomic systems are meant to pursue high-level goals, while adequately responding to changes in the environment, with a minimum amount of human intervention. One of the lead challenges in implementing this type of behaviour in practical situations stems from the way autonomic systems manage their inner representation of the world. Specifically, all the components involved in the control loop have shared access to the system's knowledge, which, for seamless cooperation, needs to be kept consistent at all times. A possible solution lies with another popular technology of the 21st century, the Semantic Web, and the knowledge representation media it fosters, ontologies. These formal yet flexible descriptions of the problem domain are equipped with reasoners, inference tools that, among other functions, check knowledge consistency. The immediate application of reasoners in an autonomic context is to ensure that all components share and operate on a logically correct and coherent "view" of the world. At the same time, ontology change management is a difficult task to complete with semantic technologies alone, especially if little to no human supervision is available. This invites the idea of delegating change management to an autonomic manager, as the intelligent control loop it implements is engineered specifically for that purpose.
    Despite the inherent compatibility between autonomic computing and semantic technologies, their integration is non-trivial and insufficiently investigated in the literature. This gap represents the main motivation for this thesis. Moreover, existing attempts at provisioning autonomic architectures with semantic engines are bespoke solutions for specific problems (load balancing in autonomic networking, deconflicting high-level policies, and informing the process of correlating diverse enterprise data are just a few examples). The main drawback of these efforts is that they provide limited scope for reuse and cross-domain analysis: design guidelines, architectural models that would scale well across different applications, and modular components that could be integrated in other systems are poorly represented. This work proposes KAS (Knowledge-centric Autonomic System), a hybrid architecture combining semantic tools, namely:
    • an ontology to capture domain knowledge,
    • a reasoner to keep domain knowledge consistent as well as infer new knowledge,
    • a semantic querying engine,
    • a tool for semantic annotation analysis,
    with a customised autonomic control loop featuring:
    • a novel algorithm for extracting knowledge authored by the domain expert,
    • "software sensors" to monitor user requests and environment changes,
    • a new algorithm for analysing the monitored changes, matching them against known patterns and producing plans for the necessary actions,
    • "software effectors" to implement the planned changes and modify the ontology accordingly.
    The purpose of KAS is to act as a blueprint for the implementation of autonomic systems harnessing semantic power to improve self-management. To this end, two KAS instances were built and deployed in two different problem domains, namely self-adaptive document rendering and autonomic decision support for career management. The former case study is a desktop application, whereas the latter is a large-scale, web-based system built to capture and manage knowledge sourced by an entire (relevant) community. The two problems are representative of their respective application classes – desktop tools required to respond in real time and, respectively, online decision-support platforms expected to process large volumes of data undergoing continuous transformation – and were therefore selected to demonstrate the cross-domain applicability (which state-of-the-art approaches tend to lack) of the proposed architecture. Moreover, analysing KAS behaviour in these two applications enabled the distillation of design guidelines and of lessons learnt from practical implementation experience while building on and adapting state-of-the-art tools and methodologies from both fields. KAS is described and analysed from design through to implementation. The design is evaluated using ATAM (Architecture Trade-off Analysis Method), whereas the performance of the two practical realisations is measured both globally and deconstructed, in an attempt to isolate the impact of each autonomic and semantic component. This last type of evaluation employs state-of-the-art metrics for each of the two domains. The experimental findings show that both instances of the proposed hybrid architecture successfully meet the prescribed high-level goals and that the semantic components have a positive influence on the system's autonomic behaviour.
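    The control loop described above instantiates IBM's monitor-analyse-plan-execute (MAPE-K) model. As a minimal, self-contained sketch of how such a loop can keep a shared knowledge store consistent, the toy Python below substitutes a one-rule consistency check for the OWL reasoner; all fact, pattern and event names are invented for illustration.

        from dataclasses import dataclass, field

        @dataclass
        class Knowledge:
            facts: set = field(default_factory=set)

            def consistent(self) -> bool:
                # A real KAS instance delegates this to an OWL reasoner; here,
                # one toy rule: a document cannot be both visible and hidden.
                return not ({"doc:visible", "doc:hidden"} <= self.facts)

        def monitor(events):            # "software sensors"
            return [e for e in events if e.startswith("user:")]

        def analyse(changes):           # match changes against known patterns
            return [c.replace("user:request-", "") for c in changes]

        def plan(symptoms):             # produce actions for the effectors
            return [("assert", f"doc:{s}") for s in symptoms]

        def execute(actions, kb):       # "software effectors" modify the store
            for op, fact in actions:
                if op == "assert":
                    kb.facts.add(fact)
            assert kb.consistent(), "reasoner would reject an inconsistent update"

        kb = Knowledge()
        execute(plan(analyse(monitor(["user:request-visible", "net:ping"]))), kb)
        print(kb.facts)                 # {'doc:visible'}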

    Survey over Existing Query and Transformation Languages

    A widely acknowledged obstacle to realizing the vision of the Semantic Web is the inability of many current Semantic Web approaches to cope with data available in such diverging representation formalisms as XML, RDF, or Topic Maps. A common query language is the first step toward transparent access to data in any of these formats. To further the understanding of the requirements and approaches proposed for query languages in the conventional Web as well as the Semantic Web, this report surveys a large number of query languages for accessing XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from all these areas. From the detailed survey of these query languages, a common classification scheme is derived that is useful for understanding and differentiating languages within and among all three areas.
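    To make the "diverging representation formalisms" concrete, the sketch below poses the same question, "titles of English-language books", against a tiny XML document (XPath via Python's standard library) and an equivalent RDF graph (SPARQL via the rdflib library); the sample data is invented.

        import xml.etree.ElementTree as ET
        from rdflib import Graph

        xml_data = """<library>
          <book lang="en"><title>Weaving the Web</title></book>
          <book lang="de"><title>Semantik</title></book>
        </library>"""
        root = ET.fromstring(xml_data)
        # XPath-style query over the XML representation.
        print([t.text for t in root.findall(".//book[@lang='en']/title")])

        rdf_data = """
        @prefix ex: <http://example.org/> .
        ex:b1 ex:lang "en" ; ex:title "Weaving the Web" .
        ex:b2 ex:lang "de" ; ex:title "Semantik" .
        """
        g = Graph().parse(data=rdf_data, format="turtle")
        # The same question as a SPARQL query over the RDF representation.
        for (title,) in g.query(
            'PREFIX ex: <http://example.org/> '
            'SELECT ?title WHERE { ?b ex:lang "en" ; ex:title ?title }'
        ):
            print(title)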

    A Goal-Directed and Policy-Based Approach to System Management

    This thesis presents a domain-independent approach to dynamic system management using goals and policies. A goal is a general, high-level aim a system must continually work toward achieving. A policy is a statement of how a system should behave for a given set of detectable events and conditions. Combined, goals may be realised through the selection and execution of policies that contribute to their aims. In this manner, a system may be managed using a goal-directed, policy-based approach. The approach is a collection of related techniques and tools: a policy language and policy system, goal definition and refinement via policy selection, and conflict filtering among policies. Central to these themes, ontologies are used to model application domains and to incorporate domain knowledge within the system. The ACCENT policy system (Advanced Component Control Enhancing Network Technologies, http://www.cs.stir.ac.uk/accent) is used as a base for the approach, while goals and policies are defined using an extension of APPEL (Adaptable and Programmable Policy Environment and Language, http://www.cs.stir.ac.uk/appel). The approach differs from existing work in that it reduces system state, goals and policies to a numerical rather than a logical form. This is more user-friendly, as the goal domain may be expressed without any knowledge of formal methods. All developed techniques and tools are entirely domain-independent, allowing reuse with other event-driven systems. The ability to express a system aim as a goal provides more powerful and proactive high-level management than was previously possible using policies alone. The approach is demonstrated and evaluated within this thesis for the domains of Internet telephony and sensor network/wind turbine management.
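    As an illustration of the numerical (rather than logical) treatment of goals and policies described above, the hypothetical Python sketch below ranks the policies applicable to an event by their estimated contribution to a goal measure; the policy names, triggers and weights are invented and are not taken from APPEL or ACCENT.

        from dataclasses import dataclass

        @dataclass
        class Policy:
            name: str
            trigger: str            # event that makes the policy applicable
            contribution: float     # estimated effect on the goal measure

        GOAL = "minimise dropped calls"
        policies = [
            Policy("reroute-call", "congestion", +0.6),
            Policy("queue-call",   "congestion", +0.3),
            Policy("reject-call",  "congestion", -0.2),
        ]

        def select(event):
            applicable = [p for p in policies if p.trigger == event]
            # Numerical rather than logical: rank by contribution to the goal.
            return max(applicable, key=lambda p: p.contribution, default=None)

        chosen = select("congestion")
        print(f"goal {GOAL!r}: execute {chosen.name}")   # reroute-call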

    Ontology-based infrastructure for intelligent applications

    Ontologies are currently a hot topic in the areas of knowledge management and enterprise application integration. In this thesis, we investigate how ontologies can also be used as an infrastructure for developing applications that intelligently support a user with various tasks. Based on recent developments in the area of the Semantic Web, we provide three major contributions. First, we introduce inference engines, which allow the execution of business logic that is specified in a declarative way, while putting strong emphasis on scalability and ease of use. Secondly, we suggest various solutions for interfacing applications developed under this new paradigm with existing IT infrastructure. This includes the first running solution, to our knowledge, that combines the emerging areas of the Semantic Web and Web Services. Finally, we introduce a set of intelligent applications built on top of ontologies and Semantic Web standards, providing a proof of concept that the engineering effort can largely be based on standard components.
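    To illustrate what declaratively specified business logic executed by an inference engine can look like, here is a toy forward-chaining engine in Python that applies rules to a fact base until a fixpoint is reached; the rules and facts are invented, and a production engine would add indexing and other optimisations for the scalability emphasised above.

        # Each rule: (set of premise predicates over x, conclusion predicate).
        rules = [
            ({"Customer", "orders_over_1000"}, "PremiumCustomer"),
            ({"PremiumCustomer"}, "gets_discount"),
        ]
        facts = {("Customer", "alice"), ("orders_over_1000", "alice")}

        def fire(rules, facts):
            changed = True
            while changed:                              # iterate to a fixpoint
                changed = False
                for premises, head in rules:
                    candidates = {x for p, x in facts if p in premises}
                    for x in candidates:
                        if all((p, x) in facts for p in premises) \
                                and (head, x) not in facts:
                            facts.add((head, x))        # derive a new fact
                            changed = True
            return facts

        print(sorted(fire(rules, facts)))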

    Adaptive and Reactive Rich Internet Applications

    In this thesis we present the client-side approach of Adaptive and Reactive Rich Internet Applications as the main result of our research into how to bring in-time adaptivity to Rich Internet Applications. Our approach leverages previous work on adaptive hypermedia, event processing and other research disciplines. We present a holistic framework covering the design-time as well as the runtime aspects of Adaptive and Reactive Rich Internet Applications, focusing especially on the runtime aspects.

    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications, in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel, there is also an evolution from a Web of documents into a Web of `knowledge', as a growing number of data owners share their data sources with a growing audience. This opens up potential new applications for these data sources, including scenarios in which these datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offer solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but leaves the issue of semantic interoperability open.
    In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWE methods allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application, and we implemented a framework called Hydragen, which executes the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined, i.e. the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure. However, this DM is not Hera-S specific: any Semantic Web source representation can serve as the DM, as long as its content can be queried with the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. To help designers build their Web applications we introduce a toolset, Hera Studio, which allows the different models to be built graphically. Hera Studio also provides additional functionality such as model checking and deployment of the models in Hydragen.
    Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore during runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM. Aspect-orientation allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is in reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation.
    Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extensible storage layer in our database of choice, Sesame. However, even though theoretically possible, this still leaves much of the actual data integration issue unaddressed. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to look at it in isolation. We present a framework called Relco, which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaboration techniques, which together provide strong clues as to which concepts are most likely related. To prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. The first is a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available through a map view, a timeline and a graph view. Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee, an electronic TV guide and recommender: TV-guide data was integrated and enriched with semantically structured data from several sources, and recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated, including scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications using context-based personalization; this can be done with a context-based user profiling platform that can also be used for user-model data exchange between mobile applications through technologies like Relco. The final application scenario is from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved by using a user modeling component framework in which any application can store user model information, and which can also be used for the exchange of user model data.
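    As a rough sketch of the syntactic side of Relco-style concept matching (the semantic and collaboration-based evidence that Relco combines with it is omitted), the Python fragment below scores label similarity between two invented ontologies and keeps pairs above a hypothetical acceptance threshold.

        from difflib import SequenceMatcher

        ontology_a = ["Painting", "Sculpture", "Artist"]
        ontology_b = ["painting (visual art)", "sculptor", "artist"]

        def ratio(a, b):
            # Normalised edit-similarity between two concept labels.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        matches = sorted(
            ((a, b, ratio(a, b)) for a in ontology_a for b in ontology_b),
            key=lambda t: t[2], reverse=True,
        )
        for a, b, score in matches:
            if score >= 0.5:            # hypothetical acceptance threshold
                print(f"{a!r} ~ {b!r} (score {score:.2f})")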

    Complex Event Processing with XChangeEQ

    The emergence of event-driven architectures, the automation of business processes, drastic cost reductions in sensor technology, and a growing need to monitor IT systems (as well as other systems) for legal, contractual, or operational reasons lead to an increasing generation of events. This development is accompanied by a growing demand for managing and processing events in an automated and systematic way. Complex Event Processing (CEP) encompasses the (automatable) tasks involved in making sense of all events in a system by deriving higher-level knowledge from lower-level events while the events occur, i.e., in a timely, online fashion and permanently. At the core of CEP are queries which monitor streams of "simple" events for so-called complex events, that is, events or situations that manifest themselves in certain combinations of several events occurring (or not occurring) over time and that cannot be detected by looking only at single events. Querying events is fundamentally different from traditional querying and reasoning over database or Web data, since event queries are standing queries that are evaluated permanently over time against incoming streams of event data. In order to express the complex events that are of interest to a particular application or user in a convenient, concise, cost-effective and maintainable manner, special-purpose Event Query Languages (EQLs) are needed.
    This thesis investigates practical and theoretical issues related to querying complex events, covering the spectrum from language design through declarative semantics to operational semantics for incremental query evaluation. Its central topic is the development of the high-level event query language XChangeEQ. In contrast to previous data stream and event query languages, XChangeEQ's language design recognizes four querying dimensions: data extraction, event composition, temporal relationships, and, for non-monotonic queries involving negation or aggregation, event accumulation. XChangeEQ deals with complex structured data in event messages, thus addressing the need to query events communicated in XML formats over the Web. It supports deductive rules as an abstraction and reasoning mechanism for events. To achieve full coverage of the four querying dimensions, it builds upon a separation of concerns among them, which makes it easy to use and highly expressive. A recurrent theme in the formal foundations of XChangeEQ is that, despite the fundamental differences between traditional database queries and event queries, many well-known results from databases and logic programming are, with some important changes, applicable to event queries. Declarative semantics for XChangeEQ are given as a (Tarski-style) model theory with an accompanying fixpoint theory. This approach accounts well for (1) data in events and (2) deductive rules defining new events from existing ones, two aspects often neglected in previous work on the semantics of EQLs.
    For the evaluation of event queries, this work introduces operational semantics based on an extended and tailored form of relational algebra together with query plans featuring materialization points. Materialization points account for storing and maintaining information about those received events that are relevant for, i.e., can contribute to, future query answers, as well as for an incremental evaluation that avoids recomputing certain intermediate results. Efficient state maintenance in incremental evaluation is approached by "differentiating" algebra expressions, i.e., by deriving expressions that compute only the changes to materialization points. Knowing how long an event remains relevant is a prerequisite for performing garbage collection during event query evaluation and is also of central importance for developing cost-based query planners. To this end, this thesis introduces a notion of relevance of events (to a given query plan) and develops methods for determining temporal relevance, a particularly useful form based on time-related information.
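    The self-contained Python sketch below miniaturises two of these ideas: a standing query ("a followed by b within 10 time units") is evaluated incrementally as events arrive, and stored "a" events are garbage-collected as soon as the time window makes them irrelevant to any future answer. The event format and window length are invented; XChangeEQ itself operates on XML event messages and algebra-based query plans.

        from collections import deque

        WINDOW = 10      # temporal relevance: 'a' events older than this expire

        def a_then_b(stream):
            pending_a = deque()                   # materialization point for 'a'
            for kind, t in stream:                # events are (type, timestamp)
                while pending_a and pending_a[0] < t - WINDOW:
                    pending_a.popleft()           # no longer relevant: collect
                if kind == "a":
                    pending_a.append(t)
                elif kind == "b":
                    for ta in pending_a:          # incremental join, no recomputation
                        yield ("a-then-b", ta, t)

        stream = [("a", 1), ("a", 3), ("b", 5), ("a", 6), ("b", 14)]
        print(list(a_then_b(stream)))
        # [('a-then-b', 1, 5), ('a-then-b', 3, 5), ('a-then-b', 6, 14)]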