
    Escaping the Trap of too Precise Topic Queries

    At the very center of digital mathematics libraries lie controlled vocabularies which qualify the {\it topic} of the documents. These topics are used when submitting a document to a digital mathematics library and when searching in it. Searches are refined by the use of these topics, as they allow a precise classification of the area of mathematics a document addresses. However, there is a major risk that users employ too precise a topic to specify their queries: they may employ a topic that is only "close by" and thereby fail to match the right resource. We call this the {\it topic trap}. Indeed, since 2009 this issue has appeared frequently on the i2geo.net platform, and other mathematics portals experience the same phenomenon. One approach to this issue is to introduce tolerance into the way queries are interpreted, in particular by including fuzzy matches, but this introduces noise which may prevent the user from understanding the behaviour of the search engine. In this paper, we propose a way to escape the topic trap by exploiting navigation between related topics together with the count of search results for each topic. This supports the user in that a search for a close-by topic is only a click away from a previous search. This approach has been realized in the i2geo search engine and is described in detail; the {\it related} relation is computed by textual analysis of the concept definitions fetched from the Wikipedia encyclopedia. Comment: 12 pages, Conference on Intelligent Computer Mathematics 2013, Bath, UK
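    A minimal sketch of how such a {\it related} relation could be derived from definition texts, assuming plain-text definitions have already been fetched (this is an illustration only, not the i2geo implementation; the example topics and threshold are invented):

```python
# Hypothetical sketch: rank "related" topics by word overlap (Jaccard
# similarity) between their definition texts. The i2geo engine's actual
# textual analysis is more elaborate; this only illustrates the idea.
import re

def tokens(text):
    """Lowercased word set of a definition, ignoring very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def relatedness(def_a, def_b):
    """Jaccard similarity between two definition texts."""
    a, b = tokens(def_a), tokens(def_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def related_topics(topic, definitions, threshold=0.15):
    """Topics whose definitions overlap with `topic`'s above a threshold."""
    base = definitions[topic]
    scored = [(other, relatedness(base, d))
              for other, d in definitions.items() if other != topic]
    return sorted([(t, s) for t, s in scored if s >= threshold],
                  key=lambda x: -x[1])

definitions = {
    "quadratic equation": "An equation of the second degree in one unknown.",
    "quadratic function": "A polynomial function of the second degree in one variable.",
    "linear equation": "An equation of the first degree in one or more unknowns.",
}
print(related_topics("quadratic equation", definitions))
```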

    Using the event calculus for tracking the normative state of contracts

    In this work, we have been principally concerned with the representation of contracts so that their normative state may be tracked in an automated fashion over their deployment lifetime. The normative state of a contract, at a particular time, is the aggregation of instances of normative relations that hold between contract parties at that time, plus the current values of contract variables. The effects of contract events on the normative state of a contract are specified using an XML formalisation of the Event Calculus, called ecXML. We use an example mail service agreement from the domain of web services to ground the discussion of our work. We give a characterisation of the agreement according to the normative concepts of obligation, power and permission, and show how the ecXML representation may be used to track the state of the agreement, according to a narrative of contract events. We also give a description of a state tracking architecture, and a contract deployment tool, both of which have been implemented in the course of our work.
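    The paper itself uses an XML encoding of the Event Calculus (ecXML); purely as a rough illustration of the underlying initiates/terminates/holds-at pattern, the following Python sketch tracks a hypothetical mail service agreement (the fluent and event names are invented for illustration):

```python
# Illustrative event-calculus-style state tracking (not ecXML): an event
# initiates or terminates fluents; a fluent holds at time t if some earlier
# event initiated it and no later event before t terminated it.
EFFECTS = {
    # event type -> (initiated fluents, terminated fluents)  (hypothetical)
    "order_placed":   ({"obliged(provider, deliver)"}, set()),
    "mail_delivered": (set(), {"obliged(provider, deliver)"}),
    "payment_missed": ({"power(provider, suspend)"}, set()),
}

def holds_at(fluent, time, narrative):
    """narrative: list of (timestamp, event_type), assumed sorted by time."""
    state = False
    for t, event in narrative:
        if t >= time:
            break
        initiated, terminated = EFFECTS.get(event, (set(), set()))
        if fluent in initiated:
            state = True
        if fluent in terminated:
            state = False
    return state

narrative = [(1, "order_placed"), (5, "mail_delivered")]
print(holds_at("obliged(provider, deliver)", 3, narrative))   # True
print(holds_at("obliged(provider, deliver)", 6, narrative))   # False
```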

    Automatic generation of language-based tools using the LISA system

    Many tools have been constructed using different formal methods to process various parts of a language specification (e.g. scanner generators, parser generators and compiler generators). The automatic generation of a complete compiler was the primary goal of such systems, but researchers recognised the possibility that many other language-based tools could be generated from formal language specifications. Such tools can be generated automatically whenever they can be described by a generic fixed part that traverses the appropriate data structures generated by a specific variable part, which can be systematically derived from the language specifications. The paper identifies the generic and specific parts of various language-based tools. Several language-based tools are presented that are automatically generated using an attribute grammar-based compiler generator called LISA. The generated tools described in the paper include editors, inspectors, debuggers and visualisers/animators. Because of the complexity of their construction, special emphasis is given to visualisers/animators and to the unique contribution of our approach toward generating such tools.
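    As a hedged illustration of the "generic fixed part plus specific variable part" idea (this is not LISA itself; the node kinds and attributes are invented), a small sketch: a generic traversal that any generated inspector could reuse, driven by the syntax tree a language-specific, specification-derived front end would produce:

```python
# Sketch of the generic/specific split behind generated language-based tools.
# The generic part (inspect) never changes; the specific part is the syntax
# tree produced by a parser generated from a particular language specification.
from dataclasses import dataclass, field

@dataclass
class Node:                      # specific part: shaped by the language spec
    kind: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def inspect(node, depth=0):      # generic part: reused by every generated tool
    """Generic traversal used by an 'inspector' tool: print each node with
    its computed attribute values."""
    attrs = ", ".join(f"{k}={v}" for k, v in node.attributes.items())
    print("  " * depth + f"{node.kind}({attrs})")
    for child in node.children:
        inspect(child, depth + 1)

tree = Node("assign", {"type": "int"},
            [Node("var", {"name": "x"}), Node("const", {"value": 3})])
inspect(tree)
```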

    An architecture for the autonomic curation of crowdsourced knowledge

    Human knowledge curators are intrinsically better than their digital counterparts at providing relevant answers to queries. That is mainly because an experienced biological brain will account for relevant community expertise and exploit the underlying connections between knowledge pieces when offering suggestions pertinent to a specific question, whereas most automated database managers will not. We address this problem by proposing an architecture for the autonomic curation of crowdsourced knowledge that is underpinned by semantic technologies. The architecture is instantiated in the career data domain, thus yielding Aviator, a collaborative platform capable of producing complete, intuitive and relevant answers to career-related queries in a time-effective manner. In addition to providing numerical and use-case-based evidence to support these research claims, this extended work also contains a detailed architectural analysis of Aviator to outline its suitability for automatically curating knowledge to a high standard of quality.
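    A minimal sketch of the idea that answers should exploit connections between knowledge pieces (this is not Aviator's semantic machinery; the graph content and relation names are invented): starting from a queried skill, follow typed links to collect related career knowledge.

```python
# Illustrative only: a tiny career-data knowledge graph and a query that
# follows typed connections between knowledge pieces, imitating what a human
# curator does implicitly. Aviator itself relies on ontologies and semantic
# querying rather than this ad-hoc structure.
GRAPH = {
    "python":           [("used_in", "data engineering"), ("related_skill", "sql")],
    "sql":              [("used_in", "data engineering")],
    "data engineering": [("typical_role", "data engineer")],
}

def suggest(start, max_hops=2):
    """Collect everything reachable from `start` within max_hops typed links."""
    seen, frontier, results = {start}, [start], []
    for _ in range(max_hops):
        nxt = []
        for node in frontier:
            for relation, target in GRAPH.get(node, []):
                results.append((node, relation, target))
                if target not in seen:
                    seen.add(target)
                    nxt.append(target)
        frontier = nxt
    return results

print(suggest("python"))
```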

    Actor based behavioural simulation as an aid for organisational decision making

    Decision-making is a critical activity for most modern organisations seeking to stay competitive in a rapidly changing business environment. Effective organisational decision-making requires a deep understanding of various organisational aspects such as its goals, structure, business-as-usual operational processes, the environment in which it operates, and the inherent characteristics of the change drivers that may impact the organisation. The size of a modern organisation, its socio-technical characteristics, inherent uncertainty, volatile operating environment, and the prohibitively high cost of incorrect decisions make decision-making a challenging endeavour. While enterprise modelling and simulation technologies have evolved into a mature discipline for understanding a range of engineering, defence and control systems, their application to organisational decision-making remains limited. The organisational decision-making approaches that are prevalent in practice are largely qualitative. Moreover, they mostly rely on human experts who are often aided only by primitive technologies such as spreadsheets and visual diagrams. This thesis argues that existing modelling and simulation technologies are neither suitable for representing organisation and decision artifacts in a comprehensive and machine-interpretable form nor capable of comprehensively addressing the analysis needs. An approach that advances the modelling abstraction and analysis machinery for organisational decision-making is proposed. In particular, this thesis proposes a domain-specific language to represent the aspects of an organisation relevant to decision-making, establishes the relevance of a bottom-up simulation technique as a means for analysis, and introduces a method to utilise the proposed modelling abstraction, analysis technique, and analysis machinery in an effective and convenient manner.
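    A hedged sketch of what bottom-up, actor-based behavioural simulation means in this setting (the thesis uses a dedicated DSL and analysis machinery; the actor type, capacities and demand figures below are invented): each organisational unit is an actor with local state and behaviour, and aggregate outcomes emerge from stepping all actors over time.

```python
# Minimal bottom-up simulation: individual actors (e.g. teams) react to a
# change driver locally; the organisational outcome is only visible after
# aggregating their behaviour over time. Illustrative, not the thesis DSL.
import random

class TeamActor:
    def __init__(self, capacity):
        self.capacity = capacity          # work the team can absorb per step
        self.backlog = 0.0

    def step(self, incoming_work):
        """Local behaviour: absorb what capacity allows, queue the rest."""
        self.backlog += incoming_work
        done = min(self.backlog, self.capacity)
        self.backlog -= done
        return done

def simulate(teams, demand_per_step, steps):
    """Aggregate (emergent) throughput of the organisation over time."""
    throughput = []
    for _ in range(steps):
        work = demand_per_step / len(teams)
        throughput.append(sum(t.step(work * random.uniform(0.8, 1.2))
                              for t in teams))
    return throughput

teams = [TeamActor(capacity=random.uniform(8, 12)) for _ in range(5)]
print(simulate(teams, demand_per_step=55, steps=10))
```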

    Domain-specific languages in Prolog for declarative expert knowledge in rules and ontologies

    Declarative if–then rules have proven very useful in many applications of expert systems. They can be managed in deductive databases and evaluated using the well-known forward-chaining approach. For domain experts, however, the syntax of rules quickly becomes complicated, and many different knowledge representation formalisms already exist. Expert knowledge is often acquired in story form using interviews. In this paper, we discuss its representation by defining domain-specific languages (DSLs) for declarative expert rules. They can be embedded in Prolog systems as internal DSLs using term expansion, and as external DSLs using definite clause grammars and quasi-quotations for more sophisticated syntaxes. Based on the declarative rules and the integration with the Prolog-based deductive database system DDbase, multiple rules acquired in practical case studies can be combined, compared, graphically analysed by domain experts, and evaluated, resulting in an extensible system for expert knowledge. As a result, the actual modelling DSL becomes executable; the declarative forward-chaining evaluation of deductive databases can be understood by the domain experts. Our DSL for rules can be further improved by integrating ontologies and rule annotations.
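    The DSLs described here are embedded in Prolog; purely as an illustration of the forward-chaining evaluation the abstract refers to, a minimal (naive, propositional) Python sketch with invented example facts and rules:

```python
# Naive forward chaining over declarative if-then rules: repeatedly apply
# every rule whose body facts all hold until no new facts can be derived.
# Ground (propositional) rules only; Datalog-style variables are omitted.
# Illustrative only; the paper embeds such rules as DSLs in Prolog/DDbase.
def forward_chain(facts, rules):
    """rules: list of (body, head); body is a set of facts, head a single fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

facts = {"stressed(team_a)", "understaffed(team_a)"}
rules = [
    ({"stressed(team_a)", "understaffed(team_a)"}, "at_risk(team_a)"),
    ({"at_risk(team_a)"}, "needs_intervention(team_a)"),
]
print(sorted(forward_chain(facts, rules)))
```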

    Linked Data and you: Bringing music research software into the Semantic Web

    The promise of the Semantic Web is to democratize access to data, allowing anyone to make use of and contribute back to the global store of knowledge. Within the scope of the OMRAS2 Music Information Retrieval project, we have made use of and contributed to Semantic Web technologies for purposes ranging from the publication of music recording metadata to the online dissemination of results from audio analysis algorithms. In this paper, we assess the extent to which our tools and frameworks can assist in research and facilitate distributed work among audio and music researchers, and enumerate and motivate further steps to improve collaborative efforts in music informatics using the Semantic Web. To this end, we review some of the tools developed by the OMRAS2 project, examine the extent to which our work reflects the Semantic Web paradigm, and discuss some of the remaining work needed to fulfil the promise of online music informatics research.
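    As a hedged example of what publishing music recording metadata on the Semantic Web can look like in practice (this uses the widely used Music Ontology vocabulary via the Python rdflib library; the specific resource URIs are invented and this is not the OMRAS2 tooling itself):

```python
# Sketch: expose recording metadata as RDF using the Music Ontology, in the
# spirit of linked-data publication. Example URIs below are invented.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, FOAF, DCTERMS

MO = Namespace("http://purl.org/ontology/mo/")
g = Graph()
g.bind("mo", MO)
g.bind("foaf", FOAF)
g.bind("dcterms", DCTERMS)

track = URIRef("http://example.org/recordings/take-five")    # hypothetical
artist = URIRef("http://example.org/artists/dave-brubeck")   # hypothetical

g.add((track, RDF.type, MO.Track))
g.add((track, DCTERMS.title, Literal("Take Five")))
g.add((track, FOAF.maker, artist))
g.add((artist, RDF.type, MO.MusicArtist))
g.add((artist, FOAF.name, Literal("Dave Brubeck")))

print(g.serialize(format="turtle"))
```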

    Declarative Rules for Annotated Expert Knowledge in Change Management

    In this paper, we use declarative and domain-specific languages for representing expert knowledge in the field of change management in organisational psychology. Expert rules obtained in practical case studies are represented as declarative rules in a deductive database. The expert rules are annotated with information describing their provenance and confidence. Additional provenance information for the whole rule base, or for parts of it, can be given by ontologies. Deductive databases allow the semantics of the expert knowledge to be defined declaratively with rules; the evaluation of the rules can be optimised, and the inference mechanisms can be changed, since they are specified in an abstract way. As the logical syntax of rules had been a problem in previous applications of deductive databases, we use specially designed domain-specific languages to make the rule syntax easier for non-programmers. The semantics of the whole knowledge base is declarative. The rules are written declaratively in datalogs, an extension of the well-known deductive database language datalog, on the data level, and additional datalogs rules can configure the processing of the annotated rules and the ontologies.
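    To make the idea of annotated rules concrete, a small Python sketch (the paper uses a datalog extension in a deductive database; the annotation fields and the propagation policy below are illustrative assumptions, not the paper's semantics): derived facts keep the minimum confidence and the union of sources along their derivation.

```python
# Illustrative: declarative rules carrying provenance and confidence
# annotations, evaluated to a fixpoint while propagating those annotations
# so that domain experts can compare and combine rule bases.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    body: frozenset          # facts required
    head: str                # fact derived
    source: str              # provenance: which case study / expert
    confidence: float        # expert-assigned confidence

def derive(facts, rules):
    """facts: dict fact -> (confidence, sources). Returns the fixpoint."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.body <= facts.keys():
                conf = min([r.confidence] + [facts[f][0] for f in r.body])
                srcs = {r.source}.union(*(facts[f][1] for f in r.body))
                if r.head not in facts or conf > facts[r.head][0]:
                    facts[r.head] = (conf, srcs)
                    changed = True
    return facts

base = {"resistance_reported": (0.9, {"interview_3"})}
rules = [Rule(frozenset({"resistance_reported"}),
              "communication_gap", "case_study_A", 0.7)]
print(derive(base, rules))
```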

    Knowledge-centric autonomic systems

    Autonomic computing revolutionised the commonplace understanding of proactiveness in the digital world by introducing self-managing systems. Built on top of IBM's structural and functional recommendations for implementing intelligent control, autonomic systems are meant to pursue high-level goals, while adequately responding to changes in the environment, with a minimum amount of human intervention. One of the lead challenges in implementing this type of behaviour in practical situations stems from the way autonomic systems manage their inner representation of the world. Specifically, all the components involved in the control loop have shared access to the system's knowledge, which, for seamless cooperation, needs to be kept consistent at all times. A possible solution lies with another popular technology of the 21st century, the Semantic Web, and the knowledge representation media it fosters, ontologies. These formal yet flexible descriptions of the problem domain are equipped with reasoners, inference tools that, among other functions, check knowledge consistency. The immediate application of reasoners in an autonomic context is to ensure that all components share and operate on a logically correct and coherent "view" of the world. At the same time, ontology change management is a difficult task to complete with semantic technologies alone, especially if little to no human supervision is available. This invites the idea of delegating change management to an autonomic manager, as the intelligent control loop it implements is engineered specifically for that purpose. Despite the inherent compatibility between autonomic computing and semantic technologies, their integration is non-trivial and insufficiently investigated in the literature. This gap represents the main motivation for this thesis. Moreover, existing attempts at provisioning autonomic architectures with semantic engines represent bespoke solutions for specific problems (load balancing in autonomic networking, deconflicting high-level policies, and informing the process of correlating diverse enterprise data are just a few examples). The main drawback of these efforts is that they only provide limited scope for reuse and cross-domain analysis: design guidelines, architectural models that would scale well across different applications, and modular components that could be integrated in other systems are poorly represented. This work proposes KAS (Knowledge-centric Autonomic System), a hybrid architecture combining semantic tools such as:
    • an ontology to capture domain knowledge,
    • a reasoner to keep domain knowledge consistent as well as infer new knowledge,
    • a semantic querying engine,
    • a tool for semantic annotation analysis
    with a customised autonomic control loop featuring:
    • a novel algorithm for extracting knowledge authored by the domain expert,
    • "software sensors" to monitor user requests and environment changes,
    • a new algorithm for analysing the monitored changes, matching them against known patterns and producing plans for taking the necessary actions,
    • "software effectors" to implement the planned changes and modify the ontology accordingly.
    The purpose of KAS is to act as a blueprint for the implementation of autonomic systems harvesting semantic power to improve self-management. To this end, two KAS instances were built and deployed in two different problem domains, namely self-adaptive document rendering and autonomic decision support for career management. The former case study is intended as a desktop application, whereas the latter is a large-scale, web-based system built to capture and manage knowledge sourced by an entire (relevant) community. The two problems are representative of their respective application classes, namely desktop tools required to respond in real time and online decision support platforms expected to process large volumes of data undergoing continuous transformation; they were therefore selected to demonstrate the cross-domain applicability (which state-of-the-art approaches tend to lack) of the proposed architecture. Moreover, analysing KAS behaviour in these two applications enabled the distillation of design guidelines and of lessons learnt from practical implementation experience while building on and adapting state-of-the-art tools and methodologies from both fields. KAS is described and analysed from design through to implementation. The design is evaluated using ATAM (Architecture Tradeoff Analysis Method), whereas the performance of the two practical realisations is measured both globally and deconstructed in an attempt to isolate the impact of each autonomic and semantic component. This last type of evaluation employs state-of-the-art metrics for each of the two domains. The experimental findings show that both instances of the proposed hybrid architecture successfully meet the prescribed high-level goals and that the semantic components have a positive influence on the system's autonomic behaviour.
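    A rough sketch of the kind of monitor-analyse-plan-execute loop such an architecture describes, with a plain knowledge base standing in for the ontology and reasoner (all component names, the pattern table and the consistency check are illustrative assumptions, not the KAS implementation):

```python
# Illustrative MAPE-style control loop over a shared knowledge base. In KAS
# the knowledge is an ontology kept consistent by a reasoner; here a dict and
# a trivial consistency check stand in for both.
class KnowledgeBase:
    def __init__(self):
        self.facts = {"render_mode": "normal"}

    def consistent(self):
        # stand-in for a reasoner's consistency check
        return self.facts.get("render_mode") in {"normal", "large_print"}

class AutonomicManager:
    PLANS = {  # monitored change pattern -> planned actions (hypothetical)
        "user_prefers_large_text": [("set", "render_mode", "large_print")],
    }

    def __init__(self, kb):
        self.kb = kb

    def monitor(self, events):          # "software sensors"
        return [e for e in events if e in self.PLANS]

    def analyse_and_plan(self, symptoms):
        return [step for s in symptoms for step in self.PLANS[s]]

    def execute(self, plan):            # "software effectors"
        for op, key, value in plan:
            if op == "set":
                self.kb.facts[key] = value
        assert self.kb.consistent(), "knowledge base left inconsistent"

kb = KnowledgeBase()
mgr = AutonomicManager(kb)
mgr.execute(mgr.analyse_and_plan(mgr.monitor(["user_prefers_large_text"])))
print(kb.facts)
```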

    Proceedings of the 15th Conference on Knowledge Organization WissOrg'17 of the German Chapter of the International Society for Knowledge Organization (ISKO), 30th November - 1st December 2017, Freie Universität Berlin

    Wissensorganisation is the name of a series of biennial conferences/workshops with a long tradition, organized by the German chapter of the International Society for Knowledge Organization (ISKO). The 15th conference in this series, held at Freie Universität Berlin, focused on knowledge organization for the digital humanities. Structuring, and interacting with, large data collections has become a major issue in the digital humanities. In these proceedings, various aspects of knowledge organization in the digital humanities are discussed, and the authors of the papers show how projects in the digital humanities deal with knowledge organization.